
I am running a download server on an AWS t2.micro instance, and I have configured a max heap of 512 MB and a min heap of 256 MB for my Java process. I am performing a migration-style job in a single thread which downloads files (each < 50 MB) from Google Drive. But when I run it, I get the following error:

error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 33558528 bytes for committing reserved memory.

Here are extracts from hs_err_pid13942.log:

VM Arguments:

jvm_args: -Xms256m -Xmx512m -XX:PermSize=32m -XX:MaxPermSize=64m -XX:+HeapDumpOnOutOfMemoryError

Here is my meminfo

/proc/meminfo:
MemTotal:        1016324 kB
MemFree:           58792 kB
Buffers:             344 kB
Cached:            15984 kB
SwapCached:            0 kB
Active:           899232 kB
Inactive:          14664 kB
Active(anon):     897692 kB
Inactive(anon):      332 kB
Active(file):       1540 kB
Inactive(file):    14332 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                72 kB
Writeback:             0 kB
AnonPages:        897608 kB
Mapped:             6284 kB
Shmem:               416 kB
Slab:              22276 kB
SReclaimable:      10960 kB
SUnreclaim:        11316 kB
KernelStack:        1408 kB
PageTables:         7460 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      508160 kB
Committed_AS:     881084 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        4664 kB
VmallocChunk:   34359727628 kB
HardwareCorrupted:     0 kB
AnonHugePages:    591872 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       28672 kB
DirectMap2M:     1150976 kB

Here is the cpu info

/proc/cpuinfo:
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping        : 2
microcode       : 0x25
cpu MHz         : 2394.552
cache size      : 30720 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase bmi1 avx2 smep bmi2 erms invpcid
bogomips        : 4789.10
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

Memory: 4k page, physical 1016324k(58792k free), swap 0k(0k free)

I can see there are 58 MB free, and I also see sufficient free memory while monitoring with the 'free -h' command on the instance. So I don't understand why this error occurred in the first place. Can someone explain the reason, and also tell me what I am doing wrong with respect to the memory configuration?
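As a sanity check on the numbers above: the CommitLimit line in my /proc/meminfo appears consistent with the kernel's usual formula, CommitLimit = SwapTotal + MemTotal × overcommit_ratio/100 (the ratio defaults to 50, and the kernel rounds to pages, hence 508160 vs. the exact result below):

```shell
# Recompute CommitLimit from the posted /proc/meminfo values (all in kB).
awk 'BEGIN { mem=1016324; swap=0; ratio=50; print swap + int(mem*ratio/100) }'
# → 508162  (posted CommitLimit: 508160 kB; Committed_AS is already 881084 kB)
```

Note that Committed_AS is well above CommitLimit, which with no swap suggests the system is heavily overcommitted.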

Aarish Ramesh

2 Answers


It looks like the application allocates memory off the heap via direct memory access (the failing allocation is a native mmap: "Native memory allocation (mmap)").

You can run the application with the option -XX:MaxDirectMemorySize=55m (the default in Java 7 and 8 is 0, in which case the limit effectively falls back to the maximum heap size).
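For example, combined with the flags from the question (the main class name here is a placeholder, not from the original post):

```shell
# Illustrative launch command capping direct (off-heap) memory at 55 MB;
# MyMigrationApp stands in for the actual main class.
java -Xms256m -Xmx512m -XX:PermSize=32m -XX:MaxPermSize=64m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:MaxDirectMemorySize=55m MyMigrationApp
```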

hoshoh
  • Thanks. If the default value is 0, then how was the JVM able to allocate perm-gen mmap memory earlier, with the process getting killed only after running for a while? This is confusing. Also, should 55 MB be sufficient? – Aarish Ramesh Jun 02 '17 at 09:12
  • The permanent generation is an off-heap memory area, but unlike direct memory it is fully managed by the Java runtime, which allocates and deallocates it dynamically with the help of the GC. I set max direct memory to 55m just to make sure there isn't another third-party lib also using direct memory; why not give it a try and let us know. – hoshoh Jun 02 '17 at 09:43
  • I tried it, but the error still occurred. Now I am trying with a decreased max heap size and MaxDirectMemorySize increased to 128 MB – Aarish Ramesh Jun 02 '17 at 10:41
  • This time the migration ran for a larger number of files before failing. – Aarish Ramesh Jun 02 '17 at 11:01
  • You don't have enough free memory in the first place to set MaxDirectMemorySize to 128 MB, but did it work for a single file when decreasing the max heap? – hoshoh Jun 02 '17 at 11:29
  • It worked in the sense that the migration ran for a large number of files after decreasing the max heap to 412 MB and setting MaxDirectMemorySize to 128 MB – Aarish Ramesh Jun 02 '17 at 11:45
  • I am not sure why the process is that direct-memory intensive, or why a 256 MB direct memory allocation was not possible. This is my JVM setting: java -Xms256m -Xmx412m -XX:MaxDirectMemorySize=256m. So can this only be resolved by increasing RAM to 2 GB, or can it be made to work with proper tuning of the heap? – Aarish Ramesh Jun 02 '17 at 11:49
  • This is because a Java process is not limited to the heap (Xmx): you also have to add the stacks (#threads × Xss), permgen, code cache, direct memory, JNI, and other memory areas, all of which must be counted when sizing the machine. If not explicitly specified, these generally add up to about 0.5 GB. You must adjust the JVM options based on the available space and your needs; I recommend setting Xms = Xmx. – hoshoh Jun 02 '17 at 12:01
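The footprint arithmetic from the last comment can be sketched as follows (the code-cache, thread-count, and stack-size figures are illustrative defaults, not values measured from the original process):

```shell
# Rough JVM footprint estimate for the flags under discussion, in MB.
# heap and permgen come from the question; the rest are assumed typical values.
awk 'BEGIN {
  heap = 512; permgen = 64      # -Xmx512m, -XX:MaxPermSize=64m
  code_cache = 48               # assumed default-ish reserved code cache
  direct = 128                  # -XX:MaxDirectMemorySize=128m being tried
  threads = 20; stack = 1       # assumed thread count at ~1 MB stack each
  printf "%d MB\n", heap + permgen + code_cache + direct + threads * stack
}'
# → 772 MB
```

Against roughly 1 GB of total RAM on a swapless t2.micro, that leaves very little headroom for the OS and other processes, which is consistent with the mmap failure.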
Possible solutions:

  Reduce memory load on the system
  Increase physical memory or swap space
  Check if swap backing store is full
  Use 64 bit Java on a 64 bit OS
  Decrease Java heap size (-Xmx/-Xms)
  Decrease number of Java threads
  Decrease Java thread stack sizes (-Xss)
  Set larger code cache with -XX:ReservedCodeCacheSize=
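Of these, adding swap space is often the most practical fix on a swapless t2.micro. A minimal sketch (the file path and 1 GB size are illustrative; requires root):

```shell
# Create and enable a 1 GB swap file.
sudo fallocate -l 1G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```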
Vishrant