
I'm running a Java data import process on a 32-bit Ubuntu 10 PAE-kernel machine. After the process has been running for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm, which shows Low: 464 386 77, with the free value (77 MB) slowly decreasing.

Why am I running out of lowmem and how do I increase it?

Some details:

$ cat /proc/sys/vm/lowmem_reserve_ratio
256     256      32
$ free -lm
             total       used       free     shared    buffers     cached
Mem:         32086      24611       7475          0          0      24012
Low:           464        407         57
High:        31621      24204       7417
-/+ buffers/cache:        598      31487
Swap:         2047          0       2047
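
To track just the low-memory numbers between runs, the two relevant /proc/meminfo fields can be watched directly (a sketch; the 30-second interval is arbitrary, the LowTotal/LowFree fields only exist on 32-bit highmem kernels, and the sample values below simply mirror the free -lm output above):

$ watch -n 30 'grep "^Low" /proc/meminfo'
LowTotal:       475136 kB
LowFree:         58368 kB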
magneticMonster
  • And why don't you just upgrade to a 64-bit OS and 64-bit Java? PAE isn't particularly good technology; it has its [gotchas](http://codemonkey.org.uk/2009/07/10/x8632-pae-gotchas/). – Tometzky Feb 08 '12 at 09:57

4 Answers


The problem is that a lot of the kernel data structures such as the page descriptors (one struct for every 4KB page in the system) need to be in low memory. So as the total memory in the machine goes up, more and more low memory is also needed, and eventually low memory becomes a very scarce resource.
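
A back-of-the-envelope calculation shows the scale (assuming roughly 32 bytes per page descriptor, which is about right for 32-bit kernels of that era):

32 GB / 4 KB per page             = 8,388,608 page descriptors
8,388,608 descriptors * ~32 bytes ≈ 256 MB of low memory

That is already more than half of the ~464 MB of low memory reported by free -lm above, before anything else in the kernel has claimed its share.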

IIRC the usual rule of thumb is that 16 GB total is about the upper sane limit for a 32-bit kernel. There's not very much you can do about it.

You can try to boot with less memory (mem= command line parameter to the kernel). But the real solution is to switch to a 64-bit kernel.
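
For example, to cap the kernel at 16 GB on a stock Ubuntu 10.x install (a sketch assuming GRUB 2 and /etc/default/grub; systems upgraded from older releases may still use legacy GRUB's /boot/grub/menu.lst instead):

# in /etc/default/grub, append mem=16G to the existing options:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=16G"
$ sudo update-grub
$ sudo reboot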

janneb
  • Ugh, holy zombie question! Why did this turn up on the front page? – janneb May 15 '12 at 06:39
  • Actually there is one thing you can do about it: rebuild your kernel with a different kernel/user split. Instead of the default 1G for kernel and 3G for user, you can split it 2G/2G. This will give you more low memory, but also means that an individual process won't be able to use as much memory (only 2G instead of 3G). – psusi Jan 18 '13 at 14:29

Disable the OOM killer and see what the end result is. Also post information about the process's memory usage as applicable, and have a look at the pmap output to help decipher what is going on. I've run very large Java heaps under RHEL5 64-bit and never seen this issue.
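
One way to do the first part without switching the OOM killer off system-wide is to exempt just the import process (a sketch; -17 is the "never kill" oom_adj value on 2.6.32-era kernels, newer kernels use /proc/<pid>/oom_score_adj with -1000 instead, and the DataImport pgrep pattern is only a placeholder for whatever the Java process is really called):

$ JPID=$(pgrep -f DataImport)        # placeholder pattern for the Java process
$ echo -17 | sudo tee /proc/$JPID/oom_adj
$ pmap -x $JPID | tail -n 5          # the total line at the bottom sums mapped and resident memory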

Justin
  • The Java heap is not large ... the Java process never gets above 400MB. The Java process does create some large files (it seems to fail at the point when one of the files passes 18GB). – magneticMonster Dec 29 '10 at 23:09

Well, I'm not sure if I'm correct, but the size of low memory is a kernel parameter. I think a single process cannot grow beyond the size of low memory because of PAE, but check this: http://www.makelinux.net/ldd3/chp-15-sect-1.shtml

Ency

Memory management in this regard is rather bad on Linux. It takes the first 4 GB of address space and splits it 3/1, with 1 GB being LowMem, and with 32 GB of RAM in the system a substantial part of that 1 GB is already needed just for addressing purposes. Back in the 2.4 days there was a discussion about making this limit configurable, or about integrating the 4G/4G patch, but neither happened: Linus didn't see any need for it, things were already ugly as they were, and 4G/4G is not pretty either. There is still a 4G/4G patch around for 2.6, but it was originally written for 2.6.6, which is very much outdated today. By 2.6.7 it was pretty clear that it would never be merged, and its performance overhead was gigantic anyway, so the decision was made that the VM system was good enough as it is. So on 32-bit there is probably no way around this issue; the memory system is simply not meant to scale to such amounts of memory.

On 64-bit, on the other hand, addressing has changed considerably, so you won't find this issue there.

juwi
  • The 3/1 split should be configurable by changing the definition of PAGE_OFFSET in the source and recompiling... That's how you did it before 64-bit on systems with 3-4G where "large memory support" had performance problems due to chipset/BIOS issues. – rackandboneman Jun 14 '12 at 23:57
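
Both this PAGE_OFFSET note and psusi's 2G/2G suggestion further up map onto the "Memory split" choice in the 32-bit x86 kernel config, which sets PAGE_OFFSET without editing the source (a sketch of the relevant .config lines; option names may differ slightly between kernel versions):

# Processor type and features -> Memory split
# CONFIG_VMSPLIT_3G is not set    (default: 3 GB user / 1 GB kernel)
CONFIG_VMSPLIT_2G=y               # 2 GB user / 2 GB kernel
CONFIG_PAGE_OFFSET=0x80000000

Rebuild and boot the new kernel to get the larger low-memory zone, at the cost of a 2 GB per-process address space.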