I have a program that requires a lot of memory, so much that malloc fails as the system runs out of memory.

I don't care if the overhead of swapping pages in and out of disk is high; I just want the program to run. So, is it possible to prevent the out-of-memory error by sufficiently increasing the swap partition size, effectively making more memory available?
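A minimal sketch (not from the original question) of the failure mode being described: keep allocating until malloc returns NULL. Touching each block matters, because Linux's overcommit can otherwise defer the failure past the malloc call itself.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const size_t block = 1024 * 1024;   /* allocate 1 MiB at a time */
        size_t blocks = 0;

        for (;;) {
            void *p = malloc(block);
            if (p == NULL) {
                /* POSIX malloc sets errno to ENOMEM on failure */
                fprintf(stderr, "malloc failed after %zu MiB: %s\n",
                        blocks, strerror(errno));
                return 1;
            }
            memset(p, 0xAA, block);  /* touch the pages so they are really committed */
            blocks++;                /* memory is deliberately leaked for the demo */
        }
    }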
Okay, I thought that the amount of virtual memory a program can allocate is limited by the total available memory (physical memory + swap space). I.e., even if the per-process limit on the heap is larger, malloc may still fail because there is no real memory left to allocate! – AnkurVj – 2011-09-01T08:22:03.420
On Linux, you could use the getrlimit(RLIMIT_AS, p) function to find out the maximum size of a process's total available (virtual) memory. If this limit is exceeded, the malloc() and mmap() functions shall fail with errno set to [ENOMEM]. You might also want to look at the setrlimit() man page. – sawdust – 2011-09-01T08:56:37.363
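A short sketch of the calls mentioned in that comment, assuming Linux: query the soft and hard RLIMIT_AS limits with getrlimit().

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;

        if (getrlimit(RLIMIT_AS, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        /* rlim_cur is the soft limit, rlim_max the hard ceiling */
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("RLIMIT_AS soft limit: unlimited\n");
        else
            printf("RLIMIT_AS soft limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);

        if (rl.rlim_max == RLIM_INFINITY)
            printf("RLIMIT_AS hard limit: unlimited\n");
        else
            printf("RLIMIT_AS hard limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_max);
        return 0;
    }

A process can lower these limits on itself with setrlimit(), but raising the hard limit requires privileges.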
But even if the limit is not exceeded, can malloc fail because the system has no memory to allocate? – AnkurVj – 2011-09-01T09:08:24.100
Here's something to read about a process running out of memory versus Linux running out of swap space: http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html "The conclusion is that OOM happens for two technical reasons: 1. No more pages are available in the VM. 2. No more user address space is available. 3. Both #1 and #2." – sawdust – 2011-09-01T09:31:54.637

The link is indeed superb; I'd recommend adding it to the answer. – AnkurVj – 2011-09-01T09:53:11.703
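As a sketch of how reason #1 from the quoted article can be observed in practice, assuming Linux: sysinfo(2) reports total and free RAM and swap, with all sizes scaled by mem_unit.

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void) {
        struct sysinfo si;

        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }

        /* all sizes are in units of si.mem_unit bytes */
        unsigned long long unit = si.mem_unit;
        printf("RAM : %llu / %llu MiB free\n",
               si.freeram  * unit / (1024 * 1024),
               si.totalram * unit / (1024 * 1024));
        printf("swap: %llu / %llu MiB free\n",
               si.freeswap * unit / (1024 * 1024),
               si.totalswap * unit / (1024 * 1024));
        return 0;
    }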