When writing programs, there are times when a runaway process slurps half of my RAM (usually due to a practically infinite loop building a large data structure), making the system so slow that I can't even kill the offending program. So I want to use ulimit to kill my program automatically when it is using an abnormal amount of memory:
$ ulimit -a
core file size (blocks, -c) 1000
data seg size (kbytes, -d) 10000
scheduling priority (-e) 0
file size (blocks, -f) 1000
pending signals (-i) 6985
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 10000
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 6985
virtual memory (kbytes, -v) 100000
file locks (-x) unlimited
$ ./run_program
But why is my program still using more RAM than the limits allow (and yes, I'm starting the program in the same bash shell)?
Have I misunderstood something about ulimit?
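As a sanity check of the mechanism itself, here is a minimal sketch, assuming python3 is available as a stand-in for the runaway program: cap virtual memory (`ulimit -v`, i.e. RLIMIT_AS) inside a subshell and try to allocate past the cap. The allocation should fail immediately instead of dragging the whole machine down.

```shell
(
  ulimit -v 200000     # kbytes: ~200 MB address-space cap, local to this subshell
  # try to allocate ~400 MB; under the cap this should raise MemoryError
  python3 -c 'b = bytearray(400 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded (limit not enforced?)" \
    || echo "allocation refused by the limit"
)
```

The parentheses matter: they keep the lowered limit from sticking to the interactive shell, since a limit, once lowered, cannot be raised again by an unprivileged process.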
As you can see, there are limits on several different kinds of memory. Figuring out toward which limit a particular allocation counts is sometimes tricky. Try to get hold of a "runaway" process and post the contents of
/proc/12345/status
where 12345 is the process ID (just the lines beginning with Vm are enough). – Gilles 'SO- stop being evil' – 2010-11-02T23:43:20.983

@Gilles: I've tried putting additional constraints on "max memory size", "virtual memory", "core file size", "data seg size", basically everything I can see in ulimit that is related to memory (I don't use many files). The problem with collecting data from /proc/ is that my computer locks up 2-3 seconds after the runaway starts, and I have to struggle really hard to kill the offending process (many times I'd just use the power button). I'll try to acquire one though. – Lie Ryan – 2010-11-03T01:28:56.140
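Grabbing those Vm lines can be sketched like this; `sleep 100` is a stand-in for the real runaway program. Backgrounding it gives you its PID right away, so you can snapshot /proc before the machine becomes unresponsive, then kill it.

```shell
# Launch the process in the background and remember its PID
# (replace `sleep 100` with the actual runaway program).
sleep 100 & pid=$!

# Snapshot just the memory-related lines of its status file.
grep '^Vm' "/proc/$pid/status"

# Kill it before it can eat all the RAM.
kill "$pid"
```

This prints lines such as VmPeak, VmSize, VmRSS, and VmData, which show how the process's memory use maps onto the different ulimit categories.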