I'm trying to establish why a process keeps unexpectedly stopping, and one possibility is that the server is running out of memory, but I'm not sure I understand memory and Linux properly.
Take the following output from the free command (run on the machine with the issue I'm trying to fix):
             total       used       free     shared    buffers     cached
Mem:           991        827        163          0        107        361
-/+ buffers/cache:        358        633
Swap:            0          0          0
The top line suggests that this machine is indeed using a lot of memory (only 163 MB free), so if a few more processes fire up (which they do) we could have an out-of-memory situation, with various processes being killed.
However, I've always been led to believe that Linux makes heavy use of buffers and caches in memory, which can be reclaimed by applications when needed, so the figure I should be paying more attention to is the second line. That line suggests ~633 MB is effectively free, in which case I think it's unlikely that this machine is running out of memory.
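To make the relationship between the two lines concrete, here's a sketch of the arithmetic free is doing (values in MB, copied from the output above; the results land a megabyte or two off the -/+ line because free rounds each column to whole megabytes internally):

```shell
# Figures from the "Mem:" line above, in MB.
used=827; free=163; buffers=107; cached=361

# "-/+ buffers/cache" used column: memory applications are actually holding.
app_used=$((used - buffers - cached))          # 827 - 107 - 361 = 359

# "-/+ buffers/cache" free column: free memory plus reclaimable buffers/cache.
effectively_free=$((free + buffers + cached))  # 163 + 107 + 361 = 631

echo "app used: ${app_used} MB, effectively free: ${effectively_free} MB"
```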
So can you clarify my understanding of Linux and memory, and help me understand when Linux actually runs out of memory?
PS - this machine is a single-purpose machine: it runs the background process for a large web application, and that's all it does. No web server, no database, just one huge Ruby on Rails app running as a background process. Occasional cron jobs fire up for specific application tasks, which would temporarily create another instance of the Rails app in memory.
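One check I could run, assuming the process is being killed by the kernel's OOM killer rather than exiting on its own: the kernel logs a message whenever it kills a process for memory, so grepping the kernel log should surface it. The sample line below is only illustrative of the typical log format (the PID, process name, and score are made up, not output from my machine):

```shell
# Illustrative example of the kind of line the kernel logs on an OOM kill:
sample="Out of memory: Kill process 4321 (ruby) score 812 or sacrifice child"

# Against the live kernel log, the equivalent check would be:
#   dmesg | grep -iE 'out of memory|oom'
echo "$sample" | grep -iE 'out of memory|oom'
```

If that grep turns up nothing in dmesg, the process is presumably dying for some other reason than memory pressure.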