In several contexts, I've seen a behavior on Linux systems where large volumes of filesystem writes (many gigabytes, written very quickly) overwhelm memory, apparently while waiting for the buffered (already-written) data to be flushed to disk so that the memory can be reused for subsequent writes. When this happens, "vmstat -s" shows the amount of free memory shrinking steadily until it reaches zero.

I see this most often when writing to very slow disks (such as USB-attached external drives with a filesystem on them), but I've also seen it with ordinary SATA disks when large volumes of data are written quickly enough.

At best, write operations eventually block, waiting for memory to become available. At worst, if a high volume of writes continues once the system is in this state, the memory pressure becomes so great that the OOM Killer runs and kills off processes more or less at random to free up memory. It doesn't even take multiple users writing at once: I've created this situation myself, with no one else using the system, just by writing a very high volume of data to a filesystem very quickly.
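For what it's worth, while this is happening I can also watch the kernel's dirty-page counters in /proc/meminfo climb, with something like:

    # show how much written-but-unflushed page cache is pending, and how much
    # is actively being written back right now
    grep -E '^(Dirty|Writeback):' /proc/meminfo

    # or watch it update every second while the big write is running
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'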
My guess (and I stress that it's only a guess) is that the system isn't being particularly aggressive about flushing buffered writes to disk and freeing the associated memory. But I'm not sure what I can tune, or even look at, to confirm whether that's actually what's happening, or to make the flushing of write buffers more aggressive.
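For reference, the only knobs I've turned up so far are the vm.dirty_* sysctls; my (possibly wrong) understanding is that the *_ratio values control when background writeback starts and when writers get throttled, and the *_centisecs values control how soon and how often dirty data is written out. But I don't know whether these are the right things to change or what values would be sensible:

    # current writeback-related settings (just reading them; I haven't changed anything yet)
    sysctl vm.dirty_background_ratio vm.dirty_ratio \
           vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

    # apparently the *_bytes variants can be used instead of the ratios to put
    # an absolute cap on dirty memory, e.g. (values are just a guess on my part):
    #   sysctl -w vm.dirty_background_bytes=16777216   # start background flushing at ~16 MB dirty
    #   sysctl -w vm.dirty_bytes=50331648              # throttle writers at ~48 MB dirty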
Am I on the right track here, as far as a guess at what's going on? If so, is there anything I can tune to try to make the system more aggressively flush pending I/O to disk and free up the buffer memory?