Correct me if I am wrong, but to my understanding SReclaimable holds cached kernel objects that can be freed if needed. So if an application needs to allocate more memory, even when 'free' memory is low, the OS will drop some pages from the reclaimable slab and provide the application with the requested amount of memory (unless that is not possible).
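If my understanding is right, it should be possible to sanity-check it by hand: writing 2 to the standard drop_caches interface frees reclaimable slab objects (dentries and inodes). A minimal sketch, run as root:

    sync                                   # flush dirty pages first
    echo 2 > /proc/sys/vm/drop_caches      # 2 = free reclaimable slab (dentries, inodes)
    grep -E 'MemFree|SReclaimable' /proc/meminfo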
This is how my memory looks. Memory graph and /proc/meminfo output:
MemTotal: 8171852 kB
MemFree: 825892 kB
MemAvailable: 6273852 kB
Buffers: 227448 kB
Cached: 1261944 kB
SwapCached: 15324 kB
Active: 2582260 kB
Inactive: 499232 kB
Active(anon): 1460764 kB
Inactive(anon): 131340 kB
Active(file): 1121496 kB
Inactive(file): 367892 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 524284 kB
SwapFree: 440372 kB
Dirty: 372 kB
Writeback: 0 kB
AnonPages: 1579556 kB
Mapped: 40500 kB
Shmem: 4 kB
Slab: 4113080 kB
SReclaimable: 4061308 kB
SUnreclaim: 51772 kB
KernelStack: 6992 kB
PageTables: 70692 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4610208 kB
Committed_AS: 2644508 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
DirectMap4k: 14200 kB
DirectMap2M: 2082816 kB
DirectMap1G: 8388608 kB
The first thing I noticed is that the slab and cache curves on the graph exactly mirror the used memory, i.e. slab usage stays essentially constant.
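For what it's worth, the kernel's own MemAvailable estimate already assumes most of SReclaimable can be freed; the numbers above roughly add up (the heuristic subtracts the low watermark and discounts part of the caches, which accounts for the ~100 MB difference):

    MemFree + Active(file) + Inactive(file) + SReclaimable
      = 825892 + 1121496 + 367892 + 4061308
      = 6376588 kB   (vs. MemAvailable = 6273852 kB)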
To the problem:
Sometimes, when free memory drops to around 100 MB, the OOM killer is invoked and kills vital processes (php, clamd, ...). How is that possible? Shouldn't the OS free the reclaimable slab before invoking the OOM killer?
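In case it's relevant, this is how one can see which slab cache is actually holding the memory (slabtop ships with procps; -o prints once, -s c sorts by cache size; dentry is the usual suspect when SReclaimable is this large):

    sudo slabtop -o -s c | head -n 15    # largest slab caches first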
Things I tried
I tried setting
vm.vfs_cache_pressure=10000
thinking it would force the kernel to reclaim caches more aggressively, but the graph didn't change, even after 24 hours.
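In case I applied it incorrectly, this is roughly what I did (a sketch; sysctl -w sets the value at runtime, and the watch loop just monitors whether SReclaimable ever shrinks):

    sysctl -w vm.vfs_cache_pressure=10000                        # apply at runtime, as root
    sysctl vm.vfs_cache_pressure                                 # verify it took effect
    watch -n 60 'grep -E "MemFree|SReclaimable" /proc/meminfo'   # monitor over time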
Perhaps it's a bug in the kernel itself: https://bugzilla.kernel.org/buglist.cgi?quicksearch=oom&list_id=904801
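Side note: the OOM killer's report in the kernel log includes a memory snapshot taken at the moment of the kill (free, file, and slab_reclaimable page counts), which should show whether the slab was actually reclaimed before the kill. Something like:

    dmesg -T | grep -iA 25 'invoked oom-killer'   # -T: readable timestamps; -A 25: show the report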