
I am playing around with ZFS on Proxmox and have noticed that swappiness never seems to kick in. The swappiness value is currently set to 50, but the system never swaps unless I reach 100% RAM usage, acting as if swappiness were set to 0.

How can I manually force swapping to kick in? The only way I can currently do this is by installing something like https://github.com/julman99/eatmemory to eat the system's memory beyond 100%.
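
For reference, this is roughly how I am checking and setting things at the moment (nothing custom, just the standard tools):

```
# current swappiness value
cat /proc/sys/vm/swappiness

# how I set it (runtime only, not persisted across reboots)
sysctl vm.swappiness=50

# current memory and swap usage
free -h
swapon --show
```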

cat /proc/meminfo

MemTotal:       528099208 kB
MemFree:        33819676 kB
MemAvailable:   30995036 kB
Buffers:           65056 kB
Cached:           368868 kB
SwapCached:      4978016 kB
Active:         383870632 kB
Inactive:       71255296 kB
Active(anon):   383654260 kB
Inactive(anon): 71140760 kB
Active(file):     216372 kB
Inactive(file):   114536 kB
Unevictable:      160824 kB
Mlocked:          160824 kB
SwapTotal:      1875374420 kB
SwapFree:       1576041808 kB
Dirty:               128 kB
Writeback:             0 kB
AnonPages:      450155280 kB
Mapped:           185764 kB
Shmem:             92400 kB
KReclaimable:    1316628 kB
Slab:            7796824 kB
SReclaimable:    1316628 kB
SUnreclaim:      6480196 kB
KernelStack:       49616 kB
PageTables:      1746424 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    2139350296 kB
Committed_AS:   1255929500 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     6192420 kB
VmallocChunk:          0 kB
Percpu:          1302144 kB
HardwareCorrupted:     0 kB
AnonHugePages:  176281600 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:      72
HugePages_Free:       72
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:          147456 kB
DirectMap4k:    49376056 kB
DirectMap2M:    406616064 kB
DirectMap1G:    81788928 kB
Toodarday
  • I suspect your expectations of what swappiness does are not accurate. What are you aiming to achieve? – Matthew Ife Sep 29 '21 at 19:30
  • Set swappiness to 100, but make sure you have an SSD or NVMe ;) Linux does not use swap if it has enough RAM; it's not Windows, which uses the page file all the time. – djdomi Sep 29 '21 at 19:48
  • @MatthewIfe I basically want there to always be a certain amount of RAM free. My machine has 256 GB of RAM and I am trying to force memory to swap out pages to reach that, but manually. I have also tried ```vm.zone_reclaim_mode=1``` without any help. – Toodarday Sep 29 '21 at 21:02
  • @djdomi The NVMe drive is dedicated to swap. This is a Proxmox node running multiple Linux/Windows virtual machines (KVM). – Toodarday Sep 29 '21 at 21:03
  • To add, my system is currently using 230/256 GB of RAM with no swap being used yet. – Toodarday Sep 29 '21 at 21:42
  • @Toodarday And what is the issue you are facing? No overcommitment, no swap used. – djdomi Sep 30 '21 at 08:46
  • There is no reason not to use the memory if it's free, and you don't offer a reason as to why it should be free / what you want to use the memory for. – Matthew Ife Sep 30 '21 at 13:00
  • @MatthewIfe The reason is that if I create another virtual machine when the RAM is at 99%, it won't boot. I need there to always be at least 10% of RAM free. Is there another way to achieve this? I was hoping there would be an easy command to run. – Toodarday Sep 30 '21 at 21:33
  • @Toodarday Please provide an example `/proc/meminfo` when the system is using up all the memory; I get the impression you're looking at cached pages. If possible, also provide an example of a system not booting due to full memory. – Matthew Ife Oct 01 '21 at 07:05
  • Added another node's output above. – Toodarday Oct 01 '21 at 20:57

1 Answer


swappiness does not force use of swap space. Nor will it save you from not having enough memory.

Higher values of swappiness encourage reclaim of anonymous pages, not just page cache. But this does not do much for ZFS on Linux, which does not use Linux's page cache.
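
As a rough way to see this (assuming OpenZFS exposes its usual kstat file under /proc/spl/kstat/zfs/), compare the page cache from /proc/meminfo with the ARC size, which is accounted separately:

```
# page cache and buffers, the memory that swappiness trades off against anonymous pages
grep -E '^(Buffers|Cached):' /proc/meminfo

# ZFS ARC size, accounted outside the page cache (value is in bytes)
awk '$1 == "size" { printf "ARC size: %.1f GiB\n", $3 / 1024 / 1024 / 1024 }' /proc/spl/kstat/zfs/arcstats
```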

I basically want there to always be a certain amount of RAM free. ... To add, my system is currently using 230/256 GB of RAM with no swap being used yet. ... if I create another virtual machine when the RAM is at 99%, it won't boot.

Do some capacity planning so that you do not oversubscribe memory. This is less a matter of a magic command that tells the hypervisor to keep free memory around, and more a matter of discipline in not starting more guests than you have resources for.

Your 230/256 GB is 90% utilized; much higher than this and you can get into memory pressure, which is not good for performance. That may call for capping guest memory, say 56 x 4 GB guests, to make up some numbers. Whether the couple dozen GB remaining is enough to run the hypervisor kernel and still have some reserve is something you can discover in testing.
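
As a sketch of that capacity check on a Proxmox node (this assumes the usual `qm list` output, where the fourth column is configured memory in MB; adjust if a guest name contains spaces or your column layout differs):

```
# total memory configured across all defined guests, in GB
qm list | awk 'NR > 1 { sum += $4 } END { printf "guests configured: %.0f GB\n", sum / 1024 }'

# physical memory on the host, for comparison
free -g | awk '/^Mem:/ { print "host total:        " $2 " GB" }'
```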

Edit: From meminfo, your 500 GB host is under some memory pressure and is swapping out.

  • MemAvailable at 5.8% of total is low. 29 GB to work with on a 500 GB host is not very much.
  • SwapTotal minus SwapFree shows about 285 GB of swap space in use. With 1788 GB of total swap, it is not going to run out any time soon. Remember that most persistent storage is orders of magnitude slower than DRAM. (See the quick recomputation after this list.)
  • 0.4 GB Cached is quite low in absolute terms, which is consistent with ZFS on Linux, as it does not use the usual Linux VFS page cache. As a result, the swappiness tunable does almost nothing in this environment. If you are dropping caches manually, don't; it likely hurts performance.
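
The figures above come straight from the posted /proc/meminfo; here is a quick way to recompute them (a minimal sketch using only standard awk):

```
# recompute MemAvailable percentage and swap usage from /proc/meminfo (values are in kB)
awk '/^(MemTotal|MemAvailable|SwapTotal|SwapFree):/ { v[$1] = $2 }
     END {
       printf "MemAvailable: %.1f GiB (%.1f%% of MemTotal)\n",
              v["MemAvailable:"] / 1048576, 100 * v["MemAvailable:"] / v["MemTotal:"];
       printf "Swap used:    %.1f GiB of %.1f GiB\n",
              (v["SwapTotal:"] - v["SwapFree:"]) / 1048576, v["SwapTotal:"] / 1048576;
     }' /proc/meminfo
```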

Swapping out happens a set of pages at a time, as needed. The host will not suddenly free up an entire 100 GB guest when guest memory demands are lower; that would be very expensive.

I am skeptical of memory oversubscription in general, and ballooning in particular, and do not recommend them. Encouraging low free memory can be risky for performance: in the worst case, reclaim introduces latency and might anger the OOM killer. See your attempts to start guests at high utilization; beyond a certain point, the kernel won't grant the memory allocations.

Confirm the host has > 100 GB of RAM (not counting swap) available before starting a 100 GB guest. Shut down guests before reducing their memory size. Not oversubscribing costs more in memory, but gives more consistent performance and is easier to maintain.
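
For example, a simple pre-flight check along those lines (a sketch; the 100 GB figure is just the size of the guest you are about to start):

```
#!/bin/sh
# refuse to start a ~100 GB guest unless MemAvailable can cover it
NEED_KB=$((100 * 1024 * 1024))   # 100 GiB expressed in kB
AVAIL_KB=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)

if [ "$AVAIL_KB" -ge "$NEED_KB" ]; then
    echo "enough free RAM to start the guest"
else
    echo "not enough free RAM; shut something down or add memory first"
fi
```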

John Mahowald
  • The problem I have is this: https://i.gyazo.com/83a5033c9b74c8319227d804862f0976.png https://i.gyazo.com/cb1a8f56f44f39d0a3b0c7be54188a20.png This virtual machine drops caches every hour, but as you can see the system is still reserving 15 GB of RAM. The way Proxmox works, if I spin up a virtual machine, allocate it 100 GB of RAM, use it all, then drop back below 1 GB and drop caches, the GUI will show less than 1 GB of RAM in use, but the host will keep the whole 100 GB reserved until power down. I need to swap that out. Ballooning is not an option for me. – Toodarday Oct 01 '21 at 20:54
  • See my edit. I doubt that memory is easy to reclaim, not without powering down the guest and reducing its size. – John Mahowald Oct 04 '21 at 14:05