I have a relatively small server with a quad-core CPU (Intel i5-7400) and 16 GB of DDR4 RAM, running a couple of virtualised guests using libvirt. I'm not using any intermediate layer such as Proxmox. The guest OSes are about 90% Linux, 5% macOS (Mojave and up) and 5% Windows (10/2016). I never use desktop environments on Linux. The host (Ubuntu Bionic) uses ZFS in a raidz1 configuration to store the virtual disk files. When creating guests I always use `virt-install` with the proper `--os-variant` flag.
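For reference, my creation commands look roughly like this (the guest name, disk path, sizes, and ISO path here are just placeholders, not my actual values):

```shell
# Illustrative virt-install invocation; all names/paths are placeholders.
virt-install \
  --name bionic-guest \
  --os-variant ubuntu18.04 \
  --vcpus 2 \
  --memory 2048 \
  --disk path=/tank/vms/bionic-guest.qcow2,size=20,bus=virtio \
  --network network=default,model=virtio \
  --graphics none \
  --cdrom /tank/iso/ubuntu-18.04-server-amd64.iso
```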
Disk performance was extremely low for all guests, with write speeds barely ever exceeding 10 MB/s (even with VirtIO drivers). This occurred regardless of virtual disk type: QCOW2, raw, QCOW2 with a 4K cluster size, and a fully preallocated QCOW2 disk all had the same issue. When writing about 200 MB to a file, the guest would simply lock up, and I had to wait a couple of minutes after Ctrl+C'ing the command for it to become usable again. After some further research/testing I found that the `writeback` cache mode significantly improves performance, at least for the Linux guests: no more lock-ups, and they can write 1 GB to a file in just a couple of seconds, even on a brand-new sparse/thin QCOW2 disk on a SATA bus.
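For completeness, switching an existing guest's disk to writeback is roughly this (`myguest` is a placeholder; `virt-xml` ships alongside `virt-install`, and this is equivalent to setting `cache='writeback'` on the disk's `<driver>` element via `virsh edit`):

```shell
# Set cache=writeback on the guest's first disk ("myguest" is a
# placeholder). Takes effect on the next guest start.
virt-xml myguest --edit --disk cache=writeback
```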
However, the GUI guests still have extremely slow boot times, and when they finally do boot they're pretty much unusable (the mouse pointer moves maybe once every 5 seconds, keyboard input is severely delayed, opening an application takes forever, etc.). I can wait an hour for Windows to boot and it'll still be stuck on the black boot screen with the Windows logo and the loading icon below it, even though I managed to load the VirtIO drivers before the actual Windows installation. macOS will usually boot after 30 minutes or so, but that's on a SATA bus because I can't even install VirtIO drivers there. Linux guests boot in a matter of seconds, for comparison.
For macOS I once managed to SSH into it from my own computer and run a disk speed test from there, and even with the `writeback` cache mode it barely reaches 10 MB/s write speeds.
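The speed test itself was nothing fancy, just along these lines (file path and size are arbitrary; `conv=fsync` is GNU dd, so on a BSD/macOS dd the flush option differs):

```shell
# Write 256 MB of zeroes and let dd report the throughput.
# conv=fsync flushes to disk before reporting, so the figure
# isn't inflated by the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fsync
rm -f /tmp/ddtest
```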
All problems occur even if, e.g., macOS is the only guest currently running, so I don't think CPU or RAM is the bottleneck. Memory isn't overcommitted anyway, because in my experience that only causes issues. I also tried giving the guest both a dual-core and a quad-core vCPU, with no noticeable change. Also, the full `qemu-system-*` command line properly contains the KVM acceleration flags (`-enable-kvm` / `accel=kvm`), so it is not doing virtualisation purely in software.
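That check amounts to something like this (only meaningful on the hypervisor host itself):

```shell
# Confirm the host exposes KVM and that the running QEMU process
# was actually started with KVM acceleration enabled.
ls -l /dev/kvm
ps -ef | grep -E 'qemu.*(enable-kvm|accel=kvm)' | grep -v grep
```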
It's probably some stupid configuration issue somewhere, because even on my ancient virtualisation rig (rocking DDR2 memory) running ESXi I could boot Windows 7 guests in a reasonable amount of time.