Disclaimer
This is indeed a continuation of this thread. I have opened a new question because the focus has gradually shifted, so I thought it was better to start a new post. I will merge back if required.
Setup
- HP DL160g9 server with B140i soft-raid controller
- 4 x LFF 7.2k rpm drives in RAID-5
- CentOS 7.2 with KVM and QEMU from the CentOS Virt SIG. Specs are unchanged with respect to the previous post.
- all VM storage is local on the server
- all NICs and virtual disks are exposed to the guests via virtio (virtio-scsi for the disks), with cache=none and io=native (see the libvirt XML sketch after this list)
- all VMs have 2 vNICs: one exposed to the LAN, one internal to the server (the default libvirt NAT network)
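For reference, a minimal sketch of the relevant libvirt domain XML, assuming a hypothetical LV path and bridge name (placeholders, not the exact production values):

<controller type='scsi' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/vm_disk'/>    <!-- hypothetical thick LV -->
  <target dev='sda' bus='scsi'/>
</disk>
<interface type='bridge'>
  <source bridge='br0'/>              <!-- hypothetical LAN-facing bridge -->
  <model type='virtio'/>
</interface>
<interface type='network'>
  <source network='default'/>         <!-- the default libvirt NAT network -->
  <model type='virtio'/>
</interface>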
Tested systems
- The Linux host (as a reference)
- a local Linux VM
- local Win8.1 and Win10 VMs (64-bit)
- a remote Win10 Pro 64-bit workstation
- a remote Linux VM running on that workstation via VirtualBox
- a remote Win8.1 Pro 64-bit VM running on that workstation via VirtualBox
Tested storage types
To understand why the Win10/8.1 Pro 64-bit guests perform so badly under KVM, we have run a number of tests. Disk performance is measured with:
iozone -i 0 -i 8 -t 1 -s 4m
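For context, this is roughly what those iozone options select (my reading of the flags, worth double-checking against the iozone man page):

# -i 0  : write/rewrite test
# -i 8  : mixed workload (random mix of reads and writes)
# -t 1  : throughput mode with a single thread/process
# -s 4m : 4 MB file size per thread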
The test is run from within a folder on the target storage device, of course. The following configurations have been tested:
Prepared an LVM thick volume to be passed to both the Linux and the Windows VMs through the virtio-scsi driver.
- Ran iozone on it: both the local Linux and Windows VMs were tested.
Used the same LVM thick volume as the block device behind an iSCSI target exposed on both the virtual and the physical LAN (see the targetcli sketch below). This time the VMs were booted normally and we attached the iSCSI target manually from within the running guest, without passing it through KVM/libvirt; in this case KVM should mediate only the network path.
- In this case we also benchmarked the external Windows workstation and the Linux VM running on it.
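For completeness, a sketch of how such a target could be exported with targetcli; the backstore name, IQN and portal IP below are placeholders (and ACL/auth setup is omitted), not necessarily what was actually used:

targetcli /backstores/block create name=vmlv dev=/dev/vg0/vm_disk
targetcli /iscsi create iqn.2016-06.local.srv:vmlv
targetcli /iscsi/iqn.2016-06.local.srv:vmlv/tpg1/luns create /backstores/block/vmlv
targetcli /iscsi/iqn.2016-06.local.srv:vmlv/tpg1/portals create 192.168.1.10
targetcli saveconfig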
Results
The results are quite telling. All storage combinations (local virtio-scsi disk and iSCSI target) perform great if:
- we test them from within the host (logging in to the iSCSI target via the LAN IP, not loopback; see the iscsiadm sketch after this list)
- we test the virtio-scsi disk or the iSCSI target on any local Linux VM
- we test the iSCSI volume from the Windows workstation or from the VMs running on VirtualBox, both Linux AND Windows
When I say great I mean that, give or take, throughput is almost native, on the order of 1 to 1.7 million KB/sec (native local LVM is 2 million)!
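On the host and the Linux guests, attaching the target is the usual open-iscsi sequence, roughly (portal IP and IQN are the same placeholders as in the targetcli sketch above):

iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2016-06.local.srv:vmlv -p 192.168.1.10 --login
# then run iozone from a filesystem created/mounted on the newly attached /dev/sdX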
All storage combinations (local virtio-scsi disk and iSCSI target) perform badly if:
- we run the test from the local Win 8.1/10 VMs on KVM: something like 0.5 million KB/sec.
Considerations
One could argue that virtio-scsi performance is bad because the Windows virtio drivers handle virtio-scsi storage poorly.
On the other hand, in the iSCSI target case that would mean that even the virtio network drivers are poor, since both Linux and Windows work great in every other scenario (native or VirtualBox)!
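One way to separate the two paths would be a raw network benchmark between the Windows guest and the host, with no disk involved; a sketch with iperf3, assuming the default libvirt NAT gateway address 192.168.122.1 (an assumption, adjust to the actual networks):

iperf3 -s                        # on the host
iperf3 -c 192.168.122.1 -t 30    # in the Windows guest, using the Windows iperf3 build

If the virtio-net path were the real bottleneck, this should already show poor throughput without any storage in the picture.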
Help
To be honest, I do not think the Windows virtio drivers are that bad, even on the network side. Can you please share any experience that would help me understand where to tune these VMs?