I'm not able to get anywhere near usable performance from a guest image file located on an mdadm RAID5 array. I believe I've optimized all the array and filesystem parameters for best RAID5 performance:
- set bitmap=none
- set stripe-cache=32768 (tried values from 256 to 32768)
- EXT4 stride=128 / stripe-width=384 (512K chunk, 4K FS block, 3 data disks)
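For reference, the stride/stripe-width figures above follow directly from the chunk size and disk count. A minimal sketch of that arithmetic (the `/dev/md0` device name in the comment is an assumption):

```shell
# Sanity-check the ext4 geometry for a 4-disk RAID5:
# stride = chunk size / FS block size; stripe-width = stride * data disks.
chunk_kb=512    # mdadm chunk size in KiB
block_kb=4      # ext4 block size in KiB
data_disks=3    # 4-disk RAID5 leaves 3 data disks per stripe
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"
# These values would then be passed to mkfs, e.g.:
# mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0
```

This yields stride=128 and stripe-width=384, matching the values used above.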
The array performs very well on the host (105 MB/s with no cache, 470 MB/s with cache). It's made of 4 relatively slow HDDs.
- It makes no difference whether the image file is raw or qcow2
- Tried both VirtIO SCSI and VirtIO SATA
- Tried all cache combinations, including inside the guest itself (Windows 10 and Linux)
Does KVM/QEMU just not work well with mdadm RAID5 arrays?
It seems to be a latency problem (similar to what I've seen on ESXi with local drives): latency spikes of almost 17 seconds, and average write performance of 1-10 MB/s.
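For anyone wanting to reproduce the numbers, this is roughly how I'd measure write latency inside the guest with fio (a command sketch only; the target path, runtime, and job parameters are assumptions to adjust for your setup):

```shell
# Synchronous 4K random writes with O_DIRECT, queue depth 1,
# so the reported completion latency reflects the actual device path:
fio --name=guest-lat --filename=/path/to/testfile --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --direct=1 \
    --ioengine=libaio --runtime=30 --time_based --group_reporting
```

The `clat` percentiles in fio's output show whether the multi-second stalls come from the guest I/O path or from the array itself (running the same job on the host gives the baseline).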
An example from the libvirt domain XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/R5_DATA/data2.raw'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>