I recently set up a new environment consisting of:
- QSAN storage on a 10 GbE network
- Mellanox 10 GbE switches
- 4 x physical nodes connected to LAN and SAN at 10 GbE
The physical hosts are connected to the SAN storage using MPIO. Performance tests were run with diskspd from all physical servers against the SAN and show around 200 MB/s for 8K random writes on a single SSD (which is presented as a CSV in the cluster).
I then created a Hyper-V machine on the Cluster Shared Volume and ran the same diskspd test inside the virtual machine: 8K random write comes in at 0.5 MB/s.
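For reference, the diskspd invocation looks roughly like the sketch below (duration, thread count, queue depth, test file path and size are example values, not my exact command line):

```
# 8K random write test, 100% writes, caching disabled, latency stats enabled
# (duration, threads, queue depth, file path and size are example values)
diskspd.exe -b8K -d60 -t4 -o32 -r -w100 -Sh -L -c10G C:\test\testfile.dat
```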
When checking disk latency inside the Hyper-V guest, I see values on the order of 10 seconds.
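To put a number on the in-guest latency, the logical disk counters can be sampled from PowerShell inside the guest; a minimal sketch (counter path and sample count are just an example):

```
# Average write latency on C: inside the guest, reported in seconds
Get-Counter -Counter '\LogicalDisk(C:)\Avg. Disk sec/Write' -SampleInterval 1 -MaxSamples 10 |
    ForEach-Object { $_.CounterSamples.CookedValue }
```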
I'm quite at a loss as to why this is happening. I don't think it's the SAN storage, the iSCSI setup or MPIO, since I get the results I would expect when running the test on the physical host. So there must be something wrong with the Hyper-V configuration.
I'm running the test on the C: drive in the Hyper-V guest, which is a fixed-size virtual disk attached to the IDE controller (as a SCSI-attached disk cannot be used as the boot disk). The SAN volume is formatted with a 64K allocation unit size.
The CSV is owned by the same host that runs the Hyper-V guest.
Update: The guest VM is Generation 1, unfortunately.
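For completeness, the relevant configuration can be double-checked from the Hyper-V host with something like the sketch below (the VM name 'TestVM' and the VHD path are placeholders, not my actual names):

```
# VM generation and how the virtual disk is attached (IDE vs. SCSI controller)
Get-VM -Name 'TestVM' | Select-Object Name, Generation
Get-VMHardDiskDrive -VMName 'TestVM' |
    Select-Object ControllerType, ControllerNumber, ControllerLocation, Path

# Virtual disk type (fixed vs. dynamic) and sector sizes
Get-VHD -Path 'C:\ClusterStorage\Volume1\TestVM\disk.vhdx' |
    Select-Object VhdType, VhdFormat, LogicalSectorSize, PhysicalSectorSize

# CSV ownership (should be the node the VM is running on)
Get-ClusterSharedVolume | Select-Object Name, OwnerNode
```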