I have an HS22 blade with two 600GB 10K 6Gbps SAS drives configured in RAID 1 on the onboard LSI Logic controller. The blade runs VMware ESXi 4.0u2, which hosts a couple of VMs. (Yes, I'm aware we should be providing storage via a SAN, but that was a budgetary constraint.) I'm seeing poor read/write performance on both VMs:
- Host A: RHEL 5.5, 8GB RAM, 2 vCPUs
- Host B: CentOS 5.5, 1GB RAM, 2 vCPUs
Both guest kernels are configured to boot with `elevator=noop`.
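For anyone reproducing this, a quick way to confirm which scheduler is actually in effect inside each guest is to read sysfs; the active scheduler is the one shown in square brackets. This is a generic sketch, not specific to these hosts:

```shell
# Print the I/O scheduler line for every block device; the active
# scheduler appears in square brackets, e.g. "[noop] anticipatory ...".
for sched in /sys/block/*/queue/scheduler; do
    [ -r "$sched" ] || continue
    printf '%s: %s\n' "$sched" "$(cat "$sched")"
done
```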
Result of an ~8GB `dd` on Host A to a 350GB thin-provisioned disk, formatted ext3:
# dd if=/dev/zero of=fullram bs=1K count=8388608
8388608+0 records in
8388608+0 records out
8589934592 bytes (8.6 GB) copied, 467.934 seconds, 18.4 MB/s
The maximum write performance I've seen is ~30MB/s (as monitored in the vSphere client).
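One caveat with the command above: `dd` from `/dev/zero` without a sync flag partly measures the guest page cache rather than the disk. A hedged sketch of a write test that forces data to disk before reporting throughput (file name and sizes are arbitrary placeholders):

```shell
# conv=fdatasync makes dd call fdatasync() before exiting, so the
# reported rate includes the time to flush the data to disk.
# bs=1M also avoids the per-syscall overhead of bs=1K.
dd if=/dev/zero of=writetest bs=1M count=1024 conv=fdatasync
rm -f writetest
```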
Result of an ~8GB `dd` on Host B to a 40GB thin-provisioned disk, formatted ext3:
# dd if=/dev/zero of=fullram bs=1K count=8388608
8388608+0 records in
8388608+0 records out
8589934592 bytes (8.6 GB) copied, 478.192 seconds, 18.0 MB/s
The maximum write performance I've seen for this VM, however, is about 50MB/s (as monitored in the vSphere client).
I've tested read performance on Host A as follows:

- `dd` a 1GB file
- `dd` a second file the same size as RAM (8GB), to push the first file out of the page cache
- Read the first file back with `dd`
The result was:
# dd if=testfile of=/dev/null bs=1K
2097152+0 records in
2097152+0 records out
2147483648 bytes (2.1 GB) copied, 190.255 seconds, 11.3 MB/s
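As an aside, on kernels 2.6.16 and later (which covers both of these guests), dropping the page cache as root is a more direct alternative to writing an 8GB eviction file. A minimal sketch, assuming a throwaway file name `readtest`:

```shell
# Create a test file, try to drop the page cache (needs root; fall
# through with a warning otherwise), then time a sequential read.
dd if=/dev/zero of=readtest bs=1M count=1024 conv=fdatasync
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || echo "need root to drop caches"
dd if=readtest of=/dev/null bs=1M
rm -f readtest
```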
I'm at a loss as to what could be causing this. Any ideas?