I'm encountering some odd behavior and wondering if anyone has ideas what would be causing it.
Setup: 2x 2.4 GHz quad-core Opterons, 8 GB RAM, 2x 500 GB 7200 RPM SATA2 drives, with a clean minimal CentOS 7 install running no workloads (yet) and nothing installed except updates.
Here's the average write speed I'm seeing using dd (variance between min and max in parentheses):
Single Disk:
1GB @ 4K = 22.5 MB/s (1.2 MB/s)
800M @ 8K = 36.8 MB/s (0.7 MB/s)
1.6GB @ 16K = 57.3 MB/s (0.1 MB/s)
1G @ 1G = 85.6 MB/s (0.6 MB/s)
RAID0 w/ EXT4:
1GB @ 4K = 22.5 MB/s (0.4 MB/s)
800M @ 8K = 36.5 MB/s (0.7 MB/s)
1.6GB @ 16K = 55.7 MB/s (0.6 MB/s)
1G @ 1G = 89.3 MB/s (2.6 MB/s)
RAID1 w/ EXT4:
1GB @ 4K = 16.3 MB/s (0.4 MB/s)
800M @ 8K = 27.83 MB/s (0.1 MB/s)
1.6GB @ 16K = 43.0 MB/s (1 MB/s)
1G @ 1G = 56.25 MB/s (2.3 MB/s)
RAID0 w/ XFS:
1GB @ 4K = 23.6 MB/s (0.1 MB/s)
800M @ 8K = 41.75 MB/s (0.4 MB/s)
1.6GB @ 16K = 60.8 MB/s (1.2 MB/s)
1G @ 1G = 82.2 MB/s (5.7 MB/s)
RAID1 w/ XFS:
1GB @ 4K = 16.2 MB/s (0.4 MB/s)
800M @ 8K = 27 MB/s (1.5 MB/s)
1.6GB @ 16K = 43.8 MB/s (0.1 MB/s)
1G @ 1G = 54.3 MB/s (0.9 MB/s)
I'm using:
dd if=/dev/zero of=[file on the mount point of the RAID being tested] bs=[4K-1G] count=[1,100000,250000] oflag=direct
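In case it helps anyone reproduce this, here's roughly the loop I run per array (the target path is a placeholder, and the sizes here are scaled down from the real runs, which wrote ~400 MB to 1.6 GB per pass):

```shell
#!/bin/sh
# TARGET is a placeholder for a file on the mount point of the
# array under test.
TARGET=${TARGET:-./dd-test.bin}

# bs/count pairs; the real runs used 4K-1G block sizes with much
# larger counts.
for spec in "4K 1000" "8K 1000" "16K 1000" "1M 16"; do
    set -- $spec
    # oflag=direct bypasses the page cache so the disks, not RAM,
    # get measured; dd prints its throughput summary on stderr,
    # which tail picks out here.
    dd if=/dev/zero of="$TARGET" bs="$1" count="$2" oflag=direct 2>&1 | tail -n 1
    rm -f "$TARGET"
done
```

Each pass deletes the test file so the next block size starts from scratch rather than overwriting already-allocated extents.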
fio shows similar bandwidth results but shows near doubling of IOPS in RAID.
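For the fio comparison I used a job file along these lines (filename and iodepth are illustrative, not the exact values from my runs):

```ini
; Direct sequential-write job, roughly matching the dd test above.
[write-test]
ioengine=libaio
direct=1
rw=write
bs=4k
size=1g
iodepth=32
; placeholder path on the array under test
filename=/mnt/raid/fio-test.bin
```

With direct=1 and libaio, fio can keep multiple requests in flight (iodepth), which is presumably why it shows the IOPS scaling in RAID that single-request dd can't.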
Read performance is about 96 MB/s single disk and 114 MB/s RAID0 w/ EXT4.
The chipset seems to be a ServerWorks HT2100/HT1100, which is limited to the first-generation SATA rate of 1.5 Gb/s but does claim support for NCQ and the other SATA2 goodies.
Would gladly accept any ideas to make this go faster.