Disks: 10 × Ultrastar SS200 960 GB SSDs (12 Gb/s SAS), tested in RAID 0, 6, and 10.
Controller: LSI Syncro 9380-8e
Filesystem: ext4 without LVM
System: CentOS 7
2x E5620 @ 2.40GHz
32GB RAM
fio-2.1.10: --iodepth=32 --ioengine=libaio --rw=randrw --bs=4k --size=10000M --numjobs=10
At the beginning of the test I get around 60k IOPS in RAID 0; after 2-3 minutes the counter falls to 2-5k IOPS.
Start:
Jobs: 10 (f=10): [mmmmmmmmmm] [10.6% done] [123.6MB/123.6MB/0KB /s] [31.7K/31.7K/0 iops] [eta 13m:40s]
After:
Jobs: 10 (f=10): [mmmmmmmmmm] [14.5% done] [4839KB/4723KB/0KB /s] [1209/1180/0 iops] [eta 14m:18s]
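For reference, the same workload expressed as an fio job file (a sketch; `time_based`, `runtime`, and `group_reporting` are additions not in my original command line) makes fio run for a fixed duration, so the sustained post-drop IOPS shows up in the final summary instead of only in the live ETA line:

```ini
; Same random-read/write job as above, but time-based.
; time_based + runtime keep it running for 10 minutes regardless of size;
; group_reporting sums the 10 jobs into one result line.
[randrw-steady]
ioengine=libaio
iodepth=32
rw=randrw
bs=4k
size=10000M
numjobs=10
time_based
runtime=600
group_reporting
```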
top output at that moment:
top - 09:19:06 up 4:45, 2 users, load average: 10.41, 5.78, 2.41
Tasks: 282 total, 2 running, 280 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 3.5 sy, 0.0 ni, 42.8 id, 52.9 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 32769764 total, 214292 free, 334168 used, 32221304 buff/cache
KiB Swap: 16515068 total, 16515068 free, 0 used. 31963788 avail Mem
I think this is low performance for 10 SSDs (60k IOPS random read/write), given that each disk alone should handle 30-40k IOPS.
I tried 2 different controllers, 3 RAID levels, and both Windows and Linux, and I get the same test result every time. What is my problem? How can I work out why performance is so low, and why I see such huge performance drops? I have heard about the SSD spare area (over-provisioning), but I still do not understand how to reconfigure it.
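From what I have read, the spare area is not a firmware setting you flip; one common way to enlarge it is to TRIM the whole drive and then partition only part of it, leaving the rest unwritten for the firmware to use. A sketch of that (my own assumption of the procedure; `blkdiscard` and `parted` are standard Linux tools, the device path is a placeholder, and this destroys all data on the drive):

```shell
#!/bin/sh
# Sketch: enlarge an SSD's effective spare area (over-provisioning)
# by trimming the whole device, then partitioning only part of it.
# WARNING: destroys all data on the target device.

overprovision() {
    dev=$1
    pct=${2:-80}   # partition this much; the rest stays unwritten as spare area
    [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }

    # TRIM every LBA so the firmware knows the whole drive is free.
    blkdiscard "$dev" || return 1

    # Partition only pct% of the disk; never write to the remainder.
    parted -s "$dev" mklabel gpt mkpart primary 0% "${pct}%"
}

# Example (would wipe the disk): overprovision /dev/sdb 80
```

This would have to be done per disk, before building the array on top. Is that the right approach here, or is the drop caused by something else (controller cache, garbage collection)?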