
I'm running CentOS 7 (XFS filesystem) on a Dell server with a PERC H700 RAID controller. Inside this server I have 6 x Samsung 850 Evo 250GB SSDs (yes, they are consumer drives, but this is a home server). I ran a dd test and am getting write speeds of around 550 MB/s, which is roughly the write speed of a single SSD, yet these drives are in RAID 10, where one would expect more.

Output of a write test:

[root@localhost]# sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.95942 s, 548 MB/s

Output of a read test:

[root@localhost]# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.171463 s, 6.3 GB/s

Would anyone be able to shed some light on whether this is an acceptable write speed? I'm rather puzzled as to what to do here. Appreciate your help :)

  • The read speed you're getting is probably coming from cache, since those drives can't operate at 6.3 GB/s. Have you verified that CentOS is seeing the disks as one logical disk? I have had issues with some Linux-based OSes not actually seeing the logical disk and instead seeing each disk individually. Use `lshw -class disk` to see whether it shows one logical disk or multiple separate disks. – KeyszerS Jan 10 '16 at 13:11
  • Is your server pathetic enough that the test is relevant? 1.1 GB copy operations on a modern server or a decent RAID controller are something that is handled in cache. Depending on configuration (I only use Adaptec), you may end up having the whole write absorbed by the cache. I would NOT do any I/O test that doesn't have (a) caching disabled and (b) a test size of at least 3+ times all memory combined (system RAM plus caches at all levels). Heck, if you spread 1.1 GB across multiple decent SSDs, the SSD write caches will handle that alone. – TomTom Jan 10 '16 at 13:34
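Building on the comment above, one possible way to reduce the influence of the Linux page cache on the write test is dd with O_DIRECT and a much larger test size. This is only a sketch: the count of 16384 x 1M (roughly 16 GB) is an arbitrary assumption and should be sized to several times the machine's RAM plus controller cache, and O_DIRECT bypasses the page cache but not the RAID controller's own cache:

[root@localhost]# sync; dd if=/dev/zero of=tempfile bs=1M count=16384 oflag=direct conv=fdatasync; sync
[root@localhost]# dd if=tempfile of=/dev/null bs=1M iflag=direct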

2 Answers


I could close this as a duplicate because there are a lot of factors that impact storage performance in Linux.

I think people have the wrong idea when they attempt to benchmark SSD performance. You should use SSDs for better random I/O performance. You're testing big-block sequential performance, which doesn't match any sort of use case except for, um, copying large files.

  • Throughput: Maximum bandwidth (likely sequential) of the array.
  • IOPS: How many I/O operations per second the array is capable of.
  • Latency: How quickly the storage subsystem can service your I/O requests.

The last two are what matter in most cases. Add to that the fact that you're using a RAID controller, so there is an element of caching at play. XFS and Linux also cache I/O, so you need to know what you're testing.
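As an illustration of that caching, here is a sketch of the read test from the question rerun with the page cache dropped first (run as root; this still leaves the RAID controller's cache in play):

[root@localhost]# sync; echo 3 > /proc/sys/vm/drop_caches
[root@localhost]# dd if=tempfile of=/dev/null bs=1M count=1024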

I'd suggest using a purpose-built tool like fio, iozone or even bonnie++ to run a proper set of benchmarks.
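For example, a rough fio invocation for 4K random writes with the page cache bypassed might look like the following (the job name, file name, size, queue depth, and runtime are arbitrary choices for illustration, not tuned recommendations):

[root@localhost]# fio --name=randwrite-test --filename=fio-testfile --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=4g --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting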

Also see: HP P410 RAID + Samsung 830 SSDs + Debian 6.0 - What performance to expect?

ewwhite

RAID 1 is 1/2 as fast at writes as a regular disk. RAID 0 is 2x as fast at writes as a regular disk.

(1/2) * 2 = 1

If you have 4 disks in RAID 10, you will get 1x the write speed and 4x the read speed. These are general numbers, not exact figures, since random vs. sequential access and other factors come into play (although not so much with SSDs).

Nick Young
    Not quite... a RAID 1 should be providing write speeds *equal* to a single drive, as the write is performed on both drives simultaneously. So that would be 4x read speed and 2x write speed compared to a single drive in this case. – JimNim Jan 13 '16 at 16:10
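For what it's worth, here is a tiny sketch of the nominal arithmetic from the comment above (each mirrored pair writes at roughly single-drive speed, while reads can be spread across all members), generalized to the question's 6 drives and assuming roughly 500 MB/s per SSD; real-world figures depend heavily on the controller and its cache:

[root@localhost]# n=6; per_drive=500; echo "write ~$((n / 2 * per_drive)) MB/s, read ~$((n * per_drive)) MB/s"
write ~1500 MB/s, read ~3000 MB/s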