
I've got software RAID 1 for / and /home, and it seems I'm not getting the speed I expected out of it.

Reading from md0 I get around 100 MB/sec. Reading from sda or sdb I get around 95-105 MB/sec.

I thought I would get more read speed from two drives, but I don't know what the problem is.

I'm using kernel 2.6.31-18.

hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   2078 MB in  2.00 seconds = 1039.72 MB/sec
 Timing buffered disk reads:  304 MB in  3.01 seconds = 100.96 MB/sec

hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   2084 MB in  2.00 seconds = 1041.93 MB/sec
 Timing buffered disk reads:  316 MB in  3.02 seconds = 104.77 MB/sec

hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   2150 MB in  2.00 seconds = 1075.94 MB/sec
 Timing buffered disk reads:  302 MB in  3.01 seconds = 100.47 MB/sec

Edit: RAID 1

Jure1873

3 Answers


Take a look at the following article at nixCraft: HowTo: Speed Up Linux Software Raid Building And Re-syncing.

It explains the different settings in /proc that can be adjusted to influence software RAID speed (not just during building/syncing, as the title suggests).
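
For example, the relevant tunables can be inspected and adjusted from the shell roughly as follows (a sketch; the values are only illustrative, not tuned recommendations for your hardware):

# resync speed limits, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# read-ahead on the md device also influences sequential read speed
blockdev --getra /dev/md0
blockdev --setra 4096 /dev/md0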

David

What kind of RAID?

Any combination of 0 and 1 will give no great improvement in non-concurrent benchmarks, either for latency or bandwidth. RAID 3/5 should give better bandwidth but no difference in latency.

C.

symcbean
  • sorry I forgot to specify it's RAID 1. I was under the impression that RAID 1 should give at least 20% more speed because it can read from both drives in parallel. – Jure1873 Jul 07 '10 at 16:46
  • If it were hardware RAID then it may be able to optimize seeks much more effectively than software RAID. That's not to say that it's impossible with software RAID - but it would need to be very smart, and it would only really be of benefit for machines running small numbers of tasks concurrently. I expect you would see a difference if you were running two (or more) instances of the benchmark concurrently (it would probably show slower stats than running one at a time - but not twice as slow). – symcbean Jul 08 '10 at 14:24
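
A concurrent run along the lines of that comment might look like this (a sketch; the offsets and sizes are arbitrary, and the device name is taken from the question):

# read two different 1 GB regions of the array in parallel, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct &
dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=8192 iflag=direct &
wait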

The problem is that, in spite of your intuition, Linux software RAID 1 does not use both drives for a single read operation. To get a speed benefit, you need to have two separate read operations running in parallel.

Reading a single large file will never be faster with RAID 1.

To get the same level of redundancy with the expected speed benefit, you need to use RAID 10 with a "far" layout. This stripes the data and mirrors it across the two disks. Each disk is divided into segments; with two segments, the stripes in drive 1, segment 1 are copied to drive 2, segment 2, and drive 1, segment 2 is copied to drive 2, segment 1. Detailed explanation.
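
For reference, such an array could be created with mdadm along these lines (a sketch; the partition names are placeholders, and mdadm --create destroys any existing data on them):

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1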

As you can see from these benchmarks, RAID10,f2 gets read speeds similar to RAID 0:

   RAID type      sequential read     random read    sequential write   random write     (all values in MB/s)
   Ordinary disk       82                 34                 67                56
   RAID0              155                 80                 97                80
   RAID1               80                 35                 72                55
   RAID10,n2           79                 56                 69                48
   RAID10,f2          150                 79                 70                55

f2 simply means far layout with 2 segments.

Furthermore, in my personal tests, I found that write performance suffered. Notice that the above benchmarks suggest that with RAID10,f2 the write speed should be nearly equivalent to a single disk; I was getting almost a 30% decrease in speed. After much experimentation, I found that changing the I/O scheduler from cfq to deadline fixed the issue.

echo deadline > /sys/block/md0/queue/scheduler

Here is some more information: http://www.cyberciti.biz/faq/linux-change-io-scheduler-for-harddisk/
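
To see which scheduler is currently active, and to apply the same change to the underlying member disks as well, something like this should work (a sketch using the device names from the question):

# the scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler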

With this setup, you should be able to get sequential reads of about 185-190 MB/s.

Swoogan