8 hard drives, very same model, software RAID0, terrible performance

2

I have 8 hard drives in my PC, used for storage.

  • Three of them are attached to the mainboard.
  • Four of them are attached to a controller [ SiL 3114 ]
  • One of them is attached to another controller [ SiL 3114 #2 ]

These hard drives are all the same Samsung Spinpoint model: same RPM, same size.
I wanted to make a RAID from ALL of them using Windows Server 2008 R2. However, the performance is outright terrible; it's hardly faster than a single drive.

I checked the SMART values; they are all fine.
Any ideas what might be the problem?

PS: There is a spare 320 GB WD disk in the machine, and another 320 GB WD as the system drive.

Apache

Posted 2012-05-01T12:24:26.557

Reputation: 14 755

3Can you quantify "horrible" with some benchmark results? Also, this might be better suited for superuser as software RAID over a bunch of controllers is something you might find more frequently in an enthusiast rig than a production server. Also what RAID (i.e. RAID 5 or 10) are you using? – Kyle Brandt – 2012-05-01T12:27:56.470

I tried to ask a RAID-specific question there (just about how to measure performance), but received no upvotes or answers. So I thought Server Fault may be more experienced with RAID builds. – Apache – 2012-05-01T12:29:44.223

Performance test: http://i.imgur.com/iEqSY.png | Same test, on two WD Black Caviar drives in RAID0: http://i.imgur.com/aq2Zl.jpg

– Apache – 2012-05-01T12:31:00.650

2How big is the stripe size on your array - a 64k stripe size on 8 7200RPM disks would give you a theoretical sequential read performance of around 230MB/sec if you get one stripe per revolution of the disks. SW RAID performance on windows is pretty crap at the best of times, so it's certainly within the bounds of possibility that this is a legit result for the platform. – ConcernedOfTunbridgeWells – 2012-05-01T13:39:34.493

As a corollary, H/W RAID controllers can be had quite cheaply on ebay. If you want more performance, try looking at what an adaptec, LSI or 3Ware controller would cost. Adaptec ones will go up to 1MB stripe sizes. – ConcernedOfTunbridgeWells – 2012-05-01T13:43:55.440

@ConcernedOfTunbridgeWells I'm using 64K stripe size. But the hard drives are not doing anything at this moment, so I can try any ideas. – Apache – 2012-05-01T14:02:56.870

If you have a 64 bit PCI-X or a PCIe x4 slot in your machine you could hunt around on ebay and get a suitable HW RAID card for a few hundred dollars. If you need a high-spec machine on the cheap with fast I/O, check out what you can get a HP XW9400 or a Tyan S2916 motherboard for. You can transplant the motherboard into any case with space for EATX boards. Memory is fairly cheap off ebay as well, but make sure you get the right type (DDR2 registered ECC). – ConcernedOfTunbridgeWells – 2012-05-01T14:11:38.337

@ConcernedOfTunbridgeWells I've got a very cheap desktop motherboard and CPU in the machine. I'll try making arrays per card/motherboard. If the machine doesn't end up fast, the owner will just throw it out. So uhm...yeah. Buying anything is not an option, sadly. But it's not my money or my project. – Apache – 2012-05-01T17:19:16.173
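
If you do split the disks into per-controller arrays, one rough way to compare their sequential read speed on the Windows side is the built-in WinSAT tool, run from an elevated command prompt (a sketch, not a definitive benchmark; D: and E: are placeholder drive letters for the hypothetical per-controller volumes, and WinSAT may not be available on Server 2008 R2 without the Desktop Experience feature):

rem Sequential-read benchmark of the volume on the motherboard ports (placeholder letter D:)
winsat disk -seq -read -drive d
rem Repeat for the volume on the SiL 3114 controller (placeholder letter E:)
winsat disk -seq -read -drive e

If each volume on its own reads far faster than the combined 8-disk array does, that points at a shared-bus bottleneck rather than the disks themselves.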

Answers

5

You clearly have a bus bottleneck problem: your motherboard can't handle more than one full-speed transfer at a time.
You can test it:
- boot a Linux live CD
- run a dd test against 2 disks

# Flush pending writes and drop the page cache so the reads actually hit the disks (run as root)
sync; echo 3 > /proc/sys/vm/drop_caches
# Read 5 GB from two disks in parallel and compare the throughput dd reports
(dd if=/dev/sda of=/dev/null bs=1M count=5000 &); dd if=/dev/sdb of=/dev/null bs=1M count=5000

You will probably see a combined transfer rate of about 120 MB/s.
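
To narrow it down further, you can repeat the same test with one disk from each controller instead of two disks on the same one (a sketch; the /dev/sdX names below are placeholders, so check which device sits on which controller first):

# List disks by physical path to see which controller each /dev/sdX hangs off
ls -l /dev/disk/by-path/
# Read one motherboard-attached disk and one SiL-attached disk in parallel (sda/sde are placeholders)
sync; echo 3 > /proc/sys/vm/drop_caches
(dd if=/dev/sda of=/dev/null bs=1M count=5000 &); dd if=/dev/sde of=/dev/null bs=1M count=5000

If the combined figure stays pinned at roughly the same level no matter which pair you pick, the bus, not the disks, is the limit.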

Gregory MOUSSAT

Posted 2012-05-01T12:24:26.557

Reputation: 1 031

2Exactly. Half the I/O is to the controller with half the drives on it. That controller is on a 32-bit, 33MHz bus with a maximum theoretical rate of 133MB/s and a realistic transfer rate of 100MB/s. So 200MB/s is his I/O limit. (Worse if the two SIL controllers share a PCI bus, which they probably do.) – David Schwartz – 2012-05-01T20:20:16.087
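
A quick way to confirm whether the two SiL 3114 cards really sit on the same 32-bit/33MHz PCI bus is to look at the PCI topology from the same live CD (a sketch; the grep pattern is only an illustration of how such cards usually identify themselves):

# Show the PCI device tree; devices hanging off the same bridge share that bus's ~133 MB/s (4 bytes x 33 MHz)
lspci -tv
# List the SATA/RAID controllers with their bus:device.function addresses
lspci | grep -i 'sil\|sata\|raid'

If both controllers show up under the same bus number, all of their drives are competing for that single ~100 MB/s of realistic bandwidth, which matches the benchmark result above.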