Is it safe to set up RAID 10 with 6 standard desktop drives? So just regular 7200 RPM SATA 3 disks? I understand that these disks might fail sooner than enterprise storage disks, but what other things should I be aware of?
-
Also, you may be interested in http://serverfault.com/questions/123034/how-many-disks-is-too-many-in-this-raid-5-configuration and http://serverfault.com/questions/15038/guidelines-for-the-maximum-number-of-disks-in-a-raid-set – Hubert Kario Jan 07 '12 at 19:05
-
I think the clue is in the name - Redundant Array of INEXPENSIVE Disks – symcbean Jan 08 '12 at 12:44
-
indeed.......... – Sirex Mar 02 '12 at 15:42
4 Answers
Enterprise disks won't last longer; they are of comparable quality. Only the feature set and interfaces are different.
Yes, it is safe to set up RAID 10 with 6 disks. The only situation where you have to watch out is RAID 5 on high-capacity disks with a poor non-recoverable read error rating (Why RAID 5 stops working in 2009). It may work all right with Linux MD RAID (as Linux is quite persistent about retrying to get data off the disk), but I wouldn't risk it with hardware RAID or faux-hardware RAID.
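To make the linked article's point concrete, here is a back-of-the-envelope estimate (my own illustration, using the 1-in-10^14-bits unrecoverable read error rate commonly quoted on desktop drive spec sheets, not figures from the article):

    # Rough estimate of hitting at least one unrecoverable read error (URE)
    # while re-reading the surviving disks during a rebuild. The 1e-14
    # per-bit URE rate is the figure commonly quoted for desktop drives.
    import math

    URE_PER_BIT = 1e-14
    TB_BITS = 8 * 10**12  # bits in a decimal terabyte

    def p_rebuild_hits_ure(bits_read):
        """P(at least one URE) = 1 - (1 - p)^n, computed stably."""
        return -math.expm1(bits_read * math.log1p(-URE_PER_BIT))

    # RAID 5, 6 x 2 TB: a rebuild must read all 5 surviving disks.
    print(p_rebuild_hits_ure(5 * 2 * TB_BITS))  # ~0.55
    # RAID 10, 6 x 2 TB: a rebuild only reads the failed disk's mirror.
    print(p_rebuild_hits_ure(1 * 2 * TB_BITS))  # ~0.15

That is why a large RAID 5 on such disks is the risky combination, while a RAID 10 rebuild is far less exposed.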
-
-1. This is wrong. Enterprise discs won't last longer, but they fail reads faster. Error -> fail. Desktop discs retry... quite long. The result can be devastating if the RAID controller then decides the disc is not responsive and takes it out of the RAID - while the enterprise disc will get the fail and regenerate the sector from the other discs. So, desktop discs may kill the RAID because they try to save it. Add them to a SAS bus and the retry / take-down-bus behavior gets REALLY NASTY. – TomTom Mar 02 '12 at 15:56
-
@TomTom: Yes, you're right. Enterprise (or "RAID Edition") disks have lower retry limits (in the range of 2-3 seconds), unlike desktop drives, which have much longer ones (in the range of dozens of seconds, up to minutes). That's what I had in mind when I wrote that their feature set is different. This has nothing to do with the thing the OP asked about: their *failure rate*. It's quite bizarre that I am expected to pay extra for a simple bit swap in disk firmware... a thing that is meaningless when I use Linux MD RAID, ZFS or btrfs. – Hubert Kario Mar 02 '12 at 16:56
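For the record, the retry limit being discussed here is the SCT Error Recovery Control (ERC) timer; where a drive supports it, smartmontools can read and set it. A minimal sketch (assuming smartctl is installed, and /dev/sda is just an example device):

    # Query a drive's SCT Error Recovery Control (ERC) timers via smartctl.
    # RAID-edition drives typically report a few seconds; desktop drives
    # often report "command not supported" or very long timeouts.
    import subprocess

    def read_scterc(device="/dev/sda"):
        out = subprocess.run(["smartctl", "-l", "scterc", device],
                             capture_output=True, text=True)
        return out.stdout

    print(read_scterc())
    # Where supported, the timers can be set to 7 s (units of 100 ms):
    #   smartctl -l scterc,70,70 /dev/sda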
-
Yes, but it is not so meaningless when you use a real RAID controller for performance reasons ;) I got seriously pissed off by some of my drives on an Adaptec. – TomTom Mar 02 '12 at 18:44
-
@TomTom I won't deny that in *some* workloads hardware RAID controllers are faster than software RAID (there are also workloads in which software RAID excels). But then, I don't think anyone considering desktop drives is running an IO-intensive workload... – Hubert Kario Mar 05 '12 at 13:13
-
Hmpf ;) Not sure I quite agree. I can see second-line storage - but then this is the "desktop drive enterprise version". How would you classify an enterprise VelociRaptor? I use quite a lot of them for databases - an excellent sweet spot between price and performance. – TomTom Mar 05 '12 at 13:23
-
@TomTom: True, I use them myself. That still leaves the issue of the 7200 RPM drives the OP mentioned... ;] To sum it up: on one hand there are so many different workloads and requirements that any combination is the "best" choice in at least one of them; on the other hand there's much overlap between the optimal workloads for those combinations, so there's a huge field for personal preference. Having that in mind, in SMBs you usually don't have the budget for cold-spare HW RAID controllers that are *clearly* better than SW RAID... – Hubert Kario Mar 05 '12 at 21:20
One of the other issues you may run into is IOPS. Desktop drives may not support as much throughput as enterprise drives. Depending on the vendor, enterprise drives come with a 5-year warranty, whereas desktop drives may come with "only" 3 years or some such.
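To put rough numbers on the IOPS difference: a single spindle's random-IO ceiling is roughly 1 / (average seek time + average rotational latency). The seek times below are illustrative, not vendor specs:

    # Back-of-the-envelope random-IOPS ceiling for one spindle.
    def spindle_iops(rpm, avg_seek_ms):
        avg_rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution
        return 1000 / (avg_seek_ms + avg_rotational_latency_ms)

    print(spindle_iops(rpm=7200, avg_seek_ms=9.0))    # ~76, desktop SATA
    print(spindle_iops(rpm=15000, avg_seek_ms=3.5))   # ~182, enterprise SAS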
As far as interfaces go, SATA is SATA whether it's consumer or enterprise class drive.
If this is for a business other than your own, you may want to advise the client that you recommend enterprise drives, but that they can settle for consumer-grade drives to save some money.
Posting in case others have a similar question:
67 TB using Commodity Off The Shelf (COTS) SATA controllers, consumer 1.5 TB x 45 HDDs and software RAID (the Backblaze storage pod). Truly inspiring.
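For anyone wanting to try the software-RAID route at the OP's scale rather than Backblaze's, a minimal sketch of creating a 6-disk Linux MD RAID 10 (device names are placeholders and the command is destructive - this is illustrative, not part of the linked build):

    # Create a 6-disk Linux MD RAID 10 with mdadm, driven from Python for
    # illustration. WARNING: destroys existing data on the listed devices.
    import subprocess

    devices = [f"/dev/sd{c}" for c in "bcdefg"]  # placeholder device names

    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=10", "--raid-devices=6", *devices],
        check=True,
    )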
-
This is not really a good example. BackBlaze uses an expected-failure model (as in, they expect those drives to fail on a regular basis, but the low cost means they can create a lot of redundancy), which is a luxury the OP presumably doesn't have. – HopelessN00b Nov 12 '12 at 17:48