4

Which is more standard in large enterprises: RAID5 or RAID10?

Hugh Perkins
  • 1,065
  • 7
  • 9

8 Answers

15

They both are. It depends on the application using the array, the number of disks in the RAID group, and the I/O requirements of the applications sitting on them.

For example, a file server probably doesn't need RAID 10, as the bulk of the data is just sitting there with a few users opening and closing files throughout the day.

An OLTP database may need a combination of RAID 5 and RAID 10, while a data warehouse will probably need all RAID 10.
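
To put rough numbers on that, here's a back-of-the-envelope sketch of how the RAID 5 write penalty eats into random-I/O throughput. The disk count, per-disk IOPS, read/write mixes, and the textbook write penalties of 4 (RAID 5) and 2 (RAID 10) are all assumed example figures, not measurements, and controller caching is ignored:

```python
# Rough effective random-IOPS estimate for a small array.
# Assumptions: 8 disks, ~180 random IOPS per spindle, a write penalty of
# 4 for RAID 5 (read data + parity, write data + parity) and 2 for RAID 10
# (write to both mirrors). Controller cache is ignored entirely.

DISKS = 8
IOPS_PER_DISK = 180
RAW_IOPS = DISKS * IOPS_PER_DISK

def effective_iops(read_fraction, write_penalty):
    """Host-visible IOPS once the write penalty is accounted for."""
    write_fraction = 1.0 - read_fraction
    return RAW_IOPS / (read_fraction + write_fraction * write_penalty)

for workload, read_fraction in [("file server (90% read)", 0.9),
                                ("OLTP (70% read)", 0.7),
                                ("write-heavy load (50% read)", 0.5)]:
    r5 = effective_iops(read_fraction, 4)
    r10 = effective_iops(read_fraction, 2)
    print(f"{workload}: RAID 5 ~{r5:.0f} IOPS, RAID 10 ~{r10:.0f} IOPS")
```

The read-heavy file-server case barely notices the difference, while the write-heavy mixes do, which is the same point as above.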

mrdenny
  • 27,074
  • 4
  • 40
  • 68
6

I work directly with a customer base of about 2,500 medium to very large businesses (think Sony-size), and currently RAID 5 is more common from what I see in their configs. That is, however, likely due to a few factors, like people still using older RAID controllers that don't support RAID 10, people still thinking disk space is expensive, and the fact that with caching and such, web content often doesn't need to be sitting on a RAID 10 when a RAID 5 will do.

The most common configs that I see are something like RAID 5 for web content, RAID 1 for OS drives, RAID 10 for database data files, and sometimes a RAID 0 for something like tempDB.

RAID 5 vs RAID 10 isn't really an either/or thing. You need to look at the application and figure out what is best for the use case.

phoebus
  • 8,370
  • 1
  • 31
  • 29
  • "In my ideal world it would all just be RAID 10 I think". I'm glad it is *your* ideal world. Depending on the nature of the application, RAID10 can have a performance hit compared RAID5. Take IBM's(Rocket now) U2 database servers. RAID5 can have up to 50% increase in read performance for a correctly configured RAID5 as opposed to the same disks as RAID10. **Horses for Courses**. Do some research on how the disks will be used and which RAID configuring matches your goals best. – Dan McGrath Dec 02 '09 at 08:35
  • Hi Dan, I think you misunderstood me. I was more trying to make a tongue-in-cheek comment about being lazy and slapping configs together. I can see how the tone isn't really conveyed though, so I'll remove it. – phoebus Dec 02 '09 at 08:57
  • I feel this answer gives me exactly the information I was looking for in order to decide which raid type to use for a specific task, and some history on why current configurations are the way they are. – Hugh Perkins Dec 03 '09 at 02:49
  • RAID 0 shouldn't ever be used on a database server, even for tempdb. If you lose a single disk, the SQL Server will be down until someone goes and puts hands on the server to replace the disk and put the array back together. – mrdenny Dec 03 '09 at 23:33
  • I was telling you what I see, not what I recommend. Incidentally, failover can be implemented for tempdb pretty easily. – phoebus Dec 04 '09 at 00:35
4

It shouldn't matter -- you shouldn't pick technologies because they're popular, you should pick them because they work. On the other hand, if you're trying to do a rigorous study on RAID levels, a question on serverfault isn't exactly the right way to go about it.

Having said all that, I would say that without a doubt RAID 5 is the more popular of those two choices. Plenty of hardware RAID cards don't support RAID 10 (although thankfully these are becoming far less common than they used to be), and lots of people don't like the idea of wasting half their disk space (because they're stuck in the days when disk platters were actually expensive).

womble
  • 95,029
  • 29
  • 173
  • 228
  • (and lots of applications perform better on RAID5). RAID10 isn't superior, RAID5 isn't superior. They both have their uses depending on many issues. See Mr Denny's answer – Dan McGrath Dec 02 '09 at 08:40
  • "many issues" -- like whether or not you actually want to keep your data. 2TB SATA drives take a *long* time to rebuild in a RAID5 array. – womble Dec 02 '09 at 20:48
3

I believe RAID 5 is increasingly considered nigh-on-useless, due to the time taken to rebuild a large array after a single disk fails, and the risk of a second failure (i.e. a catastrophe!) during this time.

We just switched to RAID 6 -- I considered RAID 10, but depending on which second disk fails, that still feels prone to significant loss during a rebuild...
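
To put rough numbers on that rebuild window, here's a quick sketch. The drive size, sustained rebuild rate, array size, and the commonly quoted 1-per-1e14-bits unrecoverable read error rate for consumer SATA are all assumed example figures, not measurements from any particular array:

```python
import math

# Back-of-the-envelope look at the RAID 5 rebuild window.
# Assumptions: 2 TB drives, ~50 MB/s sustained rebuild rate (an idle array
# rebuilds faster, a busy one slower), and an unrecoverable read error
# (URE) rate of 1 per 1e14 bits read.

DRIVE_BYTES = 2 * 10**12      # 2 TB drive
REBUILD_RATE = 50 * 10**6     # bytes/second the rebuild can sustain
URE_PER_BIT = 1e-14           # unrecoverable read errors per bit read

rebuild_hours = DRIVE_BYTES / REBUILD_RATE / 3600
print(f"Rebuilding one 2 TB drive takes roughly {rebuild_hours:.0f} hours")

# During a RAID 5 rebuild every surviving drive is read end to end.
# For a 6-drive set that's 5 full drive reads; one URE there means trouble.
surviving_drives = 5
bits_read = surviving_drives * DRIVE_BYTES * 8
p_hit_ure = 1 - math.exp(-bits_read * URE_PER_BIT)
print(f"Rough chance of hitting a URE during the rebuild: {p_hit_ure:.0%}")
```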

James Green
  • 895
  • 1
  • 9
  • 23
  • +1 for "but depending which second disk fails, that still feels prone to significant loss..." I stress this in my storage presentations. – mrdenny Dec 03 '09 at 23:31
1

At the institute I work at, we have petabytes of RAID 6 (i.e. double-parity RAID 5), while we only have terabytes of RAID 10.

James
  • 2,212
  • 1
  • 13
  • 19
1

"More Standard" isn't a clear enough question, certainly you'll find MORE R5 in large organisations simply because it's been around longer and is supported in more, and older, array controllers. R10/01 is however becoming more prevalent and important to those looking for consistent database performance.

Chopper3
  • 100,240
  • 9
  • 106
  • 238
0

I've a personal preference for RAID 10, but RAID 5 does have advantages. OK, so storage isn't expensive, but the number of drive slots in your server is limited, and RAID 5 will just give you more usable storage for any given number of drives.
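
To make the slot trade-off concrete, here's a trivial sketch; the slot count and drive size are just example assumptions:

```python
# Usable capacity from a fixed number of drive slots (example figures).
# RAID 5 gives up one drive's worth of space to parity; RAID 10 gives up
# half the raw space to mirroring.

SLOTS = 8        # assumed drive bays available
DRIVE_TB = 1     # assumed drive size

raw_tb = SLOTS * DRIVE_TB
raid5_tb = (SLOTS - 1) * DRIVE_TB
raid10_tb = raw_tb // 2

print(f"Raw capacity:   {raw_tb} TB")
print(f"RAID 5 usable:  {raid5_tb} TB")
print(f"RAID 10 usable: {raid10_tb} TB")
```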

Otherwise, what everyone else said.

Maximus Minimus
  • 8,937
  • 1
  • 22
  • 36
0

Most of what I've seen is RAID 10 for the drives in the server itself, with the box then connected to some form of SAN for the "actual" storage.

Or skip RAID entirely (from the "machine's" perspective) when looking at virtualization on SANs.

warren
  • 17,829
  • 23
  • 82
  • 134