
We have a new storage array consisting of 24 x 600GB 10K SAS disks arriving next week, and I'm trying to decide how best to carve up the available space for our 3-node VMware vSphere cluster which will be accessing the array over 8Gb FC with fully redundant multipathing.

We have two main workloads: in-house MySQL and Exchange 2010 servers, which I'll class as high-I/O, and Windows domain controllers and a fileserver, which I'll class as low-I/O.

My initial plan was to split the array with 6 disks in RAID10 and the other 18 in RAID50, using either three 6-disk RAID5 silos or six 3-disk silos. The enclosure doesn't have a "hot spare", but we've ordered an extra disk as an on-site "cold spare".
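For reference, here's the rough usable-capacity arithmetic behind my thinking (just a sketch: it assumes 600GB raw per disk and ignores formatted-capacity overhead):

```python
# Usable-capacity comparison for the planned split
# (a rough sketch: 600 GB raw per disk, formatting overhead ignored)

DISK_GB = 600

def raid5_usable(disks_per_silo, silos):
    """RAID5 loses one disk's worth of capacity per silo to parity."""
    return (disks_per_silo - 1) * DISK_GB * silos

def raid10_usable(disks):
    """RAID10 mirrors everything, so usable space is half the raw total."""
    return disks * DISK_GB // 2

print(raid10_usable(6))      # 6-disk RAID10 portion       -> 1800 GB
print(raid5_usable(6, 3))    # three 6-disk RAID5 silos    -> 9000 GB
print(raid5_usable(3, 6))    # six 3-disk RAID5 silos      -> 7200 GB
print(raid10_usable(18))     # the same 18 disks as RAID10 -> 5400 GB
```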

Now, this works in principle, but I'm unsure how safe it will be in practice. I've read several articles, and although the increased space efficiency sways me towards RAID50, several posts I've seen essentially say that RAID50 (along with RAID5) has been deprecated in the industry due to unreliability and failure risk.
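To put a number on the failure-risk side, here's a back-of-the-envelope estimate of the chance of hitting an unrecoverable read error (URE) during a RAID5 rebuild. The 1-in-10^16 error rate is an assumption taken from typical enterprise SAS spec sheets, not a measured figure for these disks:

```python
import math

# Chance of hitting at least one URE while rebuilding a degraded RAID5 silo.
# Assumes a 1-in-1e16 unrecoverable read error rate per bit (typical of
# enterprise SAS spec sheets; consumer SATA is often 1-in-1e14).

URE_RATE = 1e-16
DISK_BITS = 600e9 * 8   # 600 GB per disk, in bits

def rebuild_ure_probability(disks_in_silo):
    """A rebuild must read every surviving disk in the silo in full."""
    bits_read = (disks_in_silo - 1) * DISK_BITS
    # log1p/expm1 keep the result accurate for such a tiny per-bit rate
    return -math.expm1(bits_read * math.log1p(-URE_RATE))

print(f"{rebuild_ure_probability(6):.2%}")   # 6-disk silo: ~0.24%
print(f"{rebuild_ure_probability(3):.2%}")   # 3-disk silo: ~0.10%
```

That per-rebuild risk seems to be what the "RAID5 is deprecated" articles are driving at: modest on 600GB enterprise SAS, but it grows with silo size and disk capacity.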

Am I being unnecessarily paranoid, and if not, should I use RAID10 silos instead of RAID50?

Craig Watson
    What type of storage array is this? – ewwhite Oct 01 '14 at 17:13
  • @ewwhite it's an Infortrend ESDS 3000 series, I don't have the exact model to hand unfortunately – Craig Watson Oct 01 '14 at 17:15
    Relevant: [What counts as a 'large' raid 5 array?](http://serverfault.com/q/591777/33417) and [What are the different widely used RAID levels and when should I consider them?](http://serverfault.com/q/339128/33417) – Chris S Oct 01 '14 at 17:36

1 Answer


I rarely use nested RAID levels like RAID50 and RAID60 these days, and if I do, it's usually part of a software RAID solution like ZFS. A lot of this is due to better methods of avoiding high spindle counts and the availability of larger disks.

Controller capabilities:

This is the biggest factor, as many controllers don't support RAID50 or RAID60. It appears as though the Infortrend does; however, that doesn't mean it handles them well.

Also, many controllers limit the number of drives that can comprise a single RAID volume (e.g. the LSI MegaRAID 16-disk limit), so that kinda makes the decision for you in some cases.

VMware:

Virtualization I/O is a pretty mixed random read/write pattern. It's typically low on throughput, and assuming you're on 8Gb Fibre Channel, most RAID levels can work comfortably within the configuration you describe.
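If you want to sanity-check the spindle maths, the classic write-penalty rule of thumb looks like this (a rough sketch only; the ~140 IOPS per 10k spindle and the 70/30 read/write split are assumptions, so measure your actual workload):

```python
# Front-end IOPS estimate per RAID level, using the classic write-penalty
# rule of thumb. The ~140 IOPS per 10k spindle and the 70/30 read/write
# mix are assumptions -- measure your actual workload before trusting this.

SPINDLE_IOPS = 140
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def frontend_iops(disks, level, read_pct=0.70):
    raw = disks * SPINDLE_IOPS
    # each front-end write costs WRITE_PENALTY[level] back-end operations
    return raw / (read_pct + (1 - read_pct) * WRITE_PENALTY[level])

for level in ("RAID10", "RAID5", "RAID6"):
    print(f"24 disks as {level}: ~{frontend_iops(24, level):.0f} IOPS")
```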

General config tips:

  • I try to use RAID 1+0 wherever I can.
  • If I use RAID 5, it will be on 8 or fewer enterprise SAS disks (10k or 15k, up to 900GB).
  • If using larger nearline or SATA drives, I recommend RAID 10 or RAID 6.

If I were doing this, I'd break things up into appropriately-sized RAID 1+0 groups (two groups), or one RAID 1+0 group and one RAID 5 group with hot spare(s), with the caveat of no more than 8 disks in the RAID 5 group.
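To put rough numbers on those two options (same caveats as above: 600GB raw per disk, no formatting overhead, and the 12+12 and 14+8+2 splits here are illustrative, not a recommendation):

```python
# Usable space for the two suggested layouts (sketch only: 600 GB raw
# per disk, formatting overhead ignored; the 12+12 and 14+8+2 splits
# below are illustrative, not a recommendation)

DISK_GB = 600

def raid10(disks):
    return disks * DISK_GB // 2

def raid5(disks):
    return (disks - 1) * DISK_GB

# Option A: all 24 disks as two RAID 1+0 groups
print(raid10(12) + raid10(12))    # 7200 GB total

# Option B: 14-disk RAID 1+0 for high-I/O VMs, 8-disk RAID 5 for
# low-I/O VMs, and 2 disks held back as hot spares
print(raid10(14), raid5(8))       # 4200 GB + 4200 GB
```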

ewwhite
  • Follow-up question: what difference/risk is there using a single RAID50 with a single VMFS partition over using multiple RAID5 arrays with a VMFS on each? Seems to me like they're "the same but different" - unsure which is "better"? – Craig Watson Oct 02 '14 at 09:53
  • @CraigWatson I don't understand. Are you asking about the difference between RAID 50 and RAID 5? If so, the failure modes are different, the performance profiles are different, and I suspect that the controller you have wouldn't handle it very well. – ewwhite Oct 02 '14 at 09:55
  • I guess I'm asking pros/cons of 1 x R50 (with 4 x 5-disk sub-R5s) vs 4 x 5-disk R5s - possibly beyond the scope of a comment ;) – Craig Watson Oct 02 '14 at 11:07