26

I've seen both of these listed, both being striped and mirrored across multiple drives, but is there a difference between them that I'm not picking up on?

3 Answers

37

It has to do with the order in which the operations are performed, and it only applies to arrays that are 6 disks or larger (if you have 4 disks, they're both pretty much the same).

RAID 1+0 (10): Disks 1 + 2, 3 + 4, and 5 + 6 are mirrored to create three RAID 1 arrays, and a RAID 0 array is created on top of those mirrors.

RAID 0+1 (01): Disks 1 + 2 + 3 are striped to create a RAID 0 array, and then disks 4 + 5 + 6 are striped into a second array that mirrors the first, providing the RAID 1 redundancy.

With RAID 0+1, a single disk loss from one side of the array (1,2,3 or 4,5,6) will degrade the array to a state where you are essentially running RAID 0 (which is bad).

With RAID 1+0, you can lose a single disk from each pair (1,2 or 3,4 or 5,6) and the array will stay functional. The only way this array can be brought offline is to have both disks in a pair fail.
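
To make that concrete, here is a minimal Python sketch of the two 6-disk layouts described above (the function and variable names are my own, purely for illustration):

# Model the two 6-disk layouts and test whether a given set of
# failed disks takes each array offline.

RAID10_MIRRORS = [{1, 2}, {3, 4}, {5, 6}]  # mirrored pairs, striped together
RAID01_SIDES = [{1, 2, 3}, {4, 5, 6}]      # striped halves, mirrored

def raid10_survives(failed):
    # Survives as long as no mirrored pair has lost both of its disks.
    return all(not pair <= failed for pair in RAID10_MIRRORS)

def raid01_survives(failed):
    # Survives as long as at least one striped half is completely intact.
    return any(not (side & failed) for side in RAID01_SIDES)

print(raid10_survives({1, 3, 5}))  # True: one disk lost from each pair
print(raid10_survives({1, 2}))     # False: a whole mirrored pair is gone
print(raid01_survives({1, 4}))     # False: both halves have lost a disk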

Unless your circumstances are exceptional, you should never use 0+1.

Bart Silverstrim
Mark Henderson
  • +1 avoid 0+1; it should only be used in very special cases – radius May 26 '10 at 03:30
  • Care to enlighten us on one such "exceptional" case for 0+1? I'm curious :) – Earlz May 26 '10 at 06:51
  • 7
    I can't think of any, hence why if you did have one, it would be exceptional – Mark Henderson May 26 '10 at 07:06
  • If you're using a cheap card, check the documentation to be sure of the order. I've seen cheap cards that say "RAID 10" and actually implement "RAID 01" – Chris S May 26 '10 at 13:31
  • 4
    Just one quick note, in either case the array is considered degraded after the loss of one disk, and that disk will need to be replaced ASAP as the loss of one more disk can result in data loss. The chances are 1/7 for RAID1+0, and 4/7 for RAID0+1. But in either case, 1 more failure can take down the whole array – Andrew Lowe Jul 12 '10 at 09:18
19

Raid 0+1 vs Raid 1+0 (Probability of Failure)

Here's a little bit of math that should show the differences in rates of failure. For simplicity, let's assume there is an even number of disks.

In both array configurations, each disk is broken up into blocks. In Raid 0+1, striping occurs first and then mirroring. In Raid 1+0, mirroring occurs first and then striping.

We can always partition the disks of a Raid 0+1 array into two groups (G1 and G2).
Note that I'm using 'partition' in a mathematical sense.
For n disks, we can define:
G1 = {D1, D2, ..., Dn/2}
G2 = {Dn/2+1, Dn/2+2, ..., Dn}

Raid 0+1

4 Disks:                       6 Disks:
Disk1 Disk2 Disk3 Disk4        Disk1 Disk2 Disk3 Disk4 Disk5 Disk6
----- ----- ----- -----        ----- ----- ----- ----- ----- -----
| a | | b | | a | | b |        | a | | b | | c | | a | | b | | c |
| c | | d | | c | | d |        | d | | e | | f | | d | | e | | f |
----- ----- ----- -----        ----- ----- ----- ----- ----- -----
G1 = {D1, D2}                  G1 = {D1, D2, D3}
G2 = {D3, D4}                  G2 = {D4, D5, D6}



For Raid 1+0, we can always partition the disks into n/2 groups.
Note that I'm using 'partition' in a mathematical sense.
For n disks, we can define:
G1 = {D1, D2}
G2 = {D3, D4}
...
Gn/2 = {Dn-1, Dn}

Raid 1+0

4 Disks:                       6 Disks:
Disk1 Disk2 Disk3 Disk4        Disk1 Disk2 Disk3 Disk4 Disk5 Disk6
----- ----- ----- -----        ----- ----- ----- ----- ----- -----
| a | | a | | b | | b |        | a | | a | | b | | b | | c | | c |
| c | | c | | d | | d |        | d | | d | | e | | e | | f | | f |
----- ----- ----- -----        ----- ----- ----- ----- ----- -----
G1 = {D1, D2}                  G1 = {D1, D2}
G2 = {D3, D4}                  G2 = {D3, D4}
                               G3 = {D5, D6}


Now, with that out of the way, let's get into some math!
For a failure to occur in a Raid 0+1 configuration, at least 1 hard disk from each group must die.
For a failure to occur in a Raid 1+0 configuration, all hard disks in any single group must die.

In either Raid configuration, at least two disks must die for the array to fail. Let's look at all the possible ways both Raid configurations could fail if two disks were to die.

Number of Disks (n) = 4
2 Disks Die : Raid Failure
D1D2        : R10
D1D3        : R01
D1D4        : R01
D2D3        : R01
D2D4        : R01
D3D4        : R10

With 4 disks, there are C(n, 2) = C(4, 2) = 6 combinations in total.

4/6 of these combinations would cause a Raid 0+1 configuration to fail. (66% chance of failure)
We can say that:

P1 = P (Raid 0+1 Failure | 2 Disks die) = 2/3


2/6 of these combinations would cause a Raid 1+0 configuration to fail. (33% chance of failure)
We can say that:

P2 = P (Raid 1+0 Failure | 2 Disks die) = 1/3


We can do the same test with n = 6, but I'll omit the table.

P1 = 9/15 = 3/5
P2 = 3/15 = 1/5
P3 = P (Neither configuration fails | 2 Disks die) = 4/15
P (Both configurations fail | 2 Disks die) = 1/15

With 6 disks, there are C(n, 2) = C(6, 2) = 15 possible combinations.
There is a 60% chance that a Raid 0+1 configuration fails.
There is a 20% chance that a Raid 1+0 configuration fails.
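
These counts are easy to verify by brute force. Here is a small Python sketch (the helper names are mine; the groupings match the partitions defined above) that enumerates every 2-disk failure for n = 4 and n = 6:

from itertools import combinations
from fractions import Fraction

def raid10_fails(failed, n):
    # Raid 1+0 fails when some mirrored pair {1,2}, {3,4}, ..., {n-1,n}
    # has lost both of its disks.
    return any({d, d + 1} <= failed for d in range(1, n, 2))

def raid01_fails(failed, n):
    # Raid 0+1 fails when both halves G1 = {1..n/2} and G2 = {n/2+1..n}
    # have each lost at least one disk.
    g1 = set(range(1, n // 2 + 1))
    g2 = set(range(n // 2 + 1, n + 1))
    return bool(g1 & failed) and bool(g2 & failed)

for n in (4, 6):
    combos = [set(c) for c in combinations(range(1, n + 1), 2)]
    p1 = Fraction(sum(raid01_fails(c, n) for c in combos), len(combos))
    p2 = Fraction(sum(raid10_fails(c, n) for c in combos), len(combos))
    p3 = Fraction(sum(not raid01_fails(c, n) and not raid10_fails(c, n)
                      for c in combos), len(combos))
    print(f"n={n}: P1={p1}, P2={p2}, P3={p3}")
    # Prints: n=4: P1=2/3, P2=1/3, P3=0
    #         n=6: P1=3/5, P2=1/5, P3=4/15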

Now these results can be generalized for n disks. For Raid 0+1, a fatal pair takes one disk from each half, giving C(n/2, 1) * C(n/2, 1) fatal combinations; for Raid 1+0, a fatal pair must be one of the n/2 mirrored pairs.

P1 = C(n/2, 1) * C(n/2, 1) / C(n, 2)

   = (n/2 * n/2) / (n * (n - 1) / 2)

   = (n/2 * n/2) * (2 / (n * (n - 1)))

   = (n * n / 4) * (2 / (n * (n - 1)))

   = (n / 2) * (1 / (n - 1))

   = n / (2 * (n - 1))


P2 = (n/2) / C(n, 2)

   = (n/2) / (n * (n - 1) / 2)

   = (n/2) * (2 / (n * (n - 1)))

   = 1 / (n - 1)
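
As a sanity check, both closed forms can be confirmed by brute-force enumeration. This is just a sketch with my own helper names:

from fractions import Fraction
from itertools import combinations

def brute_force(n):
    # Enumerate all C(n, 2) two-disk failures; a < b always holds here.
    half = n // 2
    combos = list(combinations(range(1, n + 1), 2))
    # Raid 0+1 dies iff the two failed disks straddle the two halves.
    fatal01 = sum((a <= half) != (b <= half) for a, b in combos)
    # Raid 1+0 dies iff the failed disks form a mirrored pair (odd d, d+1).
    fatal10 = sum(b == a + 1 and a % 2 == 1 for a, b in combos)
    return Fraction(fatal01, len(combos)), Fraction(fatal10, len(combos))

for n in range(4, 21, 2):
    p1, p2 = brute_force(n)
    assert p1 == Fraction(n, 2 * (n - 1))  # P1 = n / (2 * (n - 1))
    assert p2 == Fraction(1, n - 1)        # P2 = 1 / (n - 1)
print("closed forms verified for n = 4, 6, ..., 20")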


Now for the most useful and interesting part of the math: we can take the limits of the two equations above. Below, I use 'inf' to mean infinity.

Lim n->inf P1 = Lim n->inf n / (2 * (n - 1))     // We can use L'Hopital's rule

              = Lim n->inf 1 / 2 = 1 / 2

In other words, there will always be at least a 50% chance of failure if 2 disks die on a Raid 0+1 configuration!

Now let's see how a Raid 1+0 configuration fares.

Lim n->inf P2 = Lim n->inf 1 / (n - 1) = 0

In other words, the more disks we add to a Raid 1+0 configuration, the closer to a theoretical 0% chance of failure we get!

One final table (please note that I am rounding the values to whole percentages):

-------------------
| n   | P1  | P2  |
-------------------
| 4   | 66% | 33% |
| 6   | 60% | 20% |
| 8   | 57% | 14% |
| 10  | 55% | 11% |
| 12  | 54% | 9%  |
| 16  | 53% | 7%  |
| 20  | 52% | 5%  |
| 24  | 52% | 4%  |
| 32  | 51% | 3%  |
| 64  | 50% | 1%  |
| 128 | 50% | 0%  |
-------------------
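
The table can be regenerated from the two closed forms; the sketch below prints the values to one decimal place instead of whole percentages:

# Failure probability of each layout, given that exactly 2 disks die.
for n in (4, 6, 8, 10, 12, 16, 20, 24, 32, 64, 128):
    p1 = 100 * n / (2 * (n - 1))  # Raid 0+1
    p2 = 100 / (n - 1)            # Raid 1+0
    print(f"| {n:<3} | {p1:4.1f}% | {p2:4.1f}% |")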

Conclusion: Use Raid 1+0.

red_eight
  • I was really bored at work today when I decided to learn about Raids. I came up with these calculations and put everything into a word document. Unfortunately I don't have the rep to post images, so my tables and equations look a bit ugly. – red_eight Nov 05 '13 at 03:57
  • 6
    Honestly, I think your tables look quite nice and images would be worse. – Scott Chamberlain Nov 05 '13 at 04:53
  • 3
    On the comparison between 4-disk RAID10 vs RAID01 arrays, you have listed a D1+D4 failure and a D2+D3 failure as a total loss of the RAID01 array but data intact in the RAID10 array. This is incorrect. Both of these failure pairs will not lose data in either RAID implementation. In a 4-disk array, fault tolerance is identical between RAID10 and RAID01. It is only with larger arrays that RAID10 has better fault tolerance. – Justin L. Franks Mar 24 '16 at 00:47
  • @JustinL.Franks You're right that the data is not lost in D1+D4 or D2+D3 failures in a 4-disk RAID01. However, the array is still non-functional, since there's a missing disk from both sides of the mirror. It would need some manual work to salvage the data. In general the data has the same number of copies in both configurations and therefore the data loss probability is the same, but the RAID01 configuration has more combinations where the array is non-functional even though technically all the data is still there. – Tuomas Oct 03 '21 at 11:53
3

This belongs on ServerFault, but here is a quick overview of the difference, from Wikipedia:

RAID 10

RAID 1+0 (or 10) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "1+0" name. A RAID 1+0 array requires a minimum of four drives – two mirrored drives to hold half of the striped data, plus another two mirrored for the other half of the data. In Linux, MD RAID 10 is a non-nested RAID type like RAID 1 that only requires a minimum of two drives and may give read performance on the level of RAID 0.

RAID 01

RAID 0+1 (or 01) is a striped data set (RAID 0) which is then mirrored (RAID 1). A RAID 0+1 array requires a minimum of four drives: two to hold the striped data, plus another two to mirror the first pair.

Earlz