
When creating classic mirrored RAIDs, it's common to place the copies of your data on disks of different kinds, i.e. to mix disks from different series and manufacturers. That reduces the risk of losing data if a whole series turns out to be faulty.

Let's assume I want to build an md-based RAID 10 from four disks made by two manufacturers. How can I ensure that each piece of data gets replicated onto disks of different types?

I know there is the possibility to specify the layout as near, far or offset (description). These layouts point in the right direction, but I'm not sure how md "sorts" the disks. That could still lead to both copies ending up on disks of the same type.

michi.0x5d
  • Please explain **why** the order does not matter. If a whole series has a failure (for example due to a design flaw) and the data gets replicated onto a disk of the same series, the data may be lost in a short amount of time (e.g. hours). – michi.0x5d Sep 18 '15 at 20:06
  • If you have a three-disk failure in this scenario, then you're _incredibly_ unlucky. Of course, the potential for being so unfortunate is one reason why you also need backups. – Michael Hampton Sep 18 '15 at 20:20
  • When using RAID 10 with four disks, tolerance of only one disk failure is "guaranteed". If the _wrong_ two disks fail, the data is lost. So if replication happens on disks of the same manufacturer/series, it's more likely that data gets lost. I'm aware that RAID is not a backup; it just makes storage with higher availability possible. – michi.0x5d Sep 18 '15 at 20:39
  • @MichaelHampton I think you are assuming RAID-1 with four replicas, but that is not what this question is about. – kasperd Sep 18 '15 at 21:42

2 Answers


If you really, really want to guarantee that you pair drives from manufacturer 1 with drives from manufacturer 2, you should probably set up the RAID 10 manually, as a nested RAID 1+0.

Run lshw -class disk as a superuser to verify the logical name and vendor of each disk.
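If lshw isn't installed, lsblk (part of util-linux) can show similar information; a minimal sketch, assuming your drivers expose vendor/model strings:

```shell
# -d lists whole disks only (no partitions); columns may be blank
# for devices whose drivers don't report vendor/model/serial.
lsblk -d -o NAME,VENDOR,MODEL,SERIAL
```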

Then create the underlying RAID 1 (mirror) devices for your RAID 10:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

Then assemble the two RAID 1 devices into a RAID 0 device:

mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

This should give you a RAID 10 where you know exactly which devices are paired with which.
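To confirm afterwards that the pairing came out as intended, the kernel's md status file and mdadm's detail view both list each array's members; a quick check, assuming the arrays were created as above:

```shell
# /proc/mdstat lists every md array with its member devices, so you can
# verify that md0 mirrors sda1+sdb1 and md1 mirrors sdc1+sdd1.
cat /proc/mdstat

# --detail prints the full member table for a single array (needs root).
mdadm --detail /dev/md0
```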

Hope that helps. =)

Kassandry
  • So in short: it's not possible to set the replication layout completely manually via md's raid 10. The solution is a raid 1+0. – michi.0x5d Sep 18 '15 at 20:43

I imagine you have probably found a way around this by now, but the following should do the trick:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda missing /dev/sdc missing

Where sda and sdc are of different manufacturers. Then add the missing disks with:

mdadm --add /dev/md0 /dev/sdb /dev/sdd

When running the mdadm --create command, order definitely seemed to matter, so I'd assume that as long as the 1st and 3rd disks (in a four-drive array) are of different manufacturers you'd get the same effect, but you should test that for yourself.
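This matches my understanding of md's default near=2 layout: the two copies of each chunk land on consecutive device slots, so in a four-device array slots 1+2 always pair and slots 3+4 always pair, and the missing placeholders reserve slots 2 and 4 for the disks added later. A small sketch of that placement rule (an illustration of the arithmetic, not mdadm output):

```shell
# For raid10 with layout near=2 over 4 devices, chunk c and its mirror
# land on consecutive slots: (2c mod 4) and (2c mod 4)+1.
# Prints the pairs 0/1, 2/3, 0/1, 2/3 -- the pairing never changes.
for chunk in 0 1 2 3; do
  first=$(( (chunk * 2) % 4 ))
  second=$(( first + 1 ))
  echo "chunk $chunk -> slots $first and $second"
done
```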

Joren Love