
I just got a new remote root-access server with two 1 TB disks in a RAID 1 configuration, running Debian (squeeze). Before installing my stuff on it, I'd like to switch to RAID 10 if I can. All the instructions I can find, for example Best way to grow Linux software RAID 1 to RAID 10, are for going from a 2-disk RAID 1 to a four-disk RAID 10. Anyone have experience of making the move I have in mind, i.e. without any extra disks?

HST
  • AFAIK, RAID 10 requires a minimum of four drives. Two drives can be a RAID 0 (stripe) or a RAID 1 (mirror), but not a RAID 10, as RAID 10 is a striped mirror. – joeqwerty Sep 24 '13 at 23:15
  • To add: RAID 10 is really RAID 1+0. You take two drives, make a RAID 1, take another two drives, make that a RAID 1, then create a new RAID 0 with these two RAID 1s. Now you have RAID 10. Many RAID controllers take care of all of this for you, but that's what it should be doing. –  Sep 24 '13 at 23:21
  • @joeqwerty, this is not true when using Linux RAID10. Linux RAID10 lets you do weird things, see: http://en.wikipedia.org/wiki/RAID#Non-standard_levels – Zoredache Sep 25 '13 at 00:00
  • @HST, while you can have a 2-disk _'RAID10'_ under Linux, why do you think you would want to? What are you trying to accomplish? I highly doubt you are going to be able to do a reshape from a RAID1 to some of the more obscure layouts. – Zoredache Sep 25 '13 at 00:03
  • @Zoredache - Not being a Linux guy, I had no idea. Thanks for the clarification. – joeqwerty Sep 25 '13 at 00:04
  • It would help us understand your question if you explained why you want to switch from RAID1 to RAID10. Or let us know if this is simply a thought experiment (Zoredache's comment about 'lets you do weird things' is an interesting point) or if you are simply trying to understand the difference between RAID1 & RAID10. – Stefan Lasiewski Sep 25 '13 at 02:22
  • @Stefan, My reasoning was simply that Raid-0 is good, because striping gives efficiency gains. Raid-1 is good, because duplication gives some hope of recovery after failure. So 0+1 makes sense -- it's a recoverable efficient configuration. – HST Sep 25 '13 at 08:13
  • I have succeeded in doing this, see comment to 2nd answer below – HST Oct 19 '13 at 12:55

3 Answers


Normally you need a minimum of four disks for a RAID 10 array.

Chris McKeown
  • This is not true with Linux software RAID10. Linux software RAID10 is not a 'true' RAID 10. Instead, it permits the user to arbitrarily define the number of copies and the striping method, in addition to many other settings. See: http://en.wikipedia.org/wiki/RAID#Non-standard_levels – Zoredache Sep 24 '13 at 23:59
  • Just prepend your answer with "Normally, you need..." :) – Stefan Lasiewski Sep 25 '13 at 02:18
  • @Zoredache Yes, but why would you want to? No gain in space from additional disks, and no performance gain since the same 2 drives and controller(s) are handling all the I/O. – fukawi2 Sep 25 '13 at 03:15
  • @fukawi2, with the MD-RAID10 and two drives it acts somewhat like a RAID1, but you can offset the copies between the two drives. So one of the copies of a block would live at the start of one drive, and the other copy would be in the middle. Under some specific, and atypical, workloads, this will result in better performance. I had a bookmark to an article with lots of benchmarks, but it appears to have disappeared from the net. – Zoredache Sep 25 '13 at 05:48
  • Wikipedia's [Non-standard RAID levels](http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10) article illustrates several 2-drive RAID-10 layouts, and claims performance gains for the 'far' layout "This offers striping performance on a mirrored set of only 2 drives." – HST Sep 25 '13 at 09:01
  • @HST If you do non-standard things, you should expect to have non-standard problems. Keep careful documentation of everything you do; you'll need it when something goes wrong. – Michael Hampton Sep 26 '13 at 03:56
  • I have succeeded in doing this. I largely worked from http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-ubuntu-10.04, with appropriate minor changes for Debian, for raid10 and for an existing RAID in place. Details on request – HST Oct 19 '13 at 12:54

The easy/safe way (rather than trying to do it in place), with a rough command sketch after the list:

  • Use fsarchiver to take copies of the filesystem(s) that will be migrated between the old array and the new array
    • Have a second backup copy of the data regardless
  • Unmount filesystems, stop and destroy the old array
  • Create the new array; see this question about raid10,f2 for some details
    • e.g. mdadm --create /dev/md0 -n2 -l10 -pf2 /dev/sda1 /dev/sdb1
  • Restore the filesystem(s) using fsarchiver
  • Check mount points and if this was your root drive, re-install the bootloader
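
A minimal command sketch of that sequence, assuming the old RAID1 is /dev/md0 built from /dev/sda1 and /dev/sdb1, the commands run from a rescue/live environment, and the fsarchiver backup lands somewhere off these two disks (the device names and paths are assumptions, adjust them to the real layout):

    # Archive the filesystem currently on the old RAID1 (keep the archive off sda/sdb)
    fsarchiver savefs /backup/root.fsa /dev/md0

    # Stop and destroy the old array, clearing the member superblocks
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda1
    mdadm --zero-superblock /dev/sdb1

    # Create the new 2-device RAID10 with the 'far 2' layout
    mdadm --create /dev/md0 -n2 -l10 -pf2 /dev/sda1 /dev/sdb1

    # Restore the filesystem onto the new array
    fsarchiver restfs /backup/root.fsa id=0,dest=/dev/md0

    # Then fix /etc/fstab and /etc/mdadm/mdadm.conf on the restored system and,
    # if this is the boot/root disk, reinstall GRUB (e.g. grub-install /dev/sda from a chroot)

The -pf2 ('far 2') layout is what gives a two-drive RAID10 its striped-read behaviour, as discussed in the comments above.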
Andrew
  • Sounds simple, indeed, _but_: where do you stand to do step 3, "create the new array"? I'm running remotely; I can't boot and run 'in' e.g. a USB-self-contained distro. – HST Sep 25 '13 at 14:48
  • @HST If you want to do this to the root of a live system, you're looking at a reshape, and based on what other comments are saying I'm not sure you're looking at the right answer to your problem. – Andrew Sep 26 '13 at 03:53

You could use btrfs, which can convert between raid levels on a mounted filesystem and supports different raid levels for data and metadata. It is a copy-on-write (CoW) filesystem with all the benefits that brings (CoW can be turned off for files/directories it's not well suited to, such as VMs and databases), it protects against bit rot, and it supports deduplication and compression. It does have its own set of problems, though: snapshots aren't recursive, which isn't intuitive; raid1 only stores two copies of the data even if you have 5 drives; btrfs-specific mount options can't be set separately for each subvolume; and some people don't trust it due to critical flaws in its raid5/6 that were discovered last year and still haven't been fixed.
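
As a rough sketch of what such a conversion looks like (the mount point /mnt and the target profile are assumptions; note that the btrfs raid10 profile needs at least four devices, so a two-disk setup is limited to profiles such as raid1):

    # Convert data and metadata to the raid10 profile on a mounted filesystem
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

    # Check how far the rebalance has got
    btrfs balance status /mnt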

Imo there is no reason to use raid5 (with today's drive sizes, a second drive failing during a rebuild is no longer extremely unlikely) or raid6 (which will have the same problem around 2019-2020, and is extremely slow, especially for writes, where it is about as fast as a single drive), and the other levels are not flawed as far as we know. Compare that to raid10, which is almost as fast as raid0 and almost as safe as raid1, and unless you are using a lot of drives the space penalty is negligible (raid10 = n/2 usable capacity, raid6 = n-2 usable capacity). Unless you have a lot of drives the difference is very small, and if you do have a lot of drives then raid6 might not be good enough anyway and you would need triple parity to be safe.
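
To put rough numbers on that capacity comparison, a tiny shell sketch (the drive counts are illustrative, and all drives are assumed to be the same size):

    # Usable capacity, in whole drives, for raid10 (n/2) vs raid6 (n-2)
    for n in 4 6 8 12; do
      echo "n=$n drives: raid10 usable=$((n / 2)), raid6 usable=$((n - 2))"
    done

At four drives the two are equal; the gap only opens up once you have many drives, which is exactly where raid6's rebuild risk starts to bite anyway.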

Parity based raid is dead.

Mr. C