25

mdadm does not seem to support growing an array from level 1 to level 10.

I have two disks in RAID 1. I want to add two new disks and convert the array to a four disk RAID 10 array.

My current strategy:

  1. Make good backup.
  2. Create a degraded 4 disk RAID 10 array with two missing disks.
  3. rsync the RAID 1 array with the RAID 10 array.
  4. Fail and remove one disk from the RAID 1 array.
  5. Add the freed disk to the RAID 10 array and wait for the resync to complete.
  6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.

The problem is the lack of redundancy at step 5.

Is there a better way?

Hans Malherbe
  • Don't forget step 0. Make a good backup of everything. – Anthony Lewis Jul 21 '09 at 18:28
  • I believe your steps are correct. You lose the redundancy during the period you're copying the data from one set to another. – Kevin Kuphal Jul 21 '09 at 18:30
  • Is it possible to create a degraded 4disk RAID10? – pauska Jul 21 '09 at 19:03
  • Yes, you just use "/dev/hda missing /dev/hdb missing", because otherwise you lose one entire pair and it all falls apart. The "accepted answer" for this question, incidentally, is completely wrong and does not work. – womble Jul 25 '09 at 01:03
  • I'm also looking for a good way to do this, and I think the method described in the question is the best I found so far. Mark Turner's answer doesn't help because it creates a 2-device array that can't be reshaped to 4 devices (the other 2 can only be added as spares). And Suresh Kumar's answer is the same as described in the question, except it won't work exactly like that; the missing devices have to be the 2nd and 4th, not the 3rd and 4th. About the steps in the question: I think step 5 has full redundancy, and step 6 has redundancy for half the data. I actually see the steps were renumbered. – aditsu May 05 '10 at 14:28
  • I just migrated my raid1 to raid10 based on this answer and wrote up a very detailed step by step guide. For those that are interested you can read it [here](http://www.burgundywall.com/tech/convert-raid1-to-raid10-with-lvm/). – Kurt Apr 07 '12 at 16:09

5 Answers

11

With Linux software RAID (mdadm) you can create a RAID 10 array with only two of its four member devices present.

Device names used below:

  • md0 is the old RAID 1 array.
  • md1 is the new RAID 10 array.
  • sda1 and sdb2 are new, empty partitions (without data).
  • sda2 and sdc1 are old partitions (with crucial data).

Replace names to fit your use case. Use e.g. lsblk to view your current layout.

0) Backup, Backup, Backup, Backup oh and BACKUP

1) Create the new array (4 devices: 2 existing, 2 missing):

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing

Note that in this example layout sda1 is paired with a missing device and sdb2 is paired with the other missing device. Your data on md1 is not redundant at this point (effectively it is RAID 0 until the missing members are added).

To view the layout and other details of the created array use:

mdadm -D /dev/md1

Note! You should save the new array's configuration:

# View current mdadm config:
cat /etc/mdadm/mdadm.conf
# Add new layout (grep is to make sure you don't re-add md0):
mdadm --detail --scan | grep "/dev/md1" | tee -a /etc/mdadm/mdadm.conf
# Save config to initramfs (to be available after reboot)
update-initramfs -u

2) Format and mount. The new /dev/md1 should be immediately usable, but it needs to be formatted and then mounted.
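For example, a minimal sketch assuming an ext4 filesystem and a mount point of /mnt/raid10 (both are arbitrary choices, not part of the original procedure):

# create a filesystem on the new, still-degraded array (this touches md1 only, not the old md0)
mkfs.ext4 /dev/md1
mkdir -p /mnt/raid10
mount /dev/md1 /mnt/raid10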

3) Copy files. Use e.g. rsync to copy the data from the old RAID 1 to the new RAID 10 (the command below is only an example; read the rsync man page):

rsync -arHx / /where/ever/you/mounted/the/RAID10

4) Fail the first member of the old RAID 1 (md0), remove it, and add it to the new RAID 10 (md1):

mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md1 --add /dev/sda2

Note! This will wipe out the data on sda2. md0 should still be usable, but only if its remaining member was fully in sync.

Also note that this will start the sync/recovery process on md1. To check its status use one of the commands below:

# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1

Wait until recovery is finished.

5) Install GRUB on the new array (assuming you're booting from it). A Linux rescue/boot CD works best for this.
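One possible sketch, assuming a BIOS system with GRUB 2 on Debian/Ubuntu, booted from a rescue CD, with the root filesystem directly on /dev/md1 and sda as a boot disk (these device names and paths are assumptions; details vary with your bootloader and partition layout):

mount /dev/md1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda
update-grub
exit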

6) Boot from the new array. IF IT WORKED CORRECTLY, destroy the old array and add the remaining disk to the new array.

POINT OF NO RETURN

At this point you will destroy data on the last member of the old md0 array. Be absolutely sure everything is working.

mdadm --stop /dev/md0
# wipe the old RAID 1 metadata so the disk can be added to md1 cleanly
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md1 --add /dev/sdc1

And again - wait until recovery on md1 is finished.

# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1

7) Update mdadm config

Remember to update /etc/mdadm/mdadm.conf (remove md0).
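For example (assuming md0 appears only in its own ARRAY line; double-check the file afterwards):

# drop the old ARRAY line for md0 from the config
sed -i '/\/dev\/md0/d' /etc/mdadm/mdadm.conf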

And save config to initramfs (to be available after reboot)

update-initramfs -u
Nux
Mark Amerine Turner
  • Where do the four disks come into it? – womble Jul 21 '09 at 19:30
  • Eh? I clearly state to create the array with 2 disks, copy the data, fail the raid 1 by removing one disk, add that disk to the RAID10, then boot to the RAID10, if it worked, destroy the RAID1 and move that last disk to the new RAID.... – Mark Amerine Turner Jul 21 '09 at 19:33
  • You edited your answer after my comment. Also, your procedure gives a two-disk RAID-10 with two spares... hardly a four-disk RAID-10. RAID-10 volumes can't be reshaped, either. – womble Jul 21 '09 at 22:32
  • Not that I want to argue with you but my commands DO create a 4 disk RAID10 array. All that would need to occur after the Array syncs disks is to resize the volume. Also, if you look at the edit I made all I did was add the commands that I summarized in step 6. – Mark Amerine Turner Jul 21 '09 at 23:11
  • I ran the commands as you provided them, and I end up with a two-disk RAID-10 with two spares, as shown by /proc/mdstat. This is on kernel 2.6.30, with mdadm v2.6.7.2. – womble Jul 22 '09 at 03:55
  • What happens if you run 'mdadm --grow --raid-devices=4 /dev/md1' ? – Mark Amerine Turner Jul 22 '09 at 07:16
    "mdadm: raid10 array /dev/md1 cannot be reshaped." This is also mentioned in the mdadm manpage. – womble Jul 22 '09 at 21:32
  • What are the names of devices here? As I understand `md0` is the old RAID1 with some data in it. The `md1` is new RAID10 device. But what is `sda1`, `sdb2` used when creating `md1`? Are those new, clean partitions (from new disks)? What is `sda2` and `sdc1` are those old partitions with old data (old disks)? – Nux Jul 11 '18 at 14:11
  • No need to rsync - instead dump the data from the current ext4 (`pv < /dev/md12x > /mnt/backup-ext4`), then set up the mirror as in the guide, then restore the data with (`pv < /mnt/backup-ext4 > /dev/md12x`). – Nowaker Mar 28 '19 at 03:12
  • @MarkTurner what command would you run in "2) Format and mount. The `/dev/md1` should be immediately usable, but need to be formatted and then mounted." Something like `mkfs.ext4 /dev/md0`? But that doesn't destroy the data on the disks? – RobbieTheK Jun 11 '20 at 01:31
9

Follow the same procedure as Mark Turner's, but when you create the RAID array, specify two missing disks:

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing

Then proceed with the other steps.

In short: create the RAID 10 with four disks in total (two of them missing), copy the data and let it resync, then add the other two disks.
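For that final step, something along these lines, where the device names are only placeholders for your two remaining disks:

mdadm /dev/md1 --add /dev/sdc1 /dev/sdd1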

Sam Whited
6

Just finished going from LVM on a two-disk 2 TB mdadm RAID 1 to LVM on a four-disk RAID 10 (two original + two new disks).

As @aditsu noted, the drive order is important when creating the array.

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda missing /dev/sdb missing

The command above gives a usable array with two missing disks (add partition numbers if you aren't using whole disks). As soon as the third disk is added it will begin to sync. I added the fourth disk before the third had finished syncing; it showed as a spare until the third disk finished, then it started syncing.

Steps for my situation:

  1. Make good backup.

  2. Create a degraded 4 disk RAID 10 array with two missing disks (we will call the missing disks #2 and #4).

  3. Tell wife not to change/add any files she cares about

  4. Fail and remove one disk from the RAID 1 array (disk 4).

  5. Move physical extents from the RAID 1 array to the RAID 10 array, leaving disk 2 empty (see the LVM sketch below).

  6. Kill the active RAID 1 array, add that now empty disk (disk 2) to the RAID 10 array, and wait for resync to complete.

  7. Add the first disk removed from RAID 1 (disk 4) to the RAID 10 array.

  8. Give wife go ahead.

At step 7 I think drive 1, 2, or 4 can fail (during the resync of disk 4) without killing the array. If drive 3 fails, the data on the array is toast.
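A rough sketch of steps 5 and 6 using LVM, assuming the volume group is named vg0 and that md0/md1 are the old RAID 1 and new RAID 10 (all of these names are placeholders, not taken from the answer):

# make the new array a PV and pull it into the existing volume group
pvcreate /dev/md1
vgextend vg0 /dev/md1
# move all extents off the old RAID 1, then drop it from the volume group
pvmove -v /dev/md0 /dev/md1
vgreduce vg0 /dev/md0
pvremove /dev/md0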

mgorven
user75601
2

I did it with LVM. Initial configuration: sda2 and sdb2, with RAID 1 md1 created on top. sda1 and sdb1 were used for a second RAID 1 for the /boot partition. md1 was a PV in the volume group "space", with some LVs on it.

I added disks sdc and sdd and created partitions on them like on sda/sdb.

So:

  1. Created md10 (note the two missing members, needed for --raid-devices=4):

    mdadm --create /dev/md10 --level raid10 --raid-devices=4 /dev/sdc2 missing /dev/sdd2 missing

  2. Extended the volume group onto it:

    pvcreate /dev/md10
    vgextend space /dev/md10

  3. Moved the volumes from md1 to md10 (and waited for it to finish):

    pvmove -v /dev/md1 /dev/md10

  4. Reduced the volume group:

    vgreduce space /dev/md1
    pvremove /dev/md1

  5. Stopped array md1:

    mdadm -S /dev/md1

  6. Added the disks from the old md1 to md10:

    mdadm -a /dev/md10 /dev/sda2 /dev/sdb2

  7. Updated the configuration in /etc/mdadm/mdadm.conf (and removed the old md1 entry there):

    mdadm -E --scan >> /etc/mdadm/mdadm.conf

Everything was done on a live system, with active volumes used by KVM guests. ;)

undefine
1

I have now moved my RAID 1 to RAID 10, and while this page helped me, there are some things missing in the answers above. In particular, my aim was to keep the ext4 birth times.

The setup was:

  • two RAID 1 disks (md0) with an ext4 partition, each using an msdos (MBR) partition table
  • two fresh new disks becoming the new primaries (all the same size)
  • the result is a four-disk RAID (md127) with ext4, but due to the size I had to switch from MBR to GPT
  • it's my home disk, so no boot manager setup is required or intended
  • using my everyday Ubuntu (so: not using an external rescue disc)
  • using GParted, dd and mdadm

As everyone stated before: step zero should be a backup, because something can always go wrong in the process and result in extreme data loss.

  1. BACKUP

  2. Set up the new RAID

    1. Create the new array:

      mdadm -v --create /dev/md127 --level=raid10 --raid-devices=4 /dev/sdb1 missing /dev/sde1 missing
      

      (I found that the layout is important: with the default 'near' layout, the 2nd and 4th devices are the mirror copies.)

    2. Partition the RAID: I used GParted, setting up GPT on md127 and then adding a new ext4 partition of the size of the old one or greater.
  3. Migrate

    1. Now get the data over. I first tried rsync, which worked but failed to keep the birth times, so use dd to clone from the old RAID to the new one:

      dd if=/dev/md0 of=/dev/md127p1 bs=1M conv=notrunc,noerror,sync
      

      WAIT FOR IT
      You can check progress by sending USR1 to that process:

      kill -s USR1 <pid>
      
    2. Fix the RAID.
      GParted is a great tool: you tell it to check & fix the partition and resize it to the full size of the disk with just a few mouse clicks. ;)

    3. Set a new UUID on that partition and update your fstab with it (change the UUID).

    4. Store your RAID in the config:

      mdadm --examine --scan  >> /etc/mdadm/mdadm.conf
      

      and remove the old md0 entry:

      vim /etc/mdadm/mdadm.conf 
      
    5. Reboot if you're not on a rescue system.
  4. Destroy the old RAID

    1. Fail the first disk and add it to the new RAID:

      mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
      

      Then create a GPT label on that device, set up a new empty partition, and add it:

      mdadm /dev/md127 --add /dev/sdc1
      

      WAIT FOR IT
      you can check with

      cat /proc/mdstat
      
    2. Stop the second one (the remaining old RAID 1):

      mdadm --stop /dev/md0 
      

      Then create a GPT label on that last device, set up a new empty partition again, and add it:

      mdadm /dev/md127 --add /dev/sdd1
      

      WAIT FOR IT again

Summer-Sky