
I have a system already configured with RAID 1 on 2 disks:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1             918347072 249416692 621528596  29% /var
/dev/md0               9920532    160640   9247828   2% /tmp

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc5[0] sdb5[1]
      948035712 blocks [2/2] [UU]

md0 : active raid1 sdc2[0] sdb2[1]
      10241344 blocks [2/2] [UU]

I have been asked to change this into RAID 0+1 (stripes are mirrored, http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_0.2B1 )

How can I convert these arrays to RAID 0+1 after adding 2 more disks, without reformatting?

seaquest

1 Answer


The Linux software MD RAID10 personality is not exactly the same as the standard nested RAID 1+0, 0+1, or 10. Also, AFAIK, it simply is not possible to reshape a RAID1 into a RAID10.

If you plan on using the RAID10 personality, then ignore all that 0+1 vs. 1+0 stuff, since you don't really get a choice with MD. Your bigger question is the near|far|offset layout choice, which determines how the chunks are distributed among the various disks in the volume.
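For illustration only, here is a hedged sketch of how the layout is chosen at creation time (/dev/md2 and the /dev/sd[w-z]1 partitions are placeholder names, not devices from your system):

# mdadm --create /dev/md2 --level=10 --layout=n2 --raid-devices=4 \
      /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1

Here --layout=n2 means "near, 2 copies"; f2 or o2 would select the far or offset layout instead.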

As dafydd alluded to in his comment, if you had LVM on top of your RAID devices, you would be able to set up an additional RAID1 volume with your two new disks, add it as a PV, and then use LVM to do the striping. But from your df output it doesn't look like you have LVM in place.
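If LVM had been in place, the rough sequence would look something like this (a sketch under that assumption; vg0 and the new-disk partitions sdd5/sde5 are hypothetical names):

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd5 /dev/sde5
# pvcreate /dev/md2
# vgextend vg0 /dev/md2
# lvcreate --stripes 2 --stripesize 64 --size 100G --name data vg0

The last command creates a new LV striped across both mirrored PVs; existing LVs would need to be moved or recreated to gain the striping.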

It seems like it would be very dangerous, but it might be possible to create a new RAID10 with 4 disks, with two of the disks marked as missing (basically a RAID0). Copy the data over to the new RAID10, then add the disks from the existing RAID1 to the RAID10. But your data would effectively be on a RAID0 until the rebuild completes after adding the two disks from the old RAID1. I believe you have to use a near-style layout for this to work, though; see the sketch below.
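A hedged sketch of that migration, with /dev/sdd5 and /dev/sde5 standing in for partitions on the two new disks and /mnt/new as a temporary mountpoint (all of these names are assumptions):

# mdadm --create /dev/md2 --level=10 --layout=n2 --raid-devices=4 \
      /dev/sdd5 missing /dev/sde5 missing
# mkfs.ext4 /dev/md2
# mount /dev/md2 /mnt/new
# rsync -aHAX /var/ /mnt/new/
# umount /var
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sdb5 /dev/sdc5
# mdadm /dev/md2 --add /dev/sdb5 /dev/sdc5

With the near layout, adjacent devices form mirror pairs, which is why each real disk is followed by the keyword missing. The data is unprotected until the final resync completes.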

See this answer for the procedure. https://serverfault.com/a/101135/984

Zoredache