Expand a Linux MD RAID 10 array to use larger disks


From what I understand this is possible, but I can't find a straight answer anywhere about exactly how to go about it, and I don't want to risk losing data by experimenting myself, so I'm asking here.

I have a home server with five disks running CentOS. One is an SSD holding the OS; the remaining four are 4 TB hard drives configured as RAID 10 with mdraid. The filesystem in use is XFS.

I'm considering trying to replace the 4TB disks with 8TB ones. What exactly needs to be done to make this replacement happen without having to reconfigure a fresh RAID and lose data?

Output of mdadm -D:

[root@fluttershy ~]# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Mon Apr 18 12:46:24 2016
     Raid Level : raid10
     Array Size : 7813771264 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jun 13 11:04:41 2016
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : fluttershy:data  (local to host fluttershy)
           UUID : aa8f857a:g8bd0344:06d2f6d3:bac01a46
         Events : 13440

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1
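
For reference, the sizes reported above are internally consistent: for RAID 10 with the near=2 layout, usable capacity is the per-device size times the number of devices, divided by two. A quick sanity check using the figures from the output (the variable names are just for illustration):

```shell
# Sanity-check the sizes reported by mdadm -D: for RAID 10 with the
# default near=2 layout, usable size = per-device size * devices / 2.
used_dev_size_kib=3906885632   # "Used Dev Size" (KiB) from the output above
raid_devices=4
array_size_kib=$(( used_dev_size_kib * raid_devices / 2 ))
echo "$array_size_kib"         # 7813771264, matching the reported "Array Size"
```

The same arithmetic predicts what the grown array will report after the 8 TB swap: roughly double each figure.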

Kefka

Posted 2016-06-14T11:05:05.047

Reputation: 951

Answers

Complete the following steps for each disk; replace /dev/sda1 with other disks as necessary. You must complete all of these steps for one disk before you can proceed to the next disk.

  • Mark the disk as failed so that MD stops using it: mdadm --manage /dev/md127 --fail /dev/sda1
  • Remove the disk from the array: mdadm --manage /dev/md127 --remove /dev/sda1
  • Physically replace the disk.
  • Partition the new disk with a single partition spanning the entire disk. Note that an 8 TB disk needs a GPT partition table (MBR tops out at 2 TiB); the Linux RAID wiki recommends partition type 0xDA ("Non-FS data", DA00 in gdisk) for arrays assembled from mdadm.conf rather than by kernel autodetect.
  • Add the new disk to the array: mdadm --manage /dev/md127 --add /dev/sda1

MD will rebuild the array once you add the replacement disk. Make sure the rebuild is complete before you proceed to the next disk. You can check the status of the array by running cat /proc/mdstat.
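
The per-disk cycle above can be sketched as a short script. Everything here only *prints* the commands (via the `run` wrapper) so you can review them first; remove the `echo` to execute for real. `/dev/md127` and `/dev/sda1` are taken from the question.

```shell
#!/bin/sh
# Sketch of one replacement cycle. run() only prints each command for
# review; delete the "echo" to actually execute them as root.
ARRAY=/dev/md127
DISK=/dev/sda1

run() { echo "$@"; }

run mdadm --manage "$ARRAY" --fail "$DISK"
run mdadm --manage "$ARRAY" --remove "$DISK"
# ...power down, physically swap the drive, partition it, then:
run mdadm --manage "$ARRAY" --add "$DISK"

# Wait for the rebuild to finish before starting on the next disk:
# while grep -q recovery /proc/mdstat; do sleep 60; done
```

Repeat the whole cycle once per disk, adjusting `DISK` each time.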

Once all of the disks have been replaced and the array rebuilt, expand the array to the maximum capacity of the new disks with mdadm --grow /dev/md127 --size=max. You can then resize the filesystem to fill the expanded RAID; for XFS, run xfs_growfs against the mounted filesystem's mount point.
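
Concretely, the final growth step looks like this. The mount point is an assumption (find yours with `findmnt /dev/md127`), and as a precaution the commands are only printed here for review:

```shell
# Final step once all four 8 TB disks are in place and synced.
# MOUNTPOINT is hypothetical -- xfs_growfs takes the mount point of the
# *mounted* filesystem, not the md block device itself.
ARRAY=/dev/md127
MOUNTPOINT=/mnt/data

grow_cmds() {
  echo "mdadm --grow $ARRAY --size=max"
  echo "xfs_growfs $MOUNTPOINT"
}
grow_cmds   # prints the two commands; run them as root when ready
```

With no size argument, xfs_growfs grows the filesystem to use all newly available space.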

More information on how to grow an MD array is available on the Linux RAID wiki.

As with any other disk manipulation task, you should take a backup before you begin.

bwDraco

Posted 2016-06-14T11:05:05.047

Reputation: 41 701

Maybe add how you check the rebuild progress, e.g.: tim@MushaV3 ~ $ cat /proc/mdstat → Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] / md1 : active raid1 sdb1[0] sda1[1] / 131008 blocks [2/2] [UU] / bitmap: 0/1 pages [0KB], 65536KB chunk – djsmiley2k TMW – 2018-05-07T13:25:30.577

First, you'd want to swap out each disk one by one.

To do this you'd 'fail' each disk and replace it with its new 8 TB replacement. In fact, if you have spare ports, you can add the new disks first and have mdadm 'replace' each old disk in place, instead of removing a disk from the array and running with a higher risk of failure during the rebuild.
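
The in-place route uses mdadm's --replace, which copies onto the new member while the old one stays active, so redundancy is never reduced. A sketch, with /dev/sde1 as a hypothetical partition on the new 8 TB disk (the commands are only echoed here so nothing runs by accident):

```shell
# Lower-risk replacement when a spare port is available: add the new
# partition as a spare, then mirror onto it in place. /dev/sde1 is a
# hypothetical name; the commands are echoed for review -- run them
# as root for real.
ARRAY=/dev/md127
OLD=/dev/sda1
NEW=/dev/sde1

echo "mdadm --manage $ARRAY --add $NEW"
echo "mdadm $ARRAY --replace $OLD --with $NEW"
```

When the copy completes, mdadm marks the old device faulty automatically and you can --remove it and pull the drive.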

This question details the best way I can find of doing it 'safely'.

Once you've done this, you'll want to expand the existing filesystem into the newly created space. The command for this is xfs_growfs, though I don't have enough experience with XFS to explain exactly how you'd run it.

As always, have backups ready (and RAID is not a backup!).

djsmiley2k TMW

Posted 2016-06-14T11:05:05.047

Reputation: 5 937

With it set up as RAID 10 under mdraid, would the extra space even be recognized? Should I swap two at a time, one from each mirrored pair? – Kefka – 2016-07-22T01:38:19.360

You only add the space after swapping all the disks, and then yes it'll be recognised. – djsmiley2k TMW – 2016-07-22T13:09:06.030