13

I currently have a file server with 3 1.5TB disks in a RAID5 array. Since it's pretty much full, I got three additional disks (also 1.5TB each).

Now I'd like to switch to RAID6, since 6TB of space is enough and I'd like the increased safety of RAID6. While I do have a full backup - i.e. I could simply create a new array and restore the backup - I'd prefer to switch without having to restore it. Is this possible, and if so, how?

ThiefMaster
  • 378
  • 4
  • 19
  • I just performed this operation. - Ubuntu Server 20.04 (mdadm v4.1) - Was: 6x 2TB drives in raid 5 - Converted to: 7x 2TB drives in raid 6 - Time taken 12 Hours. – Bob Arezina Mar 13 '21 at 13:24

4 Answers

11

The terminology you are looking for is a "RAID level migration".

According to this, it's possible. I haven't done it, but the procedure looks like this: add the new drive as a hot spare to the existing array, then use mdadm to update the RAID level and the number of RAID devices.

You'll need a recent mdadm to do this: mdadm-2.6.9 (e.g. CentOS 5.x) doesn't seem to support it, but mdadm-3.1.4 (e.g. Ubuntu 11.10) does:

   Grow   Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of
          component devices and changing the number of active devices in RAID levels 1/4/5/6, changing the RAID level between 1, 5, and 6, changing
          the chunk size and layout for RAID5 and RAID6, as well as adding or removing a write-intent bitmap.
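Before starting, it's worth checking what you actually have. This isn't part of the original procedure, just a sanity check:

$ mdadm --version
$ uname -r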

E.g., add a new hot spare device, /dev/sdg, to the RAID5 array first:

$ sudo mdadm --manage /dev/md/md0 --add /dev/sdg

Then convert it into a RAID6 array and let it rebuild to a clean state. The --raid-devices 4 option tells mdadm how many drives the new array has in total.

$ sudo mdadm --grow /dev/md/md0 --raid-devices 4 --level 6
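Not part of the original procedure, but you can watch the reshape while it runs: /proc/mdstat shows progress and an estimated finish time, and --detail shows the array state.

$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md/md0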

I have no idea how quick this will be, though. In my experience with RAID level migrations on hardware RAID controllers, it has been quicker to create the new array from scratch and restore the backup to it.

Daniel Lawson
  • 5,426
  • 21
  • 27
  • 3
    Migrating a RAID 5 to RAID 6 has two slow operations - re-striping the data across the disks and calculating the second parity value for the extra parity disk. Wipe/restore will probably take the same amount of time as the resize. – Andrew Jan 19 '12 at 01:38
  • 1
    It also requires a certain kernel version. Found this out the hard way. – Sirex Jan 19 '12 at 13:27
  • Since I'm on gentoo both my kernel and mdadm are pretty recent versions - so that shouldn't be a problem. – ThiefMaster Jan 19 '12 at 14:54
  • `--raid-devices 4`? Are you sure it's not `6`? From what I read in the manpage, it's the number of active non-spare devices and in RAID6 there is no "spare" device. – ThiefMaster Jan 19 '12 at 14:55
  • @ThiefMaster: If you added three disks, it will be `6`, not `4`. – Sven Jan 19 '12 at 17:59
  • 1
    The link you refer to was written by me. Please note: I didn't really read the docs about how to actually do it. I just created a virtual machine with 30 drives (something like that) and started fiddling. So take all of the post with a grain of salt, it's merely a braindump... – Martin M. Jan 20 '12 at 07:05
  • 1
    @ServerHorror: Noted. The wider internet, including the mdadm man page, claims that raid level migration is possible though. :) – Daniel Lawson Jan 22 '12 at 20:21
  • 1
    @ThiefMaster The OP has a three disk array, and is adding one disk. He adds that as a spare, then does the raid-level migration by telling mdadm that there are a total of 4 devices now, and to use raid6. – Daniel Lawson Jan 22 '12 at 20:22
  • 1
    @Andrew Normally, conversion from RAID 5 to 6 is a 2-step process. First, parity is calculated and stored on the new disk (like in RAID 4). Second, the disks are re-striped to randomize the location of the parity data (for performance). The second step can be skipped (or delayed) with option `--layout=preserve`. – Aleksandr Dubinsky Feb 15 '20 at 07:34
7

Obligatory warning: Plan for failure. Keep a backup ready and take possible downtime into account.

Also, test it in a VM or something similar beforehand; this is from my notes and I haven't done it in a long time, so it might be incomplete.

  1. You will need to add the disks to the array:

    mdadm --manage /dev/md0 --add /dev/sdf  
    

    Do this for each of the three disks and replace the device names accordingly (a combined sketch of both steps follows after step 2).

  2. Grow the array:

    mdadm --grow /dev/md0 --level 6 --raid-devices 6 
    
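Put together, and with placeholder device names (/dev/sdf, /dev/sdg and /dev/sdh here; adjust them to your system), the whole sequence might look like this:

    # add all three new disks to the existing array
    for dev in /dev/sdf /dev/sdg /dev/sdh; do
        mdadm --manage /dev/md0 --add "$dev"
    done
    # then reshape to RAID6 across all six devices
    mdadm --grow /dev/md0 --level 6 --raid-devices 6
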
ThiefMaster
  • 378
  • 4
  • 19
Sven
  • 97,248
  • 13
  • 177
  • 225
6

Make use of the --backup-file option so that, in case of power loss, you can continue growing the device after a reboot and ensure no data loss.

mdadm --grow /dev/md0 --level=raid6 --raid-devices=6 --backup-file=/root/mdadm5-6_backup_md0

The backup file should be saved on a filesystem that is not part of the array you are going to grow.

--backup-file= is needed when --grow is used to increase the number of raid-devices in a RAID5 or RAID6 if there are no spare devices available, or to shrink, change RAID level or layout. See the GROW MODE section below on RAID-DEVICES CHANGES. The file must be stored on a separate device, not on the RAID array being reshaped.

--continue is complementary to the --freeze-reshape option for assembly. It is needed when --grow operation is interrupted and it is not restarted automatically due to --freeze-reshape usage during array assembly. This option is used together with -G , ( --grow ) command and device for a pending reshape to be continued. All parameters required for reshape continuation will be read from array metadata. If initial --grow command had required --backup-file= option to be set, continuation option will require to have exactly the same backup file given as well.

Any other parameter passed together with --continue option will be ignored.
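So if the grow operation gets interrupted and is not restarted automatically, it can be resumed by hand. A sketch, assuming the same array and backup file as above:

mdadm --grow --continue /dev/md0 --backup-file=/root/mdadm5-6_backup_md0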

Guggi
  • 91
  • 1
  • 3
2

It seems that everybody recommends adding one disk to the array first, then moving to RAID6, then adding the remaining disks and growing the RAID6. Please keep in mind that this causes two time-consuming rearrangements of all the data on the disks. Even worse, the add-one-disk-and-move-from-RAID5-to-6 step is extra slow, because it has to be done in place. The md code has to read a chunk of data, write it to a backup file, sync that file, overwrite the same chunk with the rearranged data, sync the chunk, and only then move on to the next chunk.

This backup file is important in case the power goes down and/or a disk fails during the reorganization process. While for many algorithms in-place is better than out-of-place, this is not true for RAID reorganizations, due to the extra safety requirements. If you didn't have those, you wouldn't be doing RAID5 or RAID6 in the first place.

By contrast, if you upgrade from RAID5 to RAID6 while merging in the additional space, the "read" part of the job (following the original RAID5 layout with fewer disks) will advance faster than the "write" part. After the first few blocks have been processed using the slow in-place logic, mdadm can freely issue reads and writes and only needs to update the "already processed" pointer in the metadata every so many chunks.

So I strongly recommend doing everything in one step:

mdadm --manage /dev/mdN --add /dev/sdd /dev/sde /dev/sdf
mdadm --grow /dev/mdN --raid-devices 6 --level 6 --backup-file /path/to/file

It will take a long time on current large drives, but you can continue to use the array while it is being reshaped. The performance is worse than normal, though.
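The kernel throttles md resync/reshape speed via the dev.raid.speed_limit_min and dev.raid.speed_limit_max sysctls; if you want to trade foreground performance for a faster reshape, they can be raised. These sysctls exist on stock Linux kernels, but the values below are only illustrative:

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # show current limits (KiB/s per device)
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000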

Kai Petzke
  • 378
  • 1
  • 3
  • 10