Shrink RAID by removing a disk?


I have a Synology NAS with 12 bays. Initially, we decided to allocate all 12 disks for a single RAID-6 volume, but now we would like to shrink the volume to use only 10 disks and assign two HDDs as spares.

The Volume Manager Wizard can easily expand the volume by adding hard disks, but I have found no way to shrink the volume by removing hard disks. How can I do that without having to reinitialize the whole system?

Pierre Arnaud


What is the goal here? Currently two disks are used as parity, and so the array can tolerate two failures. If you want two spares, you could just as well leave them nearby and have the same tolerance, but with more disk space. – Paul – 2014-10-31T04:34:40.607

Sure, but then I have to go to the office, pop a disk out and insert a replacement disk. Having a spare already in the chassis allows me to handle this remotely. – Pierre Arnaud – 2014-10-31T05:39:04.963

Does your Synology have MDADM built in if you ssh to it? – Paul – 2014-10-31T06:19:14.570

Yes, I've access to the mdadm tool. – Pierre Arnaud – 2014-10-31T08:53:58.177

Answers


For this I am going to assume there are 12 disks in the array, and that each is 1TB big.

That means there is 10TB of usable storage. The sizes are just for the example; provided you are not using more than six disks' worth (6TB) of storage, it doesn't matter what size the disks actually are.

Obligatory disclaimer: none of this may be supported by Synology, so I would check with them whether this approach can cause problems, back up beforehand, and shut down any Synology services first. Synology uses standard md RAID arrays as far as I know, and they remain accessible if the disks are moved to a standard server that supports md - so there should be no problems.
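Before starting, it may also be worth recording the current array and LVM layout somewhere safe; a minimal sketch (the device names follow the examples below and may differ on your system):

# Snapshot the current md and LVM state for reference and recovery.
cat /proc/mdstat > /root/mdstat.before
mdadm --detail /dev/md0 > /root/md0.detail.before
pvdisplay > /root/pvdisplay.before
lvdisplay > /root/lvdisplay.before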

Overview

The sequence goes like this:

  1. Reduce the filesystem size
  2. Reduce the logical volume size
  3. Reduce the array size
  4. Resize the file system back
  5. Convert the spare disks into hot spares

File system

Find the main partition using df -h; it should look something like:

Filesystem                Size      Used Available Use% Mounted on
/dev/vg1/volume_1         10T       5T   5T         50% /volume1

Unmount the filesystem, check it, and use resize2fs -M to shrink it to the minimum size it needs and no more:

umount /dev/vg1/volume_1
# resize2fs refuses to shrink an unchecked filesystem, so run e2fsck first.
e2fsck -f /dev/vg1/volume_1
resize2fs -M /dev/vg1/volume_1

Now check:

mount /dev/vg1/volume_1 /volume1
df -h

Filesystem                Size      Used Available Use% Mounted on
/dev/vg1/volume_1         5T       5T    0T        100% /volume1

Volume

To reduce the logical volume size, use lvreduce (make the new size a bit bigger than the filesystem, just in case):

umount /dev/vg1/volume_1
lvreduce -L 5.2T /dev/vg1/volume_1
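To confirm the new size before touching the physical volume, lvs can be used (a small sketch; the exact output will differ):

# The LV should now report a size of roughly 5.2T.
lvs /dev/vg1/volume_1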

Now that the logical volume has been reduced, use pvresize to reduce the physical volume size:

pvresize --setphysicalvolumesize 5.3T /dev/md0

If the resize fails, see this other question for moving the portions of data that were allocated at the end of the physical volume towards the beginning.
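Roughly, that approach amounts to listing the allocated segments and then using pvmove to relocate any extents that sit past the new size into free space earlier on the PV; a sketch only (the extent range below is made up, and this was not tested on a Synology):

# Show which physical extents are allocated and where they sit on the PV.
pvs -v --segments /dev/md0

# Hypothetical example: relocate extents 1000000-1099999 to free space
# elsewhere on the same PV.
pvmove --alloc anywhere /dev/md0:1000000-1099999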

Now we have a 5.3T volume on a 10T array, so we can safely reduce the array size by 2T.

Array

Find out the md device:

 pvdisplay -C
 PV         VG      Fmt  Attr PSize   PFree
 /dev/md0   vg1     lvm2 a--  5.3t    0.1t

The first step is to tell mdadm to reduce the number of disks (with --grow); it will refuse and tell us to truncate the array size first:

mdadm --grow -n10 /dev/md0
mdadm: this change will reduce the size of the array.
       use --grow --array-size first to truncate array.
       e.g. mdadm --grow /dev/md0 --array-size 9683819520

This is saying that, in order to fit the current array onto 10 disks, we first need to truncate the array size, so run the command mdadm suggests:

 mdadm --grow /dev/md0 --array-size 9683819520
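If you want to sanity-check the figure mdadm suggests, the usable size of an N-disk RAID-6 is (N - 2) times the per-device size; a rough check (a sketch, not part of the original procedure):

# For a 10-disk RAID-6, (10 - 2) x "Used Dev Size" should roughly match
# the --array-size value suggested by mdadm above.
mdadm --detail /dev/md0 | grep 'Used Dev Size'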

Now that it is smaller, we can reduce the number of disks:

 mdadm --grow -n10 /dev/md0 --backup-file /root/mdadm.md0.backup

This will take a long time, and can be monitored with:

 cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda4[0] sdb4[1] sdc4[2] sdd4[3] sde4[4] sdf4[5] sdg4[6] sdh4[7] sdi4[8] sdj4[9]
      [>....................]  reshape =  1.8% (9186496/484190976)
                              finish=821.3min speed=9638K/sec [UUUUUUUUUU__]
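To keep an eye on it without retyping the command, something like this can be used (a sketch; watch may or may not be present on the Synology, and the speed value is just an example):

# Refresh the reshape progress every 60 seconds.
watch -n 60 cat /proc/mdstat

# Optionally raise the minimum resync/reshape speed (KB/s per device)
# if the reshape is crawling and the box is otherwise idle.
echo 50000 > /proc/sys/dev/raid/speed_limit_min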

But we don't need to wait for the reshape to finish before continuing.

Resize the PV, LV and filesystem to maximum:

pvresize /dev/md0
lvextend -l +100%FREE /dev/vg1/volume_1
e2fsck -f /dev/vg1/volume_1
resize2fs /dev/vg1/volume_1
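Then the volume can be remounted and checked, mirroring the earlier df output:

mount /dev/vg1/volume_1 /volume1
df -h /volume1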

Set spare disks as spares

Nothing to do here: once the array has been reshaped down to 10 active devices, the two disks that are no longer part of it automatically become spares. Once the reshaping is complete, check the status:

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda4[0] sdb4[1] sdc4[2] sdd4[3] sde4[4] sdf4[5] sdg4[6] sdh4[7] sdi4[8] sdj4[9] sdk4[10](S) sdl4[11](S)
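For a more explicit confirmation than /proc/mdstat, mdadm can report the spare count directly (a small sketch):

# After the reshape, "Raid Devices" should read 10 and "Spare Devices" 2.
mdadm --detail /dev/md0 | grep -E 'Raid Devices|Spare Devices'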

Paul


Thanks a lot for these detailed instructions. I'll first wait for my RAID array to finish rebuilding after having replaced an HDD (total capacity: 17.86 TB, it's taking some time). – Pierre Arnaud – 2014-11-02T06:39:52.910

Also have a look at the mdadm cheat sheet (http://www.ducea.com/2009/03/08/mdadm-cheat-sheet).

– Pierre Arnaud – 2014-12-31T07:41:24.047


Beware! I think this answer could lead to data loss as is: there is no check that the LVM LV is indeed at the beginning of the PV (which is not guaranteed with LVM). See https://unix.stackexchange.com/questions/67702/how-to-reduce-volume-group-size-in-lvm (and https://unix.stackexchange.com/questions/67702/how-to-reduce-volume-group-size-in-lvm#67707 in case of error) for a way to ensure the end of the PV is free to be shrunk.

– Ekleog – 2018-03-24T00:21:47.257

@Ekleog Thanks, this comment would be better placed as part of the answer in case it is missed – Paul – 2018-03-24T00:24:15.780

@Paul Indeed, please feel free to add it :) – Ekleog – 2018-03-24T00:26:42.790

@Ekleog I can't right now, on the move. Go ahead – Paul – 2018-03-24T00:27:49.250

@Paul Oh, I didn't think I could - just sent a tentative edit :) I've improvised the command output, but hopefully it's close enough to what the reality would have been for someone in your use case. – Ekleog – 2018-03-24T00:37:32.027

I noticed that the lvreduce command doesn't seem to be installed anywhere with DSM 6.2. Running lvm lvreduce <args> seems to work though. – Scott Dudley – 2019-10-22T15:16:35.907
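Based on that comment, the invocation on DSM 6.2 would presumably look like this (an untested assumption, not from the answer above):

# DSM 6.2 reportedly ships the LVM tools only as subcommands of `lvm`.
lvm lvreduce -L 5.2T /dev/vg1/volume_1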