
I'm very excited about the new features of btrfs and would like to start testing it. Before I get started, I would like to ask whether btrfs supports increasing RAID capacity by replacing disks with bigger ones (rather than adding additional disks). Example: a RAID10 consisting of 8x 2 TB drives results in a capacity of 8 TB. Then each 2 TB drive gets replaced by a 6 TB drive, and after each disk replacement a rebuild/rebalance is executed. I'm wondering whether, after the last disk replacement and rebalance, the capacity jumps from 8 TB to 24 TB.

There is some literature about this on the internet, but there is no definitive statement like "yes, after the rebalance, the capacity gets increased!". https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Adding_new_devices

The NAS devices from Synology support exactly the feature I'm asking about: https://www.synology.com/en-global/knowledgebase/DSM/help/DSM/StorageManager/volume_diskgroup_expand_replace_disk But I'm not sure whether this is a native btrfs feature or whether the Synology developers built it specifically for their DiskStation operating system.

Chris

2 Answers


It should work as you have described it. However, an additional step may be necessary.

For example, if you put 4 drives with 3 GB each in a raid1 configuration, you'll end up with a capacity of 6 GB. Replacing two of those drives with 4 GB drives should give you 7 GB of capacity (btrfs disk usage calculator).
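
As a rough sanity check (a back-of-the-envelope estimate, not an official formula): with two copies of every block, the usable RAID1 capacity is roughly min(total/2, total minus the largest drive). For 3 + 3 + 4 + 4 = 14 GB this gives min(7, 10) = 7 GB, which matches the calculator.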

Step 1: Create BTRFS RAID1 volume with 4x 3G = 6G capacity:

# mkfs.btrfs -f -draid1 -mraid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde >/dev/null 
# mount /dev/sdb BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: e6dc6a95-ae5e-49c4-bded-77001b445ac7
    Total devices 4 FS bytes used 192.00KiB
    devid    1 size 3.00GiB used 331.12MiB path /dev/sdb
    devid    2 size 3.00GiB used 0.00B path /dev/sdc
    devid    3 size 3.00GiB used 0.00B path /dev/sdd
    devid    4 size 3.00GiB used 0.00B path /dev/sde

# parted -s /dev/sdb print | grep Disk
Disk /dev/sdb: 3221MB
Disk Flags: 
# parted -s /dev/sdc print | grep Disk
Disk /dev/sdc: 3221MB
Disk Flags: 
# parted -s /dev/sdd print | grep Disk
Disk /dev/sdd: 3221MB
Disk Flags: 
# parted -s /dev/sde print | grep Disk
Disk /dev/sde: 3221MB
Disk Flags: 
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.3G   1% /mnt/BTRFS
# btrfs fi df BTRFS/
Data, RAID1: total=1.00GiB, used=320.00KiB
Data, single: total=1.00GiB, used=0.00B
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=256.00MiB, used=112.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B
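
Side note: the "Data, single" entry in the output above appears to be the initial block group created by mkfs and is unrelated to the capacity question. If you want all data chunks allocated as RAID1, a one-off conversion balance should clean it up; the soft filter skips chunks that already have the target profile:

# btrfs balance start -dconvert=raid1,soft BTRFS/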

Step 2: Replace two 3G drives (the 3rd and 4th drives) with 4G drives:

# parted -s /dev/sdf print | grep Disk
Disk /dev/sdf: 4295MB
Disk Flags: 
# parted -s /dev/sdg print | grep Disk
Disk /dev/sdg: 4295MB
Disk Flags: 
# btrfs replace start -f 3 /dev/sdf BTRFS/
# btrfs replace start -f 4 /dev/sdg BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: e6dc6a95-ae5e-49c4-bded-77001b445ac7
    Total devices 4 FS bytes used 512.00KiB
    devid    1 size 3.00GiB used 1.28GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.25GiB path /dev/sdc
    devid    3 size 3.00GiB used 1.06GiB path /dev/sdf
    devid    4 size 3.00GiB used 544.00MiB path /dev/sdg

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.2G   1% /mnt/BTRFS
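
Note that btrfs replace start returns immediately and the actual copy runs in the background; on real multi-terabyte drives this can take many hours. The progress can be checked with:

# btrfs replace status BTRFS/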

The RAID1 filesystem should have a capacity of 7 GB, but it only has 6 GB.

Solution

The filesystem needs to be resized to use all available space (a balance won't help here). The resize has to be run for every device that has been replaced, i.e., for devices #3 and #4.

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.8G   1% /mnt/BTRFS
# btrfs fi show BTRFS/
Label: none  uuid: e71b4996-5f7c-4b08-b8d8-87163430b643
    Total devices 4 FS bytes used 448.00KiB
    devid    1 size 3.00GiB used 1.00GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.00GiB path /dev/sdc
    devid    3 size 3.00GiB used 288.00MiB path /dev/sdf
    devid    4 size 3.00GiB used 288.00MiB path /dev/sdg

# btrfs fi resize 3:max BTRFS/
Resize 'BTRFS/' of '3:max'
# btrfs fi resize 4:max BTRFS/
Resize 'BTRFS/' of '4:max'
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        7.0G   17M  6.8G   1% /mnt/BTRFS

The filesystem now has its expected capacity of 7 GB.
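
Note that a plain `btrfs fi resize max BTRFS/` would not have been enough, since the resize command defaults to devid 1 when no device id is given. To double-check how the space is distributed, `btrfs fi usage` (available since btrfs-progs 3.18) prints allocated and unallocated space per device; after the resize, devices #3 and #4 should each show roughly 1 GiB of additional unallocated space:

# btrfs fi usage BTRFS/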

Step 2 (alternative): Add and remove drives (the old way, not recommended)

Before the replace command was added, the only way to swap out a drive was to add the new drive and then remove the old one. However, this may take more time, and it has the drawback that it leaves you with a devid hole: the removed device's id won't be reused, so the device ids no longer match their respective positions in the raid array.

# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        6.0G   17M  5.3G   1% /mnt/BTRFS
# btrfs dev add -f /dev/sdf BTRFS/
# btrfs dev add -f /dev/sdg BTRFS/
# btrfs fi show BTRFS/
Label: none  uuid: ac40a98a-ac3b-4563-9ec9-6135332e5cdc
    Total devices 6 FS bytes used 448.00KiB
    devid    1 size 3.00GiB used 1.03GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.25GiB path /dev/sdc
    devid    3 size 3.00GiB used 1.03GiB path /dev/sdd
    devid    4 size 3.00GiB used 256.00MiB path /dev/sde
    devid    5 size 4.00GiB used 0.00B path /dev/sdf
    devid    6 size 4.00GiB used 0.00B path /dev/sdg

# btrfs dev rem /dev/sdd BTRFS/
# btrfs dev rem /dev/sde BTRFS/
# df -h BTRFS/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        7.0G   17M  6.8G   1% /mnt/BTRFS
# btrfs fi show BTRFS/
Label: none  uuid: efc5d80a-54c6-4bb9-ba8f-f9d392415d3f
    Total devices 4 FS bytes used 640.00KiB
    devid    1 size 3.00GiB used 1.00GiB path /dev/sdb
    devid    2 size 3.00GiB used 1.00GiB path /dev/sdc
    devid    5 size 4.00GiB used 1.03GiB path /dev/sdf
    devid    6 size 4.00GiB used 1.03GiB path /dev/sdg

When using add/remove, it is not necessary to manually grow the volume, since the new drives are added with their full size right away (unlike replace, which keeps the old device's size until you resize it).

Note that, when using add/remove, the 3rd drive in the raid array has index 5 instead of 3, which may be confusing when you need to identify a drive based on its slot in your rack.
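
If you do need to map a device path back to a physical drive, comparing serial numbers is one option (this uses smartmontools and udev, which are separate tools, not part of btrfs-progs):

# smartctl -i /dev/sdf | grep -i serial
# udevadm info --query=property --name=/dev/sdf | grep ID_SERIAL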


This was tested with btrfs version 4.4. Future versions may behave differently.

basic6
  • Won't `btrfs device remove ` remove the hole? – Tom Hale Jan 26 '19 at 10:54
  • @TomHale unfortunately it doesn't. Quite the opposite: `dev rem 3` *creates* a "hole" at #3 as you won't be able to use this id anymore. Maybe not noticeable with 3 drives but it gets chaotic with a real setup (say > 10 drives) because the drive ids don't match the positions in the rack. – basic6 Mar 11 '19 at 13:03
  • Thanks, the resize step was required indeed. – pedroapero Jul 24 '19 at 15:15

Yes, the capacity will grow in btrfs when you replace the drives with bigger ones. But make sure you always have backups! While the RAID0/1 code is not nearly as buggy as the RAID5/6 code in btrfs (as of 07/2016), your device replacement would not be the first one to go horribly wrong.

Gerald