On a testing machine I installed four HDDs and configured them as RAID6. As a test I removed one of the drives (/dev/sdk) while the volume was mounted and data was being written to it. This worked well as far as I can tell: some I/O errors were logged to /var/log/syslog, but the volume kept working.

Unfortunately "btrfs fi sh" did not report any missing drives, so I remounted the volume in degraded mode: "mount -t btrfs /dev/sdx1 -o remount,rw,degraded,noatime /mnt". After that the drive in question was reported as missing. Then I plugged the HDD back in (it comes up as /dev/sdk again) and started a balance: "btrfs filesystem balance start /mnt". Now the volume looks like this:
$ btrfs fi sh
Label: none uuid: 28410e37-77c1-4c01-8075-0d5068d9ffc2
        Total devices 4 FS bytes used 257.05GiB
        devid 1 size 465.76GiB used 262.03GiB path /dev/sdi1
        devid 2 size 465.76GiB used 262.00GiB path /dev/sdj1
        devid 3 size 465.76GiB used 261.03GiB path /dev/sdh1
        devid 4 size 465.76GiB used 0.00 path /dev/sdk1
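(Note that devid 4 shows "used 0.00", i.e. the balance apparently did not allocate anything to the re-added disk. If it helps, I can also post the output of "$ btrfs fi df /mnt" and, assuming my btrfs-progs is recent enough to have the subcommand, "$ btrfs device usage /mnt".)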
How do I reinitialize /dev/sdk1? Running "$ btrfs fi ba start /mnt" does not help. I tried to remove the HDD, but:
$ btrfs de de /dev/sdk1 /mnt/
ERROR: error removing the device '/dev/sdk1' - unable to go below four devices on raid6
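(Since the filesystem currently lists all four devices as present, I assume the usual "$ btrfs de de missing /mnt/" approach for a genuinely failed disk does not apply here either; I have not tried it.)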
Replacing the drive with itself does not work either:
$ btrfs replace start -f -r /dev/sdk1 /dev/sdk1 /mnt
/dev/sdk1 is mounted
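What I am considering next, completely untested and I am not sure it is safe: unmount, wipe the stale btrfs signature from the re-added disk, mount the remaining three disks in degraded mode and then replace the now-missing devid 4 with the freshly wiped disk:

$ umount /mnt
$ wipefs -a /dev/sdk1
$ mount -t btrfs -o degraded,noatime /dev/sdi1 /mnt
$ btrfs replace start -f 4 /dev/sdk1 /mnt

(The "4" is the devid of the disk as shown by "btrfs fi sh" above; I am assuming here that the devid form of "btrfs replace start" is accepted by my version of btrfs-progs.)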
Is there a way to replace/reinitialize the HDD other than converting to RAID 5?
Would it be better to post this on ServerFault? If so, could an admin please move this question? – Oliver R. – 2014-11-28T13:59:25.863
What does btrfs scrub say? – basic6 – 2015-07-08T19:07:31.923