
Here's my story: I noticed read errors on one of my four RAID10 Btrfs drives (/dev/sde). This occurred while I was attempting a backup using btrfs send/receive. I bought a new hard drive of the same size and attempted to replace the failed one. I physically replaced the drive first, then mounted the RAID array with the "degraded" mount option. I was able to add the new drive to the array and then began a re-balance. The re-balance failed at about 10% complete due to new read errors on a different drive (/dev/sdb). I disabled NCQ on /dev/sdb hoping that was the problem, but nothing changed. So, what are my options? Could I add the new drive as a fifth drive and attempt a re-balance? Although the two failed drives have read errors, the chance of the same sectors being bad on both drives is pretty low. Would btrfs be smart enough to get the data from the other RAID10 mirror if one drive fails a read?
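For reference, here is roughly the sequence I ran (a sketch; the /mnt/array mount point and the replacement drive's name /dev/sdf are placeholders, not the exact paths on my system):

    # mount the array degraded after physically swapping the failed disk
    # (any surviving member device can be named here)
    mount -o degraded /dev/sdc /mnt/array

    # add the replacement drive and rebalance data onto it
    btrfs device add /dev/sdf /mnt/array
    btrfs balance start /mnt/array

    # per-device read/write/corruption error counters, which is where
    # the new errors on /dev/sdb showed up
    btrfs device stats /mnt/array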

1 Answer


1) Check the SMART data for each disk and make sure there are no faults on the physical disks.

2) Back up the data, run a bad-block test on each disk, and recreate the RAID10 volume (a command sketch for both steps follows below).
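A minimal sketch of both steps, assuming smartmontools is installed and the data has already been backed up; note that the write-mode badblocks test destroys the disk contents, and the device names are simply the ones from your question:

    # 1) SMART health verdict and full attribute report for each disk;
    #    watch Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable
    smartctl -H /dev/sdb
    smartctl -a /dev/sdb

    #    optionally run a long self-test and read the result afterwards
    smartctl -t long /dev/sdb
    smartctl -l selftest /dev/sdb

    # 2) bad-block test: read-only scan first, destructive write test
    #    only once the data is safely backed up
    badblocks -sv /dev/sdb
    badblocks -wsv /dev/sdb

    #    recreate the RAID10 volume once all four disks test clean
    mkfs.btrfs -f -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde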

In any case, back up the data by whatever means you can before performing any further operations on the storage.
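One possible way to do that while the array is still readable (a sketch; the read-only degraded mount and the /mnt/backup target disk are assumptions):

    # mount the damaged array read-only so nothing more is written to it
    mount -o ro,degraded /dev/sdc /mnt/array

    # copy everything to a separate, healthy disk
    rsync -aHAX /mnt/array/ /mnt/backup/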

BTW, bad blocks on two disks at once are possible if the disks were physically damaged.

batistuta09