Cloning a btrfs RAID pair using partclone gives a vast number of recoverable errors


My home server (Debian Jessie) had a pair of 1TB disks for bulk storage, configured as a RAID1 mirror volume on raw devices (no partitions).
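For context, the volume had been created directly on the whole-disk devices, roughly along these lines (/dev/sdb, /dev/sdc and the mount point are illustrative stand-ins, not the real names):

    # create a two-disk btrfs mirror on raw devices (data and metadata both RAID1)
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    # either member device can be passed to mount
    mount /dev/sdb /mnt/bulk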

When I came to swap those disks for two new 3TB drives, I had difficulty finding any good guidance or examples on how to move the data over.

In the end the procedure I chose was to boot the machine into a GParted live environment and use partclone.btrfs to copy each source disk to its replacement, as sketched below. This is simple but risky: cloning duplicates the volume/subvolume UUIDs, so it's not safe to reboot the machine with all the disks connected, as the duplicate IDs will confuse btrfs.
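The clone step was roughly the following, run once per mirror member (device names are illustrative; -b is partclone's device-to-device mode):

    # clone each old 1TB disk onto its 3TB replacement
    partclone.btrfs -b -s /dev/sdb -o /dev/sdd
    partclone.btrfs -b -s /dev/sdc -o /dev/sde
    # blkid will now report identical btrfs UUIDs on old and new disks,
    # which is why the old pair must be disconnected before rebooting
    blkid /dev/sdb /dev/sdd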

After disconnecting the old disks I rebooted, and the machine came up and remounted the new disks at the original UUIDs, indicating the clone was successful. However, when I ran a btrfs scrub it generated many thousands of recoverable errors. It looked as if there might be one error for every block checksum.
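The scrub itself was just the standard invocation (mount point illustrative):

    # start a scrub on the mounted volume and watch the error counters
    btrfs scrub start /mnt/bulk
    btrfs scrub status /mnt/bulk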

After the scrub finished, the volume looked to be running OK, and a second scrub pass showed no errors.
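In case it helps with diagnosis, the per-device error counters can also be read back after a scrub (mount point illustrative):

    # cumulative read/write/checksum error counts per member device
    btrfs device stats /mnt/bulk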

I found one post on this site from someone who had the same issue when cloning a single drive, so it doesn't seem specific to RAID volumes.

Does anyone know if this is expected behaviour when moving data between physical devices (the checksums being invalidated), or is partclone not as "btrfs aware" as it claims to be?

Incans

Posted 2016-09-17T23:11:16.397

Reputation: 31

No answers