
On a Linux MD software RAID1 I removed a failed drive and added a new one. (All fine now.) The old drive and the new were never in the system at the same time.

Am I right that adding the old drive to the system (for examination) will not hinder MD assembly on boot in any way?

I assume that:

  • MD will see 3 partitions that belong(ed) to the same RAID,
  • but will also know that 2 of those are still part of the RAID and 1 is not

Which superblock data will help MD sort this out correctly? Is there a “not part of the RAID at the moment” bit? What data makes it clear to the MD subsystem that the old partition is not, in fact, the sole member of a degraded RAID?

Did the new device/partition get a fresh, random member UUID during the rebuild?

Adding the old drive and confusing MD could be catastrophic if it chooses the wrong devices for the array. I want to avoid that.
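For what it's worth, the fields in question can be inspected directly with `mdadm --examine`; a sketch, with /dev/sdc1 standing in for the old disk's partition (the fallback branch just keeps the snippet runnable on a machine without that disk):

```shell
# Inspect the md superblock the way the assembly code does.
# /dev/sdc1 is a placeholder for the old disk's partition; needs root + mdadm.
dev=/dev/sdc1
if command -v mdadm >/dev/null 2>&1 && [ -b "$dev" ]; then
    # Array UUID says which array the partition belonged to; Events and
    # Device Role / Array State say whether it is a *current* member.
    mdadm --examine "$dev" | grep -E 'Array UUID|Device UUID|Events|Device Role|Array State'
else
    echo "mdadm or $dev unavailable here"
fi
```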

Robert Siemer

1 Answer


Actually, even if you specified the wrong disks in /etc/mdadm.conf, mdadm should not assemble the array from them: assembly is driven by the on-disk metadata, not by the config file alone.

First, there is the superblock, which is written to every member disk. A freshly added disk has no superblock and will not be treated as a member of the MD RAID.
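Beyond the mere presence of a superblock, each copy carries an Events counter that mdadm compares at assembly time; the copy with the lower count is stale and loses. A minimal sketch of that comparison, with placeholder counts (on a real system they would come from `mdadm --examine`):

```shell
# Compare per-device Events counters the way mdadm does during assembly.
# Placeholder values; read the real ones with: mdadm --examine /dev/sdX1 | grep Events
events_current=4711   # from a disk that stayed in the array
events_old=3980       # from the old, removed disk
if [ "$events_old" -lt "$events_current" ]; then
    echo "old disk is stale: it will not win assembly"
fi
```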

Next, there is the write-intent bitmap, which also holds vital data for the array and prevents syncing from the wrong copy (for example, when a disk is removed, data is written to the partner disk, and the removed disk is later re-added).
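The bitmap can be inspected read-only as well; a sketch with /dev/sda1 as a placeholder for a current member (guarded so it stays runnable where the device is absent):

```shell
# Dump the write-intent bitmap of a member partition (read-only).
dev=/dev/sda1
if command -v mdadm >/dev/null 2>&1 && [ -b "$dev" ]; then
    mdadm --examine-bitmap "$dev"   # shows Events, Events Cleared, dirty chunks
else
    echo "mdadm or $dev unavailable here"
fi
```

With a bitmap present, `mdadm --re-add` resyncs only the chunks dirtied since the disk left, instead of doing a full rebuild.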

The safest approach is to:

  • run mdadm with the --examine and --scan options
  • create /etc/mdadm.conf with entries pointing to "/dev/disk/by-id/..." names, which are unique and won't change (those names are in fact derived from the disk's serial number)
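A sketch of what the resulting /etc/mdadm.conf could look like; the by-id names and the UUID below are made-up placeholders (generate the real ARRAY line with `mdadm --examine --scan`):

```
# /etc/mdadm.conf (illustrative only)
DEVICE /dev/disk/by-id/ata-EXAMPLE_SERIAL_1-part2 /dev/disk/by-id/ata-EXAMPLE_SERIAL_2-part2
ARRAY /dev/md0 metadata=1.2 UUID=f1d2d2f9:24a5f1e3:8c3b9a01:6e5d4c3b
```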