
My software RAID1 arrays (/boot, /) always come up degraded after a reboot since I added a new SATA controller. The system is CentOS 7. Here is what happens and what has been done:

  1. I built 4-disk RAID1 arrays with the following setup: SATA Controller A (HDD1 / HDD2) + SATA Controller B (HDD3 / HDD4).
  2. There was a problem with Controller A, so I added another one, Controller C, and moved HDD1/2 from Controller A to Controller C. The setup became: Controller A (none) + Controller B (HDD3 / HDD4) + Controller C (HDD1 / HDD2).
  3. After this change, on every (re)boot the RAID1 arrays come up degraded, with only HDD3/4 active.
  4. I can re-add HDD1/2 to the arrays, but after a reboot they are degraded again, having lost HDD1/2.
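For reference, the re-adding in step 4 was done roughly like this (the array and device names below are placeholders, not taken from the actual system):

```shell
# Check the current array state; a degraded array lists only some of its members
cat /proc/mdstat

# Re-add the two missing members to the degraded array
# (/dev/md127, /dev/sdc1, /dev/sdd1 are assumed names -- substitute the real ones)
mdadm /dev/md127 --re-add /dev/sdc1
mdadm /dev/md127 --re-add /dev/sdd1

# Watch the resync progress until the array is clean again
watch cat /proc/mdstat
```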

I suspect this is because CentOS does not see Controller C (and its attached HDD1/2) during the boot phase: the boot sequence stalls for about 2 minutes, and HDD1/2 appear in dmesg noticeably later.
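One way to check this suspicion is to compare the kernel-log timestamps of the md array assembly against those of the disks on the new controller (the driver name `ahci` here is an assumption; the controller may use a different module):

```shell
# Show md/RAID events and SATA link/disk detection with kernel timestamps;
# if the md lines come before the new controller's disks appear, the arrays
# were assembled without HDD1/2
dmesg | grep -iE 'md/|raid1|ahci|ata[0-9]+:'
```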

I can boot from HDD1/2 when the boot order is set accordingly (HDD1/2 still drop out of the RAID1 arrays, though), so at least the BIOS correctly recognizes Controller C.

Is there any way to solve this?

NON

1 Answer


My suspicion would be that the drivers for the newly added controller are not available in the initramfs, so they are loaded only later when the root file system is available -- which is after the array has been assembled.

Try rebuilding your initramfs.
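On CentOS 7 this is done with dracut. A minimal sketch, assuming the new controller uses the `ahci` module (substitute the actual driver) and that the running kernel is the one being repaired -- note that, as the comments below found, the kernel version must be given explicitly:

```shell
# Back up the current initramfs before touching it
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak

# Rebuild the initramfs, embedding the current mdadm.conf and forcing the
# controller's driver into the image; the trailing kernel version is required
dracut --force --mdadmconf --add-drivers "ahci" \
    /boot/initramfs-$(uname -r).img $(uname -r)

# Verify that the driver and mdadm actually landed in the new image
lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'ahci|mdadm'
```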

Simon Richter
  • Looks like it: rebooting with the rescue image assembles the RAID1 arrays correctly. But after rebuilding the initramfs it won't boot (mdadm missing?). I'm now trying the --mdadmconf and --add-drivers options, but dracut strangely refuses to add the needed drivers :/ – NON Mar 18 '21 at 17:46
  • OK, I just had to specify the kernel version; the drivers were added and all arrays are back to normal. Thanks! I used these pages to rebuild the initramfs: http://cjcheema.com/2019/06/how-to-recover-or-rebuild-initramfs-in-centos-7-linux/ https://bugzilla.redhat.com/show_bug.cgi?id=674657 – NON Mar 18 '21 at 17:56