The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup?
My problem: I boot my server and get:
Gave up waiting for root device.
ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!
This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. /dev/md0 isn't assembled at all, and /dev/md1 is assembled, but from the whole disks /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd instead of the partitions /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2.
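For context, this is what I see from the busybox shell with cat /proc/mdstat (paraphrased from memory, not a verbatim copy):
(initramfs) cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sda[0] sdb[1] sdc[2] sdd[3]
      (size and chunk details trimmed)
unused devices: <none>
There is no md0 line at all.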
To fix this and boot my server I do:
(initramfs) mdadm --stop /dev/md1
(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
(initramfs) exit
And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to manually assemble them. I've checked /etc/mdadm/mdadm.conf, and the UUIDs of the two arrays listed in that file match the UUIDs from $ mdadm --detail /dev/md[0,1].
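Concretely, the check I did looks roughly like this (UUIDs are placeholders, and I've paraphrased the ARRAY lines rather than pasting my real ones):
$ grep ARRAY /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid5 num-devices=4 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy
$ mdadm --detail /dev/md0 | grep UUID
           UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
$ mdadm --detail /dev/md1 | grep UUID
           UUID : yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy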
Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1
UPDATE: I have a feeling it has to do with superblocks. $ mdadm --examine /dev/sda outputs the same thing as $ mdadm --examine /dev/sda2. $ mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2.
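To show what I mean, the comparison looks roughly like this (UUIDs are placeholders again, output trimmed to the relevant line):
$ mdadm --examine /dev/sda | grep UUID
           UUID : yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy
$ mdadm --examine /dev/sda2 | grep UUID
           UUID : yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy
$ mdadm --examine /dev/sda1 | grep UUID
           UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
So the whole disk /dev/sda reports the same array UUID as /dev/sda2 (the md1 UUID), while /dev/sda1 reports the md0 UUID.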
I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to run $ mdadm --create to get it back, which also put the superblocks back the way they were.
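For completeness, what I did was roughly the following; the --create options are reconstructed from memory to match the four-disk RAID 5, so don't treat this as an exact transcript:
$ mdadm --stop /dev/md1
$ mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
# after this, /dev/sd[abcd]2 no longer had usable superblocks either and md1 wouldn't assemble,
# so I recreated the array on the partitions:
$ mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2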