Updating for current software versions (Ubuntu 14.10): GRUB2 2.02~beta2-15.
I set up my partitions and made md devices on them, then ran mkfs on them. Only THEN did I start Ubuntu's installer (ubiquity). (If you don't mkfs first, ubiquity insists on partitioning your md devices, and I don't know whether GRUB will handle a partition table inside an MD device that itself sits on a partition.)
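For reference, the pre-installer setup was roughly this (a sketch; the device names and array name are assumptions, adjust to your layout):

```
# Create the array on existing partitions, then put a filesystem on it
# *before* starting ubiquity, so the installer sees a filesystem instead
# of a blank device it wants to partition.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 --chunk=32 \
    /dev/sda2 /dev/sdb2
mkfs.xfs /dev/md0
```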
With `/` (including `/boot`) on XFS on a RAID10,f2 of 2 disks, GRUB has no problem booting, even when one disk is missing. (There is currently, or was, a bug where GRUB thinks a RAID10 is unusable if it's missing 2 devices, without checking WHICH two devices are missing. So there are cases where Linux would have no problem using a degraded RAID10, but GRUB's simple check fails.)
(XFS notes: GRUB2 2.02~beta2 does NOT support XFS's new `mkfs.xfs -m crc=1` metadata format. Don't use that on the filesystem that holds `/boot` until you get a patched GRUB.)
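If your xfsprogs already defaults to the new format, you can force the old one explicitly (a sketch; `/dev/md0` is an assumption):

```
# Disable v5 (CRC) metadata so this GRUB version can read the filesystem
mkfs.xfs -m crc=0 /dev/md0
```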
I tested with a chunk size of 32kiB for my RAID10,f2 to make sure vmlinuz and my initrd were not contiguous on disk. I didn't test with a configuration that would require GRUB to read from multiple disks to get a complete file, though, so I still don't know if that's supported. (I think not.) I tested from the GRUB command line (after `normal.mod` was loaded, not from a GRUB rescue console):

```
grub> ls (md/0)
... blah blah, holds an XFS filesystem ...
grub> ls (md/1)
<machine reboots instantly>
```
Or, booting with only one disk connected:

```
error: failure reading sector 0xb30 from `fd0'.
error: disk `md/1' not found.
```
(Intel DZ68DB mobo, disks on the mobo's SATA controller, set to RAID rather than AHCI, in case that matters.) So I guess GRUB is looking for a partition with the right UUID to complete the RAID0 (`md/1`).
My RAID10,f2 used the default metadata 1.2 format (located 4kiB from the start of the partition). Since GRUB understands md devices these days, you don't need the old practice of hiding your md superblock at the end of the partition (which you could get with format 1.0, and I think 0.90 as well). I didn't test whether GRUB also supports the `ddf` or `imsm` metadata formats (the BIOS RAID formats used by some mobo controllers).
Both my disks had GPT partition tables, with an EF02 BIOS boot partition before the first "regular" partition (from sector 40 to 2047). GRUB uses it to hold the code it needs to read RAID and XFS, which doesn't fit in the 512B boot sector.
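Creating that partition with sgdisk would look something like this (a sketch; the disk name and partition number are assumptions, the sector range matches the layout above):

```
# Type EF02 marks a BIOS boot partition; grub-install embeds its core
# image (including the mdraid and xfs modules) there.
sgdisk --new=1:40:2047 --typecode=1:EF02 --change-name=1:"BIOS boot" /dev/sda
```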
Don't RAID your BIOS boot partitions. You need to run `grub-install` on each `/dev/sdX` independently, so you can boot from any of your disks; doing that also writes what GRUB needs to the BIOS boot partition on that disk. `update-grub` doesn't touch the BIOS boot partition: it just rebuilds the menu and initrd from config files. Only `grub-install` touches the boot partition, and as I said, it needs to be run on each disk anyway.
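So, after the install, something like (a sketch; disk names are assumptions):

```
# Embed GRUB's core image in each disk's BIOS boot partition, so the
# system can boot from whichever disk survives.
grub-install /dev/sda
grub-install /dev/sdb
# Regenerates /boot/grub/grub.cfg only; never touches the boot partition.
update-grub
```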
Testing actually booting with one HD removed:
Ubuntu gives an interactive option to skip mounting filesystems that aren't available (I had `/var/cache` on RAID0). After telling it to skip, everything is fine.
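If you'd rather not answer that prompt on every degraded boot, the usual fix is a mount option in `/etc/fstab` (a sketch; the device, mountpoint, and fs type are placeholders, and I'm assuming Upstart-era Ubuntu here):

```
# nobootwait (Upstart's mountall) or nofail (systemd) lets the boot
# continue without prompting if this device is absent.
/dev/md1  /var/cache  xfs  defaults,nobootwait  0  2
```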