RAID1 Partitioning Scheme for a Linux Server

I'm having trouble setting up a RAID1 array on my server where both drives are bootable. What I'm doing now is putting /biosboot and /boot partitions on both drives; then I have my LVM volume group on the RAID with logical volumes for swap, /, and /home. The installation goes fine and the drives sync, but the second drive will never boot. I've read that you always have to install GRUB to the second drive manually, but when I do that I get a GRUB error and neither drive boots. I'm also not sure whether LVM has something to do with this. I like the idea of the flexibility and the snapshots, but is it really needed if I don't think I'll be resizing things?
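
For reference, a minimal sketch of what "installing GRUB to the second drive" usually involves on a BIOS/GPT machine with mdadm RAID1; the device names /dev/sda and /dev/sdb are assumptions for illustration, not taken from the question:

    # Assumes each disk carries an unformatted BIOS boot partition
    # (GPT type EF02) and a member of the md RAID1 that holds /boot.
    # GRUB embeds its core image per disk, so install to each drive:
    grub-install /dev/sda
    grub-install /dev/sdb
    # Regenerate the config once; it lives on the mirrored /boot:
    grub-mkconfig -o /boot/grub/grub.cfg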

Steve N

Posted 2016-07-16T11:19:55.343

Reputation: 19

The problem is not LVM: LVM only kicks in after the kernel has booted, so after GRUB is done. If you are getting GRUB errors, knowing exactly what they say would be helpful. – davidgo – 2016-07-16T11:35:55.687
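
When neither drive boots any more, the usual way forward is to reinstall GRUB from a live/rescue system. A hedged sketch, where every device and volume name (/dev/md0, vg0/root, /dev/sda, /dev/sdb) is a placeholder rather than something from this thread:

    mdadm --assemble --scan              # bring up the RAID1 arrays
    vgchange -ay                         # activate the LVM volume group
    mount /dev/vg0/root /mnt             # assumed root LV
    mount /dev/md0 /mnt/boot             # assumed md device for /boot
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt grub-install /dev/sda
    chroot /mnt grub-install /dev/sdb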

What kind of "RAID1" is that? You wouldn't need to do anything "to the second drive" if it's really a RAID1. And what the heck is "/biosboot"? – Tom Yan – 2016-07-16T14:04:18.373

@TomYan /biosboot is required when running large drives with a GPT partitioning scheme on older machines with a BIOS. It takes the place of an MBR. – Steve N – 2016-07-17T15:57:54.357
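
A hedged sketch of that layout on one member disk, using sgdisk; /dev/sda and the partition sizes are assumptions for illustration:

    sgdisk -n 1:0:+1M   -t 1:EF02 /dev/sda   # BIOS boot partition, left unformatted
    sgdisk -n 2:0:+512M -t 2:FD00 /dev/sda   # RAID member for /boot
    sgdisk -n 3:0:0     -t 3:FD00 /dev/sda   # RAID member for the LVM PV
    # Repeat the same layout on the second disk before building the mirrors.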

If you mean the BIOS boot partition (a.k.a. EF02), it is only needed by GRUB on BIOS/GPT. More precisely, it replaces the post-MBR gap for core image embedding, while any legacy boot loader, including GRUB, will still make use of the protective MBR anyway. However, this partition should never be formatted (it's best to zero-fill it to avoid grub-install hiccups), so it will never have a mount point (e.g. /biosboot). Otherwise you are doing it completely wrong. – Tom Yan – 2016-07-17T16:26:30.417
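
To illustrate the zero-fill advice, a minimal sketch; /dev/sda1 is an assumed name for the BIOS boot partition:

    # Wipe any stale signatures so grub-install can embed cleanly;
    # dd stops on its own once the small partition is full.
    dd if=/dev/zero of=/dev/sda1 bs=1M
    grub-install /dev/sda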

No answers