
I'm trying to move my root partition to RAID-based physical volumes, and I seem to be failing.

The procedure I'm using is somewhat complicated, but that's because my hosting provider has very limited installation capabilities, so I can't start with a root filesystem built on LVM over RAID.

To test my case, I created a virtual instance in VirtualBox with 4 disks:

  • /dev/sda - 8GB
  • /dev/sdb - 8GB
  • /dev/sdc - 20GB
  • /dev/sdd - 20GB

I installed Linux (Debian 8.5) there. Initially, after installation, the layout is:

  • /dev/sd[bcd] - not partitioned, not used
  • /dev/sda - has one small (4 GB) partition (/dev/sda1), used as a PV for LVM
  • on this PV, I created a VG and an LV, which is now used as /

This is how it looks:

=# mount /
mount: /dev/mapper/vg-root is already mounted or / busy
=# lvs
LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
root vg   -wi-ao---- 3.72g
=# vgs
VG   #PV #LV #SN Attr   VSize VFree
vg     1   1   0 wz--n- 3.72g    0
=# pvs
PV         VG   Fmt  Attr PSize PFree
/dev/sda1  vg   lvm2 a--  3.72g    0

Now, what I want is to create a couple of RAID arrays on the small and large disks, and put / on them.

So, first I create the partitions. Since I will still need to repartition /dev/sda later, this is an intermediate layout, and it looks like this:

=# for a in /dev/sd[abcd]; do fdisk -l $a; done | grep ^/
/dev/sda1  *     2048 7813119 7811072  3.7G 8e Linux LVM
/dev/sdb1        2048 16777215 16775168   8G fd Linux raid autodetect
/dev/sdc1           2048 16777215 16775168   8G fd Linux raid autodetect
/dev/sdc2       16777216 41943039 25165824  12G fd Linux raid autodetect
/dev/sdd1           2048 16777215 16775168   8G fd Linux raid autodetect
/dev/sdd2       16777216 41943039 25165824  12G fd Linux raid autodetect
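
For reference, a layout like this can be created non-interactively; the following parted invocation is only a sketch, using the sector boundaries from the listing above (on an msdos label, the raid flag corresponds to type fd):

=# parted -s /dev/sdc mklabel msdos \
     mkpart primary 2048s 16777215s \
     mkpart primary 16777216s 41943039s \
     set 1 raid on set 2 raid on     # repeat analogously for /dev/sdd and /dev/sdb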

Then, I make a RAID1 array on the /dev/sd[cd]2 partitions:

=# mdadm -C /dev/md0 -l 1 --raid-devices 2 /dev/sd[cd]2
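
The initial resync can be watched in /proc/mdstat; waiting for it is optional, since md arrays are usable while they resync:

=# cat /proc/mdstat
=# mdadm --wait /dev/md0    # optional: block until the initial resync finishes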

This gave me /dev/md0, which I will use as a temporary place for the / filesystem:

=# pvcreate /dev/md0
=# vgextend vg /dev/md0
=# pvmove /dev/sda1 /dev/md0
=# vgreduce vg /dev/sda1
=# pvremove /dev/sda1
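
Before touching /dev/sda, it is worth confirming that the volume group now lives entirely on /dev/md0, for example:

=# pvs
=# lvs -o +devices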

At this point, /dev/sda is free, so I can repartition it to the exact specification of /dev/sdb (this step is rather irrelevant, but included for completeness).
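
One way to clone /dev/sdb's partition table onto /dev/sda is to dump it with sfdisk and replay it (double-check the target device before running this):

=# sfdisk -d /dev/sdb | sfdisk /dev/sda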

Now, with all this in place, I update mdadm.conf:

=# mdadm --detail /dev/md0 --brief >> /etc/mdadm/mdadm.conf && update-initramfs -u

This added the line:

ARRAY /dev/md0 metadata=1.2 name=debian:0 UUID=55692d54:b0beedae:9d85bc20:324d7f9f
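
As a quick sanity check, the rebuilt initramfs should now contain the mdadm binary and configuration (lsinitramfs ships with Debian's initramfs-tools):

=# lsinitramfs /boot/initrd.img-$(uname -r) | grep -i mdadm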

With this in place, I reboot the system to make sure it works OK. And it immediately fails in GRUB, with the message:

error: disk `lvmid/F9eO8I-PB9F-Dsli-ZOSY-rVA1-7a37-Faos46/1N3Ah7-wIjT-HFxc-MS9U-lAcw-tYZw-N7sRO8' not found.
Entering rescue mode...
grub rescue>

ls at the rescue prompt shows:

(hd0) (hd0,msdos1) (hd1) (hd1,msdos1) (hd2) (hd2,msdos2) (hd2,msdos1) (hd3) (hd3,msdos2) (hd3,msdos1)

What did I do wrong? What have I forgotten?

Huash7ee

1 Answer


You need to update your GRUB installation and your boot initramfs.

  1. update-initramfs -u

This command rebuilds the initramfs so that it matches the current state of your system.

  2. mdadm --detail --scan > /tmp/mdadm.conf

Copy the contents of /tmp/mdadm.conf into /etc/mdadm/mdadm.conf, replacing any previous ARRAY entries. This way the MD device configuration will be correct.
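
One way to do that replacement in place is to drop the old ARRAY lines, append the freshly scanned ones, and rebuild the initramfs (a sketch; back up the file first):

    sed -i '/^ARRAY/d' /etc/mdadm/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u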

  3. update-grub

This regenerates the GRUB configuration so that it knows about the new devices.
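
You can verify the result in /boot/grub/grub.cfg; with / on LVM over MD it should now load the corresponding modules (the exact mdraid module name depends on the metadata version):

    grep -E 'insmod (lvm|mdraid)' /boot/grub/grub.cfg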

  4. dpkg-reconfigure grub-pc

This will install GRUB to the hard disks on the server.
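
dpkg-reconfigure will prompt for the target disks; the non-interactive equivalent is to run grub-install against every disk you want to be bootable, for example:

    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$d"; done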

Tero Kilkanen
  • Thank you. Looks like update-grub and dpkg-reconfigure were what was needed. Problem solved, all works well. – Huash7ee Jul 04 '16 at 09:33