11

I resized my logical volume and filesystem and all went smoothly. I then installed a new kernel, and after rebooting I can't boot either the current or the former one. I get a "volume group not found" error after selecting the grub(2) menu entry. Inspection from the BusyBox shell reveals that the volumes are not registered with the device mapper and that they are inactive. I wasn't able to mount them after activating them; I got a "file not found" error (mount /dev/mapper/all-root /mnt).

Any ideas how to proceed, or how to make them active at boot time? Or why the volumes are suddenly inactive at boot time?

Regards,

Marek

EDIT: Further investigation revealed that this had nothing to do with the resizing of the logical volumes. The fact that the logical volumes had to be activated manually in the ash shell after the failed boot, and a possible solution to this problem, are covered in my answer below.
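
For reference, the manual activation from the initramfs (BusyBox/ash) shell looked roughly like this in my case:

lvm vgscan
lvm vgchange -ay
exit    # drop out of the shell and let the boot continue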

zeratul021
  • http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=616689 – Keith Mar 13 '13 at 20:32
  • What I've tried so far: 1) your patch 2) diffing /etc/lvm/lvm.conf 3) `GRUB_PRELOAD_MODULES="lvm"` 4) `GRUB_CMDLINE_LINUX="scsi_mod.scan=sync"` 5) `sudo grub-install /dev/sda && sudo grub-install /dev/sdb && sudo update-grub && sudo update-initramfs -u -k all` 6) `sudo apt-get install --reinstall lvm2 grub-pc grub-common` 7) adding `lvm vgchange -ay` to the end of `/usr/share/initramfs-tools/scripts/local-top/lvm2` I'm quickly running out of things to try. – isaaclw Jun 21 '14 at 05:27

9 Answers

7

So I managed to solve this eventually. There is a problem (bug) with detecting logical volumes, which is some sort of race condition (in my case perhaps related to the fact that this happens inside KVM). This is covered in the Debian bug report linked below. In my particular case (Debian Squeeze) the solution is as follows:

  • back up the script /usr/share/initramfs-tools/scripts/local-top/lvm2
  • apply the patch from the mentioned bug report
  • run update-initramfs -u

This helped me; I hope it'll help others (strangely, this is not part of mainline yet).

Link to patch: http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=10;filename=lvm2_wait-lvm.patch;att=1;bug=568838

Below is a copy for posterity.

--- /usr/share/initramfs-tools/scripts/local-top/lvm2 2009-08-17 19:28:09.000000000 +0200
+++ /usr/share/initramfs-tools/scripts/local-top/lvm2 2010-02-19 23:22:14.000000000 +0100
@@ -45,12 +45,30 @@

  eval $(dmsetup splitname --nameprefixes --noheadings --rows "$dev")

- if [ "$DM_VG_NAME" ] && [ "$DM_LV_NAME" ]; then
-   lvm lvchange -aly --ignorelockingfailure "$DM_VG_NAME/$DM_LV_NAME"
-   rc=$?
-   if [ $rc = 5 ]; then
-     echo "Unable to find LVM volume $DM_VG_NAME/$DM_LV_NAME"
-   fi
+ # Make sure that we have non-empty volume group and logical volume
+ if [ -z "$DM_VG_NAME" ] || [ -z "$DM_LV_NAME" ]; then
+   return 1
+ fi
+
+ # If the logical volume hasn't shown up yet, give it a little while
+ # to deal with LVM on removable devices (inspired from scripts/local)
+ fulldev="/dev/$DM_VG_NAME/$DM_LV_NAME"
+ if [ -z "`lvm lvscan -a --ignorelockingfailure |grep $fulldev`" ]; then
+   # Use default root delay
+   slumber=$(( ${ROOTDELAY:-180} * 10 ))
+
+   while [ -z "`lvm lvscan -a --ignorelockingfailure |grep $fulldev`" ]; do
+     /bin/sleep 0.1
+     slumber=$(( ${slumber} - 1 ))
+     [ ${slumber} -gt 0 ] || break
+   done
+ fi
+
+ # Activate logical volume
+ lvm lvchange -aly --ignorelockingfailure "$DM_VG_NAME/$DM_LV_NAME"
+ rc=$?
+ if [ $rc = 5 ]; then
+   echo "Unable to find LVM volume $DM_VG_NAME/$DM_LV_NAME"
  fi
 }
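
For completeness, a rough sketch of how I applied it (the patch file name below is simply what I called the download from the bug report):

# keep a copy of the original script somewhere safe
cp /usr/share/initramfs-tools/scripts/local-top/lvm2 /root/lvm2.orig

# apply the patch to the script in place
cd /usr/share/initramfs-tools/scripts/local-top
patch lvm2 < /root/lvm2_wait-lvm.patch

# rebuild the initramfs so the patched script ends up in the boot image
update-initramfs -u
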
Ben Lessani
zeratul021
  • It should be noted that in the Debian bug discussion the issue has not been resolved, so the solution presented here may not be the correct one. – eMBee Jun 11 '19 at 03:00
  • I would be amazed if it were, as this is a 9-year-old bug with a solution tested on an 8-year-old distribution. I don't get how there are still sightings of that bug 3 years later. – zeratul021 Jun 12 '19 at 21:58
6

Create a startup script in /etc/init.d/lvm containing the following:

#!/bin/sh

case "$1" in
  start)
    /sbin/vgscan
    /sbin/vgchange -ay
    ;;
  stop)
    /sbin/vgchange -an
    ;;
  restart|force-reload)
    ;;
esac

exit 0

Then execute the commands:

chmod 0755 /etc/init.d/lvm
update-rc.d lvm start 26 S . stop 82 1 .

Should do the trick for Debian systems.
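
If you want to double-check the result, the start/stop arguments above should produce links like these under classic sysvinit (just a sanity check; paths may differ on other init setups):

ls -l /etc/rcS.d/S26lvm /etc/rc1.d/K82lvm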

Nisse Engström
Le dude
    for those who are wondering, like i was, `vgscan` searches for volume groups on the system, and `vgchange -a` makes volume groups available (`-ay`) or not (`-an`). – Dan Pritts Jun 16 '14 at 21:30
1

If vgscan "finds" the volumes, you should be able to activate them with vgchange -ay /dev/volumegroupname

$ sudo vgscan
[sudo] password for username: 
  Reading all physical volumes.  This may take a while...
  Found volume group "vg02" using metadata type lvm2
  Found volume group "vg00" using metadata type lvm2

$ sudo vgchange -ay /dev/vg02
  7 logical volume(s) in volume group "vg02" now active

I am not sure what would cause them to go inactive after a reboot though.

Alex
  • Hi, thanks, I did exactly that before. But if I reboot, we are back to the inactive thing. I tried to mount immediately after activating them, but it failed with a "file not found" error. – zeratul021 Nov 07 '10 at 23:59
  • It could be a problem with /etc/lvm/lvm.conf; take a backup of the current file and try copying lvm.conf from some other system to see if that solves the problem. – Saurabh Barjatiya Nov 08 '10 at 10:57
1

I had this problem too. In the end this is what seemed to fix it:

diff -u /usr/share/initramfs-tools/scripts/local-top/lvm2-backup /usr/share/initramfs-tools/scripts/local-top/lvm2
--- /usr/share/initramfs-tools/scripts/local-top/lvm2-backup    2014-06-06 19:55:19.249857946 -0400
+++ /usr/share/initramfs-tools/scripts/local-top/lvm2   2014-06-21 01:26:01.015289945 -0400
@@ -60,6 +60,7 @@

 modprobe -q dm-mod

+lvm vgchange -ay
 activate_vg "$ROOT"
 activate_vg "$resume"

Other things I tried:

  1. your patch
  2. diffing /etc/lvm/lvm.conf
  3. GRUB_PRELOAD_MODULES="lvm"
  4. GRUB_CMDLINE_LINUX="scsi_mod.scan=sync"
  5. sudo grub-install /dev/sda && sudo grub-install /dev/sdb && sudo update-grub && sudo update-initramfs -u -k all
  6. sudo apt-get install --reinstall lvm2 grub-pc grub-common

I went through and undid the other changes; this is the only one that mattered for me, though it's probably the least elegant.
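
One note if you try this: the change lives in an initramfs script, so it only takes effect once the image is rebuilt, e.g. with the same command as in item 5 above:

update-initramfs -u -k all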

isaaclw
0

Without any of the configuration details or error messages we'd need to give an actual answer, I'll take a stab in the dark with grub-mkdevicemap as a solution.
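
If you want to try it, that would look roughly like this on a Debian-style system (regenerate the device map, then the grub configuration):

grub-mkdevicemap
update-grub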

BMDan
0

Assuming your system uses an initramfs, there's probably a configuration problem there. You should update the initramfs image that grub starts at boot time (on Debian you do this with update-initramfs; I don't know about other distros).

You could also do this by hand by unpacking the initramfs, changing /etc/lvm/lvm.conf (or something like it) inside the image, and then repacking it.
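
A rough sketch of the by-hand route on a Debian system of that era, where the image is a gzip-compressed cpio archive (the image name depends on your kernel version; keep a backup of it before repacking):

mkdir /tmp/initrd && cd /tmp/initrd
zcat /boot/initrd.img-$(uname -r) | cpio -idmv
# inspect or edit etc/lvm/lvm.conf here, then repack:
find . | cpio -o -H newc | gzip > /boot/initrd.img-$(uname -r)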

Jasper
  • Hi, thanks for the suggestion; I will try inspecting them later tonight. The strange thing is that after installing the new kernel .deb, update-initramfs and update-grub followed immediately. – zeratul021 Nov 08 '10 at 14:37
  • Something similar happened to me with two RAID arrays that were needed to boot. They didn't start any more in the initramfs, although update-initramfs ran fine. I had to manually change the way mdadm looked for the RAID arrays in mdadm.conf and then rerun update-initramfs. – Jasper Nov 08 '10 at 16:47
  • I commented on the post below regarding lvm.conf. I found out that when I run the command lvm, then vgscan and vgchange -ay, and drop out of the initramfs shell, I boot like I'm supposed to. So the problem is somewhere in the initramfs: it doesn't activate LVM. Just for the record, /boot is on a separate partition. – zeratul021 Nov 09 '10 at 10:33
  • Your problem is still with update-initramfs not working properly. Maybe you should see if there's an update for initramfs-tools and then try update-initramfs. If this doesn't work, you should still look inside the initramfs image at lvm.conf. – Jasper Nov 09 '10 at 12:33
  • Sadly, I don't know how to configure LVM; all I ever did was during installation. The next hint is that another virtual machine with exactly the same disk layout fails in the exact same way, so I need to dig into why the LVs are not activated at boot time. – zeratul021 Nov 10 '10 at 21:45
0

I've got the same problem in my environment, running Red Hat 7.4 as a KVM guest. I'm running qemu-kvm-1.5.3-141 and virt-manager 1.4.1. At first I was running Red Hat 7.2 as the guest without any problem, but after upgrading the minor release from 7.2 to 7.4 and the kernel to the latest version 3.10.0-693.5.2, something went wrong and it could no longer boot my /var LV partition. The system went into emergency mode asking for the root password. After entering the root password and running the commands lvm vgchange -ay and systemctl default, I was able to activate my /var LV and boot the system.

I haven't figured out what causes this issue, but my workaround was to include the LV /var in /etc/default/grub as you see below:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_local/root rd.lvm.lv=vg_local/var rd.lvm.lv=vg_local/swap rhgb quiet biosdevname=0 net.ifnames=0 ipv6.disable=1"

Then I had to run grub2-mkconfig -o /boot/grub2/grub.cfg and check that rd.lvm.lv=vg_local/var was included in the vmlinuz line of /boot/grub2/grub.cfg. After rebooting the system, I no longer got the error when activating my /var LV, and the system completes the boot process successfully.
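
In other words (the grep is only there to confirm the parameter ended up in the generated config):

grub2-mkconfig -o /boot/grub2/grub.cfg
grep rd.lvm.lv=vg_local/var /boot/grub2/grub.cfg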

Nisse Engström
0

I figured out that in my case the grub root was root=/dev/vgname/root,

so the following test in /usr/share/initramfs-tools/scripts/local-top/lvm2

  # Make sure that we have a d-m path
  dev="${dev#/dev/mapper/}"          
  if [ "$dev" = "$1" ]; then         
    return 1                         
  fi      

was always false, so the root volume was never activated.

I updated /etc/fstab from

/dev/vgname/root        /

to

/dev/mapper/vgname-root   /

and did:

update-grub
grub-install /dev/sda

which solved my problem.
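
A quick way to confirm the change is to check which root= device grub now passes (vgname stands for the actual volume group name, as above):

grep 'root=/dev/mapper/vgname-root' /boot/grub/grub.cfg
cat /proc/cmdline    # after a reboot, this should show the same root= value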

exeral
0

We ran into this problem and found that disabling lvmetad, by setting use_lvmetad=0 in /etc/lvm/lvm.conf, forced the volumes to be found and made accessible at boot.
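
For reference, the setting lives in the global section of /etc/lvm/lvm.conf; depending on the distribution you may also need to rebuild the initramfs afterwards so the change is seen at boot (Debian-style command shown as an example):

# in /etc/lvm/lvm.conf, global section:
#     use_lvmetad = 0
# then rebuild the initramfs so the setting is honoured during early boot:
update-initramfs -u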

eMBee