LVM volume activation before fstab mounts

I am trying to mount LVM volumes during startup of my Debian Squeeze system. Since the LVM volumes/groups are, for some reason, inactive by default, they need to be activated before any mounting can be done. Furthermore, the volume group my volumes belong to has its physical volume on a logical volume of another volume group (nested LVM). Therefore I cannot use the default /etc/init.d/lvm2 init script, but wrote my own, which first activates the outer level of LVM volumes and then the ones I want to mount:

~# cat /etc/init.d/lvm2_vtt
#!/bin/sh
### BEGIN INIT INFO
# Provides:          lvm2_vtt
# Required-Start:    mountdevsubfs udev
# Required-Stop:     
# Should-Start:      mdadm-raid cryptdisks-early multipath-tools-boot
# Should-Stop:       umountroot mdadm-raid
# Default-Start:     S
# Default-Stop:      0 6
# X-Start-Before:    checkfs mountall
# X-Stop-After:      umountfs
### END INIT INFO

SCRIPTNAME=/etc/init.d/lvm2_vtt

. /lib/lsb/init-functions

[ -x /sbin/vgchange ] || exit 0

do_start()
{
    # debug marker: verifies that the script is actually run during boot
    echo "bla" > /root/hah
    modprobe dm-mod 2> /dev/null || :
    # first pass: scan for physical volumes and activate the outer
    # volume groups
    /sbin/vgscan --ignorelockingfailure --mknodes || :
    /sbin/vgchange -aly --ignorelockingfailure || return 2
    # second pass: the outer logical volumes are now visible as block
    # devices, so rescan and activate the nested volume group on them
    /sbin/vgscan
    /sbin/vgchange -ay agvtt-volume
}

do_stop()
{
    # deactivate the nested volume group before the outer ones that
    # hold its physical volume
    /sbin/vgchange -an agvtt-volume
    /sbin/vgchange -aln --ignorelockingfailure || return 2
}

case "$1" in
  start)
    log_begin_msg "Setting up LVM Volume Groups"
    do_start
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  stop)
    log_begin_msg "Shutting down LVM Volume Groups"
    do_stop
    case "$?" in
        0|1) log_end_msg 0 ;;
        2) log_end_msg 1 ;;
    esac
    ;;
  restart|force-reload)
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop}" >&2
    exit 3
    ;;
esac
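
A manual test run, plus a quick way to check the resulting activation state afterwards (in the lvs output, the fifth character of the lv_attr column is 'a' for active volumes):

~# /etc/init.d/lvm2_vtt start
~# /sbin/vgs
~# /sbin/lvs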

That script works: I can execute it manually, and it does everything it should. I activated it using update-rc.d lvm2_vtt defaults, which works (although it complains that some runlevels don't match):

~# ls -g /etc/rcS.d
total 4
-rw-r--r-- 1 root 447 Mar 24  2012 README
lrwxrwxrwx 1 root  24 Oct 23 12:18 S01mountkernfs.sh -> ../init.d/mountkernfs.sh
lrwxrwxrwx 1 root  14 Oct 23 12:18 S02udev -> ../init.d/udev
lrwxrwxrwx 1 root  26 Oct 23 12:18 S03mountdevsubfs.sh -> ../init.d/mountdevsubfs.sh
lrwxrwxrwx 1 root  18 Oct 23 12:18 S04bootlogd -> ../init.d/bootlogd
lrwxrwxrwx 1 root  18 Mar  1 11:26 S04lvm2_vtt -> ../init.d/lvm2_vtt
lrwxrwxrwx 1 root  21 Oct 23 12:18 S05hostname.sh -> ../init.d/hostname.sh
lrwxrwxrwx 1 root  25 Oct 23 12:18 S05hwclockfirst.sh -> ../init.d/hwclockfirst.sh
lrwxrwxrwx 1 root  22 Oct 23 12:18 S06checkroot.sh -> ../init.d/checkroot.sh
lrwxrwxrwx 1 root  20 Oct 23 12:18 S07hwclock.sh -> ../init.d/hwclock.sh
lrwxrwxrwx 1 root  24 Oct 23 12:18 S07ifupdown-clean -> ../init.d/ifupdown-clean
lrwxrwxrwx 1 root  27 Oct 23 12:18 S07module-init-tools -> ../init.d/module-init-tools
lrwxrwxrwx 1 root  17 Oct 23 12:18 S07mtab.sh -> ../init.d/mtab.sh
lrwxrwxrwx 1 root  20 Oct 23 12:18 S08checkfs.sh -> ../init.d/checkfs.sh
lrwxrwxrwx 1 root  18 Oct 23 12:18 S09ifupdown -> ../init.d/ifupdown
lrwxrwxrwx 1 root  21 Oct 23 12:18 S09mountall.sh -> ../init.d/mountall.sh
....
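
I suspect the runlevel complaint just means the defaults arguments disagree with the LSB header. On Squeeze, insserv computes the ordering from those headers, so one way to re-register the script purely from its header would be (a sketch, not verified on every setup):

~# update-rc.d -f lvm2_vtt remove
~# insserv lvm2_vtt

Whether the computed ordering really places the script before checkfs and mountall could then be checked in the dependency files insserv generates, e.g. with grep lvm2_vtt /etc/init.d/.depend.boot.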

So, my init script is executed before mountall, which should mount the fstab entries. My fstab now looks as follows:

~# cat /etc/fstab

# the local partitions
proc                                      /proc            proc    defaults        0       0
UUID=07791c3e-5388-4edc-b30f-a4b4f2dbcb33 none             swap    sw              0       0
UUID=6522596a-210d-47ab-8894-e6259ffd99ee /                ext3    defaults        0       1

# our lvm volumes, secured and unsecured. Get the uuids using blkid.
UUID=66a66e81-9eb8-4ce8-a370-f3a48ece289e /space/secured   xfs     defaults        0       0
UUID=9e74cbd4-d3a0-4047-8466-74c00c14542a /space/unsecured xfs     defaults        0       0

# these are simpler aliases
/space/unsecured                          /unsecured       bind    bind            0       0
/space/secured                            /secured         bind    bind            0       0

As you can see, the LVM volumes (XFS file systems) are mounted first, and bind mounts then expose them at simpler locations.
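
(As an aside, the canonical fstab form for bind mounts uses none as the file system type and bind only as the mount option; if the bind type above ever causes trouble, the same entries could be written as:)

/space/unsecured                          /unsecured       none    bind            0       0
/space/secured                            /secured         none    bind            0       0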

What I am seeing now is that after boot, the LVM volumes are neither activated nor mounted. (They cannot be mounted while inactive.)
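
One thing that might help narrow this down is logging what the script actually sees at boot time, for example at the top of do_start (the log path is an arbitrary choice of mine):

{
    date
    /sbin/vgs
    /sbin/lvs
    ls -l /dev/mapper
} >> /root/lvm2_vtt.log 2>&1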

What am I missing here?

janoliver

Posted 2013-03-01T10:45:03.017

Reputation: 163

Consider using /dev/mapper/... or LABEL=... for the first fstab field (in case you have to deal with volume restoration from a snapshot down the road...). – thomp45793 – 2017-02-13T04:39:39.177
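
To illustrate that suggestion: with the volume group agvtt-volume (device-mapper doubles the dash inside the VG name) and hypothetical LV names lv_secured and lv_unsecured, the two LVM entries would read roughly:

/dev/mapper/agvtt--volume-lv_secured      /space/secured   xfs     defaults        0       0
/dev/mapper/agvtt--volume-lv_unsecured    /space/unsecured xfs     defaults        0       0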

No answers