
This is really messing with my plan to back up this machine...

I have a server that acts as a KVM hypervisor for several virtual machines. One of these runs Docker, which keeps its Docker volumes on /dev/vdb; that disk is set up as an LVM PV, on which Docker uses the devicemapper storage driver in direct-lvm mode to store container data. This virtual disk is itself an LVM LV on the host's local disk.

Both host and guest run Fedora 21.
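
(For reference, which host LV backs the guest's /dev/vdb can be confirmed with virsh domblklist; assuming the libvirt domain is named after the guest's hostname, it looks something like this, with the other disks omitted:)

[root@host ~]# virsh domblklist docker2.example.com
Target     Source
------------------------------------------------
vdb        /dev/vm-volumes/docker2.example.com-volumes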

The host's view of this volume is (only the relevant volume is shown):

[root@host ~]# lvs
  LV                           VG         Attr       LSize
  docker2.example.com-volumes vm-volumes -wi-ao---- 40.00g
[root@host ~]# dmsetup ls --tree
vm--volumes-docker2.example.com--volumes (253:10)
 └─ (9:125)

The guest's view of this volume is (again, only the relevant volume is shown):

[root@docker2 ~]# pvs
  PV         VG             Fmt  Attr PSize  PFree
  /dev/vdb   docker-volumes lvm2 a--  40.00g    0 

With all the other LVM volumes on the host, I can take a snapshot with lvcreate --snapshot, back up the snapshot, and then lvremove it with no issue. But with this particular volume, I can't lvremove the snapshot because it is in use:

[root@host ~]# lvremove /dev/vm-volumes/snap-docker2.example.com-volumes 
  Logical volume vm-volumes/snap-docker2.example.com-volumes is used by another device.
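
For reference, the snapshot-and-backup workflow that works for every other volume is nothing special; roughly (the size and volume name here are illustrative):

[root@host ~]# lvcreate --snapshot --size 4G --name snap-somevm-root /dev/vm-volumes/somevm-root
  ... back up /dev/vm-volumes/snap-somevm-root ...
[root@host ~]# lvremove /dev/vm-volumes/snap-somevm-root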

Eventually I discovered that device-mapper on the host had somehow worked out that this logical volume snapshot contained an LVM PV, and had then proceeded to map the logical volumes within the snapshot on the host (only the relevant volumes are shown):

[root@host ~]# dmsetup ls --tree
vm--volumes-docker2.example.com--volumes (253:10)
 └─vm--volumes-docker2.example.com--volumes-real (253:14)
    └─ (9:125)
docker--volumes-docker--data (253:18)
 └─vm--volumes-snap--docker2.example.com--volumes (253:16)
    ├─vm--volumes-snap--docker2.example.com--volumes-cow (253:15)
    │  └─ (9:125)
    └─vm--volumes-docker2.example.com--volumes-real (253:14)
       └─ (9:125)
docker--volumes-docker--meta (253:17)
 └─vm--volumes-snap--docker2.example.com--volumes (253:16)
    ├─vm--volumes-snap--docker2.example.com--volumes-cow (253:15)
    │  └─ (9:125)
    └─vm--volumes-docker2.example.com--volumes-real (253:14)
       └─ (9:125)

These correspond exactly to the logical volumes inside the VM:

[root@docker2 ~]# lvs
  LV          VG             Attr       LSize
  docker-data docker-volumes -wi-ao---- 39.95g
  docker-meta docker-volumes -wi-ao---- 44.00m

Notably, it doesn't do this to the original LV when the system boots; it only happens when I take a snapshot.

What is going on here? I really don't want device-mapper inspecting the contents of LVM snapshots to see if there's anything within them it can unhelpfully map for me. Can I suppress this behavior? Or do I need to create the snapshot via some other method?

Michael Hampton

3 Answers


Sometimes the relevant documentation is hidden away in configuration files rather than in, say, the documentation. So it seems with LVM.

By default, LVM automatically attempts to activate volumes on any physical devices that get connected to the system after boot, as long as all of the volume group's PVs are present and lvmetad and udev (or, more recently, systemd) are running. When the LVM snapshot is created, a udev event fires, and since the snapshot contains a PV, lvmetad automatically runs pvscan on it, and so forth.
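
(The hook that wires this up is a udev rule shipped with lvm2; on Fedora that is normally /usr/lib/udev/rules.d/69-dm-lvm-metad.rules, though the exact file name and path may differ between versions. It can be located with something like:)

[root@host ~]# grep -rl pvscan /usr/lib/udev/rules.d/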

By looking at /etc/lvm/backup/docker-volumes I was able to determine that lvmetad had explicitly run pvscan on the snapshot by using the device major and minor numbers, which bypassed LVM filters that would normally prevent this. The file contained:

description = "Created *after* executing 'pvscan --cache --activate ay 253:13'"
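
(253:13 here is a device-mapper major:minor pair; which device it refers to at any given moment can be cross-checked against the Maj and Min columns of dmsetup info, e.g.:)

[root@host ~]# dmsetup info -c | awk '$2 == 253 && $3 == 13'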

This behavior can be controlled by setting the auto_activation_volume_list in /etc/lvm/lvm.conf. It allows you to set which volume groups, volumes, or tags are allowed to be activated automatically.

So, I simply set the list to contain both of the volume groups for the host; anything else won't match the list and does not get activated automatically.

auto_activation_volume_list = [ "mandragora", "vm-volumes" ]
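
One caveat, presumably: the list only affects future automatic activation, so any of the guest's LVs that were already mapped on the host would still need a one-time manual deactivation before the stuck snapshot can be removed; something along these lines should do it:

[root@host ~]# vgchange -an docker-volumes
[root@host ~]# lvremove /dev/vm-volumes/snap-docker2.example.com-volumes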

The guest's LVM volumes are no longer appearing on the host, and finally, my backups are running...

Michael Hampton

You want to edit the filter value in /etc/lvm/lvm.conf so that it scans only the physical devices on the KVM host; the default value accepts every block device, which includes the LVs themselves. The comment above the default value is fairly comprehensive and does a better job of explaining usage than I can.
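
A rough sketch (the accept pattern has to match wherever the host's PVs actually live; the (9:125) in the question's dmsetup tree suggests an MD RAID device, hence the example below):

filter = [ "a|^/dev/md.*|", "r|.*|" ]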

Craig Miskell
  • Note that I added the filter, and ran `pvscan --cache` to tell lvmetad about the new filter, and `pvscan` now states the PV is being rejected by a filter, but the problem persists. – Michael Hampton Mar 27 '15 at 06:16
  • I assume you mean the inability to remove the snapshot. At this stage, it might be tricky, and I can only offer vague suggestions. If rebooting the KVM host is out of the question (and I acknowledge that is a sledgehammer approach), then perhaps 'lvchange -an /path/to/LV' from the host will release its hold. If not that, then you're probably into experimenting with various dmsetup operations to try and bypass the LVM tools. It gets hairy there though, and I don't feel comfortable recommending any specific operations. – Craig Miskell Mar 27 '15 at 06:29
  • The filter does nothing because lvmetad is scanning the snapshot explicitly in response to a udev event. The solution turned out to be something else in the configuration, though... – Michael Hampton Mar 27 '15 at 06:33

I encountered roughly the same problem in combination with vgimportclone. It would sometimes fail with this:

  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  Physical volume "/tmp/snap.iwOkcP9B/vgimport0" changed
  1 physical volume changed / 0 physical volumes not changed
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  Volume group "insidevgname" successfully changed
  /dev/myvm-vg: already exists in filesystem
  New volume group name "myvm-vg" is invalid
Fatal: Unable to rename insidevgname to myvm-vg, error: 5

At that point, if I wanted to destroy the snapshot, I first had to temporarily disable udev because of the bug described at https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1088081
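
(One way to pause udev event processing for the duration is sketched below; note the bug report may describe a different or more specific workaround:)

udevadm control --stop-exec-queue
  ... deactivate and remove the volumes ...
udevadm control --start-exec-queue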

But even then, after seemingly deactivating the nested volume group successfully, the partition mapping for the nested PV, which had been created by kpartx, somehow remained in use.

The culprit appeared to be that device-mapper had kept an extra parent mapping using the old volume group name, which showed up like this in the tree listing:

insidevgname-lvroot (252:44)
 └─outsidevgname-myvm--root-p2 (252:43)
    └─outsidevgname-myvm--root (252:36)

The solution was to simply remove that particular mapping with dmsetup remove insidevgname-lvroot. After that, kpartx -d and lvremove worked fine.
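
(For completeness, the cleanup sequence ended up looking roughly like this; names are the ones from the tree above, and the final lvremove target would be whichever snapshot or clone LV was being destroyed:)

dmsetup remove insidevgname-lvroot
kpartx -d /dev/outsidevgname/myvm-root
lvremove outsidevgname/myvm-root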

Josip Rodin