
We're experimenting with DRBD/pacemaker on top of an existing Debian 6 Encrypted RAID1 setup. We have one comparatively general and one comparatively specific question:

1) DRBD - backed vs. physical volume (general design option question)

We want to run a number of virtual servers in our DRBD/Pacemaker setup. Having looked at using LVM as a backing device (and gotten great help on this forum - thank you DOC), it seems that if we want to spin up and tear down logical volumes on the fly, we might be better off putting LVM on top of DRBD, i.e. using the DRBD device as a physical volume. Does this sound right?

For our purposes, which is the better choice: DRBD as a backing device for LVM, or the DRBD device as a physical volume? Is there a design option that would let us have a single DRBD device, put all of the logical volumes on that, and so end up with a simpler drbd and Pacemaker config? If we were to continue using LVM as a backing device, would you create one DRBD resource per logical volume and define our Pacemaker CRM resources accordingly?
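
For context, the "LVM on top of DRBD" layout we have in mind would be roughly the following (the device /dev/drbd0 and the volume group name vg_replicated are just placeholders):

    # on the current DRBD Primary only: make the DRBD device a PV and build a VG on it
    pvcreate /dev/drbd0
    vgcreate vg_replicated /dev/drbd0
    # logical volumes for the virtual servers could then come and go
    # without touching drbd.conf at all
    lvcreate -L 10G -n vm_test1 vg_replicated
    lvremove /dev/vg_replicated/vm_test1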

Part of the complexity for us is that, on top of these issues, we're using Encrypted RAID1, so we've been struggling with the disk/device mappings in drbd.conf.

2) LVM filter problem (more specific question)

In the "Configuring a DRBD resource as a Physical Volume" documentation, it has you adjust lvm.conf after you create a physical volume on your drbd: filter = [ "a|drbd.|", "r|.|" ], write_cache_state = 0, and then wipe the lvm cache.

Problem is, once we do this, we can't get any of pvscan, vgscan, or lvscan to work, and we need the volume group to be active in order to add or modify our next logical volume on the DRBD device. One setup guide said you need to update the initramfs; after doing that we couldn't boot the machine anymore (it's a test machine, so that's just inconvenient).

Q: What are we doing wrong here? The documentation seems to suggest that after tweaking lvm.conf you should be able to use things like "vgchange -aey volumegroup", but all of our runs of this come back blank.

Is this the sort of thing where we need to temporarily revert lvm.conf to its original filter, add a logical volume, and then change lvm.conf back? FYI - if we boot with the original filter, our DRBD device mounts, but we get errors on the tty saying the encrypted device couldn't start...so we're assuming that's not the answer.
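
For reference, these are roughly the checks we run after changing the filter (the volume group name is a placeholder; all of them come back empty):

    # see which physical volumes LVM can still find (verbose output shows rejected devices)
    pvscan -vv
    pvs
    vgs
    # what we ultimately need to work before adding/modifying LVs
    vgchange -aey our_vg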

Help appreciated (happy to post any configs or logs as requested...just not sure what would be helpful)!

madog

1 Answer


If I understand your requirements correctly, this is what I would be doing:

  1. Create a single DRBD device and mark it as a PV for LVM. Create Pacemaker resources for the DRBD volume and for each of the LVM logical volumes, with the logical volumes depending on the DRBD volume (a rough crm sketch is at the end of this answer).

  2. Your pv filter looks correct. Have you verified that the DRBD device is correctly marked as a PV? If it doesn't have metadata on it, it won't show up. Try using the pvck command to verify this:

    $ sudo pvck /dev/sda1
      Found label on /dev/sda1, sector 1, type=LVM2 001
      Found text metadata area: offset=4096, size=192512
    

    You could also try replacing your filter with a/.*/ so it scans every volume, although unless you've messed with how DRBD devices are named, the filter you listed should work just fine. I think it's more likely that the metadata is missing.

Also try running pvscan -d to get more debugging data.
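
To expand on #1, a rough crm configure sketch for the single-DRBD-plus-LVM layout might look like the following. The resource names, the DRBD resource "r0", and the volume group "vg_replicated" are placeholders, and the LVM agent here activates the whole volume group rather than individual LVs; filesystem or VM resources for each LV would then depend on it:

    primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
    ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
    primitive p_lvm_vg ocf:heartbeat:LVM \
        params volgrpname="vg_replicated" \
        op monitor interval="30s"
    colocation c_lvm_on_drbd inf: p_lvm_vg ms_drbd_r0:Master
    order o_drbd_before_lvm inf: ms_drbd_r0:promote p_lvm_vg:start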

Insyte
  • Thanks Insyte!! Will follow up on the suggestions in #1, and try pvck...will post back. Appreciate it! – madog Mar 05 '13 at 18:22
  • We tried pvck and it works until we adjust the LVM filter, not afterwards (things come back blank, and stay that way after a reboot). We think this may be a "/dev/mapper" problem from using Encrypted RAID1 LVM along with DRBD. Is our drbd.conf wrong? In an earlier build we tried using "/dev/mapper…" in the disk statement, but then couldn't get Pacemaker to work. Should we revisit that decision? In the latest build we are trying "/dev/dm-7", which we think points to the encrypted device. Our DRBD device section is: device /dev/drbd1 minor 1; disk /dev/dm-7; meta-disk internal; – madog Mar 06 '13 at 15:00
  • So it works if the filter in lvm.conf is empty or at the default of `.*`? If so, then just let it be. The filter exists to keep LVM from scanning devices it shouldn't. If that's not a problem, you're OK. – Insyte Mar 07 '13 at 06:17
  • Further thoughts: I have not used encrypted RAID-1 and I'm not familiar with the "/dev/mapper" problem. However, I would suggest starting from first principles: Create your encrypted RAID-1 set. Create a filesystem and mount it. Now you know for sure which device is the right one. This is the device that must be matchable by the filter in lvm.conf. Zero out the partition table and "bless" the device with `drbdadm create-md`. Bring up the DRBD pair. If that works, you should be good to go with LVM. – Insyte Mar 07 '13 at 06:19
  • Thanks Insyte - that's basically what we're doing....at this point what would be helpful is: is there a way to wipe out the DRBD metadata? Not sure how to ask this, but we keep getting things working, moving on to the next step, and finding that what was working is now getting in the way (we currently have "minor 1" conflict messages). Is there some way to put DRBD completely back to square one? We're finding some stuff in the documentation that suggests this (e.g., detach) but keep getting errors. – madog Mar 07 '13 at 12:15
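
A rough "back to square one" sequence for a test DRBD resource, along the lines of what's described above (the resource name r0 is a placeholder, and this destroys whatever is on the backing device):

    # on both nodes: take the resource down and wipe its metadata
    drbdadm down r0
    drbdadm wipe-md r0
    # recreate the metadata and bring the resource back up
    drbdadm create-md r0
    drbdadm up r0
    # on ONE node only: force it to become primary and sync over the peer
    drbdadm -- --overwrite-data-of-peer primary r0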