
My server runs Oracle Enterprise Linux 5.4 (RHEL 5) on an HP blade (x64) with QLogic HBAs connected to an EMC CLARiiON SAN.

We are migrating from multipath to PowerPath because EMC and the company storage team will not support multipath.

Currently my 3 LVM volume groups use the /dev/dm-X devices that device-mapper/multipath creates:

  • vg01 is one whole-disk partition on a 25 GB LUN
  • vg02 spans three 16 GB LUNs, no partitions
  • vg03 is one 1 TB LUN, no partitions

(Experiment 1) I stop multipathd, disable it in chkconfig, and add the following filter to lvm.conf:

filter = [ "a|/dev/emc.*|", "a|/dev/cciss.*|", "r/.*/" ]

and when I reboot:

  • vg01 is undetectable
  • vg02 detected successfully
  • vg03 detected successfully

vg01 is not detected on its emcpower disk, even though I can see the LVM metadata there with dd. The other two VGs are detected just fine. Also, all the dm-X devices are still in /dev/.
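Since LVM filter patterns are ordinary regexes, the accept rules from experiment 1 can be sanity-checked outside LVM with grep. A minimal sketch; the device names below are invented examples, not taken from this host:

```shell
# Hedged sketch: emulate the accept patterns from
# filter = [ "a|/dev/emc.*|", "a|/dev/cciss.*|", "r/.*/" ]
# against example device names (invented, not from the real server).
results=""
for dev in /dev/emcpowera /dev/cciss/c0d0p1 /dev/sda /dev/dm-3; do
    if echo "$dev" | grep -qE '/dev/emc.*|/dev/cciss.*'; then
        results="$results $dev:accepted"
    else
        results="$results $dev:rejected"   # falls through to r/.*/
    fi
done
echo "$results"
```

Anything that is neither an emc* nor a cciss* device (including the dm-X nodes) falls through to the final reject-all pattern.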

(2) So I remove the filter and enable the catch-all blacklist in multipath.conf:

blacklist {
    devnode "*"
}

Now on reboot there are no more dm-X devices in /dev/, and vg02 and vg03 are found on their emcpower devices, but vg01 is still not detected.

(3) I reboot with both the filter and the blacklist; vg01 is still not detected, while vg02 and vg03 are fine.

Can anyone help me figure out why this volume group seems to be undetectable without device-mapper/multipath?

And can someone explain the relationship between LVM, device-mapper, and multipath?

jwinders

1 Answer

I don't currently have access to EMC equipment to verify this, but I had to set it up at several previous jobs. If I remember right, you had to use this filter line:

filter = [ "r/sd./", "a/./" ]

This rejects any sd devices (sda, sdb, etc.), then accepts everything else. Of course, if you are booting off an internal disk that shows up as /dev/sda, then you will have to specify something like:

filter = [ "r/sd[b-z]/", "a/./" ]
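LVM tries the filter patterns in order, and the first pattern that matches a device decides accept or reject. A minimal shell sketch of that behaviour for the filter above; the device names are invented for illustration:

```shell
# Hedged sketch of LVM's first-match-wins filter semantics for
# filter = [ "r/sd./", "a/./" ]. Device names are invented examples.
lvm_filter() {
    dev="$1"
    if echo "$dev" | grep -q 'sd.'; then
        echo "reject"       # r/sd./ matched first
    elif echo "$dev" | grep -q '.'; then
        echo "accept"       # a/./ matches anything else
    fi
}
```

With this ordering, /dev/sda is rejected by the first pattern before the accept-all pattern is ever consulted, while emcpower and cciss devices fall through to it.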

Edit: I found a configuration line in my old notes (I think this was for RHEL 4, but it should still work). This filter is for an HP server that boots off an internal RAID controller (cciss) and has PowerPath for the data drives:

filter = [ "a|^/dev/cciss/.*|", "a|^/dev/emcpower.*|", "a|^/dev/loop.*|", "r/.*/" ]

So this accepts the cciss devices, the emcpower devices, and any loopback device, and rejects everything else (normal regex rules apply here).

To answer the last part of your question: when LVM does a scan, it looks in /proc/partitions for any device that matches its accept/reject filters, and scans those block devices for LVM labels. The first block device it finds carrying a particular PV label is the one that gets used. Now with the SAN, both /dev/sda and /dev/sdg (for example) map to the same data, and so does /dev/emcpowera (the command "powermt display dev=all" should give you the proper mappings). Hopefully this helps.
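The duplicate-path situation above can be sketched in shell: several device nodes expose the same PV label, and an LVM-style scan keeps the first one that passes the filter. Device names and the UUID string are invented for illustration:

```shell
# Hedged simulation: three "paths" carrying the same PV label; the scan
# keeps the first device that passes the filter (names are invented).
set -- "/dev/sda:UUID-vg01" "/dev/sdg:UUID-vg01" "/dev/emcpowera:UUID-vg01"
chosen=""
for entry in "$@"; do
    dev=${entry%%:*}
    uuid=${entry##*:}
    # filter: reject sd* paths, accept everything else
    case "$dev" in /dev/sd*) continue ;; esac
    [ -z "$chosen" ] && chosen="$dev"
done
echo "PV $uuid found on: $chosen"
```

With the sd* paths filtered out, only the emcpower node is left to carry the PV, which is exactly what the filter in the answer is trying to arrange.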

Derek Pressnall
  • I have the LVM filters pretty well under control. The question centers on why LVM would see its label (the PV info) on the disk when presented by multipath/device-mapper but seemingly not see the same label on the same disk when presented by PowerPath. I have confirmed that the PV info is there on the disk using 'dd' and 'less', both with PowerPath running and with multipath running. – jwinders May 21 '12 at 19:21
  • Ah, I believe the other problem is that the initial ramdisk would need to be rebuilt after installing PowerPath. It may be that the system does a vgscan from the initrd, which, if it hasn't been rebuilt since adding the PowerPath drivers, wouldn't see the PowerPath disks. I believe the mkinitrd command finds drivers to include by looking at /etc/modprobe.conf (or the /etc/modprobe.d directory on newer Red Hat builds). Make sure to save a copy of your existing initrd in case something borks along the way. – Derek Pressnall May 21 '12 at 21:11
  • initrd was updated for PowerPath use previously. It is important to note (and I have edited the formatting of the post to make this clearer) that of the 3 SAN volumes, the disappearing physical volume only happens with one VG and not the other two. – jwinders May 22 '12 at 13:57