
Update: I have managed to get things to a point where all my drives are listed by "multipath -l". Now the question is how to add them to a ZFS zpool. All attempts at using dm* names or /dev/mapper names fail with "device busy" or "already active", and I can't find the right syntax for vdev_id.conf either. dmesg definitely reports both of my expanders, and all 24 drives are listed once per expander. LSI says they don't support this functionality on the 9211-8i; it's listed under "supported features", but it's up to the buyer to figure out how to make failover or multipath actually work. They offer a more integrated solution, of course, in which they do support these things. Shocker :-\
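
For reference, this is the shape of vdev_id.conf I have been trying, based on the vdev_id.conf(5) man page. The alias names and the second WWID below are made up for illustration, not taken from my system:

# /etc/zfs/vdev_id.conf
# Let vdev_id resolve disks through their device-mapper multipath maps
multipath yes

# Alternatively, explicit aliases, one per multipath WWID:
alias d01 /dev/disk/by-id/dm-uuid-mpath-35000c50004415bcb
alias d02 /dev/disk/by-id/dm-uuid-mpath-35000c500044xxxxx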

Can anyone comment or point me in the right direction on the following? I am setting up a CentOS (Rocks 6) box with an LSI 9211-8i HBA (IT mode) and a 24-drive Supermicro SAS expander. If I connect both cables to the expander I get 48 devices, so from what I have read I presume I need to set up multipath. But I'm having trouble finding guidance on creating a working multipath.conf: multipath seems to detect all of the matching device IDs, yet I never end up with any devices listed in "multipath -l". I'm also not sure whether this setup supports true multipath or just failover; I think what may be missing is the ability for the driver to figure out which paths have the higher priority, among other things. Here is the verbose output for one of the drives:

Apr 08 21:16:23 | found multiple paths with wwid 35000c50004415bcb, multipathing sdaw
Apr 08 21:16:23 | Found matching wwid [35000c50004415bcb] in bindings file. Setting alias to mpathp
Apr 08 21:16:23 | sdy: ownership set to mpathp
Apr 08 21:16:23 | sdy: not found in pathvec
Apr 08 21:16:23 | sdy: mask = 0xc
Apr 08 21:16:23 | sdy: get_state
Apr 08 21:16:23 | sdy: path checker = readsector0 (controller setting)
Apr 08 21:16:23 | sdy: checker timeout = 30000 ms (sysfs setting)
Apr 08 21:16:23 | sdy: state = running
Apr 08 21:16:23 | sdy: state = 3
Apr 08 21:16:23 | sdy: state = running
Apr 08 21:16:23 | sdy: detect_prio = 2 (config file default)
Apr 08 21:16:23 | sdy: prio = const (config file default)
Apr 08 21:16:23 | sdy: const prio = 1
Apr 08 21:16:23 | sdaw: ownership set to mpathp
Apr 08 21:16:23 | sdaw: not found in pathvec
Apr 08 21:16:23 | sdaw: mask = 0xc
Apr 08 21:16:23 | sdaw: get_state
Apr 08 21:16:23 | sdaw: path checker = readsector0 (controller setting)
Apr 08 21:16:23 | sdaw: checker timeout = 30000 ms (sysfs setting)
Apr 08 21:16:23 | sdaw: state = running
Apr 08 21:16:23 | sdaw: state = 3
Apr 08 21:16:23 | sdaw: state = running
Apr 08 21:16:23 | sdaw: detect_prio = 2 (config file default)
Apr 08 21:16:23 | sdaw: prio = const (config file default)
Apr 08 21:16:23 | sdaw: const prio = 1
Apr 08 21:16:23 | mpathp: pgfailback = 15 (controller setting)
Apr 08 21:16:23 | mpathp: pgpolicy = multibus (controller setting)
Apr 08 21:16:23 | mpathp: selector = round-robin 0 (controller setting)
Apr 08 21:16:23 | mpathp: features = 0 (internal default)
Apr 08 21:16:23 | mpathp: hwhandler = 0 (controller setting)
Apr 08 21:16:23 | mpathp: rr_weight = 2 (controller setting)
Apr 08 21:16:23 | mpathp: minio = 1 rq (config file default)
Apr 08 21:16:23 | mpathp: no_path_retry = -2 (controller setting)
Apr 08 21:16:23 | pg_timeout = NONE (internal default)
Apr 08 21:16:23 | mpathp: retain_attached_hw_handler = 1 (config file default)
Apr 08 21:16:23 | mpathp: set ACT_CREATE (map does not exist)
Apr 08 21:16:23 | mpathp: domap (0) failure for create/reload map
Apr 08 21:16:23 | mpathp: ignoring map
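
In case it is relevant, this is roughly the multipath.conf I am working from. It is a minimal sketch; the blacklist entry reflects an assumption about my own layout (sda is the local boot disk), not general advice:

# /etc/multipath.conf
defaults {
    user_friendly_names yes
}
blacklist {
    # assumption: sda is the local boot disk, not behind the expander
    devnode "^sda$"
}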

1 Answer


I essentially blew away as much as possible and started over, and the second time around things worked as expected. I probably had one or two variables in multipath.conf that mucked it up the first time. This time I let multipathd start with no config file and then made small edits. As of CentOS 6.3, I think this is the best way to start.
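
Roughly the steps I used to start over; this is a sketch from memory, so treat the exact commands as approximate:

service multipathd stop
multipath -F                     # flush all existing multipath maps
rm -f /etc/multipath/bindings    # drop stale user_friendly_names bindings
mpathconf --enable               # regenerate a stock /etc/multipath.conf (RHEL/CentOS 6)
service multipathd start
multipath -l                     # confirm the maps come back before touching ZFS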

Creating the ZFS pools went without issue once the multipath configuration had been rebuilt from scratch.
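
For reference, the pool creation that worked was along these lines; the pool name, the raidz2 layout, and the map names are illustrative rather than exactly what I ran:

zpool create tank raidz2 \
    /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc \
    /dev/mapper/mpathd /dev/mapper/mpathe /dev/mapper/mpathf
zpool status tank    # each vdev should appear as a single mpath device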
