
I'm desperately trying to improve the performance of my SAN connection.

Here's what I have:

[root@xnode1 dell]# multipath -ll
mpath1 (36d4ae520009bd7cc0000030e4fe8230b) dm-2 DELL,MD36xxi
[size=5.5T][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 18:0:0:0  sdb 8:16  [active][ready]
 \_ 19:0:0:0  sdd 8:48  [active][ghost]
 \_ 20:0:0:0  sdf 8:80  [active][ghost]
 \_ 21:0:0:0  sdh 8:112 [active][ready]

And multipath.conf:

defaults {
    udev_dir        /dev
    polling_interval    5
    prio_callout        none
    rr_min_io       100
    max_fds         8192
    user_friendly_names yes
    path_grouping_policy    multibus
    default_features    "1 fail_if_no_path"
}
blacklist {
    device {
        vendor "*"
        product "Universal Xport"
    }
}
devices {
    device {
        vendor "DELL"
        product "MD36xxi"
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio_callout "/sbin/mpath_prio_rdac /dev/%n"
    }
}
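
For reference, after changing multipath.conf the daemon has to re-read it before the new settings show up in multipath -ll. A minimal sketch of applying and verifying the change, assuming the RHEL 5-era device-mapper-multipath tooling the output above suggests (init script verbs may vary by distro):

# make multipathd pick up the edited multipath.conf
service multipathd reload

# flush unused maps and rebuild them with the new settings
multipath -F
multipath -v2

# verify the resulting topology
multipath -ll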

And the sessions:

[root@xnode1 dell]# iscsiadm  -m session
tcp: [13] 10.0.51.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
tcp: [14] 10.0.50.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
tcp: [15] 10.0.51.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
tcp: [16] 10.0.50.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
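
To double-check that each session is bound to the NIC it should use, the verbose session listing prints the local interface next to each portal; a quick check with the open-iscsi tools already in use here:

iscsiadm -m session -P 3 | egrep "Current Portal|Iface Netdev|Iface IPaddress"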

I'm getting very poor read performance:

dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000
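
(A note on the test itself: reading through the page cache can distort the numbers, so a cache-bypassing variant plus per-path reads may help isolate a slow path. The sdX names come from the multipath -ll output above; on an RDAC array the ghost paths are expected to refuse direct reads, so only the ready ones are tested.)

# bypass the page cache when reading the multipath device
dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000 iflag=direct

# read each 'ready' path on its own to spot a slow NIC or portal
for dev in sdb sdh; do
    echo "== /dev/$dev =="
    dd if=/dev/$dev of=/dev/null bs=1M count=1000 iflag=direct
done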

The SAN is configured as follows:

   CTRL0,PORT0 : 10.0.50.220
   CTRL0,PORT1 : 10.0.50.221
   CTRL1,PORT0 : 10.0.51.220
   CTRL1,PORT1 : 10.0.51.221

And on the host :

   IF0 : 10.0.50.1
   IF1 : 10.0.51.1

(Dual 10GbE Ethernet Card Intel DA2)

It's connected to a 10GbE switch dedicated to SAN traffic.
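
As a sanity check, each host interface should reach exactly the portals on its own subnet; a quick sketch (eth0/eth1 are placeholders for the actual interface names):

# IF0 (10.0.50.1) should see the .50 portals, IF1 (10.0.51.1) the .51 ones
for ip in 10.0.50.220 10.0.50.221; do ping -c 2 -I eth0 $ip; done
for ip in 10.0.51.220 10.0.51.221; do ping -c 2 -I eth1 $ip; done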

My question is: why are two of the paths set up as 'ghost' rather than 'ready', as I'd expect from an active/active configuration?

– growse

  • As I answered in the other question, it might be your disks, and we need more information. – Basil Jul 01 '12 at 00:26
  • possible duplicate of [How to best tune Dell PowerVault MD3600i SAN/Initiators for best performance?](http://serverfault.com/questions/402733/how-to-best-tune-dell-powervault-md3600i-san-initiators-for-best-performance) – jscott Jul 01 '12 at 01:56

2 Answers

The Dell MD series are all LSI clones (like the entry-level IBM DS boxes), and thus use RDAC for multipathing. RDAC is an active/passive multipath mechanism; there's nothing you can do about it.

Note: I haven't used the MD3600 yet, but the statement holds for the 3000 and 3200 series, and I doubt anything has changed except for some extra specs (like the switch to 10GbE).

EDIT: apparently it is possible to switch to active/active now; it's best to call Dell tech support for a walkthrough.

– dyasny

The product documentation does state ALUA active/active LUN access, but this is wrong: the array uses an LSI-based chipset and should run in RDAC multipath mode.

The original post makes the mistake of putting both ports of each controller on the same logical network:

CTRL0,PORT0 : 10.0.50.220
CTRL0,PORT1 : 10.0.50.221
CTRL1,PORT0 : 10.0.51.220
CTRL1,PORT1 : 10.0.51.221

This is the correct layout, with each controller having one port on each logical network:

CTRL0,PORT0 : 10.0.50.220
CTRL0,PORT1 : 10.0.51.220
CTRL1,PORT0 : 10.0.50.221
CTRL1,PORT1 : 10.0.51.221

Notice that in the proper configuration each logical network can reach both controllers instead of just one. Next, on the SAN, all LUNs should have the same preferred path; otherwise I/O has to wait for a non-preferred path to become active, which leads to poor performance.
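
To see which paths point at the owning controller, the per-path priorities can be queried with the same callout the multipath.conf above already uses; a small sketch:

# a higher value means the path goes to the owning (preferred) controller
for dev in sdb sdd sdf sdh; do
    echo -n "/dev/$dev: "
    /sbin/mpath_prio_rdac /dev/$dev
done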

– James