7

We have a new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1 Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most performance out of it until we jump over to 10 Gbit iSCSI.

From what I can read in the docs, the recommendation is to use only two ports per SP, 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of the ports.

Also - I'm a bit unsure about the zoning. The best practices guide states that you should separate the ports on each SP from each other onto different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports?

It also gives the following example:

Example from techbook

Does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go through one switch (if both A1 and B1 are passive)?

Edit: Another brain puzzle


I don't get it - each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1 Gbit and serverB has 100 Mbit, then the resulting bandwidth between them is 100 Mbit. How can this result in some kind of oversubscription?
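A toy sketch (Python, purely illustrative) of the reasoning above - that end-to-end throughput is bounded by the slowest link on the path:

```python
# Toy illustration of the argument above: throughput between two endpoints
# is limited by the slowest link in the path, so a fast host talking to a
# slow host only ever gets the slow host's speed.
def effective_bandwidth_mbit(*link_speeds_mbit: int) -> int:
    """End-to-end throughput is bounded by the slowest hop."""
    return min(link_speeds_mbit)

# serverA at 1 Gbit talking to serverB at 100 Mbit -> 100 Mbit effective
print(effective_bandwidth_mbit(1000, 100))
```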

Edit4: Wait, what. Active and passive ports? The VNX runs an ALUA configuration with asymmetric active/active... there shouldn't be any passive ports, only preferred ones.

pauska
  • 19,532
  • 4
  • 55
  • 75

3 Answers

4

What EMC's documents seem to be describing is two separate IP broadcast domains - two separate fabrics on separate hardware - so that a misconfiguration in a given switch, a switching loop, or some such doesn't bring down all storage connectivity.

Along these lines:

Storage Fabric

I personally think it's a little nuts to keep creating additional fabrics for each port per SP, though - I'd say just split them up evenly among the storage fabrics; SP A's other two ports would be 10.168.10.9 for the one plugged into fabric 1, and 10.168.11.9 for the one plugged into fabric 2.
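To make that concrete, here's a rough sketch (plain Python, purely illustrative) of what an even split could look like. Only the two .9 addresses come from the paragraph above; the port names, the remaining addresses, and the /24 masks are assumptions:

```python
# Hypothetical addressing plan for "two fabrics, ports split evenly".
# Only the two .9 addresses are from the answer; everything else is made up
# just to show the shape of the layout.
fabric_layout = {
    "fabric1 (10.168.10.0/24)": {      # first switch / broadcast domain
        "SPA-P0": "10.168.10.8",       # assumed
        "SPA-P2": "10.168.10.9",       # SP A's "other" port on fabric 1
        "SPB-P0": "10.168.10.10",      # assumed
        "SPB-P2": "10.168.10.11",      # assumed
    },
    "fabric2 (10.168.11.0/24)": {      # second switch / broadcast domain
        "SPA-P1": "10.168.11.8",       # assumed
        "SPA-P3": "10.168.11.9",       # SP A's "other" port on fabric 2
        "SPB-P1": "10.168.11.10",      # assumed
        "SPB-P3": "10.168.11.11",      # assumed
    },
}

# Each host then gets one initiator NIC in each fabric, and its multipathing
# software balances and fails over across whatever targets it can see.
for fabric, ports in fabric_layout.items():
    print(fabric, "->", ", ".join(f"{p}={ip}" for p, ip in ports.items()))
```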

The client's multipathing should handle all the load balancing and failover. And how the heck are you supposed to put a client with two HBAs into 4 VLANs, anyway? Clients can handle two targets visible from a given initiator just fine.

(no idea on the "oversubscription" thing.)

Shane Madden
  • 112,982
  • 12
  • 174
  • 248
  • Thanks - these were my thoughts as well. Seems like I can just save the extra SP ports for DMZ or something like that in the future, as I only have two HBAs per ESX host. – pauska Oct 05 '11 at 09:31
0

No, no. We want all 8 ports on the same subnet. You never want iSCSI traffic to cross subnets. It'll just slow down while going through the routers. You want both SPs connected to each switch. E0,E2 should be connected to Switch0 and E1,E3 should be connected to Switch1.
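For comparison, a quick sketch (again just illustrative Python; the subnet and port names are assumptions) of the single-subnet wiring described above:

```python
# Single-subnet alternative: all 8 SP ports in one subnet, with each SP's
# ports spread across both switches. Subnet and port names are assumptions.
iscsi_subnet = "10.168.10.0/24"   # hypothetical
wiring = {
    "Switch0": ["SPA-E0", "SPA-E2", "SPB-E0", "SPB-E2"],
    "Switch1": ["SPA-E1", "SPA-E3", "SPB-E1", "SPB-E3"],
}

for switch, ports in wiring.items():
    print(f"{switch}: {', '.join(ports)} (all in {iscsi_subnet})")
```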

Not sure what you're seeing in the Edit2 screenshot that makes you want to go after your sales person with a watermelon (can I watch while you do this)? Slots and ports are different things.

You'll want to install PowerPath (a software package you purchase separately) so that you get the best possible MPIO setup.

mrdenny
  • 27,074
  • 4
  • 40
  • 68
  • So the EMC Best Practices guide for Block for the VNX is wrong? You should put all the ports on one SP on the same subnet/VLAN? PowerPath is not an option right now due to budget constraints; we'll see about that next year. Regarding the watermelon: you're correct - I mixed up ports and slots. – pauska Oct 04 '11 at 06:46
  • EMC may list that as a best practice, but I've never heard of it before, and some of their "best practices" are kind of stupid (like putting all your fibre channel switches into a single fabric). If you put the iSCSI traffic across two subnets, there will be added delay when connecting from the iSCSI client to the array whenever data has to cross between subnets. You'll also end up killing the performance on your router. – mrdenny Oct 04 '11 at 18:49
  • Damn, and I was really looking forward to the "watermelon show". – mrdenny Oct 04 '11 at 18:50
  • 1
    One of us got lost in translation here... there is no routing involved, just two different (separate) VLANs with one (unique) subnet on each. The EMC tech showed up today and explained that I need to create 4 VLANs (with 4 subnets, one for each) to be able to use 4 ports per SP. – pauska Oct 04 '11 at 18:54
  • If you've got each port on its own VLAN and there's no routing between those subnets, then do you have 4 iSCSI NICs in each server? Otherwise there's potential for cross-VLAN traffic, and that'll require a router. – mrdenny Oct 04 '11 at 19:05
  • Each port on ONE SP is separated onto a different VLAN: SPA-P0 and SPB-P0 are on one, SPA-P1 and SPB-P1 are on another, etc. (layout sketched just after these comments). There is no cross-VLAN traffic at all, and it doesn't require a router. VMware can tag VLAN traffic on a trunk. – pauska Oct 04 '11 at 19:15
  • So then what IP do you assign to the iSCSI NICs? – mrdenny Oct 04 '11 at 19:21
  • You shouldn't use two NICs on the same machine in the same subnet. Doing so only blows up your routing and ARP tables with misleading information. – Vinícius Ferrão Apr 03 '14 at 07:54
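A sketch (illustrative Python; the VLAN IDs and subnets are assumptions) of the four-VLAN layout pauska describes in the comments above, where port N on SP A is paired with port N on SP B:

```python
# Four VLANs, one per SP port pair, as described in the comments above.
# VLAN IDs and subnets are made up; only the pairing scheme comes from the thread.
vlan_layout = {
    "VLAN 10 (10.10.10.0/24)": ["SPA-P0", "SPB-P0"],
    "VLAN 11 (10.10.11.0/24)": ["SPA-P1", "SPB-P1"],
    "VLAN 12 (10.10.12.0/24)": ["SPA-P2", "SPB-P2"],
    "VLAN 13 (10.10.13.0/24)": ["SPA-P3", "SPB-P3"],
}

# ESX hosts trunk all four VLANs and tag traffic per VMkernel port, so no
# routing between the subnets is needed.
for vlan, ports in vlan_layout.items():
    print(vlan, "->", " + ".join(ports))
```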
0

It seems the best answer would be two separate networks, à la Fibre Channel, that don't route to each other. Put all the ports on them; possibly two active and two passive if that's a config requirement, otherwise all active with ALUA if possible.

Sven
  • 97,248
  • 13
  • 177
  • 225
Mate
  • 11