
I am in the process of re-doing the iSCSI network at my work. We currently have the following equipment:

  • 1x Dell PowerConnect 6224 switch
  • 1x Dell PowerVault MD3000 SAN connected to 2x Dell PowerEdge 1950 servers providing iSCSI
  • 1x Dell PowerVault MD3000i SAN
  • 2x Dell PowerEdge 2950 servers running ESX 3.5, soon to be ESX 4 - each has 6 NICs
  • 2x Dell ??? servers that have just been ordered for 2 more ESX hosts - each has 8 NICs

Current setup:
All iSCSI traffic is on its own switch in the 192.168.1.x network. All other network traffic is on its own switch in the 10.10.x.x network. Each ESX server has 2 NICs teamed (1 onboard Broadcom NIC and 1 Intel Pro 1000 NIC) in an active/active state, connected to the single PC 6224 switch dedicated to iSCSI. All 4 NIC ports on the back of the MD3000i are connected to the same switch as well.

The problem with this setup is that the switch is a large single point of failure. We are trying to correct this by setting up a 2-switch network for iSCSI traffic for redundancy. I have 2 new PowerConnect 6224 switches that we will be using for this new network. The current switch that we have for iSCSI traffic will then be used either for redundancy on the LAN side of the network or as a vMotion-only network between the 4 ESX servers. (vMotion is currently a crossover connection between the 2 ESX servers.)

I have talked with Dell on a few occasions trying to figure out this new network setup before we get the 2 new ESX servers that will be connecting to the MD3000i where our virtual machines are stored. I have come to the conclusion that it would be best to do the following (a rough sketch of the switch commands follows the list):

  • Enable flow control on the switches - not currently set up
  • Enable spanning-tree portfast on the switches - not currently set up
  • Set up jumbo frames on the switches, NICs and SAN - not currently set up
  • Set up a 2-port LAG between the 2 switches
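
Roughly, I expect the switch-side configuration to look something like the sketch below. The syntax is from my reading of the PowerConnect 62xx CLI guide, and the port ranges and LAG ports are just placeholders for our iSCSI and inter-switch ports, so it still needs to be verified against the 6224 CLI reference before we apply anything:

    ! PowerConnect 6224 sketch only - port numbers are placeholders
    console# configure
    ! enable flow control globally
    console(config)# flowcontrol
    ! iSCSI host and SAN ports: portfast plus jumbo frames
    console(config)# interface range ethernet 1/g1-1/g20
    console(config-if)# spanning-tree portfast
    console(config-if)# mtu 9216
    console(config-if)# exit
    ! 2-port LAG for the inter-switch link
    console(config)# interface range ethernet 1/g23-1/g24
    console(config-if)# channel-group 1 mode auto
    console(config-if)# exit
    console(config)# interface port-channel 1
    console(config-if)# mtu 9216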

I am not sure that stacking the 2 PowerConnect switches is the best idea, because if the master switch were to fail, the stack would reboot, causing a network outage while the stack re-elects a new master. To me that sounds like it would not provide the redundancy/HA that we are looking for.

Since the MD3000i has 4 connections for iSCSI traffic (2 for controller 0 and 2 for controller 1), the plan is to connect the controller 0 side to switch A and the controller 1 side to switch B, and then have a connection from each ESX server to each switch for the iSCSI traffic.

My confusion about the setup comes with how the ESX server is configured. I am not sure how the 2 teamed NICs should be handled. From my understanding, teamed NICs have to be connected to the same switch, but we would be connecting them to 2 switches. Would we need to break the teaming and create a new vSwitch for the connections to switch A and switch B?

Is there a better way to configure this network or is the direction I am attempting to go the best?

Update: I am in the process of reading the iSCSI configuration guide for ESX 4. Will post back/mark answered once I finish reading that document.

DanielJay

2 Answers


Nicely structured approach and you're asking all the right questions. Your suggested redesign is excellent.

ESX 3.5 doesn't really do iSCSI Software Initiator multipathing, but it will happily fail over to another active or standby uplink on the vSwitch if a link fails for any reason. The VI3.5 iSCSI SAN Configuration Guide has some information on this - not as much as I'd like, but it is clear enough. You shouldn't have to do anything on the ESX side when you change over, but you will no longer get any link aggregation effects (because your uplinks are going to two separate, non-stacked switches), only failover. Given the weakness of multipathing in the ESX 3.5 iSCSI stack this probably won't have any material effect, but it might because you have multiple iSCSI targets, so bear it in mind. I'm sure you know this already, but jumbo frames are not supported with the Software Initiator on ESX 3.5, so that's not going to do anything for you until you move to ESX 4.
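
If it helps, on the 3.5 hosts you can leave failover to the vSwitch teaming policy and just confirm from the service console that both uplinks are attached to the iSCSI vSwitch - something like the following, where vSwitch1/vmnic1/vmnic2 are placeholders for your own names:

    # ESX 3.5 service console - list vSwitches, port groups and their uplinks
    esxcfg-vswitch -l
    # List physical NICs and their link state
    esxcfg-nics -l
    # If an uplink is missing, attach it to the iSCSI vSwitch (placeholder names)
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1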

In setting up the ESX vSwitch and VMkernel ports for iSCSI with ESX 4, the recommendation is to create multiple VMkernel ports with a 1:1 mapping to uplink physical NICs. If you want to create multiple vSwitches for this you can, or you can use the NIC teaming options at the port group level so that you have a single NIC designated as active per VMkernel port with 1 or more as standby. Once you have the ports/vSwitch configured you then need to bind the ports to the iSCSI multipath stack, and it will then handle both multipathing and failover more efficiently. Given the way this works there is no need to worry about teaming across the switches - the multipath driver is doing the work at the IP layer. This is just a quick idea of how it works; it is described in very good detail in the VI 4 iSCSI SAN Configuration Guide. That will explain everything you need to do, including how to set up jumbo frame support properly.
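
As a rough illustration of that 1:1 mapping (all of the names, IPs and the vmhba number below are placeholders - follow the guide for the real procedure):

    # ESX 4 service console - enable jumbo frames on the iSCSI vSwitch
    esxcfg-vswitch -m 9000 vSwitch1
    # Create one VMkernel port group per physical uplink
    esxcfg-vswitch -A iSCSI-1 vSwitch1
    esxcfg-vswitch -A iSCSI-2 vSwitch1
    # Create the VMkernel interfaces with a 9000-byte MTU
    esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 -m 9000 iSCSI-1
    esxcfg-vmknic -a -i 192.168.1.12 -n 255.255.255.0 -m 9000 iSCSI-2
    # In the vSphere Client, override the teaming on each port group so that
    # each VMkernel port has a single active vmnic (iSCSI-1 on one uplink,
    # iSCSI-2 on the other)
    # Bind both VMkernel ports to the software iSCSI adapter and verify
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33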

As far as the stacking is concerned, I don't think you need or want to do it for this config; in fact Dell's recommended design for MD3000i iSCSI environments is not to stack the switches, as far as I can recall, for precisely the reason you mention. For other iSCSI solutions (EqualLogic) high-bandwidth links between arrays are required, so stacking is recommended by Dell, but I've never had a satisfactory explanation of what happens when the master fails. I'm pretty sure the outage during the new master election will be shorter than the iSCSI timeouts, so VMs shouldn't fail, but it's not something I'm comfortable with, and things will definitely stall for an uncomfortable period of time.

Helvick

Same switch will mean mode-4 bonding; you can go for failover instead (ESX should be able to support that). Any type of bonding that provides failover and doesn't need to be configured on the switch should do, IMO.
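
To put that in Linux bonding terms (just the analogy, not the ESX configuration itself) - mode 4 (802.3ad/LACP) needs a matching LAG on the switch, while mode 1 (active-backup) needs nothing on the switch side:

    # /etc/modprobe.conf - hypothetical example
    alias bond0 bonding
    # Failover only, no switch configuration required
    options bond0 mode=active-backup miimon=100
    # vs. the switch-dependent variant (needs an LACP LAG on the switch):
    # options bond0 mode=802.3ad miimon=100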

dyasny