6

We inherited a SAN/VMware/iSCSI setup with both Dell and HP equipment purchased for it but not fully set up, and we need to get some additional switches to make it all work. However, we're getting conflicting feedback from the vendors on what's needed for the inter-links between the switches.

This is a diagram of what we believe we want to end up with - http://www.gthomson.us/san-vmware-project2.jpg

ESXi 5.0 environment, running about 60 Windows VMs - we'll be adding more once we're further along.

For the most part, the left 2/3rds of the diagram is set up - HP servers for ESXi and an HP P4000 LeftHand SAN. But currently there is just a single HP ProCurve 2810G switch dedicated to 1gb iSCSI. We want to make that redundant on higher-end switches.

We also have a Dell EqualLogic PS6010E SAN and a couple of Dell R900 servers. The EqualLogic is 10gb iSCSI, and no switches were purchased to put it to use.

We'd like to have two realms: one with redundant 1gb iSCSI switches for the HP LeftHand SAN and HP servers, and one with redundant 10gb iSCSI switches for the Dell EqualLogic SAN and Dell servers.

We'd use one vCenter setup to manage both realms and migrate systems between them (1gb and 10gb) if/when it makes sense to. When we migrate from one to the other, we want to migrate both the host and the storage (i.e. we would not want the host on the 1gb side and the storage on the 10gb side, or vice-versa).

One vendor tells us none of the switches need interlinks for this all to work. The other vendor tells us all the switches need to be interlinked for this to work.

From what I understand so far (and I'm very new to all of this, hence the diagram to help me understand what's happening and what's needed), on the 10gb side of things the switches would need to be inter-linked because the EqualLogic requires it.

On the 1gb side (left side of the diagram), do the two 1gb switches need to be inter-linked for the HP LeftHand SAN, or for some other reason? My gut feeling is that this inter-link isn't needed, because the LeftHand SAN makes sure all data is written to at least two different physical devices, so the 'link redundancy' is handled that way rather than with inter-links at the switch level.

Do there need to be inter-links between the 1gb side and the 10gb side at the switches, and if so, why? And how would those inter-links be done - each 1gb switch to each 10gb switch, using the uplink ports on the 1gb switches into regular ports on the 10gb switches? My understanding was that vMotion, for both host migrations and storage migrations, happens across the vMotion NICs rather than across the iSCSI links. If that's the case, why would direct links between the 1gb and 10gb switches be needed for anything? We don't have any goal of automatic failover from the 1gb side to the 10gb side or vice-versa.

  • You actually have 2 questions here and they should probably be separated: 1) Does LeftHand recommend/require inter-switch links, and 2) What are the networking requirements for vMotion between 2 ESXi clusters that don't share iSCSI storage? – longneck Dec 19 '12 at 21:15
  • And I fear that #2 will be the bastard. I'm sure it can't be done - you can't vMotion between hosts if they don't have access to the same shared storage. Thus, there will need to be a link and all hosts will need to have access to all storage, or else all of your migrations will have to be offline (not vMotion or svMotion). – mfinni Dec 19 '12 at 22:14

2 Answers

3

The HP LeftHand implementation guide specifies links between the switches.

I have found that during high load, the links between the switches require about 1 link per 2 nodes. Since you have 6 nodes, that would be 3 links for you. However, I tend to +1 when possible (to guard against port failure, allow for recabling without reduced performance, etc.), so 4 would be my final suggestion for you.
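If it helps to see that rule of thumb spelled out (the 1-link-per-2-nodes ratio and the extra spare are just my own sizing habit, not an HP requirement), a quick sketch:

```python
import math

def recommended_isl_count(storage_nodes, nodes_per_link=2, spare_links=1):
    """Rule-of-thumb inter-switch link count: roughly one link per two
    storage nodes under heavy load, plus a spare so a port failure or
    recabling doesn't cut into performance."""
    return math.ceil(storage_nodes / nodes_per_link) + spare_links

print(recommended_isl_count(6))  # 6 nodes -> 3 links, +1 spare = 4
```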

longneck
  • +1 My thoughts exactly (though I might add another link or two for "really safe", *4 ought to be enough for anybody*) – Chris S Dec 19 '12 at 21:25
3

vMotion migrates the active state of a virtual machine; it does not migrate the virtual machine's disks or configuration files (unless you're using Storage vMotion to migrate the VM's storage to another datastore). As such, the source and destination hosts both need access to the virtual machine's disks and configuration files, which is why shared storage is required to migrate a powered-on virtual machine to another host.
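To make that requirement concrete, here's a tiny pyvmomi-style sketch (the helper name is mine, and `vm` / `dest_host` are assumed to be already-resolved `vim.VirtualMachine` and `vim.HostSystem` objects from a live vCenter session - not something VMware ships):

```python
def can_vmotion(vm, dest_host):
    """A powered-on vMotion is only possible if the destination host can see
    every datastore holding the VM's disks and configuration files."""
    missing = set(vm.datastore) - set(dest_host.datastore)
    for ds in missing:
        print("Destination host cannot see datastore:", ds.name)
    return not missing
```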

What I see as a possible workaround would be to connect one host from each realm to the other realm's shared storage (rather than connecting each realm's switching infrastructure). This would then allow you to migrate virtual machines from one realm to the other using these two hosts as "placeholders". When migrating a VM, you would first vMotion the VM to the placeholder host and then perform a second migration (Storage vMotion) to move the VM's storage from the source realm's datastore to the target realm's datastore, both of which the placeholder host has access to. From there you can migrate the VM to any host in the target realm (see the sketch below). Since each placeholder host has a connection to both realms' shared storage, this should be doable without powering down any of the VMs.

NOTE: I don't have an infrastructure like yours to test with so I'm merely visualizing how it might be done and I think this method might work.
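For what it's worth, here's a rough pyvmomi sketch of that two-hop idea, just to illustrate the sequence: vMotion to the placeholder host, Storage vMotion onto the target realm's datastore, then vMotion onto a normal host in the target realm. All of the names (vCenter address, credentials, VM name, host names, datastore name) are placeholders I've made up, and I haven't run this against an environment like yours:

```python
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object (VM, host, datastore) in the inventory by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

def wait_for(task):
    """Block until a vCenter task completes, raising its fault on error."""
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(2)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error

vm          = find_by_name(vim.VirtualMachine, "some-windows-vm")            # VM to move
placeholder = find_by_name(vim.HostSystem, "esxi-placeholder.example.com")   # host cabled to both realms' storage
final_host  = find_by_name(vim.HostSystem, "esxi-dell-01.example.com")       # host in the 10gb realm
target_ds   = find_by_name(vim.Datastore, "equallogic-datastore-01")         # datastore in the 10gb realm

prio = vim.VirtualMachine.MovePriority.defaultPriority

# 1) vMotion to the placeholder host, which still sees the source realm's storage.
#    (If the placeholder lives in a different cluster you may also need to pass
#    its resource pool instead of None.)
wait_for(vm.MigrateVM_Task(pool=None, host=placeholder, priority=prio))

# 2) Storage vMotion the disks and config files onto the target realm's datastore.
wait_for(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds), prio))

# 3) vMotion from the placeholder onto a normal host in the target realm.
wait_for(vm.MigrateVM_Task(pool=None, host=final_host, priority=prio))

Disconnect(si)
```

Each step only works because the placeholder host can see both realms' datastores; the other hosts never need to.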

joeqwerty
  • This would probably work - since he says they have no goal of "automatic failover" between realms, he probably isn't hoping for DRS between them. – mfinni Dec 19 '12 at 23:32