We inherited a SAN/VMware/iSCSI setup with both Dell and HP equipment purchased for it but never fully set up, and we need to get some additional switches to make it all work. We're getting conflicting feedback from the vendors on what's needed for the inter-links between the switches.
This is a diagram of what we believe we want to end up with - http://www.gthomson.us/san-vmware-project2.jpg
ESXi 5.0 environment running about 60 Windows VMs; we'll be adding more as we get further along.
For the most part, the left two-thirds of the diagram is set up: HP servers for ESXi and an HP P4000 LeftHand SAN. Currently, though, there is just a single HP ProCurve 2810G switch dedicated to 1Gb iSCSI. We want to make that redundant with higher-end switches.
We also have a Dell EqualLogic PS6010E SAN and a couple of Dell R900 servers. The EqualLogic is 10Gb iSCSI, and no switches were purchased to put it to use.
We'd like to have two realms: one with redundant 1Gb iSCSI switches for the HP LeftHand SAN and HP servers, and one with redundant 10Gb iSCSI switches for the Dell EqualLogic and Dell servers.
We'd use one vCenter setup to manage both realms and migrate systems between the 1Gb and 10Gb sides if/when it makes sense to. When we migrate a VM from one realm to the other, we want to move both its compute and its storage (i.e. we wouldn't want the host on the 1Gb side and the storage on the 10Gb side, or vice versa).
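For reference, this is roughly how I picture a migration happening from the vCenter side. It's just a rough pyVmomi sketch; the vCenter address, VM name, and target host/datastore names are made up for illustration, and I realize a live combined compute+storage move may not be possible on 5.0, so in practice it might have to be two steps (vMotion then Storage vMotion) or a cold migration:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        # Return the first managed object of the given type with the given name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.DestroyView()

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter01.example.local", user="administrator",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_obj(content, vim.VirtualMachine, "test-vm-01")
        target_host = find_obj(content, vim.HostSystem, "dell-esxi-01.example.local")
        target_ds = find_obj(content, vim.Datastore, "equallogic-ds-01")

        # Relocate compute and storage together: new host (10Gb realm) and
        # new datastore (EqualLogic volume) in a single RelocateSpec.
        spec = vim.vm.RelocateSpec()
        spec.host = target_host
        spec.pool = target_host.parent.resourcePool
        spec.datastore = target_ds

        task = vm.RelocateVM_Task(spec)
        print("Relocation task started:", task.info.key)
    finally:
        Disconnect(si)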
One vendor tells us none of the switches need inter-links for this all to work. The other vendor tells us all the switches need to be inter-linked for this to work.
From what I understand so far (and I'm very new to all of this, hence the diagram to help me work out what's happening and what's needed), on the 10Gb side the switches would need to be inter-linked because the EqualLogic requires it.
On the 1Gb side (the left side of the diagram), do the two 1Gb switches need to be inter-linked for the HP LeftHand SAN, or for some other reason? My gut feeling is that this inter-link isn't needed, because the LeftHand SAN makes sure all data is written to at least two different physical storage nodes, so the redundancy is handled at that level rather than by inter-links at the switch level.
Do there need to be inter-links between the 1Gb side and the 10Gb side at the switch level, and if so, why? And how would those inter-links be done: each 1Gb switch to each 10Gb switch, using the uplink ports on the 1Gb switches going to regular ports on the 10Gb switches? My understanding was that vMotion, for both host migrations and storage migrations, happens across the vMotion NICs rather than across the iSCSI links. If that's the case, why would links directly between the 1Gb and 10Gb switches be needed for anything? We don't have any goal of automatic failover from the 1Gb side to the 10Gb side or vice versa.
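To double-check my assumption about which interfaces vMotion actually uses, I put together this rough pyVmomi sketch (the vCenter address and credentials are placeholders) that lists each host's vmkernel NICs and flags the ones actually selected for vMotion, as opposed to the ones sitting on the iSCSI port groups:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter01.example.local", user="administrator",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            # Ask the host which vmkernel NICs are candidates for, and selected for, vMotion.
            netcfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
            selected = set(netcfg.selectedVnic or [])
            for vnic in netcfg.candidateVnic or []:
                role = "vMotion" if vnic.key in selected else "-"
                print("  %-6s %-25s %-15s %s" % (
                    vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress, role))
        view.DestroyView()
    finally:
        Disconnect(si)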