I'm starting to explore VMware Distributed Switches (dvSwitches or VDS) for use in existing and new installations. Assume vSphere 5.1 or newer with Enterprise Plus licensing. Prior to this, I've made good use of standard vSwitches defined with the right types of physical uplinks (1GbE or 10GbE) and managed independently on individual hosts.
How does using a Distributed Switch help me in basic terms? Examining other installations and setups described on the internet, I see a lot of situations where the virtual management network or VMkernel interfaces remain on standard switches, with VM traffic going to distributed switches: a hybrid model. I've even seen recommendations to avoid distributed switches entirely! More than anything, though, the information I find online seems outdated. In a half-hearted attempt to convert one of my existing hosts, I wasn't sure whether the management VMkernel interface should stay on a standard vSwitch or move to the distributed switch, and I couldn't find a clear answer.
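For background on where I got stuck: the quickest way I've found to check where a host's VMkernel interfaces currently sit (standard portgroup versus a distributed port) is a short pyVmomi script along the lines below. The vCenter address and credentials are placeholders, and this is only a rough sketch against the vSphere API, not anything polished.

```python
#!/usr/bin/env python
# Rough sketch: list each ESXi host's VMkernel NICs and report whether they
# sit on a standard vSwitch portgroup or on a distributed switch port.
# Assumes pyVmomi; the vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()         # lab use only
si = SmartConnect(host="vcenter.example.com",  # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        print(host.name)
        for vnic in host.config.network.vnic:          # vmk0, vmk1, ...
            dvport = vnic.spec.distributedVirtualPort
            if dvport:
                location = "distributed switch %s (portgroup key %s)" % (
                    dvport.switchUuid, dvport.portgroupKey)
            else:
                location = "standard portgroup '%s'" % vnic.portgroup
            print("  %-6s %-15s -> %s" % (
                vnic.device, vnic.spec.ip.ipAddress, location))
finally:
    Disconnect(si)
```

Mostly this just confirms which vmk interfaces would need to migrate if the management network ever moves onto the VDS.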
So, what's the best-practice here? To use a combination of standard and distributed switches? Or is this just not a technology with good mindshare? How is this colored by the recent inclusion of LACP capabilities in VDS?
Here's a real-life new-installation scenario:
- HP ProLiant DL360 G7 servers with 6 x 1GbE interfaces serving as ESXi hosts (maybe 4 or 6 hosts).
- 4-member stacked switch solution (Cisco 3750, HP ProCurve or Extreme).
- NFS virtual machine storage backed by an EMC VNX 5500.
What's the cleanest, most resilient way to build this setup? I've been asked to use distributed switches and possibly incorporate LACP.
- Throw all 6 uplinks into one distributed switch and run LACP across different physical switch stack members?
- Associate 2 uplinks to a standard vSwitch for management and run a 4-uplink LACP-connected distributed switch for VM traffic, vMotion, NFS storage, etc.? (Roughly what I sketch out after this list.)
- ???
- Profit.
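To make that second option concrete, here's a plain-Python scratchpad of how I'm picturing the uplink layout, mainly to sanity-check that each switch keeps its uplinks spread across different stack members. The vmnic-to-stack-member mapping is just my assumption for illustration, not anything prescribed by the hardware.

```python
# Scratchpad for option 2: 2 uplinks on a standard vSwitch for management,
# 4 uplinks on an LACP-backed dvSwitch for VM traffic, vMotion and NFS.
# The vmnic -> stack-member mapping below is an assumption for illustration.
layout = {
    "vSwitch0 (management)": {
        "vmnic0": "stack-member-1",
        "vmnic1": "stack-member-2",
    },
    "dvSwitch0 (VM / vMotion / NFS, LACP)": {
        "vmnic2": "stack-member-1",
        "vmnic3": "stack-member-2",
        "vmnic4": "stack-member-3",
        "vmnic5": "stack-member-4",
    },
}

# Sanity check: every switch should land its uplinks on at least two
# different physical stack members, so losing a single member can't
# take out management, storage, or VM connectivity on its own.
for switch, uplinks in layout.items():
    members = set(uplinks.values())
    status = "OK" if len(members) >= 2 else "SINGLE POINT OF FAILURE"
    print(f"{switch}: {len(uplinks)} uplinks across "
          f"{len(members)} stack members -> {status}")
```

Is that roughly the shape people run in practice, or does the hybrid split buy me nothing over putting everything on the VDS?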