I'm starting to explore VMware Distributed Switches (dvSwitches or VDS) for use in existing and new installations. Assume vSphere 5.1 or newer with Enterprise Plus licensing. Prior to this, I've made good use of standard vSwitches defined with the right types of physical uplinks (1GbE or 10GbE) and managed independently on individual hosts.

How does using a Distributed Switch help me in basic terms? Examining other installations and setups described on the internet, I see a lot of situations where the virtual management network or VMkernel interfaces remain on standard switches, with VM traffic going to distributed switches: a hybrid model. I've even seen recommendations to avoid distributed switches entirely! But more than anything, the information I find online seems outdated. In a weak attempt to convert one of my existing servers, I wasn't sure where the management interface needed to be defined, and couldn't quite find a good answer on how to resolve that.

So, what's the best-practice here? To use a combination of standard and distributed switches? Or is this just not a technology with good mindshare? How is this colored by the recent inclusion of LACP capabilities in VDS?


Here's a real-life scenario for a new installation:

  • HP ProLiant DL360 G7 servers with 6 x 1GbE interfaces serving as ESXi hosts (maybe 4 or 6 hosts).
  • 4-member stacked switch solution (Cisco 3750, HP ProCurve or Extreme).
  • NFS virtual machine storage backed by an EMC VNX 5500.

What's the cleanest, most resilient way to build this setup? I've been asked to use distributed switches and possibly incorporate LACP.

  • Throw all 6 uplinks into one distributed switch and run LACP across different physical switch stack members?
  • Associate 2 uplinks to a standard vSwitch for management and run a 4-uplink LACP-connected distributed switch for VM traffic, vMotion, NFS storage, etc. (sketched below)?
  • ???
  • Profit.
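
For what it's worth, here's a rough pyVmomi sketch of the management piece of the second option, which is the part I think I understand. Hostnames, NIC names, and the VLAN ID are placeholders for my environment, and on a host being converted the existing management vmkernel port would need to be migrated rather than created fresh:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='root', pwd='...')
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net_sys = host.configManager.networkSystem

        # Dedicated management vSwitch using 2 of the 6 uplinks.
        vss_spec = vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(
                nicDevice=['vmnic0', 'vmnic1']))
        net_sys.AddVirtualSwitch(vswitchName='vSwitch-mgmt', spec=vss_spec)

        # Port group for the management vmkernel interface
        # (VLAN 10 is a placeholder).
        pg_spec = vim.host.PortGroup.Specification(
            name='Management Network', vlanId=10,
            vswitchName='vSwitch-mgmt', policy=vim.host.NetworkPolicy())
        net_sys.AddPortGroup(portgrp=pg_spec)

        # The management vmkernel port itself; on a converted host
        # you'd migrate the existing vmk0 here instead.
        vnic_spec = vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=True))
        net_sys.AddVirtualNic(portgroup='Management Network', nic=vnic_spec)

    Disconnect(si)

That would leave the remaining four uplinks (vmnic2 through vmnic5) for the LACP-connected distributed switch, which is the part I'm unsure about.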
ewwhite

2 Answers

The two main benefits of distributed switches are:

  1. More features.
    • LACP as you mentioned
    • Visibility into the network activity on each virtual port (so you can see the unicast/multicast/broadcast counters for a specific VM in the vCenter interface)
    • CDP advertisements from the vDS to the physical network devices
    • Mirroring/SPAN for monitoring or troubleshooting
    • NetFlow
    • Private VLANs
    • And they're required for some features, like Network I/O Control and the Cisco Nexus 1000V switch
  2. Easier management and configuration.
    • When adding a new host that has interfaces serving the port groups in a vDS, you just assign the interfaces to the switch and it's good to go, with all of the port groups configured; a rough sketch of that step follows below. (Host profiles can achieve pretty much the same end, but making changes in a host profile is much more of a pain.)
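
As a rough illustration of how little per-host work that is, here's a pyVmomi sketch of the host-add step. The vCenter, switch, host, and NIC names are all made up, and I normally just do this in a couple of clicks in the vSphere client rather than scripting it:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com',
                      user='administrator', pwd='...')
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Walk the vCenter inventory for the first object with this name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    dvs = find_by_name(vim.DistributedVirtualSwitch, 'dvSwitch-VM')
    host = find_by_name(vim.HostSystem, 'esx05.example.com')

    # Add the host with two of its pNICs as uplinks; every existing
    # distributed port group then applies to it automatically.
    spec = vim.DistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        host=[vim.dvs.HostMember.ConfigSpec(
            operation='add',
            host=host,
            backing=vim.dvs.HostMember.PnicBacking(
                pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice=nic)
                          for nic in ('vmnic2', 'vmnic3')]))])
    dvs.ReconfigureDvs_Task(spec)
    Disconnect(si)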

I've used them quite successfully since 4.1. It's a big improvement over the standard vSwitch, and it's awesome to be able to add a new VM port group to all hosts in a cluster or configure a new host's networking in two clicks, but I've always avoided using them for the hosts' management interfaces; it's always seemed like a bad idea.

Shane Madden
  • So you still end up using a hybrid model of standard and distributed switches? – ewwhite Jan 20 '13 at 19:09
  • @ewwhite Exactly; the way I've always set it up is to have the interfaces for the hosts' management vmkernel ports as standard vSwitches, and everything else as vDS. – Shane Madden Jan 20 '13 at 19:10
  • See the 6-pNIC server arrangement above. What makes more sense design-wise, incorporating LACP *and* VDS? – ewwhite Jan 20 '13 at 19:21
  • @ewwhite Second option sounds great, with the caveat that I haven't used the LACP feature in 5.1's vDS yet so I can't vouch for how well it works. – Shane Madden Jan 20 '13 at 19:36

I do know that a lot of the new features are not supported on standard switches, such as network rollback in case of misconfiguration and network health checking. You can also now save and restore your dVS configuration separately; not being able to recover it was a big problem for people, and is why some would have recommended avoiding the dVS entirely.

I guess there are a few reasons why you should use a dVS as opposed to standard switches in 5.1 setups:

  • The above-mentioned network configuration rollback and health checks
  • Ease of management. For vMotion etc. you generally need all your networking identical on all hosts; this is a pain, with a lot of room for error, when using standard switches. The process is much simpler with a dVS (see the sketch after this list). Because of these features you should also have your vmk ports on the dVS
  • It's my opinion that there won't be much more development on standard switches; I think everything is going to move more and more towards the dVS. I.e., I don't think features such as LACP will be brought to standard switches.
  • You can use Network I/O Control to manage uplink usage if you require it (if you're worried about vMotion saturating an uplink, etc.)
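
To illustrate the ease-of-management point: with standard switches you end up writing consistency checks like this pyVmomi sketch (the vCenter name and credentials are placeholders) just to confirm that port groups match across hosts, whereas a dVS defines them once, centrally, so there's nothing to drift:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com',
                      user='administrator', pwd='...')
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # Compare each host's standard port group names against the first
    # host; any mismatch is a candidate vMotion failure.
    baseline = None
    for host in view.view:
        pgs = {pg.spec.name for pg in host.config.network.portgroup}
        if baseline is None:
            baseline = pgs
        elif pgs != baseline:
            print('%s differs: %s' % (host.name, sorted(pgs ^ baseline)))

    Disconnect(si)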
Rqomey