
Hopefully an easy one for someone...

If I'm setting up either MPIO or a team/bond on a Windows server, does a channel group need setting up on the Cisco Catalyst switches they're connected to? Is there any benefit to doing this, or likewise any downside? We have plenty of free ports on the switches, so that's not an issue.

I assume...that a channel group should be created to leverage the benefits of teaming/MPIO on the server side of the equation?

Bart De Vos

3 Answers


I don't think it is something you want to do for MPIO. For teaming/bonding/link aggregation you might consider it, but you don't have to.

Is your aim to maximise throughput or redundancy?

For throughput, you would set up a channel group. For redundancy, you would want each NIC in the team connected to a different switch.

If you have stacked switches, or lots of NICs in the server, then you might be able to maximise both redundancy and throughput.

You could also spend too much time figuring this out. Are you maxing out a single connection?

dunxd
  • Both really. We have two stacked switches (obviously one NIC going into each switch). We're not maxing out a single connection at the moment, but while we're doing the switch upgrade we want to make the most of the kit we have, so that if load increases we already have the configuration in place to handle the extra throughput. – Toby LaRone Nov 29 '11 at 11:35

Assuming you aren't using NIC vendor proprietary technologies, NIC teaming usually means LACP (802.3ad), which you would also have to configure on the switch by creating a port-channel interface and bundling the physical links to that port-channel by using the channel-group interface configuration command. A common use case for teaming is increased throughput and physical link redundancy between a server and a switch (or a switch stack).
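
On the switch side that looks roughly like the following. This is a minimal sketch only: the interface range, port-channel number and VLAN are placeholders, and it assumes a Catalyst running IOS with the two teamed NICs plugged into Gi1/0/1 and Gi1/0/2.

    ! Bundle the two server-facing ports into an LACP port-channel.
    ! Gi1/0/1-2, Po10 and VLAN 10 are assumptions - substitute your own values.
    interface range GigabitEthernet1/0/1 - 2
     description Server NIC team
     switchport mode access
     switchport access vlan 10
     ! mode "active" = LACP (802.3ad); the server team must be set to LACP too
     channel-group 10 mode active
    !
    interface Port-channel10
     switchport mode access
     switchport access vlan 10

Using mode active (rather than on) means the switch actually negotiates LACP with the server instead of forcing the bundle up unconditionally.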

MPIO refers pretty much exclusively to path redundancy between an iSCSI/FCoE/SAS initiator and target. MPIO does not use LACP, and only needs to be configured on both endpoints of the iSCSI/FCoE/SAS connection. A common use case for MPIO is path redundancy between a server node and an iSCSI storage node, where either node has multiple physical interfaces.

MPIO is compatible with LACP if you want increased throughput and physical redundancy for your server-switch connection in addition to path redundancy for your storage connection. For example, if you have four NICs on your server, you can configure two NIC teams (LACP) between the server and the switch, and use MPIO between the server and an MPIO endpoint that will use these two logical paths.
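
As a rough sketch of that four-NIC example on the switch side (port names and port-channel numbers made up), you end up with two separate two-port bundles; MPIO itself is configured only on the server and the storage target, and simply sees two logical paths:

    ! Two separate LACP bundles for the four server NICs (placeholder ports).
    ! MPIO on the server then multipaths across the two resulting logical NICs.
    interface range GigabitEthernet1/0/1 - 2
     channel-group 11 mode active
    !
    interface range GigabitEthernet1/0/3 - 4
     channel-group 12 mode active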

tsmo
  • That's great, makes sense. We'll use MPIO on the NICs on the server and configure LACP on the two switch ports the server is connected to (which are for the iSCSI network). – Toby LaRone Nov 29 '11 at 11:36

For storage traffic that has an MPIO-compatible driver (DSM), you do not need LACP. Using multiple independent NICs with MPIO creates multiple paths which, subject to the capabilities of the DSM and the storage target, can be used round-robin for higher throughput, and which give you redundancy by virtue of being multiple paths.

LACP would complicate and bottleneck this, so the two wouldn't generally be used together: LACP reduces your number of endpoints (multiple NICs become one), which reduces the number of possible MPIO path combinations. Also, on the switches, LACP / port aggregation / EtherChannel uses a hash algorithm to determine which link in the aggregate to send data down. If you have only one destination for the traffic (the storage device), it will only ever use one link in the aggregate, so although LACP gains you redundancy it doesn't gain you throughput.

For non-storage traffic (everything that isn't MPIO capable), LACP is a valid choice: on the servers you can define the send load-balancing strategy, and on the switches you can configure the hashing algorithm to make the best use of the links with the global configuration command port-channel load-balance.
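
For example (a sketch only; the available hashing keywords vary by Catalyst platform, so check them with port-channel load-balance ?):

    ! Hash on source + destination IP so different conversations can be
    ! spread across the links in the bundle (keyword support varies by platform).
    configure terminal
     port-channel load-balance src-dst-ip
     end
    show etherchannel load-balance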

CGretski