
Question on the performance and function of LACP vs HP "Trunk".

For example, I have two HP 2910al-48G switches connected via two runs of CAT6 cable. Is there any difference in performance between setting up the link as LACP vs Trunk?

Second, a related question: if switch 1 is connected to server A via 10 Gbit/s fiber and switch 2 is connected to server B via 10 Gbit/s fiber, will a single file transfer between the two servers run at 1 Gbit/s or 2 Gbit/s?

The connection would look like this:

  • Server A
  • 10 Gbit/s fiber
  • Switch 1
  • Trunk (2 x CAT6 at 1 Gbit/s)
  • Switch 2
  • 10 Gbit/s fiber
  • Server B
JPoole
  • You want per-flow, not per-packet, distribution, so a single flow uses a single link. You can actually take a performance hit with per-packet distribution because it increases packet loss and reordering. Properly hashed, multiple flows will use the full capacity of both links. – Ron Maupin Jan 07 '20 at 22:22
  • So a single file copy between the two servers would be a single flow and therefore only 1 Gbit/s? The only option for a faster file copy would be to run 10 Gbit/s fiber between the two switches? – JPoole Jan 07 '20 at 23:20
  • That is correct. – Ron Maupin Jan 08 '20 at 06:07
  • See [this question](https://serverfault.com/q/626368/324849). – Ron Maupin Jan 08 '20 at 07:38

1 Answer


Is there any difference in performance between setting up the link as LACP vs Trunk?

No. LACP should be preferred because the protocol detects configuration and cabling errors that a static trunk would not catch. The performance is exactly the same.

If switch 1 is connected to server A via 10 Gbit/s fiber and switch 2 is connected to server B via 10 Gbit/s fiber, will a single file transfer between the two servers run at 1 Gbit/s or 2 Gbit/s?

For aggregated interfaces, the 2910al selects the egress interface for a frame by a hash of the source and destination MAC and IP addresses (their lower 5 bits) - this cannot be changed on the 2910al series. Accordingly, all communication using the same combination of IP and MAC addresses always uses the same interface. Since all traffic between the two servers uses one such combination, it crosses the LAG trunk on a single link and is limited to 1 Gbit/s in total, regardless of how many parallel flows it consists of.

Some switches additionally use the TCP or UDP port numbers in the hash - with those, multiple flows between the same two nodes might use different interfaces.

Those traffic distribution schemes ensure that frames in each flow always arrive in the order they were sent.
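
To make this concrete, here is a minimal Python sketch of hash-based egress selection. It illustrates the general mechanism only - not the 2910al's actual hash - and the `Flow` fields, the MD5-based hash, and the example addresses are assumptions made for the demo.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Flow:
    """Illustrative flow descriptor - not the switch's internal format."""
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    src_port: Optional[int] = None  # only considered by L4-aware switches
    dst_port: Optional[int] = None

def select_link(flow: Flow, num_links: int, use_l4: bool = False) -> int:
    """Pick the egress link by hashing the flow's address fields.

    The same field combination always maps to the same link, which is why
    frames of a single flow stay on one interface and arrive in order.
    """
    fields = [flow.src_mac, flow.dst_mac, flow.src_ip, flow.dst_ip]
    if use_l4 and flow.src_port is not None and flow.dst_port is not None:
        fields += [str(flow.src_port), str(flow.dst_port)]
    digest = hashlib.md5("|".join(fields).encode()).digest()
    return digest[0] % num_links

# All traffic between server A and server B hashes to the same member of
# a 2-link trunk when only MAC/IP addresses are considered:
a_to_b = Flow("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", "10.0.0.1", "10.0.0.2")
print(select_link(a_to_b, num_links=2))  # same result on every call

# An L4-aware switch could still spread two TCP sessions between the same
# hosts across both links, because the port numbers enter the hash:
ssh  = Flow("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", "10.0.0.1", "10.0.0.2", 50123, 22)
http = Flow("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", "10.0.0.1", "10.0.0.2", 50124, 80)
print(select_link(ssh, num_links=2, use_l4=True))
print(select_link(http, num_links=2, use_l4=True))
```

Note how the L2/L3-only variant pins everything between the two servers to one link, while the L4-aware variant can split separate TCP sessions across both members.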

To overcome this limitation, you could

  • upgrade the switch interconnect to 10GE, allowing 10 Gbit/s between the servers
  • configure multiple IP addresses on the servers' 10G interfaces and set up the software (if it supports it) to alternate between source/destination IP address pairs, ECMP-style. That way, multiple flows could use different interfaces, for a potential aggregate of 2 Gbit/s. Adding more IP addresses increases the chance that the flows spread across the trunk, and adding more links increases the maximum total bandwidth. Each single flow is still limited to 1 Gbit/s though - see the sketch after this list.
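
As a rough illustration of the second option, reusing the hypothetical `Flow`/`select_link` sketch above (the extra addresses are made up): giving each server several IP addresses creates several distinct address combinations, and different combinations can hash to different trunk members.

```python
from itertools import product

# Assumes Flow and select_link from the sketch above.
# Hypothetical additional addresses configured on each server's 10G interface:
server_a_ips = ["10.0.0.1", "10.0.1.1"]
server_b_ips = ["10.0.0.2", "10.0.1.2"]

for src_ip, dst_ip in product(server_a_ips, server_b_ips):
    flow = Flow("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", src_ip, dst_ip)
    print(f"{src_ip} -> {dst_ip}: link {select_link(flow, num_links=2)}")

# With luck the address pairs spread across both trunk members, so the
# aggregate between the servers can approach 2 Gbit/s, while each single
# flow (one address pair) stays capped at 1 Gbit/s.
```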
Zac67