
In my XenServer 7.3 host I have four Intel 1G network adapters. I configured NIC 1 as the management interface and NIC 0 & NIC 2 as an LACP bond based on IP and port (on my Cisco L3 switch I configured the same for the two connections). XenCenter displays a speed of 2G for the bond.

I assigned the bonded network adapter to my Windows Server 2016 guest OS. In Windows the network adapter (XenServer PV Network) shows a speed of only 1G. And it's true: I tested it against another PC (with two Intel network adapters, also configured as an LACP bond), and the file transfer rate is 110 MB/s.

Is there any way to get the 2G speed in the guest OS?

Corben

2 Answers


LACP doesn't work that way. Any one connection will get at most the speed of a single slave. Only when multiple connections are load balanced over the bond can you reach the aggregate speed of all slaves.
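To make that concrete, here is a minimal Python sketch of how a layer3+4 ("IP and port") transmit policy picks a slave per flow. The hash function and tuple format are illustrative assumptions, not the actual bonding-driver algorithm:

```python
import hashlib

def pick_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_slaves: int) -> int:
    """Hash the flow's layer3+4 tuple and map it onto one bonded link."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % num_slaves

# A single SMB file copy is one flow (hypothetical addresses/ports), so
# every packet hashes to the same slave and is capped at that link's 1G:
flow = ("10.0.0.5", "10.0.0.9", 49152, 445)
print(pick_slave(*flow, num_slaves=2))  # deterministic: same link every time
```

Since the tuple of a single file transfer never changes, every packet of that transfer lands on the same 1G link, which matches the 110 MB/s you measured.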

This question has been asked a million times on here and should be closed as a duplicate; something like Link aggregation (LACP/802.3ad) max throughput would be a good candidate.

suprjami

While the answer by @suprjami is correct, there is another facet contributing to this behavior.

The vswitch in XenServer will de-rate itself from the speed of the host bus (very fast, depending on the host) down to the speed of the slowest NIC attached to the vswitch or bridge.

The fact that you're using LACP is irrelevant to this particular part of the issue, as the slowest single interface attached to your bridge is 1Gbps, so the fastest virtual interface on your bridge must also run at 1Gbps.
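As a toy model of that rule (my reading of the observed behavior, not actual XenServer/Open vSwitch code):

```python
# A VM's virtual interface reports the speed of the slowest physical
# NIC attached to the bridge, regardless of the bond's aggregate "2G".
def reported_vif_speed_gbps(physical_nic_speeds_gbps: list[float]) -> float:
    if not physical_nic_speeds_gbps:
        # An "empty" bridge has no physical NIC attached, so the only
        # limit is the host bus; modeled here as unbounded.
        return float("inf")
    return min(physical_nic_speeds_gbps)

print(reported_vif_speed_gbps([1.0, 1.0]))  # 1.0 Gbps, despite the 2G bond
print(reported_vif_speed_gbps([]))          # inf: host-internal bridge
```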

You can get around this issue by using an empty bridge to provide networking to your VMs, while routing layer-3 traffic from that empty bridge to an LACP bond attached only to the host. This way, internal VM-to-VM traffic runs at host bus speeds, while each VM with a single interface can still use an effective 2Gbps, as the sketch below illustrates. There may be a fancier name for this method, but it's generally referred to as "terminating layer 2 at the host".
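Reusing the hypothetical layer3+4 hash from the first answer's example (made self-contained here, with made-up addresses and ports), this shows why the routed setup helps: one VM's concurrent conversations become distinct flow tuples on the bond, so they can land on different slaves and together approach 2Gbps:

```python
import hashlib
from collections import Counter

def pick_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % 2  # two bonded slaves

# Three concurrent flows from a single VM (10.0.0.5), routed by the host:
flows = [
    ("10.0.0.5", "192.168.1.10", 49152, 445),   # SMB copy
    ("10.0.0.5", "192.168.1.20", 49153, 2049),  # NFS mount
    ("10.0.0.5", "192.168.1.30", 49154, 443),   # HTTPS download
]
# Which slave each flow lands on is hash-dependent, but with several
# flows both slaves are typically in use at once.
print(Counter(pick_slave(*f) for f in flows))
```

Note that any single flow is still capped at 1Gbps; the gain is aggregate, across concurrent flows.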

This method of hypervisor networking introduces some complexity, such as having to manage routes and/or NAT to get external traffic into and out of your VMs. However, if you want each VM to be able to use the full capacity of your host's LACP bond, it's a solid method.

Spooler