
I have a "client" machine with 8 ethernet interfaces. (conf as dhcp) (These interfaces are plugged into a special switch which has a vlan conf/port such as plugging into a specific port always gets you the same ip.)

I have a "server" machine which wants to instigate tcp traffic on the client machine via all its interface to maximize bandwidth. (The server is plugged into the same switch with a fiber cable to sustain the load of the 8 1GbE)

My problem is that the client machine is routing all the traffic through only one of its interfaces, so my transfer speed caps at about 120 MB/s.

Extract of the route command's output on the client machine:
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt  Iface
0.0.0.0         10.11.13.1      0.0.0.0         UG        0 0          0 eth4
10.11.9.0       0.0.0.0         255.255.255.0   U         0 0          0 eth10
10.11.9.2       10.11.9.1       255.255.255.255 UGH       0 0          0 eth10 # I ADDED THIS ONE
10.11.10.0      0.0.0.0         255.255.255.0   U         0 0          0 eth11
10.11.10.2      10.11.10.1      255.255.255.255 UGH       0 0          0 eth11 # I ADDED THIS ONE
10.11.11.0      0.0.0.0         255.255.255.0   U         0 0          0 eth9
10.11.12.0      0.0.0.0         255.255.255.0   U         0 0          0 eth8
10.11.13.0      0.0.0.0         255.255.255.0   U         0 0          0 eth4
10.11.14.0      0.0.0.0         255.255.255.0   U         0 0          0 eth7
10.11.15.0      0.0.0.0         255.255.255.0   U         0 0          0 eth6
10.11.16.0      0.0.0.0         255.255.255.0   U         0 0          0 eth5

169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 usb0
169.254.95.0    0.0.0.0         255.255.255.0   U         0 0          0 usb0

I understand why it's happening based on this output. You can see that I tried to modify it (the two host routes marked above), but it seems I do not understand the problem well enough.
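For reference, a quick way to confirm which interface the kernel picks for a given destination is ip route get; the addresses below are placeholders, not taken from the actual setup:

# Sketch only: 10.11.20.5 stands in for the server's address (outside all of the
# connected /24 subnets), so it matches the 0.0.0.0/0 default route via eth4.
ip route get 10.11.20.5
#   ... dev eth4 ...

# A destination inside one of the connected /24s uses that interface's route.
ip route get 10.11.9.1
#   ... dev eth10 ...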

I hope you can help!

  • I don't get your setup. Maybe you can describe the physical connections more clearly? 120 MB/s equals 1 Gbps. Unless you create more flows (parallelization), even with LACP you won't be able to exceed the speed of a single link. – Jeroen Sep 03 '15 at 10:45
  • On the client machine, I have 8 sockets, each bound to its own interface (one socket bound to 10.11.9.2, one bound to 10.11.10.2, etc.). On the server, I connect simultaneously to each of those sockets and start a transfer. I'd expect a total traffic of 8 x 120 MB/s then. – robertM Sep 03 '15 at 11:09
  • Do you mean interface when you say socket? So the client and the server both have 8 interfaces? How are you doing the transfer; is it not disk I/O bound? Can you use tcpdump to see if it is using all interfaces? – Jeroen Sep 03 '15 at 11:14
  • Communication is done via sockets, yes. The server has only one interface, but a 10 GbE one. Disk I/O will not be the bottleneck (tested). The monitoring tool shows that all traffic to the outside goes via eth4 (10.11.13.1). – robertM Sep 03 '15 at 11:30
  • Just configure a bond0 interface on your "client", assign a single IP address to it, and talk with your server normally -- don't invent complexities where there are none :) – galaxy Sep 03 '15 at 11:51

2 Answers


It would help to know your distribution. In any case, what you want to do is use bonding. This is how it can be done on CentOS (your question suggests that you want mode=4 (Dynamic Link Aggregation, 802.3ad) if your switch supports it, or mode=6, which is likely what you were trying to achieve, if you want to load-balance both incoming and outgoing traffic): https://wiki.centos.org/TipsAndTricks/BondingInterfaces
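A minimal sketch of what that could look like with CentOS ifcfg files; the bond's address, the slave names, and the exact options here are assumptions, not taken from your setup:

# /etc/sysconfig/network-scripts/ifcfg-bond0   (sketch; the address is hypothetical)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPADDR=10.11.13.2
PREFIX=24
GATEWAY=10.11.13.1
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100"    # mode=6 instead if the switch can't do LACP

# /etc/sysconfig/network-scripts/ifcfg-eth4    (repeat for each slave interface)
DEVICE=eth4
TYPE=Ethernet
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ONBOOT=yes

After a network restart, cat /proc/net/bonding/bond0 should list the mode and all the slave interfaces.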

This is how somebody did it for CentOS 7 (the latest CentOS): http://www.unixmen.com/linux-basics-create-network-bonding-on-centos-76-5/

galaxy

Bonding is definitely what you want to do, but it may not give you the results you are hoping for. As Jeroen pointed out in a comment above, bonding 8x1Gb will not give you a 1x8Gb connection. You'll end up with a 1Gb connection that is 8x harder to saturate. But if you are primarily talking to the same server, you probably won't approach bond saturation.

See this question for a nice explanation of the bonding modes: What are the differences between channel bonding modes in Linux? Pay close attention to this final paragraph in the second answer:

Note: whatever you do, one network connection always goes through one and only one physical link. So when aggregating GigE interfaces, a file transfer from machine A to machine B can't top 1 gigabit/s, even if each machine has 4 aggregated GigE interfaces (whatever the bonding mode in use).

You might want to consider a 10Gb interface for your client machine if you truly need that much bandwidth.

Edit: In light of clarified requirements from the OP: make the connection between your server and switch a trunk, assign all the VLANs to it, then add an IP for each VLAN on the server.
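A rough sketch of the server side, assuming the 10 GbE interface is called eth0 and using made-up VLAN IDs and addresses (repeat for each of the eight VLANs):

# Sketch only: eth0, the VLAN IDs (109, 110) and the .100 addresses are assumptions.
ip link add link eth0 name eth0.109 type vlan id 109
ip addr add 10.11.9.100/24 dev eth0.109
ip link set eth0.109 up

ip link add link eth0 name eth0.110 type vlan id 110
ip addr add 10.11.10.100/24 dev eth0.110
ip link set eth0.110 up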

Brandon Xavier
  • That's why I can't use bonding. I have packets coming out of 8 interfaces, each going into a port of a switch, then going out of the switch via the fiber link. The aggregation should be done inside the switch. I don't see what is preventing me from saturating each of my 8 interfaces on the client machine? – robertM Sep 03 '15 at 12:08
  • Some sort of layer 2 bonding will be required to utilize all your interfaces (BTW, all your interfaces are on different IP /24 subnets - is that intentional?) – Brandon Xavier Sep 03 '15 at 12:15
  • It's intentional. Really, what I am trying to simulate is 8 physical devices pouring data into the switch via their single interfaces, with their data processed on my server, except I am trying to reproduce that with only one computer and 8 interfaces. – robertM Sep 03 '15 at 12:59
  • That's a completely different problem. Make the connection between your server and switch a trunk, assign all the vlans to it, then add an IP for each vlan on the server. – Brandon Xavier Sep 03 '15 at 17:08