
I am trying to bond two NICs together to get better performance. I have two 1000 Mbps Intel NICs. When I had one NIC I was running some benchmarks and getting 106 MB/s. This is pretty close to 1 Gbps, so I am happy with that. When I add a second NIC and bond it together with mode 0 (balance-rr), I still see the same 106 MB/s.

My setup is: Server ====== Switch ===== Storage

Both the storage and the server have two NICs hooked up. I know it is not the storage, because I can run two benchmarks at the same time and get 2 Gbps of throughput.

I am pretty sure this setup is just using one NIC. Is there any way to set things up to get better performance?

Nicolas Kaiser
  • What interconnect is your storage array using to connect to the switch? Are you SURE you can get 2Gbit/s throughput out of your storage, or are you just seeing some caching artifact? – MarkR Dec 02 '09 at 07:50
  • Can you post the output of "ifconfig"? – RainyRat Dec 02 '09 at 07:53
  • What are you using to test? What type of storage are you talking about? Is this iSCSI, FC, other? – Zoredache Dec 02 '09 at 08:31
  • Is your switch properly configured for nic bonding/EtherChannel? Can you share the configuration settings? – kmarsh Dec 02 '09 at 13:10

2 Answers


I think you may be assuming that bonding can do more than what is actually possible. A single connection between two hosts will pretty much never be able to use more than the capacity of a single interface. Aggregation is useful when you have lots of parallel connections from many hosts. Check the description for mode 0:

Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

It only does round robin on the packets being transmitted. It doesn't and can't do anything to balance the received packets. Incoming packets will pretty much be limited to a single interface. If your test is to copy files from the storage array to your server, then you are probably receiving data for the most part.
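To make the transmit-only nature of balance-rr concrete, here is a minimal sketch of how the round-robin policy picks an outgoing slave per packet. The interface names `eth0`/`eth1` are placeholders, not taken from the question; this is an illustration of the scheduling, not the kernel's actual code.

```python
from itertools import cycle

# Hypothetical slave interfaces in a mode 0 (balance-rr) bond.
slaves = ["eth0", "eth1"]

# The bond transmits each successive packet on the next slave in order.
rr = cycle(slaves)
tx_schedule = [next(rr) for _ in range(6)]
print(tx_schedule)  # alternates eth0, eth1, eth0, ...
```

Note that this balancing only applies to packets the bond itself sends; which interface *receives* a packet is decided by the far end and the switch, which is why a mostly-receive workload stays pinned to one link.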

Zoredache
    Good answer, the same is true with the majority, if not all, link aggregation methods, even a four-way cisco etherchannel will only use one link for each MAC-to-MAC path. – Chopper3 Dec 02 '09 at 08:50

What's probably happening here is that while the sender is balancing across the two NICs, the switch is sending all the packets down the one port to the receiver because only one of the receiving NICs is ARPing (or your switch only records a MAC against one port). You can check this by looking at the port statistics.

If you instead have multiple switches, and you hook up one NIC in each server to each switch, with no cross-connect, then you can probably get better performance. However, that depends on both sides (the storage and the server) doing round-robin balancing, and on nothing getting confused about the whole situation and giving up in disgust. More details about what storage is involved might turn up whether what you're using is capable of doing the right thing.
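For reference, a balance-rr bond on the Linux side is typically set up roughly like this. This is a minimal sketch, not a recipe for the asker's exact distro: the interface names `eth0`/`eth1` and the bond name `bond0` are assumptions, and the exact tooling (modprobe options vs. sysfs vs. a distro's network scripts) varies.

```shell
# Load the bonding driver in round-robin mode with link monitoring
# (miimon=100 checks link state every 100 ms).
modprobe bonding mode=balance-rr miimon=100

# Bring the bond up and enslave the two NICs
# (eth0/eth1 are placeholder names -- use your actual interfaces).
ip link set bond0 up
ifenslave bond0 eth0 eth1
```

Afterwards, `cat /proc/net/bonding/bond0` shows the mode and slave state, and the per-slave counters in `ip -s link` will show whether received traffic really is landing on only one interface, as suggested above.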

womble