
I have two CentOS 6 servers and I am trying to transfer files between them. The source server has a 10 Gb/s NIC and the destination server has a 1 Gb/s NIC.

Regardless of the command or protocol used, the transfer speed is ~1 megabyte per second. The goal is at least a couple of dozen MB per second.

I have tried rsync (also with various ciphers), scp, wget, aftp, and nc.
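For illustration, transfers of this kind look roughly like the following (hostnames, paths, and the port are placeholders; the exact flags varied, and the nc listen syntax differs between netcat variants):

rsync -av --progress -e "ssh -c arcfour" /data/bigfile root@dest:/data/   # rsync over ssh with a cheaper cipher
scp /data/bigfile root@dest:/data/                                        # plain scp
nc -l 5000 > bigfile      # raw TCP: run this on the destination first
nc dest 5000 < bigfile    # then push the file from the source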

Here are some test results with iperf:

[root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
------------------------------------------------------------
Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
[  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
[  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
[  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
[  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
[  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
[  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
[  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
[  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
[  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
[  3]  0.0-10.0 sec  16.8 MBytes  14.0 Mbits/sec

I guess the hard drive is not the bottleneck here.
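One thing worth noting in the output above is the 64.0 KByte default TCP window combined with the slow ramp-up. As a purely illustrative calculation (the actual round-trip time is not given here), a 64 KB window over a ~40 ms WAN path caps a single TCP stream at roughly 64 KB / 0.04 s ≈ 1.6 MB/s ≈ 13 Mbit/s, which is in the range measured. A minimal re-test sketch, assuming iperf 2 on both ends and placeholder addresses:

iperf -s -w 4M                               # on the destination: server with a larger window
iperf -c XXX.XXX.XXX.XXX -w 4M -t 30 -i 1    # on the source: larger window, longer run
iperf -c XXX.XXX.XXX.XXX -P 8 -t 30          # alternatively, several parallel streams

Note that the kernel may clamp -w to the maximums set in net.ipv4.tcp_rmem and net.ipv4.tcp_wmem.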

user150324
  • So are the two systems connected to each other through a switch? Have you verified that the interfaces are actually negotiating to the correct speed? Do you see any evidence of collections, or other errors on any of the interfaces involved? – Zoredache Dec 19 '12 at 22:42
  • Have you tried a different switch? Have you manually set the speed and forced the NICs to full duplex? Are you using Cat6? – jamieb Dec 19 '12 at 22:44
  • How about a different port on the same switch? – mdpc Dec 19 '12 at 22:49
  • @Zoredache Collections ? :-) Collisions I presume... #jamieb on a short run (< 5 meter) in a server-room/rack even CAT5e can do 10 Gb/s. Cat 6 is only needed if it exceeds 5 meter or if the environment is electrically very noisy (in which case you have bigger issues to worry about). – Tonny Dec 19 '12 at 22:55
  • And how are you sure the bottleneck is on the network anyway? – HopelessN00b Dec 19 '12 at 22:57
  • @jamieb: hardcoding speed & duplex is a big no-go nowadays. Auto-nego _has_ to be supported by the 1/10Gb NICs. If it isn't, drop the NIC and buy another one. – petrus Dec 19 '12 at 22:58
  • @Tonny Bleah, gotta love auto-correct. Yes I mean collisions. – Zoredache Dec 19 '12 at 23:11
  • @HopelessN00b So if not the network, what are you suggesting he should look at? Iperf is a tool for testing the network; those results should completely rule out storage/memory considerations. Even crappy ARM processors can handle more than ~10mb/s, which should rule out the CPU as an issue. – Zoredache Dec 19 '12 at 23:15
  • Can you try a different cable? Maybe it has a short? – ionFish Dec 19 '12 at 23:54
  • @Zoredache Not sure, just asking - doesn't look like it's anything obvious (like the link being stuck at 10Mbit), so it seems like a good idea to make sure he's not barking up the wrong tree here. – HopelessN00b Dec 19 '12 at 23:56
  • The servers are not located near each other; in fact, the traffic goes over the internet, yet with multiple clients it reaches high output. Another weird thing happened: when I set iperf to UDP it got up to a stable 960 Mbps. It doesn't feel like a cable issue, but I will try. Both servers are on full duplex - the 10 Gbps NIC doesn't support auto-negotiation. – user150324 Dec 20 '12 at 06:22
  • When copying files over network pipes, turn off compression, and copy a file from a ramdisk to another ramdisk - or do something simpler, like stream-reading /dev/zero, transferring it, and then dumping it out to /dev/null (a rough sketch of this follows the comments). When dd'ing, remember to play with block sizes and flags on BOTH sides (especially flags might not be symmetric!). When playing with qperf/iperf/netperf, make sure you're controlling window sizes and TCP congestion algorithms. Watch switch statistics. If they are not meaningful, connect the two boxes directly and see if it changes anything. Install new drivers, especially Intel, on both Windows and Linux. – Marcin Dec 20 '12 at 07:04
  • I have a gigabit card plugged into my wireless B router plugged into my dialup modem, I don't get anywhere near a gigabit! Are you sure your internet connection is rated at 10gbit? – David Houde Apr 11 '13 at 11:36
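Following the /dev/zero suggestion in the comment above, a disk-free end-to-end test could look like this (host and port are placeholders; the nc listen syntax varies between netcat variants):

nc -l 5000 > /dev/null                            # on the destination: receive and discard
dd if=/dev/zero bs=1M count=1024 | nc dest 5000   # on the source: push 1 GiB of zeroes
sysctl net.ipv4.tcp_congestion_control net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # congestion algorithm and buffer limits, check on both sides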

2 Answers

  1. Verify the switch is GigE or higher

    In my experience (hard drive configurations can affect these numbers):

    10Base-T: ~512 KB/s-1 MB/s

    100Base-T: ~1-3 MB/s

    1GigE: ~3-11 MB/s

    10GigE: ~11-40 MB/s

  2. Verify you have auto-negotiation enabled on both servers' NICs and the switch

    A mismatched negotiation defaults to the lowest common denominator in the path.

  3. Verify the switch and the servers' NICs are all using the same MTU size

    The base MTU is 1500. If your servers and switches can support 9000, try that (one way to check speed, duplex, and MTU is sketched right after this list).
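A rough sketch of how these three checks might look on CentOS 6 (eth0 is a placeholder for the actual interface name):

ethtool eth0                 # reported link speed, duplex and auto-negotiation status
ethtool -S eth0              # driver statistics, including error counters where exposed
ip link show eth0            # current MTU
ip link set eth0 mtu 9000    # only if every device in the path supports jumbo frames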

CIA
  • I thought autoneg was the issue, until I learned that they were not on the same LAN; this is internet traffic he is measuring. – David Houde Apr 11 '13 at 11:38

If I use rsync with the -W option on servers connected to each other over plain gigabit, the speed tops out at 80-90 MB/s (or even higher if the fsync hasn't hit yet). Depending on the drives (SAS or SSD), sustained throughput is about 70 MB/s with SSD, and with SAS it ranges between 10 and 70 MB/s.

My guess is that the switch your 10 Gb/s card is connected to is sending pause frames to your server, and it takes time to recover from that.
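If you want to check for that, a minimal sketch (eth0 is a placeholder; the exact counter names depend on the NIC driver):

ethtool -a eth0                    # flow-control (pause) settings for RX/TX
ethtool -S eth0 | grep -i pause    # pause frame counters, where the driver exposes them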