Maximise network transfer speed of various applications

When using nc, scp, or wget to transfer files between two machines on a dedicated 2 Mbps link, I get speeds between 0.5 and 1 Mbps. However, when I use iperf -c 10.0.1.4 -t 20 -P 12 (for example) I can maximise the speed of the link (getting a stable 2 Mbps).
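
For reference, the two tests I'm comparing look roughly like this (assuming the stock iperf tool, with 10.0.1.4 running the server side):

  # on 10.0.1.4 (assumed): start the iperf server
  $ iperf -s
  # on the sending machine: single TCP stream for 20 seconds
  $ iperf -c 10.0.1.4 -t 20
  # on the sending machine: twelve parallel TCP streams for 20 seconds
  $ iperf -c 10.0.1.4 -t 20 -P 12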

Is there a way to make single-stream transfers (such as those done by scp) utilise all or most of the link? Some kind of TCP settings, or iptables...?

Alex

Posted 2012-06-06T03:14:14.130

Reputation: 973

Alex, I am curious, what was the resolution to your problem? Did you have packet loss, or did you break up your files for parallel transfers? – Mike Pennington – 2012-06-12T12:08:20.697

Answers

First let's admit that you're comparing apples and oranges.

nc, scp and wget typically transfer with a single TCP socket. However, when you use iperf -P 12, you are using twelve parallel TCP sockets. This is a non-trivial distinction: the more parallel connections you have, the larger your aggregate bandwidth consumption will be. In fact, speedtest.net uses multiple parallel TCP streams to reliably measure bandwidth capacity, even if your link has significant packet loss that would tank a single TCP socket; I have seen them saturate links with 1.5% loss (which would decimate throughput on a normal TCP socket).
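
To put a rough number on that, a common back-of-the-envelope bound (the Mathis formula) says a single loss-limited TCP stream tops out around MSS / (RTT × √loss). The MSS, RTT and loss values below are assumed, purely for illustration:

  # rough single-stream ceiling: MSS / (RTT * sqrt(loss))
  # assumed values: 1460-byte MSS, 50 ms RTT, 1.5% loss
  $ awk 'BEGIN { mss=1460*8; rtt=0.050; loss=0.015; printf "%.2f Mbps\n", mss/(rtt*sqrt(loss))/1e6 }'
  1.91 Mbps

In other words, with that kind of loss a single socket can struggle to fill even a 2 Mbps pipe, while twelve parallel sockets each take a smaller hit and collectively saturate the link.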

The primary reasons for sub-optimal single-socket TCP transfers are packet loss and delay / jitter. You need to identify whether you have any ongoing packet loss through your link and correct it... I usually use mtr or winmtr for this...

mpenning@mpenning-T61:~$ mtr -n <destination_ip>
HOST: mpenning-T61              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 10.239.84.1                0.0%    407    8.8   9.1   7.7  11.0   1.0
  2. 66.68.3.223                0.0%    407   11.5   9.2   7.1  11.5   1.3
  3. 66.68.0.8                  0.0%    407   19.9  16.7  11.2  21.4   3.5
  4. 72.179.205.58              0.0%    407   18.5  23.7  18.5  28.9   4.0
  5. 66.109.6.108               5.2%    407   16.6  17.3  15.5  20.7   1.5 <----
  6. 66.109.6.181               4.8%    407   18.2  19.1  16.8  23.6   2.3
  7. 4.59.32.21                 6.3%    407   20.5  26.1  19.5  68.2  14.9
  8. 4.69.145.195               6.4%    406   21.4  27.6  19.8  79.1  18.1
  9. <destination_ip>           6.8%    406   22.3  23.3  19.4  32.1   3.7

If you see a hop where you consistently lose packets over time, and the hops beyond it are also losing packets, then you need to fix whatever is causing that packet loss. I usually measure for at least five or ten minutes... often for hours, if I don't see the problem immediately.
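
As a sketch of how to run a longer measurement, mtr's report mode can be left running for a fixed number of cycles (the destination is a placeholder; 600 one-second cycles is roughly ten minutes):

  # ~10 minutes of probing (600 cycles at the default 1-second interval), no DNS lookups
  $ mtr -n -r -c 600 <destination_ip>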

The other culprit is delay... you will need to quantify the problem further with specifics about the end-to-end delay, as well as the source and destination operating systems, before anyone can say much more.
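
As a quick way to quantify the delay side, measure the round-trip time with ping and estimate the bandwidth-delay product, i.e. how much data a single TCP stream must keep in flight to fill the link (the 100 ms RTT below is just an assumed example value for a 2 Mbps link):

  $ ping -c 20 <destination_ip>     # take the average rtt from the summary line
  # bandwidth-delay product = (link rate in bytes/sec) * RTT
  $ awk 'BEGIN { bw=2e6; rtt=0.100; printf "BDP = %.0f bytes\n", bw/8*rtt }'
  BDP = 25000 bytes

If the receive window or socket buffers on either end are smaller than that, a single stream cannot fill the pipe no matter how clean the link is.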

So you have some choices... either:

  • Figure out (and fix) whatever is causing the drop in performance
  • Break up your transfers into multiple files and transfer them in parallel, to overcome whatever factors are dropping your throughput now (a rough sketch follows below)
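
A rough sketch of the second option, assuming one large file called bigfile and standard split/scp/cat on both ends (the file name, chunk size, user and paths are placeholders):

  $ split -b 100M bigfile bigfile.part.          # break the file into 100 MB chunks
  $ for p in bigfile.part.*; do
  >   scp "$p" user@10.0.1.4:/tmp/ &             # one scp (one TCP stream) per chunk
  > done; wait                                   # wait for all parallel copies to finish
  # then, on the receiving side, reassemble the chunks:
  $ cat /tmp/bigfile.part.* > /tmp/bigfile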

Mike Pennington

Posted 2012-06-06T03:14:14.130

Reputation: 2 273