
I have a box with Gigabit Ethernet, and I'm unable to get past about 200Mbps or 250Mbps in my download tests.

I do the tests like this:

% wget -6 -O /dev/null --progress=dot:mega http://proof.ovh.ca/files/1Gb.dat
--2013-07-25 12:32:08--  http://proof.ovh.ca/files/1Gb.dat
Resolving proof.ovh.ca (proof.ovh.ca)... 2607:5300:60:273a::1
Connecting to proof.ovh.ca (proof.ovh.ca)|2607:5300:60:273a::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 125000000 (119M) [application/x-ns-proxy-autoconfig]
Saving to: ‘/dev/null’

     0K ........ ........ ........ ........ ........ ........  2% 5.63M 21s
  3072K ........ ........ ........ ........ ........ ........  5% 13.4M 14s
  6144K ........ ........ ........ ........ ........ ........  7% 15.8M 12s
  9216K ........ ........ ........ ........ ........ ........ 10% 19.7M 10s
 12288K ........ ........ ........ ........ ........ ........ 12% 18.1M 9s
 15360K ........ ........ ........ ........ ........ ........ 15% 19.4M 8s
 18432K ........ ........ ........ ........ ........ ........ 17% 20.1M 7s

With the constraint that I control only the one box I want to test, and not the sites against which I want to perform the tests, how do I do a fair test?

Basically, is there some tool that would let me download a 100MB file in several simultaneous TCP streams over HTTP?

Or download several files at once in one go?
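The crudest workaround I can think of is to background several wget processes by hand and time the whole batch; a rough sketch (4 streams picked arbitrarily, same test file as above):

% time ( for i in 1 2 3 4; do wget -6 -q -O /dev/null http://proof.ovh.ca/files/1Gb.dat & done; wait )

Four copies of the 119 MB file divided by the elapsed time gives an aggregate rate, but that's a single after-the-fact number rather than a running speed, so I'm hoping there's a proper tool.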

edited by tomodachi
asked by cnst
  • What else is in between those servers? Are they connected to the same switch? Are they the only two servers on that switch? – toppledwagon Jul 25 '13 at 16:51
  • No, it's on the internets. E.g., in case of proof.ovh.ca, `ping6 -c4 proof.ovh.ca` results in a summary of `round-trip min/avg/max/std-dev = 23.907/24.165/24.392/0.179 ms`. – cnst Jul 25 '13 at 17:08
  • 7
    If there is any connection between you and the server that is slower than 1Gb you will not be able to attain Gb speeds. – dbasnett Jul 25 '13 at 17:15
  • Is the source also serving up data to other clients? Also, what is the disk read speed of the server? – Bad Dos Jul 25 '13 at 17:26
  • @BadDos, I only control one of the hosts; I dunno about disk read speed, but I would guess they aren't complete idiots to presumably have 10GigE connections, but not have enough RAM in their test-server to cache all the testfiles. An alternative is to download 10MB file multiple times over several TCP connections, which would eliminate disk read speed issue, but I don't know of a tool that does that. – cnst Jul 25 '13 at 18:08
  • Most importantly: WHAT are you trying to test? – MikeyB Jul 26 '13 at 02:44
  • @MikeyB, I'm trying to test network connectivity between a box of mine (currently on internet2 and under my desk, always running command-line UNIX) and various hosting providers that have 100MB speedtest files for download (e.g. https://www.linode.com/speedtest/, http://proof.ovh.ca/files/1Gb.dat, http://hetzner.de/100MB.iso, http://cachefly.cachefly.net/100mb.test etc) – cnst Jul 26 '13 at 15:45

3 Answers


aria2 is a command-line tool, similar to wget, that supports multiple simultaneous downloads over HTTP, BitTorrent, FTP, etc.

aria2c -d /dev -o null --allow-overwrite=true -x 15 url --file-allocation=none

This downloads the file over 15 connections, writing the output to /dev/null.

--allow-overwrite=true prevents aria2 from trying to rename /dev/null.

--file-allocation=none is there because I prefer not to preallocate space before the download; the allocation delays the start of the download.
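If you also want periodic speed readouts, similar to wget's --progress=dot:mega, aria2c can print its progress summary at a fixed interval via --summary-interval (the 1-second interval below is an arbitrary choice):

% aria2c -d /dev -o null --allow-overwrite=true --file-allocation=none --summary-interval=1 -x 15 url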

answered by tomodachi
  • OK, I'm able to reach 43MiB/s with `aria2c -x4 http://proof.ovh.ca/files/1Gb.dat`. Having more than 4 connections doesn't seem to improve the speed, but I'm still unsure how to make `aria2c` write to /dev/null, instead of a filesystem. – cnst Jul 25 '13 at 18:12
  • Nice, @tomodachi, thanks for the edit (why is there a two of you?), I've just got `83MiB/s` with `aria2c -d /dev -o null --allow-overwrite=true -x 5 http://cachefly.cachefly.net/100mb.test --file-allocation=none`, with just wget, I'm only getting about 50MB/s from cachefly, but their server is only 8 or 9ms away, so, that's probably why it's quicker than OVH.CA, which is 24ms away. Basically, I'm now safe in the knowledge that indeed I have a GigE connection to the internets, 50MB/s at 9ms is just about 512KB of BDP, so, no wonder single streams can't do much better than that. – cnst Jul 25 '13 at 21:19
  • I was just able to make my Activity Monitor on OS X display `Peak: 99.9 MB/sec` when running `time aria2c -d /dev -o null --allow-overwrite=true -x 12 http://proof.ovh.ca/files/10Gb.dat --file-allocation=none` (1Gbps * 25ms / 256KB is about 12, just to be on the safe side), but my USB 3.0 to GigE adapter dies shortly after starting these tests, and `aria2c` doesn't seem to display an average speed as it goes. Any way to make it print the speed every XX MB or so, just as `wget` does with `--progress=dot:mega` as above? – cnst Jul 25 '13 at 21:46

You will be limited to less than the speed of the slowest link. You could have a 10 Gig connection, but if your Internet connection is dial-up, you are going to be waiting. Even on a LAN that supports 1 Gbit/s end to end, you may hit a bottleneck in the read speed of the source server or the write speed of the destination server.
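To put toy numbers on it (these figures are invented purely for illustration): if the path crosses a 10 Gbit/s LAN segment, a 100 Mbit/s uplink, and a 1 Gbit/s server port, the ceiling is simply the smallest of them:

% printf '%s\n' 10000 100 1000 | sort -n | head -1     # link speeds in Mbit/s -> 100

TCP overhead, server load, and congestion only subtract further from that ceiling.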

answered by jmoyer8
  • Completely false; you're limited by the network buffers of TCP on both ends in scenarios that involve real broadband connections. – cnst Jul 25 '13 at 17:55
  • 2
    @cnst That's not correct. Slower links don't magically go faster because a bigger pipe said to. – Nathan C Jul 25 '13 at 18:03
  • Thanks Nathan. Your answer seems to follow suit with more detail. Completely agree. – jmoyer8 Jul 25 '13 at 19:47
  • http://www.speedtest.net old standard...should give you a rough idea of your internet speed... – jmoyer8 Jul 25 '13 at 19:47

There are many factors that contribute to this:

For one thing, you're downloading over the Internet. Let's assume you truly have a gigabit down connection at your disposal:

TCP/IP overhead can eat anywhere from 5-10% of your bandwidth; for simplicity's sake, let's say 10%. So you're down to 900 Mbit/s.

Remote server load is a major factor, and you can't control or see it. Many servers can easily sustain 200 MB/s reads, but under load those speeds drop.

Routing is a factor in speed too. If your route is saturated, speed will suffer.

And finally... do you really have a gigabit connection to the Internet, or is it just your port speed? Speeds are limited by the slowest link you cross. Also, if you have a hosted server with a gigabit link, that link is often shared with other clients, so you don't get a dedicated gigabit to begin with.
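As a back-of-the-envelope check (assuming a 512 KB TCP window, a common default cap, and the roughly 24 ms RTT reported in the comments on the question): a single TCP stream can never move more than one window's worth of data per round trip, i.e. window/RTT:

% echo '512 * 1024 * 8 / 0.024 / 1000000' | bc -l     # window bits / RTT, in Mbit/s -> ~175

That is in the same ballpark as the 200-250 Mbit/s single-stream figure in the question, which is why multiple parallel streams fill the pipe where one cannot.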

Edit: The reason I didn't recommend a tool is because they're a Google search away and there are tons of them.

answered by Nathan C
  • I indeed don't know if I truly have a gigabit connection to the Internet, because no one can point me to a utility that would open up several TCP streams and start downloading multiple files (potentially from multiple servers), appending the contents to /dev/null. – cnst Jul 25 '13 at 18:06
  • @cnst There's an answer with another tool you can use. – Nathan C Jul 25 '13 at 18:09