
Scp is quite slow for transferring individual files. What is the quickest way to do this?

The reason I need speed is not because I have a large number of files to transfer. I just want each individual file transfer, start to finish, to complete quickly (so rsync, or tar-then-transfer, is not quick enough).

user788171
  • A fundamental limit of making lots of reads from different parts of the disk is seek time and latency. I want a Ferrari and a threesome with supermodels for the loose change in my pocket... but simply wanting it won't make it happen. Nor will anything else within the bounds of reality. – HopelessN00b Nov 20 '14 at 03:51
  • Where are you copying from and to? Distance between locations? Connection speed? Source and target server specifications? Number of files? Size of files? – ewwhite Nov 20 '14 at 03:53
  • There's no indication of whether or not this is about a "professional server, networking, or related infrastructure", and you have assumed the worst. I assert the opposite is true. Dealing with problems of speed and scale is the epitome of professional engineering. In fact, UUNET's founder, an internet innovator, once said, "The only real problem is scaling. Everything else is a sub-problem." – TomOnTime Nov 20 '14 at 15:15

2 Answers


There are many limits on transferring many small files. Some have already been mentioned: network latency, disk write speed, and so on. Most of them are best handled by rsync. However, if the files don't exist on the destination and you are pretty sure the process won't be interrupted, piping tar to tar is very efficient:

cd /SOURCE/DIR && tar cf - . | ssh DESTINATIONHOST "cd /DESTINATION/DIR && tar xpvf -"

Fundamentally, you need to batch all the files together so that the startup/shutdown overhead of scp happens only once. If you pay that startup/shutdown cost for every file, the transfer will be very inefficient. The "tar" pipe above does exactly that, and for 90% of all use cases it will be good enough.
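If the link is slow and the data compresses well, the same pipe can trade CPU for bandwidth by compressing the stream. A minimal sketch, assuming tar's z (gzip) option is available on both ends and using the same placeholder host and paths as above:

cd /SOURCE/DIR && tar czf - . | ssh DESTINATIONHOST "cd /DESTINATION/DIR && tar xzpvf -"

On a fast local network the gzip step can easily become the new bottleneck, so it is worth testing both variants on a representative sample of your data.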

This "tar pipe" has the benefit of parallel processing (reading in one process while writing in another). However it is limited by a few things:

  1. TCP/IP will never utilize 100% of the pipe it has.
  2. Each process is limited by disks that can only do one write or one read at a time. If you use spinning disks, that's fine. If you use SSDs or RAID (the kinds of RAID that permit multiple parallel reads), this technique will under-perform.

You can work around #2 with various hacks, such as running two or more processes, each on a subset of the files; a rough sketch is shown below. However, those workarounds are imperfect and a bit sloppy.
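For example, something along these lines splits the file list in two and runs one tar pipe per half. This is only a sketch of that hack: the host and paths are the same placeholders as above, and GNU tar (-T to read file names from a list) and GNU split (-n l/2) are assumed:

cd /SOURCE/DIR
find . -type f > /tmp/filelist
split -n l/2 /tmp/filelist /tmp/chunk.          # two line-based chunks: /tmp/chunk.aa and /tmp/chunk.ab
for list in /tmp/chunk.*; do
    tar cf - -T "$list" | ssh DESTINATIONHOST "cd /DESTINATION/DIR && tar xpf -" &
done
wait

Note that this only copies regular files; empty directories and directory permissions are not recreated, which is part of why such workarounds are called sloppy above.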

TCP/IP is more difficult to work around and will continue to be your limit. Even if you tune the system so that everything is optimal, TCP/IP won't use the full pipe. Every time TCP/IP thinks it has found the optimal send rate, it tries to send a little more to test whether there is "more room" available. That attempt fails and TCP/IP backs off a bit. This constant increase/fail/back-off loop means that a TCP/IP stream alternates between roughly 100% utilization and roughly 50% utilization; the result is that, on average, the pipe is perhaps 75-80% utilized. (NOTE: these are estimates... do some Google searches to find the exact numbers. The point is that it will be the average of 100% and something less than 100%, therefore it won't ever be 100%.)

If you run multiple TCP/IP streams, they will all be constantly looping through this increase/fail/back-off cycle. If you are unlucky, they'll all collide at the same time and all back off very far, leaving the pipe even more underutilized. If you are lucky, they'll collide less often and you'll get a graph that looks like many bouncing balls... still leaving the pipe underutilized in aggregate.

Oh, and if a single machine's TCP/IP implementation doesn't have the latest optimizations, or isn't tuned well, it can throw the whole system out of whack.

So if TCP/IP is so terrible, why do we continue to use it? It isn't so bad in the typical case of many different types of traffic sharing a pipe. The problem here is that you have a very specific application with a very specific requirement, so you need a very specific solution. Luckily, a lot of people are in the same position, so such solutions are becoming easier to find.

Systems like http://asperasoft.com/ use a custom protocol over UDP/IP so they can control the back-off/retry algorithm. They use forward error correction (FEC) so that small errors don't require retransmission (with TCP/IP a small error is a signal to back off), custom compression schemes, delta copying, and their own back-off and rate-limiting algorithms to achieve full (or close-to-full) utilization of the pipe. These are all proprietary, so it isn't clear exactly what techniques Aspera and its competitors use or exactly how they work.

There are many companies that have invented such systems and either made them part of their own products, or sell them as a commercial product.

I don't know of any open source implementations at this time. (I'd like to be corrected!)

If this is a very pressing problem and worth spending money to fix, try one of the commercial products. Or, if you cannot change your software, you'll need to buy a larger pipe. Luckily, 10G and 40G network interfaces are coming down in price.

TomOnTime
  • Good answer to a crappy question. This has definitely [been answered](http://serverfault.com/a/638065/13325) a [few times](http://serverfault.com/a/640821/13325) here. I've been using tools that leverage [UDT](http://udt.sourceforge.net) to provide WAN acceleration and more efficient transfers over high-speed/high-latency links (or short distances with lots of small files)... – ewwhite Nov 20 '14 at 11:27
  • This is in response to your question about open-source implementations. – ewwhite Nov 20 '14 at 11:34

There is an elegant solution developed by William Glick: parallelizing rsync.

#!/bin/bash

# SETUP OPTIONS
export SRCDIR="/folder/path"
export DESTDIR="/folder2/path"
export THREADS="8"

# RSYNC TOP LEVEL FILES AND DIRECTORY STRUCTURE
rsync -lptgoDvzd "$SRCDIR"/ "$DESTDIR"/

# FIND ALL FILES AND PASS THEM TO MULTIPLE RSYNC PROCESSES
cd "$SRCDIR"; find . -type f | xargs -n1 -P"$THREADS" -I% rsync -az % "$DESTDIR"/%

# IF YOU WANT TO LIMIT THE IO PRIORITY,
# PREPEND THE FOLLOWING TO THE rsync & cd/find COMMANDS ABOVE:
#   ionice -c2

The magic is in xargs -P, which runs up to $THREADS rsync processes in parallel, each handed a single file by -n1/-I%. Fast, efficient, easy.
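To make the I/O-priority comment at the bottom of the script concrete, this is roughly what the two commands look like with ionice -c2 (the Linux "best-effort" scheduling class) prepended; treat it as a sketch of that suggestion rather than part of William's script:

ionice -c2 rsync -lptgoDvzd "$SRCDIR"/ "$DESTDIR"/
cd "$SRCDIR"; ionice -c2 find . -type f | xargs -n1 -P"$THREADS" -I% ionice -c2 rsync -az % "$DESTDIR"/%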

See William's original publication for details.

  • I'd like to see benchmarks of this against plain old "rsync -avP". Considering the optimizations added to rsync recently (i.e. in the last 5 years) I'd expect fewer and fewer situations would find this solution to be better. – TomOnTime Nov 20 '14 at 15:18
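For anyone who wants to run that comparison, a rough sketch using the same SRCDIR/DESTDIR placeholders as the script above (the script file name is hypothetical, and each run should start from an empty destination to keep the timing fair):

time rsync -avP "$SRCDIR"/ "$DESTDIR"/    # plain rsync
time ./parallel_rsync.sh                  # the script above, saved under this hypothetical name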