
What is the best way to duplicate files on server via ssh?

In my case: I'm talking about duplicating magento shop. (15000 files ~ 50MB)

cp -a source destination

Is taking hours... (in my case server is 2.4 Xeon, 2GB RAM)

enloz

3 Answers


One word: rsync.

Note that if you're on a slow link, or the server is under heavy load, the tool used for copying won't be the bottleneck: any method of copying will be slow.

This should give you the basic usage for copying between your local computer and the remote server: http://oreilly.com/pub/h/38

To copy from local computer to a remote server (you need to replace the paths, user name and host address, of course):

rsync -avz -e ssh /path/on/local/computer remoteuser@remotehost.somewhere.example.com:/path/on/server
  • -a archive
  • -v verbose
  • -z compress
  • -e ssh use ssh as the remote shell (i.e. tunnel the transfer over SSH)

To copy in the other direction, switch the paths (first is from, second is to):

rsync -avz -e ssh remoteuser@remotehost.somewhere.example.com:/path/on/server /path/on/local/computer

But rsync is useful even for copying things around on the same server:

rsync -av /path-to/copy/from /path_to/copy/to
  • Make note that @Piskvor left the `-z` option out for local copying, since it adds unnecessary overhead. IMHO, you should only use `-z` when using rsync across a slow network link. If copying large amounts of data over 100Base-T, you may be just fine without `-z`. With a fast network connection, using compression can peg your CPU and starve other processes. – tomlogic Jan 16 '12 at 17:34
  • @tomlogic: Good point - in other words, don't use `-z` for LAN copy or copying within one machine; test with and without `-z` for copy across the Internet (one or the other may be faster, depending on many things). – Piskvor left the building Jan 16 '12 at 17:38
  • I'd also leave out compression if you know your files are already compressed, such as syncing a folder tree full of JPEGs, since there isn't anything to gain. – fmalina Aug 02 '16 at 05:57
  • Note: `-e ssh` is now default for remote hosts, so it's not necessary to pass the option explicitly. – Piskvor left the building Jul 04 '19 at 09:19

Another word: scp

scp /path/on/local/computer remoteuser@remotehost.somewhere.example.com:/path/on/server

For one-shot deals, scp is handy. If it's a lot of files, then rsync is a good idea. If a connection is dropped, rsync can pick up where it left off.

I knew that rsync had compression (-z), and have just learned that scp does as well (-C).

tomlogic

In your setup, rsync is probably enough... but as an example: if there are many small files, it may be faster to tar the files first and then transfer them via rsync. This is because transferring the owner, timestamps, and permissions is sometimes more expensive than the file itself when the file is small. Tar merges all that metadata into one file, so rsync can copy bigger blocks.

Or even better, if no security is needed, use tar and nc:

On destination, prepare a receiving daemon, uncompress and untar:

nc -l -p 12345 | pigz -d | tar xvf - 

On the source, tar everything, parallel compress and send it to the destination:

tar cvf - ./ | pigz | nc host 12345
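If you do want encryption, the same single-stream idea works over ssh instead of nc; one tar stream avoids the per-file overhead that makes many small files slow. A sketch, with placeholder host and destination path:

```shell
# Pack the tree into one tar stream locally and unpack it remotely.
# remoteuser@remotehost and /path/on/server are placeholders.
tar cf - ./ | ssh remoteuser@remotehost 'tar xf - -C /path/on/server'
```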
higuita