rsync hack to bounce files between two unconnected servers

Here's the connection:

[Server1] <---> [my desktop] <---> [Server2]

Server1 and Server2 are not permitted to talk directly to each other (don't ask). My desktop, however, is able to access both servers via SSH.

I need to copy files from server1 to server2.

Traditionally I have been using an ssh+tar hack, like so:

ssh -q root@Server1 'tar -vzc /path/to/files ' | ssh -q root@Server2 'tar -vzx -C /'

And that works great, but I would like to take it a step further and get rsync working between the two servers VIA my desktop.

Now I know that I could start an SSH port-forward tunnel in one terminal and then rsync over that tunnel in another window, but I don't want to fuss around with a second terminal or with making and breaking a separate port-forward tunnel. What I want is:

  • A one-liner command to rsync files from Server1 to Server2 VIA my desktop
  • All on ONE command line, in one terminal window
  • I want the port-forward tunnel to exist only for the life of the rsync command.
  • I don't want to scp, I want to rsync.

Does anybody have a trick for doing that?

EDIT: Here is the working command! Great work, everyone: 1. For the RSA key path, I couldn't use a tilde; I had to use "/root/". 2. Here's the final command line:

ssh -R 2200:SERVER2:22 root@SERVER1 "rsync -e 'ssh -p 2200 -i /root/.ssh/id_rsa_ROOT_ON_SERVER2' --stats --progress -vaz /path/to/big/files root@localhost:/destination/path"
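
For anyone picking this apart later, here is the same command wrapped and commented (nothing changed; SERVER1, SERVER2 and the key path are the placeholders from above):

# Run from the desktop. The -R option gives SERVER1 a local port 2200 that reaches
# SERVER2's port 22 through the desktop; rsync then runs on SERVER1 and talks to
# "localhost" port 2200, which is really SERVER2.
ssh -R 2200:SERVER2:22 root@SERVER1 \
    "rsync -e 'ssh -p 2200 -i /root/.ssh/id_rsa_ROOT_ON_SERVER2' --stats --progress -vaz /path/to/big/files root@localhost:/destination/path"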

Boom goes the dynamite.

regulatre

Posted 2010-08-23T12:28:45.240

Reputation: 380

This is really excellent. The one thing that needs to be changed is to disable host key verification. You have the SSH key for TTY-less authentication, but since the servers have never talked to each other they can't verify the host keys. So best to disable it: add -o StrictHostKeyChecking=no after the -p or -i flags. – Amala – 2014-07-23T15:45:24.557
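
Applied to the accepted one-liner from the question, that suggestion would look something like this (a sketch; only the -o option is new, everything else is the command from above):

ssh -R 2200:SERVER2:22 root@SERVER1 "rsync -e 'ssh -p 2200 -i /root/.ssh/id_rsa_ROOT_ON_SERVER2 -o StrictHostKeyChecking=no' --stats --progress -vaz /path/to/big/files root@localhost:/destination/path"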

Answers

5

If you are happy to keep a copy of the data on the intermediate machine, then you could simply write a script that updates the local copy using server1 as a reference, then updates the backup on server2 using the local copy as a reference:

#!/bin/sh
# Pull the latest from server1 into a local copy, then push that copy on to server2.
rsync user@server1:/path/to/stuff /path/to/local/copy -a --delete --compress
rsync /path/to/local/copy user@server2:/path/to/where/stuff/should/go -a --delete --compress

Using a simple script means you have the desired single command to do everything. This could of course be a security no-no if the data is sensitive (you, or others in your company, might not want a copy floating around on your laptop). If server1 is local to you, then you could just delete the local copy afterwards (as it will be quick to reconstruct over the LAN next time).

Constructing a tunnel so the servers can effectively talk to each other more directly should be possible like so:

  1. On server 2 make a copy of /bin/sh as /usr/local/bin/shforkeepalive. Use a symbolic link rather than a copy so that you don't have to update it after security updates that patch /bin/sh.
  2. On server 2 create a script that does nothing but loop, sleeping for a few seconds then echoing a small amount of text, and have it use the new "copy" of sh; save it as /usr/local/bin/keepalivescript so the laptop script in step 3 can call it (a combined setup sketch for server 2 follows after the explanation below):

    #!/usr/local/bin/shforkeepalive
    while [ "1" != "0" ]; do
            echo Beep!
            sleep 5
    done
    

    (the echo probably isn't needed, as the session is not going to be idle long enough to time out even if SSHd is configured to ignore keep-alive packets from the ssh client)

  3. Now you can write a script on your laptop that starts your reverse tunnel in the background, tells server1 to use rsync to perform the copy operation, then kills the reverse tunnel by killing the looping script (which will close the SSH session):

    #!/bin/sh
    ssh user@server2 -L2222:127.0.0.1:22 /usr/local/bin/keepalivescript &
    ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'
    ssh user@server2 killall shforkeepalive
    

The way this works:

  • Line 1: standard "command to use to interpret this script" marker
  • Line 2: start an SSH connection with a tunnel and run the keep-alive script over it to keep it open. The trailing & tells bash to run this in the background so the next lines can run without waiting for it to finish
  • Line 3: start a tunnel that will connect to the tunnel above so server1 can see server2, and run rsync to perform the copy/update over this arrangement
  • Line 4: kill the keep-alive script once the rsync operation completes (and so the second SSH call returns), which will end the first SSH session.

This doesn't feel particularly clean, but it should work. I've not tested the above, so you might need to tweak it. Making the rsync command a single-line script on server1 may help by reducing the need to escape characters like the ' in the calling ssh command.
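
For reference, setting up server 2 for steps 1 and 2 might look something like this (a sketch, not tested; the keepalivescript path is simply the one the laptop script above expects):

# On server 2, as root.
# Step 1: a symlink to sh under a name we can later kill by name
ln -s /bin/sh /usr/local/bin/shforkeepalive

# Step 2: the keep-alive loop, using that interpreter
cat > /usr/local/bin/keepalivescript <<'EOF'
#!/usr/local/bin/shforkeepalive
while [ "1" != "0" ]; do
        echo Beep!
        sleep 5
done
EOF
chmod +x /usr/local/bin/keepalivescript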

BTW: you say "don't ask" as to why the two servers cannot see each other directly, but there is often good reason for this. My home server and the server its online backups are held on cannot log in to each other (and have different passwords+keys for all users) - this means that if one of the two is hacked, it cannot be used as an easy route to hack the other, so my online backups are safer (someone malicious deleting my data from the live server can't use its ability to update the backups to delete said backups, as it has no direct ability to touch the main backup site). Both servers can connect to an intermediate server elsewhere - the live server is set to push its backups (via rsync) to the intermediate machine early in the morning, and the backup server is set (a while later, to allow step one to complete) to connect and collect the updates (again via rsync, followed by a snapshotting step in order to maintain multiple ages of backup). This technique may be usable in your circumstances too, and if so I would recommend it as a much cleaner way of doing things.
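
If that push-then-pull arrangement fits your situation, the scheduling is just two cron jobs (hypothetical hosts, paths and times, assuming each server only has a passwordless key for the intermediate machine):

# Crontab on the live server: push to the intermediate box at 02:00
0 2 * * * rsync -az --delete /srv/data/ backupuser@intermediate:/staging/live/

# Crontab on the backup server: collect from the intermediate box at 04:00,
# leaving time for the push above to finish (the snapshotting step is not shown)
0 4 * * * rsync -az --delete backupuser@intermediate:/staging/live/ /backups/live/current/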

Edit: Merging my hack with Aaron's to avoid all the mucking about with copies of /bin/sh and a separate keep-alive script on server2, this script on your laptop should do the whole job:

#!/bin/sh
ssh user@server2 -L2222:127.0.0.1:22 sleep 60 &
pid=$!
trap "kill $pid" EXIT 
ssh user@server1 -R2222:127.0.0.1:2222 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'

As with the above, rsync is connecting to localhost:2222 which forwards down the tunnel to your laptop's localhost:2222 which forwards through the other tunnel to server2's localhost:22.

Edit 2: If you don't mind server1 having a key that allows it to authenticate with server2 directly (even though it can't see server2 without a tunnel) you can simplify further with:

#!/bin/sh
ssh user@server1 -R2222:123.123.123.123:22 rsync /path/to/stuff user@127.0.0.1:/destination/path/to/update -a --delete --compress -e 'ssh -p 2222'

where 123.123.123.123 is a public address for server2. This could be used as a copy+paste one-liner instead of a script.

David Spillett

Posted 2010-08-23T12:28:45.240

Reputation: 22 424

If you don't want to use keys and would like to type in the password instead, use the ssh -t option, which will force pseudo-tty allocation (it wouldn't normally be allocated for a non-interactive command). You should also use the ssh -t option if you are running the command for the first time and the hosts are not in the known_hosts files and/or you are getting the "Host key verification failed." error message. – Janci – 2015-07-30T13:59:52.333

It's a large amount of data (20+ GB) and I prefer to stream it in and out at the same time rather than storing it locally. I wish to stream the data through my PC without anything being stored. You are right about the "don't ask", it's for good reason, albeit a PITA. – regulatre – 2010-08-23T13:58:41.627

Please see the new edits on the original question – regulatre – 2010-08-23T14:03:29.760

I think my last edit (posted seconds before your comment according to the timestamps so we were probably typing at the same time) may give you the one-liner you are looking for. – David Spillett – 2010-08-23T17:43:41.757

HOORAY!!! It works! 1. For the RSA key, can't use a tilde, had to use "/root/". 2. Here's the final command line: ssh -R 2200:SERVER2:22 root@SERVER1 "rsync -e 'ssh -p 2200 -i /root/.ssh/id_rsa_ROOT_ON_SERVER2' --stats --progress -vaz /path/to/big/files root@localhost:/destination/path" – regulatre – 2010-08-25T12:18:27.333

Hi @David, thanks for this great hack. However, in order to get it working, I need server 2's key on server 1 using both solutions (in first and second edits). I think it's normal (port forwarding or not, server1 is still trying to authenticate to server2), but you write If you don't mind server1 having a key .... Can you tell me if it is really possible to avoid having server2's key on server1 please? – ssssteffff – 2012-12-10T14:14:37.413

@ssssteffff: Agent forwarding is probably what you are looking for. http://www.unixwiz.net/techtips/ssh-agent-forwarding.html gives a relatively plain-English explanation of this. – David Spillett – 2012-12-12T10:16:01.710

@DavidSpillett this is exactly what I needed, thank you very much! Both solutions in both of your edits have the same behavior (I have to authorize server1's key on server2). Specifying the -A parameter to the ssh command seems to me to be the only way to avoid this authorization. Perhaps it should be clarified in your answer? Many thanks again for replying so quickly, two years after your first answer on this thread! – ssssteffff – 2012-12-21T09:59:00.847

2

Here are a few methods that make the synchronization a simple one-liner, but require some setup work.

  • Set up a reverse ssh tunnel from server1 to your desktop (sorry, I can't tell you the .ssh/config incantation off the top of my head; a rough sketch follows this list). Chain it with a connection from your desktop to server2. Run rsync from server1.

  • Set up a socks proxy (or an http proxy which accepts CONNECT) on your desktop. Use it to establish an ssh connection from server1 to server2. Run rsync from server2.

  • Use unison instead of rsync. But the workflow is different.

  • Mount the directories from one or both servers on your desktop using sshfs.
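
For the first option, one possible shape of the tunnel plus .ssh/config (a sketch under assumptions: the host alias, user name and port are made up, and server1 still needs a key that server2 accepts, or ssh -A agent forwarding as discussed in the comments above):

# On the desktop: reverse tunnel so that server1's port 2222 reaches server2:22;
# this also drops you into a shell on server1, where you can run the rsync.
ssh -R 2222:server2:22 user@server1

# ~/.ssh/config on server1: a friendly name for "server2 via the tunnel"
Host server2-via-desktop
    HostName 127.0.0.1
    Port 2222
    User user

# Then, still on server1:
rsync -az --delete /path/to/stuff server2-via-desktop:/destination/path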

Gilles 'SO- stop being evil'

Posted 2010-08-23T12:28:45.240

Reputation: 58 319

1

Why one line? Use a small shell script:

#!/bin/bash
# Run me on server1

# Create the port forward server1 -> desktop -> server2 (i.e.
# the first forward creates a second tunnel running on the desktop)
ssh -L/-R ... desktop "ssh -L/-R ... server2 sleep 1h" &    
pid=$!

# Kill port forward process on exit and any error
trap "kill $pid" EXIT 

rsync -e ssh /path/to/files/ root@localhost:/path/to/files/on/server2

IIRC, you can set the sleep time lower; the first ssh will not terminate as long as someone uses the channel.

Aaron Digulla

Posted 2010-08-23T12:28:45.240

Reputation: 6 035

I'm pretty sure that rsync can not operate between two remote servers over SSH that way (only local->remote or remote->local) - though your use of sleep and trap is neater than the "keep alive and kill" method in my answer. – David Spillett – 2010-08-23T13:52:52.287

Please see the edits on the original question. – regulatre – 2010-08-23T14:04:05.483

Damn, you're right. It's not allowed to specify two hosts as arguments on the command line. ... hmmm.. – Aaron Digulla – 2010-08-23T14:04:44.967

Okay, I've improved my solution. You need to create two port forwards to make the ssh service of server2 visible on server1. – Aaron Digulla – 2010-08-23T14:09:04.850

Add a couple of ";" to turn it into a one-liner. The idea is easier to understand with a script. – Aaron Digulla – 2010-08-23T14:09:48.763

+1 for keeping things clean and writing a nice script. – regulatre – 2010-08-25T12:25:13.887