
BackupPC works well on my LAN but has problems backing up a remote server of mine. I'm using rsync over ssh, and I have increased $Conf{PingMaxMsec} to 500 because the remote server is far away.
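For context, that setting lives in BackupPC's main Perl config; a minimal sketch, assuming the common /etc/backuppc/config.pl location (the path varies by distribution):

```
# /etc/backuppc/config.pl -- path varies by distribution.
# Tolerate up to 500 ms round-trip time when pinging the remote host.
$Conf{PingMaxMsec} = 500;
```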

Here is my error log (user and 1.2.3.4 are masked):

full backup started for directory /
Running: /usr/bin/ssh -p 2222 -q -x -l user 1.2.3.4 /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /

Xfer PIDs are now 5705
Got remote protocol 30
Negotiated protocol version 28
Sent include: /home
Sent include: /home/user
Sent exclude: /*
Sent exclude: /home/*
Sent exclude: /home/user/folder1
Sent exclude: /home/user/folder2
Xfer PIDs are now 5705,5744
[ skipped 22380 lines ]
Read EOF: 
Tried again: got 0 bytes
Can't write 4 bytes to socket
finish: removing in-process file home/user/site15/something/else/here/p4677545.jpg
Child is aborting
Done: 19205 files, 799171349 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal
Saving this as a partial backup

SSHing as user backuppc to the remote server works without being asked for a password. The general LOG looks like this:


> 2012-02-29 00:00:00 full backup started for directory /
> 2012-02-29 01:00:12 Aborting backup up after signal PIPE
> 2012-02-29 01:00:13 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 02:00:01 full backup started for directory /
> 2012-02-29 03:01:09 Aborting backup up after signal PIPE
> 2012-02-29 03:01:11 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 10:56:16 full backup started for directory /
> 2012-02-29 11:59:18 Aborting backup up after signal PIPE
> 2012-02-29 11:59:20 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 11:59:20 Saved partial dump 0
> 2012-02-29 12:25:15 full backup started for directory /
> 2012-02-29 13:26:55 Aborting backup up after signal PIPE
> 2012-02-29 13:26:57 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 16:48:52 full backup started for directory /
> 2012-02-29 17:51:41 Aborting backup up after signal PIPE
> 2012-02-29 17:51:42 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 17:51:42 Saved partial dump 0
> 2012-02-29 18:13:27 full backup started for directory /
> 2012-02-29 19:15:19 Aborting backup up after signal PIPE
> 2012-02-29 19:15:20 Got fatal error during xfer (aborted by signal=PIPE)
> 2012-02-29 19:15:20 Saved partial dump 0
> 2012-02-29 19:19:55 full backup started for directory /

Almost every run ends after 1h01' or 1h02', i.e. 3660 or 3720 seconds. Maybe these values appear in some option...? Any remarks?

Thanks

EDIT: What I've tried that didn't work:

- ServerAliveInterval=300 and ServerAliveInterval=60
- increasing PingMaxMsec to 500
- rsync options --timeout and --contimeout set to 20 and 200 seconds (both values tried on both options)
- removing the --block-size option from rsync
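For reference, the rsync options went through BackupPC's per-host config, roughly as in this sketch (the file name remotehost.pl and the 200-second value are illustrative):

```
# Per-host override, e.g. /etc/backuppc/pc/remotehost.pl (a sketch).
# Append to the stock rsync arguments instead of replacing them.
$Conf{RsyncArgs} = [
    @{$Conf{RsyncArgs}},    # keep the default arguments
    '--timeout=200',        # rsync I/O timeout in seconds
];
```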

Nothing worked. It usually stops on different files, regardless of their size.

NEW EDIT: If I manually log in to this remote ssh server and leave my terminal untouched for 1h26', I get disconnected. This looks like the reason BackupPC gets disconnected, too. Is there any way to simulate the execution of commands inside BackupPC's session?

Chris

3 Answers


You should check the value of $Conf{ClientTimeout} in your configuration and increase it accordingly.

Due to implementation constraints, BackupPC has little insight into what is happening during the transfer of a long file. To prevent dead transfers from hanging around forever, the transfer is cancelled after $Conf{ClientTimeout} seconds; if any single file on your remote server takes longer than this value to back up, the backup transfer gets aborted.
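A minimal sketch of the setting, assuming the usual /etc/backuppc/config.pl location (a per-host override works too):

```
# /etc/backuppc/config.pl or a per-host override -- abort the transfer
# only if no single file completes within this many seconds.
$Conf{ClientTimeout} = 144000;   # 40 hours; pick a generously large value
```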

the-wabbit
  • Thanks for the suggestion! I tried a value of 144000 but I still get the same error. Every backup on that remote server always fails after 1h01' or 1h02'. These values in seconds are 3660 and 3720. Maybe they appear in some option...? – Chris Mar 01 '12 at 09:46

You could try manually issuing full backup commands one after another. Each time you do, BackupPC will transfer a different chunk of the data (your partial backups are being saved, as your log shows). Once a backup succeeds, you will know all your data has been transferred.
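For example, a full backup can be queued from a shell; a sketch, where the install path and the host name remotehost are placeholders, and the final 1 requests a full backup (0 would request an incremental):

```
# Run as the backuppc user; BackupPC_serverMesg talks to the running server.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg \
    backup remotehost remotehost backuppc 1
```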

You could also use ssh's compression option (-C) to speed up the process and reduce the number of iterations; see the sketch below.
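A sketch of what that could look like, mirroring the ssh command from the question's log ($Conf{RsyncClientCmd} is where BackupPC builds that command; keep it single-quoted so BackupPC substitutes the $sshPath-style variables itself):

```
# Per-host config: -C enables ssh compression; port and login user
# mirror the command shown in the question's XferLOG.
$Conf{RsyncClientCmd} = '$sshPath -C -p 2222 -q -x -l user'
                      . ' $hostIP $rsyncPath $argList+';
```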

Subsequent incremental backups should not cause any trouble, as they are usually much smaller and therefore faster, so your connection will not time out.

miltos

If you or the provider have a firewall, check its session TTL: a common value is 3600 seconds, which matches the roughly one-hour failures above. If the firewall is not in your hands, you could also use ssh with the ServerAliveInterval option, as described here: https://unix.stackexchange.com/questions/34004/how-does-tcp-keepalive-work-in-ssh. BackupPC's ssh invocation could be modified accordingly; see the sketch below.
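A sketch of such a modification, passing the keepalive options on the ssh command line that BackupPC runs (port and login user taken from the question; the interval and count values are illustrative):

```
# Per-host config: ssh sends a keepalive probe every 60 s of inactivity,
# so the firewall's idle-session timer keeps being reset.
$Conf{RsyncClientCmd} = '$sshPath -o ServerAliveInterval=60'
                      . ' -o ServerAliveCountMax=3'
                      . ' -p 2222 -q -x -l user $hostIP $rsyncPath $argList+';
```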

Patrick