My Backups are all flocked up


I have a cron job that runs an rsync command to make a remote backup every two hours.

To prevent overlap when the previous remote backup is still running, I've wrapped this rsync command in a flock command.

Flock prevents this command from running multiple times simultaneously:

flock -n /location/of/lock_file -c 'rsync -rv /home/localuser/ remoteuser@55.55.55.55:/home/remoteuser/backupFolder'  || echo "Couldn't perform remote backup, because previous remote backup is still in progress."

However, if I reboot the remote server during one of these backups (to simulate a broken connection scenario), flock continues to block future attempts because the previous process (although permanently disconnected from the backup destination) persists.

What's the best way to make flock know that rsync has failed indefinitely, and therefore release these locks that are preventing future attempts from getting started?

On the rsync man page, I see there is a --timeout argument. Is setting that the best way to deal with flock's around-the-clock locks?
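As a sketch of that idea (the lock path, host, and timeout value below are illustrative, not tested against a real remote), the --timeout option would make a stalled rsync exit after the given number of seconds of inactivity, at which point flock releases the lock. The second half of the script demonstrates flock's behavior locally, without rsync: while one flock holds the lock, a second flock -n fails immediately.

```shell
#!/usr/bin/env bash

# Hypothetical combination of flock -n and rsync --timeout: if the
# connection dies, rsync exits after 300 idle seconds and the lock is freed.
#
#   flock -n /location/of/lock_file -c \
#     'rsync -rv --timeout=300 /home/localuser/ remoteuser@55.55.55.55:/home/remoteuser/backupFolder' \
#     || echo "Couldn't perform remote backup, previous one still in progress."

# Local demonstration of the locking behavior itself:
LOCK=/tmp/flock_demo.lock

flock -n "$LOCK" -c 'sleep 2' &        # first job acquires and holds the lock
sleep 0.5

# While the lock is held, a second non-blocking attempt fails at once
flock -n "$LOCK" -c 'echo got lock' || echo "lock held, skipping"

wait
```

Running this prints "lock held, skipping": the second flock gives up immediately instead of queueing, which is exactly the behavior the cron job relies on.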

LonnieBest

Posted 2013-12-02T19:12:58.863

Reputation: 1 099


Perhaps solo, which uses a bound socket to prevent multiple cron instances from running simultaneously, would work better than flock for you?

– Dan D. – 2013-12-02T19:26:14.480

Answers


This does not answer your question about flock but might help regardless. There was a similar question about backup strategies a while back and I liked my answer enough to implement it myself.

The basic idea is to have your backup script create a file on the backup destination when it finishes and delete that file as soon as a new run starts. You then make your script test for the existence of the file and only run if the file exists:

#!/usr/bin/env bash

## Make sure no backup is currently running: the marker file lives
## on the remote server, so test for it over ssh
if ! ssh user@remote test -e /path/to/backup/backup_finished.txt; then
  echo "A backup seems to be running, or did not finish correctly, exiting."
  exit 1
fi

## Delete the marker file from the remote server
ssh user@remote rm /path/to/backup/backup_finished.txt

## Do the rsync
rsync -r /path/to/source/ user@remote:/path/to/daily/backup/

## Recreate the marker file on the remote server
ssh user@remote touch /path/to/backup/backup_finished.txt

This is a much simpler approach than yours, but it has the advantage that it can catch (though not deal with in any graceful way) unfinished backups. You can expand it to test whether a backup is actually running, or whether an old one did not exit cleanly, and react accordingly.
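The marker-file pattern above can be exercised locally, with the ssh calls replaced by plain file operations (the paths here are hypothetical stand-ins for the remote ones):

```shell
#!/usr/bin/env bash

# Local sketch of the marker-file pattern; /tmp stands in for the
# remote backup destination and echo stands in for the real rsync.
MARKER=/tmp/backup_finished.txt

touch "$MARKER"          # simulate a previous run that finished cleanly

if [ ! -e "$MARKER" ]; then
  echo "A backup seems to be running, or did not finish correctly, exiting."
  exit 1
fi

rm "$MARKER"             # backup now "in progress": marker is gone
echo "backing up..."     # stand-in for the real rsync
touch "$MARKER"          # backup finished: allow the next run
```

If the script dies between the rm and the final touch, the marker stays absent and the next run refuses to start, which is how unfinished backups get caught.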

Since you need to monitor processes on both the local and the remote machines, I don't think a system of lock files will work.

terdon

Posted 2013-12-02T19:12:58.863

Reputation: 45 216