
We use Hetzner dedicated servers for virtualization (Xen). Each server comes with 100 GB of free SFTP backup storage; buying more is not an option - it's too expensive. Currently we use Bacula and mount this storage with FUSE so the SD can use it. This solution is not very reliable, but it works. Our problem is that we have much more data now and 100 GB is only enough for a single Full backup (and it won't stay that way - we are growing fast). At home I have a pretty good internet connection and lots of storage. It's a SOHO setup, so the IP is dynamic and the link is sometimes down (no UPS or BGP).

The question: how can I use Bacula to push backup data to storage on a remote host over a fast but unreliable internet connection?

My first thought: run the first SD locally on the dedicated server and then migrate the volumes to a second SD, but:

Migration is only implemented for a single Storage daemon. You cannot read on one Storage daemon and write on another.

Second solution: after a backup finishes, manually move the files/volumes to the home server with rsync. It's not very useful - the catalog would still point at the old location, so recovery would be a pain.
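
To illustrate what I mean by "manually (rsync) move", here is a minimal sketch of a post-backup push script. The volume directory, home-server address and user are placeholders/assumptions, not my real setup; it could be hooked in as a script that runs after each job, but the catalog would still reference the local paths.

    #!/usr/bin/env python3
    """Sketch: push finished Bacula volumes to the home server with rsync.
    Paths, host and user below are placeholders/assumptions."""
    import subprocess
    import sys

    VOLUME_DIR = "/var/lib/bacula/volumes"                    # assumed local SD volume dir
    REMOTE = "backup@home.example.org:/srv/bacula/volumes/"   # hypothetical target

    def push_volumes():
        # --partial / --append-verify let an interrupted transfer resume,
        # which matters on an unreliable line.
        cmd = ["rsync", "-av", "--partial", "--append-verify",
               VOLUME_DIR + "/", REMOTE]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(push_volumes())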

Third attempt: mount the home server with FUSE (with fsync) and write a bunch of scripts to retry and remount it when the connection drops.
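
Roughly what those scripts would look like, as a minimal sketch: a watchdog that remounts an sshfs mount when it goes stale. The mount point, remote address and sshfs options are assumptions, not my real configuration.

    #!/usr/bin/env python3
    """Sketch: keep a FUSE (sshfs) mount of the home server alive so the
    SD can keep writing. Mount point and remote are placeholders."""
    import os
    import subprocess
    import time

    MOUNT_POINT = "/mnt/home-backup"                   # assumed mount point
    REMOTE = "backup@home.example.org:/srv/bacula"     # hypothetical remote

    def mount_alive():
        # A dead FUSE mount usually makes stat() fail with EIO/ENOTCONN.
        try:
            os.stat(MOUNT_POINT)
            return os.path.ismount(MOUNT_POINT)
        except OSError:
            return False

    def remount():
        # Lazy-unmount the stale handle, then mount again with reconnect options.
        subprocess.call(["fusermount", "-u", "-z", MOUNT_POINT])
        subprocess.call(["sshfs", REMOTE, MOUNT_POINT,
                         "-o", "reconnect,ServerAliveInterval=15,ServerAliveCountMax=3"])

    if __name__ == "__main__":
        while True:
            if not mount_alive():
                remount()
            time.sleep(60)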

Dear SF: what other solutions should I consider?

neutrinus
  • Get another root-server full of hard drives and use it for your backups. – Michael Hampton Jul 21 '14 at 12:12
  • @MichaelHampton: sure, but that would be a much more expensive solution. I already have huge storage at home. Adding more disks is also cheaper at home. – neutrinus Jul 22 '14 at 06:31
  • Seriously...how much is your data worth? – Michael Hampton Jul 22 '14 at 06:36
  • @MichaelHampton: I can do it using tar and rsync (and some bash magic). Duplicity supports sftp backend storage too. I asked this question because I'm too lazy to migrate bacula->duplicity if there is a native-to-bacula solution that works great. – neutrinus Jul 22 '14 at 21:05

0 Answers