We use Hetzner dedicated servers for virtualization (Xen). Each server comes with 100 GB of free SFTP backup storage, and buying more is not an option - it's too expensive. Currently we run Bacula and mount this storage with FUSE so the Storage Daemon (SD) can write to it. This setup is not very reliable, but it works. Our problem is that we have much more data now and 100 GB is only enough for a single Full backup (and soon not even that - we are growing fast). At home I have a pretty good internet connection and plenty of storage, but it's a SOHO setup: the IP is dynamic and the link occasionally goes down (no UPS or BGP).
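For context, the current setup boils down to something like this (a sketch - the hostname, subaccount and paths are placeholders for our real ones):

```shell
# Mount the Hetzner backup space over SFTP using sshfs (FUSE).
# uXXXXX / the mountpoint are placeholders.
sshfs -o reconnect,ServerAliveInterval=15 \
    uXXXXX@uXXXXX.your-backup.de:/ /mnt/backup

# The Bacula SD device then points at the mounted directory, e.g.:
# Device {
#   Name = FileStorage
#   Media Type = File
#   Archive Device = /mnt/backup
# }
```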
The question: how can I use Bacula to push backup data to storage on a remote host over a fast but unreliable internet connection?
My first thought: run the first SD locally on the dedicated server and then migrate the volumes to a second SD, but:
Migration is only implemented for a single Storage daemon. You cannot read on one Storage daemon and write on another.
Second solution: after a backup finishes, manually move the files/volumes to the home server (rsync). Not very useful - the catalog would still point at the old location, so recovery would be a pain.
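What I mean by the second solution, roughly (paths and the home hostname are placeholders; `bscan` is Bacula's tool for rebuilding catalog entries from volumes, which is what makes this recoverable at all, just painfully):

```shell
# Copy finished volumes to the home server; --partial lets an
# interrupted transfer resume, --bwlimit keeps the link usable.
rsync -av --partial --bwlimit=10000 \
    /var/lib/bacula/volumes/ backup@home.example.org:/srv/bacula/

# At restore time the catalog is stale, so each moved volume would
# have to be re-imported into the catalog, e.g.:
#   bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Vol0001 FileStorage
```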
Third attempt: mount the home server with FUSE, write with fsync, and maintain a bunch of scripts that retry and remount when the connection drops.
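The "bunch of scripts" would mostly be a retry wrapper around the mount, something like this sketch (the sshfs command line, remote and mountpoint are assumptions for my setup, not tested config):

```shell
#!/bin/sh
# retry N DELAY CMD...: run CMD until it succeeds, at most N attempts,
# sleeping DELAY seconds between attempts; returns 1 if all attempts fail.
retry() {
    n=$1; delay=$2; shift 2
    i=0
    until "$@"; do
        i=$((i+1))
        [ "$i" -ge "$n" ] && return 1
        sleep "$delay"
    done
}

# Intended use from cron (placeholders): if the mountpoint is gone,
# clear any stale FUSE mount and try to bring it back.
#   mountpoint -q /mnt/home-backup || {
#       fusermount -u /mnt/home-backup 2>/dev/null
#       retry 10 30 sshfs -o reconnect,ServerAliveInterval=15 \
#           backup@home.example.org:/srv/bacula /mnt/home-backup
#   }
```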
Dear SF: what other solutions should I consider?