What is your experience using EncFS with SSHFS for remote backup?
My main concern is long-term stability.
4 Answers
Well, I have a friend who backs up to my server using that very method. According to him it works well.
When dealing with SSHFS and EncFS there are a few potential caveats to be aware of, such as UID mapping, workarounds for rename behavior, and so on. Last year I did a write-up on how to use rdiff-backup across SSHFS and EncFS. Those pointers may well also apply to your backup software.
http://wiki.rdiff-backup.org/wiki/index.php/BackupToSshfsMount
http://wiki.rdiff-backup.org/wiki/index.php/BackupToEncfsAcrossSshfs
Of course, as with any other backup solution, it should be properly tested. That also includes doing test restores.
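As a minimal sketch of that kind of setup (hostnames and paths here are placeholders, not taken from the write-ups):
# Mount the remote space locally, mapping remote ownership to the local user.
sshfs backupuser@backuphost:/srv/backups /mnt/backups -o idmap=user -o reconnect
# Back up against the mount as if it were a local directory.
rdiff-backup /home/me /mnt/backups/home-me
# Unmount when done.
fusermount -u /mnt/backups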
I would recommend just using rdiff over SSH directly. (I personally use rsync over SSH for most of my backup operations, with hard links for snapshot provision; rdiff/rdiff-backup uses the rsync algorithm in a slightly different manner to achieve a similar result, so the choice comes down to preference; see the sketch after these comments.) Then there is no need for the extra filesystem wrapper to complicate matters. You could also just use scp/sftp to transfer files, but you'll probably find rsync/rdiff directly over SSH much more efficient in the long run. – David Spillett Jan 19 '10 at 10:35
But rdiff over SSH entirely skips one step, possibly the most important one: backing up to untrusted (remote) space. – XTL Oct 02 '13 at 20:19
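A minimal sketch of the hard-linked-snapshots approach mentioned above (host, paths, and layout are hypothetical):
# Each run creates a dated snapshot; unchanged files are hard-linked
# against the previous snapshot, so they cost no extra space.
TODAY=$(date +%F)
rsync -a --delete --link-dest=/srv/backups/latest /home/me/ backupuser@backuphost:/srv/backups/$TODAY/
# Point 'latest' at the new snapshot for the next run to link against.
ssh backupuser@backuphost "ln -sfn /srv/backups/$TODAY /srv/backups/latest"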
I've been using encfs -> sshfs for some months now and have not had to restart it or kill any hung processes. However, when I layered posixovl on top of those, so that all my local users could have proper ownership and file permissions on the remote file space (which was under a single account in a different username-space), it hung within a day. When I removed posixovl (fuser -m and umount -l are damned useful; see the sketch below the mount commands!) everything started working nicely again, and I didn't need to restart sshfs.
This is how I have the three fuse filesystems set up.
As the user who owns the remote account:
sshfs username@remote-site:/home/username/encrypted ~username/remotesite-encrypted -o idmap=user -o uid=`id -u` -o gid=`id -g` -o reconnect -o allow_root
encfs ~username/remotesite-encrypted ~username/remotesite -o allow_root
As root:
/usr/local/sbin/mount.posixovl -F -S /home/username/remote-site/user-directories /remotesite -- -o allow_other
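When one of the layers does wedge, the commands mentioned above can usually clear it without a reboot; roughly (mount points as in the setup above):
# See which processes are holding the hung mount.
fuser -m ~username/remotesite
# Lazily detach the layers from the top of the stack down.
umount -l /remotesite
umount -l ~username/remotesite
umount -l ~username/remotesite-encrypted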
The hanging I was experiencing was a red herring. Our firewall kills idle connections way too aggressively, and I forgot to add 'ServerAliveInterval 60' to /etc/ssh/ssh_config. I'd done that on the test system, which had been stable for months, but forgot to when I moved the test to our production system. "Oops". – Graham Toal Jul 05 '12 at 19:23
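For reference, the keepalive setting in question is a one-liner, in /etc/ssh/ssh_config (system-wide) or ~/.ssh/config (per user):
# Probe the server every 60 seconds so overly aggressive firewalls
# do not silently drop idle SSHFS connections.
ServerAliveInterval 60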
A more stable setup would probably be to use EncFS with the --reverse flag. From the man page:
--reverse
    Normally EncFS provides a plaintext view of data on demand. Normally it stores enciphered data and displays plaintext data. With --reverse it takes as source plaintext data and produces enciphered data on-demand. This can be useful for creating remote encrypted backups, where you do not wish to keep the local files unencrypted.
And then either use cp+SSHFS (or rsync+SSHFS...), or better, any other backup tool that is capable of copying over SSH (or any other protocol you feel comfortable with), e.g. rdiff-backup or rsync.
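As a rough illustration of this --reverse approach (directories and host are placeholders):
# Expose an on-demand encrypted view of the plaintext directory.
encfs --reverse /home/me /tmp/cipher-view
# Ship only ciphertext; the remote end never sees plaintext names or contents.
rsync -a /tmp/cipher-view/ backupuser@backuphost:/srv/backups/home-me/
fusermount -u /tmp/cipher-view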
The main difference between this and the previous approaches is that here encryption happens before the backup tool sees the files. This means that an attacker may get more information about the encrypted files if you are preserving history, because he can see which files change often and which do not, and maybe figure something out.
Problems: rdiff-backup seems to have trouble accessing the EncFS --reverse filesystem.
Note that rsync will only see encrypted files and filenames. This prevents you from using rsync's --exclude/--include options to filter backups. – ʀᴏʙ Apr 19 '14 at 20:49
I haven't been able to get SSHFS to keep a stable connection for more than a day.
Does a long-lived connection matter? Wouldn't the likely scenario be mount; backup; umount? – andol Dec 15 '09 at 14:30
When I tried SSHFS (a fair while ago, so things may have changed quite a bit) it was indeed fine for mount+copy+umount type operations, sometimes shuffling a few hundred MB over an hour or so, but it always seemed to stop working and need to be restarted after a while if a permanent mount was attempted. – David Spillett Jan 19 '10 at 10:29