Here is the scenario I am trying to make work. I have three machines, J, T and R, that all need to end up with the same data eventually (slight temporary differences are fine). J and T are desktops that are turned on and off each day; R is a remote fileserver. The users on J and T can make changes to shared data in one directory (shared via group permissions) and in their own home directories. It is fairly common that only one of J and T is running at any given time. The Internet connection to R is fairly reliable, but R is far away and the latency makes it undesirable as the main file server. I tried NFS to R and it was pretty painful; the connection is not reliable enough to depend on for access at all times.
It is not an option, at this point, to put in a local server and just serve up a shared drive via NFS or CIFS. That would make life much, much easier, but it is not an option.
I tried cross-mounted NFS exports and managed to get a couple of simple shell scripts to mount the drives, etc., but if J mounted the drive from T and used that as the base file share, and T was then turned off, the data changes would be lost. So I need some way to constantly sync data.
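The mount scripts were nothing special, roughly along these lines (the hostname, export path and mount point are placeholders, not our real layout):

```bash
#!/bin/sh
# Mount the shared directory exported by the other workstation (T, from J's side).
PEER=T
EXPORT=/srv/shared       # placeholder for the actual export
MOUNTPOINT=/mnt/shared   # placeholder for the local mount point

mkdir -p "$MOUNTPOINT"
# soft/timeo so a dead peer does not hang processes forever
mount -t nfs -o soft,timeo=30,retrans=2 "$PEER:$EXPORT" "$MOUNTPOINT"
```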
I looked at rsync and eventually found csync (not csync2, which is an entirely different project). That lets me run periodic syncs between the machines, but it runs from cron rather than on demand. I also found lsyncd, which looks like it could handle copying changes while both workstations are up, but not figuring out what changed while the local system was down and the other workstation was making changes.
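To give a concrete idea of the kind of periodic sync I mean, here is roughly what a cron-driven rsync pass would look like (hostname, paths and schedule are placeholders; note that rsync -u only skips files that are newer on the receiving side, so this is not real conflict handling):

```bash
#!/bin/sh
# Periodic two-way pass against the other workstation, run from cron, e.g.:
#   */15 * * * * /usr/local/bin/sync-shared.sh
PEER=T                 # placeholder for the other workstation
SHARED=/srv/shared/    # placeholder for the shared directory (trailing slash matters)

# Pull changes made on the peer, skipping files that are newer locally ...
rsync -au -e ssh "$PEER:$SHARED" "$SHARED"
# ... then push local changes back, skipping files that are newer on the peer.
rsync -au -e ssh "$SHARED" "$PEER:$SHARED"
```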
I looked at Ubuntu One, Dropbox, etc., and the free storage limits are a problem. The budget is very, very limited and recurring costs are to be avoided. iFolder might be an option, but it looks like it needs a server somewhere.
What I would prefer is something like this:

1. Mount both the remote and local copies into a working directory, make changes there, and have the filesystem push them out to the other locations.
2. On login, start a resync with both other machines to catch up on any changes made while the machine was off (roughly the sketch below).
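A rough sketch of what I imagine that login hook doing (all names are placeholders, and this is only an idea, not something I have working):

```bash
#!/bin/sh
# Run from a login/startup hook: catch up with whichever peers are reachable.
SHARED=/srv/shared/   # placeholder for the shared directory
PEERS="T R"           # the other workstation and the remote fileserver

for peer in $PEERS; do
    # Skip peers that are down or unreachable right now.
    ping -c 1 -W 2 "$peer" >/dev/null 2>&1 || continue
    # Pull first, then push; -u skips files that are newer on the receiving side.
    rsync -au -e ssh "$peer:$SHARED" "$SHARED"
    rsync -au -e ssh "$SHARED" "$peer:$SHARED"
done
```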
It is possible for one of the two workstations to be off for days at a time, so a lot of changes can accumulate. There are only two usual users, so the rate of changes isn't that high, but after a week, they do build up.
Is there a FUSE module that does something like RAID 1 across filesystems? Because of the possibility of long disconnects, things like GlusterFS, AFS and NBD do not seem appropriate.
If there were a way to do NFS client failover (the other workstation went down, so use the local copy instead), that would work too. I've done some research on that, and other than some mentions that "autofs should support that, but doesn't", I haven't found much.
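The closest thing I can picture is a mount wrapper along these lines (again just a sketch with placeholder names; it gives applications a stable path, but does nothing about merging the two copies afterwards):

```bash
#!/bin/sh
# Poor man's client-side failover: mount the other workstation's export if it
# answers, otherwise bind-mount the local copy at the same path.
PEER=T
EXPORT=/srv/shared        # placeholder for the NFS export on the peer
LOCAL=/srv/shared-local   # placeholder for the local copy
MOUNTPOINT=/mnt/shared    # the path applications actually use

mkdir -p "$MOUNTPOINT"
if showmount -e "$PEER" >/dev/null 2>&1; then
    mount -t nfs -o soft,timeo=30 "$PEER:$EXPORT" "$MOUNTPOINT"
else
    mount --bind "$LOCAL" "$MOUNTPOINT"
fi
```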
I would prefer an NFS/CIFS-based solution because then I could get file locks and we would not have potential problems with users trying to modify the same file at the same time. But I'm not sure how to solve the client-side failover.
How would you solve this? Again, having a local file server is not currently an option.