I have two Linux servers in two locations connected by VPN (over WAN - 100/100 Mb/s). Each of these servers runs a Samba DC (same domain) and a Samba file server.
The Samba file servers are configured to store ACLs as POSIX ACLs (not in the file itself, as is the default). These shares host the user files (Desktop and Documents redirection) and profiles.
Users move between these two locations.
What I'm trying to achieve is real-time synchronization of files between these two servers, so that users can log in to their domain accounts at both locations and always see the same files.
But this solution must be fault tolerant: if something happens to the connection between the two sites, users should still have access to their files (the latest version stored on the side where the user logged in), and when the connection is restored, the two servers should synchronize files modified/created/deleted on both sides (to reach the same FS state). Of course, the obvious question is: what if the same file is modified on both sides while they are disconnected? In my scenario this is unlikely, but if it happens, it's perfectly fine for me to keep the version with the latest modification time.
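That last-writer-wins policy is just a modification-time comparison; a minimal sketch (file paths are hypothetical):

```shell
#!/usr/bin/env bash
# Pick the copy with the newer modification time (last-writer-wins).
# Arguments: paths to the two conflicting copies of the same file.
newer() {
  if [ "$1" -nt "$2" ]; then   # -nt: true if $1 is newer than $2
    echo "$1"
  else
    echo "$2"
  fi
}
```

For example, `newer /srv/local/report.doc /srv/remote/report.doc` would print the path of the copy to keep.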
I tried GlusterFS, but it has one big disadvantage in my scenario: a write to GlusterFS is considered complete only once it has been saved on both replicas, so every file write (I'm not sure about reads) is limited by my WAN VPN connection speed. This is not what I want - ideally a file is modified/saved directly on the local server and then asynchronously synced to the other side.
Currently I'm using just the Osync script - it meets almost all of my conditions (at least it's the best solution I've been able to achieve): it preserves POSIX ACLs and automatically resolves conflicts - in a word, it does not need my daily attention. But this solution has a big disadvantage - it's not a daemon, just a script running in an infinite loop, so on every run it scans the whole disk (on both sides) to detect changes and only then synchronizes the two servers. This is a very disk-intensive operation, and unfortunately each run takes about an hour! (The synced folders hold about 1 TB of data.)
So to sum up, I'm looking for a solution that works like the Osync script mentioned above but runs as a daemon: it listens for changes on disk and, when a change occurs, immediately synchronizes it to the other site.
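To illustrate, the behaviour I'm after could be sketched roughly like this (paths and the remote host are hypothetical; it assumes inotify-tools and rsync are available, with rsync's `-A` preserving POSIX ACLs):

```shell
#!/usr/bin/env bash
# Rough sketch of the desired daemon: block on inotify events,
# then mirror the tree with rsync.
# -a archive mode, -A preserve POSIX ACLs, -X preserve xattrs,
# --delete propagate deletions.
WATCH_DIR="/srv/samba/users"                    # hypothetical local share
REMOTE="root@fs2.example.com:/srv/samba/users"  # hypothetical peer server
RSYNC_FLAGS="-aAX --delete"

sync_once() {
  # Wait for the first change anywhere under WATCH_DIR, then push the tree.
  inotifywait -r -e modify,create,delete,move "$WATCH_DIR" &&
    rsync $RSYNC_FLAGS "$WATCH_DIR"/ "$REMOTE"
}

# Only loop when explicitly asked, so sourcing this file has no side effects.
if [ "${1:-}" = "--run" ]; then
  while true; do sync_once; done
fi
```

Note that this sketch only pushes one way; the solution I need would have to run in both directions and handle the offline/reconnect case described above.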
E.g. Syncthing looked very promising, but it does not support POSIX ACL synchronization (which the Samba FS needs).
Could you propose a solution?