
I have two Linux servers in two locations connected by VPN (over a 100/100 Mb/s WAN link). Each server runs a Samba DC (both in the same domain) and a Samba file server.

The Samba file servers are configured to store ACLs as POSIX ACLs (not alongside the file as by default). These servers host the user files (Desktop and Documents folder redirection) and profiles.

Users regularly move between these two locations.

What I'm trying to achieve is real-time synchronization of files between these two servers, so that users can log into their domain accounts at either location and always see the same files.

But this solution must be fault tolerant: if the connection between the two sites goes down, users should still have access to their files (the latest version stored on the side where the user logged in), and when the connection is restored, the two servers should synchronize the files modified/created/deleted on both sides (so both end up with the same filesystem state). Of course the obvious question is: what if the same file is modified on both sides while they are disconnected? In my scenario this is unlikely, but if it does happen, it's perfectly fine for me to keep the version with the latest modification time.
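The latest-modification-time-wins policy I'd accept is essentially the following (an illustrative shell helper only, not part of any existing tool; it uses GNU `stat`):

```shell
#!/bin/sh
# Pick the newer of two conflicting copies by mtime
# (latest-modification-wins conflict policy).
newer() {
    # stat -c %Y prints the modification time as seconds since the epoch (GNU coreutils)
    if [ "$(stat -c %Y "$1")" -ge "$(stat -c %Y "$2")" ]; then
        echo "$1"
    else
        echo "$2"
    fi
}
```

A sync tool applying this policy would overwrite the loser with the winner on reconnect.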

I tried GlusterFS, but it has one big disadvantage in my scenario: a write is considered complete only once it has been committed to both replicas, so every file write (I'm not sure about reads) is limited by the WAN VPN link. This is not what I want - ideally a file is modified/saved directly on the local server and then asynchronously synced to the other side.

Currently I'm using just an Osync script. It meets almost all my conditions (at least it's the best solution I've been able to achieve): it preserves POSIX ACLs and automatically resolves conflicts - in a word, it doesn't need my daily attention. But it has one big disadvantage: it's not a daemon, just a script running in an infinite loop, so on every run it scans the whole tree (on both sides) to detect changes and only then synchronizes the servers. That is a very disk-intensive operation, and unfortunately each run takes about an hour (the synced folders hold about 1 TB of data).
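For reference, the current setup is roughly the following (a sketch only - the paths and target host are placeholders, and the flags follow osync's documented quick-sync mode, so verify them against your osync version):

```shell
#!/bin/sh
# Current approach: osync in an infinite loop (quick-sync mode).
# Every iteration rescans both trees, which is why each run takes ~1 hour.
while true; do
    osync.sh --initiator="/srv/samba/shares" \
             --target="ssh://root@site-b.example.com:22//srv/samba/shares"
    sleep 60
done
```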

So to sum up, I'm looking for a solution that works like the Osync script above but runs as a daemon: it should listen for changes on disk and, whenever a change occurs, immediately synchronize it to the other site.
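One direction of such a daemon can be sketched with inotify-tools plus rsync (a rough sketch under assumptions: the paths and host are placeholders, it only pushes one way, and a real two-way setup would need this running on both sides plus conflict handling). The important detail is rsync's `-A`/`-X` flags, which carry POSIX ACLs and extended attributes:

```shell
#!/bin/sh
# Event-driven one-way push: watch the share for changes and rsync them
# to the peer instead of rescanning the whole 1 TB tree.
# -A preserves POSIX ACLs, -X extended attributes.
SRC="/srv/samba/shares/"
DST="root@site-b.example.com:/srv/samba/shares/"

inotifywait -m -r -e modify,create,delete,move --format '%w%f' "$SRC" |
while read -r changed; do
    # Crude debounce: wait briefly so a burst of events triggers one sync;
    # further events queue up in the pipe and are drained afterwards.
    sleep 2
    rsync -aAX --delete "$SRC" "$DST"
done
```

This is essentially what `lsyncd` does internally, but in a form where the rsync flags (and thus ACL handling) are under your control.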

E.g. Syncthing looked very promising, but it does not support POSIX ACL sync (which the Samba FS needs).

Could you propose a solution?

  • Have you tried `lsyncd`? It is based on `inotify` and listens to kernel events instead of scanning the whole directory tree each time. There have even been attempts to marry `lsyncd` and `csync2` to achieve "active-active" synchronization (simultaneously in both directions). The devil is in the details: your scenario doesn't mention locking, and you *will* eventually have problems with two users on both sides writing to the same file - someone's changes *will* be lost. Gluster appears slow not because of limited bandwidth, but due to the high latency of the WAN link. – Nikita Kipriyanov Jun 28 '22 at 11:59
  • Requests for product, service, or learning material recommendations are off-topic because they attract low quality, opinionated and spam answers, and the answers become obsolete quickly. Instead, describe the business problem you are working on, the research you have done, and the steps taken so far to solve it – djdomi Jun 28 '22 at 17:49
  • @NikitaKipriyanov Unfortunately, as you mentioned, lsyncd supports only one-way sync, but I need what you called "active-active". As for locking: in my scenario there is virtually no chance of the same file being edited on both sides at the same time. – Moses Jun 28 '22 at 20:02
  • @djdomi I think that, beyond the request for a product recommendation, I did describe the problem I want to solve, what I tried previously, how it works now, and the problems with the current solution. If I should provide more information, please tell me exactly what. – Moses Jun 28 '22 at 20:05
  • You are just at the denial phase now. You *will* have problems, lost data, and angry users if you sacrifice consistency light-heartedly, because users don't know about the limitations, or forget your cautions as soon as you're out of the door, and will happily open the same file from different locations. For real. Certainly. This is no joke. Creating reliable multi-master systems is *hard*. For instance, try the mentioned lsyncd+csync2 combination just to see this for yourself. I tried; I learned. You can learn from my mistakes or make your own. – Nikita Kipriyanov Jun 29 '22 at 06:02
  • Hmmm, OK, but in that case I'm wondering whether there is any reliable solution to my problem that doesn't sacrifice anything (can such a solution even exist)? Or am I misunderstanding something? Please advise me what I should try in this situation. – Moses Jun 30 '22 at 08:16

0 Answers