I currently have a setup where 10 database clients access a single (PostgreSQL) server on the same network. The applications have a very low database footprint: they load a few tables at startup and perform only a handful of operations at runtime (maybe 100 per hour).
But the database server is vital for system operation. All clients can perform the same tasks independently of each other... as long as the db server lives. And I don't want to just build one redundant db server.
What I'm trying to do: make every client also a database server and remove the dedicated server from the system. I want to form a distributed database system (maybe a dynamically distributed mesh) across all clients, with the syncing done more or less transparently. As long as one client survives, there should be a working dataset. It is not vital that this client has the latest working set from the last client that died, only that the data structure remains valid and it can perform its next operation.
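To make the consistency requirement concrete, here is a toy sketch (my own, not from any particular database) of the kind of sync behavior I have in mind: each peer keeps timestamped entries, and on sync the newer write wins, so all surviving peers converge without needing a central server.

```python
import time

class Replica:
    """Toy last-write-wins replica: each key stores (timestamp, value).
    Only an illustration of the eventual-consistency behavior I describe
    above, not a real database."""

    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def put(self, key, value, ts=None):
        # Use the wall clock unless an explicit timestamp is given.
        self.store[key] = (ts if ts is not None else time.time(), value)

    def get(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

    def merge(self, other):
        # Pull every entry from the peer; the newer timestamp wins.
        for key, (ts, value) in other.store.items():
            local = self.store.get(key)
            if local is None or ts > local[0]:
                self.store[key] = (ts, value)

# Two peers write independently, then sync in both directions.
a, b = Replica(), Replica()
a.put("config", "v1", ts=1)
b.put("config", "v2", ts=2)  # later write
a.merge(b)
b.merge(a)
assert a.get("config") == b.get("config") == "v2"
```

Losing the very latest write in a crash would be acceptable for me, as long as every surviving peer ends up with a structurally valid dataset like this.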
As I said earlier: the current system has a very low db ops footprint, and the db code can easily be redone. What's important is finding a database system that is best suited to building this kind of redundancy.
Any ideas about implementations, or at least something I should read while working on this topic? I've never done anything like this before.
Thank you