I have two or three somewhat older servers of type HP ProLiant DL380 G6/7 with only x * 1 GBit Ethernet, but plenty of CPU power and RAM and room for a fair amount of local storage. I'm interested in building a small cluster-like setup of two or even three nodes where all of them provide services, pretty much along the lines of what is currently marketed as "hyper-converged". The services are mainly VMs which themselves host web servers for different web apps, some daemons, databases etc. Very different stuff at the application level, some of it I/O bound, some not.
The servers currently use an entry-/mid-range NAS from Synology, and things don't work that well anymore. I have problems getting the NAS to behave reliably under heavy I/O load, and day-to-day performance, benchmarks aside, isn't great either. So I'm researching different options like cluster file systems, DRBD, ready-to-install solutions like Proxmox and all that stuff.
The main question I'm asking myself currently is whether there's some way to take the network out of the picture as a possible bottleneck by building "something" that prefers local reads and writes. DRBD, for example, provides replication protocol A (asynchronous), which is exactly what I have in mind. The window of possible data loss is something one might decide is an acceptable risk, given redundant hardware per server and so on. Additionally, one might simply not need the ability to host applications on all nodes at all times; it might be acceptable to only move applications between nodes for things like node updates and the associated restarts. Such things could be done manually, after some preparation steps.
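For clarity, this is roughly what I have in mind on the DRBD side; the hostnames, devices and addresses below are made up, and the exact syntax differs a bit between DRBD versions, so please read it as a sketch rather than a working config:

```
# /etc/drbd.d/r0.res  (illustrative only)
resource r0 {
    net {
        protocol A;            # asynchronous: writes are confirmed locally,
                               # replication to the peer happens afterwards
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;   # local RAID volume on each server
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```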
The important point is that if nodes hosted their own applications most of the time, one could benefit a lot from local reads and writes, with writes being replicated asynchronously afterwards. That's exactly what the DRBD docs say as well:
Regardless, it is perfectly possible to use DRBD, in dual-primary mode, as a replicated storage device for GFS. Applications may benefit from reduced read/write latency due to the fact that DRBD normally reads from and writes to local storage, as opposed to the SAN devices GFS is normally configured to run from.
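If I read the docs correctly, the dual-primary mode mentioned in that quote is enabled via the net section, although it seems to require the synchronous protocol C rather than protocol A; again just a sketch, and the option spelling may vary between DRBD 8.3 and 8.4:

```
resource r0 {
    net {
        protocol C;               # dual-primary apparently requires synchronous replication
        allow-two-primaries yes;  # both nodes may be Primary at once, e.g. for GFS/OCFS2
    }
    # ... per-node sections as above ...
}
```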
Are there comparable technologies that achieve this without block-level replication like DRBD? Maybe some cluster file systems provide such behaviour themselves already? Additionally, it would be a benefit if whatever is suggested simply works with current Ubuntu distributions out of the box, because that's currently my OS of choice for the servers.
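For reference, these are the kinds of packages I've been looking at and that I believe ship with current Ubuntu releases out of the box (package names from memory, so they may need double-checking):

```
# DRBD userland tools (the kernel module ships with the stock Ubuntu kernel)
sudo apt-get install drbd-utils

# Cluster/distributed file systems I've come across so far
sudo apt-get install glusterfs-server   # GlusterFS
sudo apt-get install ceph               # Ceph
sudo apt-get install ocfs2-tools        # OCFS2
```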