
We are in a special situation: we have 2 offline Ubuntu servers in 2 DCs that must be kept powered off when not in use. When we need to use one, we power it up, do whatever is needed physically via KVM, and power it off again. Network connectivity will be absent at all times. We need a way to easily replicate the changes to the 2nd offline server, so that both servers hold the same data every time. We came up with 3 candidate solutions:

  1. A 3-way ZFS mirror on the 1st server. Disk 1 remains attached. Disk 2 is kept in a safe. Disk 3 is attached to the 2nd server. When operations are to be made on the 1st server, we plug in disk 2 (from the safe), do the operation, detach disk 2 from the mirror, plug it into the 2nd server and resilver. In short, the 3-way mirror will always be degraded on purpose. Alternatively, avoid plugging/unplugging disks and use ZFS send/receive, carrying the snapshot stream as a file on an external USB drive (a sketch of this follows the list).
  2. mdraid (software RAID 1) and do the same as in (1): unplug/plug the disk and resync.
  3. Clonezilla (or any other 3rd-party bare-metal imaging solution) to take an image from the 1st server and apply it on the 2nd (the hardware and partition setup will be identical).
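A minimal sketch of the send/receive alternative in (1), assuming a pool named `tank` on both servers, a USB drive mounted at `/mnt/usb`, and a previous common snapshot (all names are placeholders):

```
# On server 1: take a new recursive snapshot and write an incremental
# replication stream (everything since the last common snapshot) to a file.
zfs snapshot -r tank@sync-new
zfs send -R -i tank@sync-old tank@sync-new > /mnt/usb/tank-sync.zfs

# On server 2: apply the stream from the USB drive. -F rolls the pool
# back to the last common snapshot before receiving.
zfs receive -F tank < /mnt/usb/tank-sync.zfs
```

The very first transfer would be a full (non-incremental) `zfs send -R tank@sync-new`; after that, each trip only carries the deltas.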

Do you think that (1) would be too complex for a simple need like this? Any other opinions?

thouvou
  • Seems like you're trying to solve a typical problem with a non-conventional approach; that's why nobody answered. Long story short: nobody implements HA using offline servers. – drookie Mar 06 '21 at 18:17
  • @drookie we are not talking about HA. We are talking about classified data on servers that need to be literally air-gapped when not in use, and at the same time the 2 locations SHALL have identical data. So a manual data-sync method is required. – thouvou Mar 07 '21 at 09:26
  • Then it's simple: virtualize them both, say, using KVM, store their disks on ZFS as zvols and then just replicate these zvols. – drookie Mar 07 '21 at 10:15
  • Don't know how big or how "binary" the data is, but if it's not too big or mostly text, I'd use git. Seriously, think about it. A bare git repo on a disk that is normally kept in a safe. `git bundle create` with a ref that the other side is known to have, carry the bundle file over, fetch and checkout (see the sketch after these comments). I've also tried rsync's `--write-batch` and similar, but I was never able to get that to work reliably enough to be confident of using it long term. – Mar 09 '21 at 10:27
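A minimal sketch of that git-bundle workflow, assuming a branch `main` and a `last-sync` tag marking what the other side is already known to have (both placeholders):

```
# On server 1: bundle everything since the ref the other side already
# has, then move the marker forward.
git bundle create /mnt/usb/sync.bundle last-sync..main
git tag -f last-sync main

# On server 2: verify the bundle is complete and applicable, then pull
# from it as if it were a remote.
git bundle verify /mnt/usb/sync.bundle
git pull /mnt/usb/sync.bundle main
```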

1 Answer


Thank you all for your responses. After some tests, the quickest and simplest way is RAID 1. One disk of the mirror is used to sync the data between the 2 servers by physically plugging it back and forth.
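For reference, a minimal sketch of that RAID 1 workflow with mdadm, assuming `/dev/md0` is the mirror, `/dev/sdc1` is the travelling disk and `/dev/sdb1` is the disk that stays in server 2 (all device names are placeholders). The resync direction on the receiving side is the critical detail: the array there must be assembled from the travelling disk, not the local one, or the carried changes get overwritten:

```
# On server 1: add the travelling disk to the degraded mirror, wait for
# the resync to finish (watch /proc/mdstat), then detach it cleanly.
mdadm --manage /dev/md0 --add /dev/sdc1
cat /proc/mdstat
mdadm --manage /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

# On server 2: assemble a degraded array from the travelling disk alone,
# then add the local disk so it resyncs FROM the travelling copy.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 /dev/sdc1 --run
mdadm --manage /dev/md0 --add /dev/sdb1
```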

thouvou