I'm thinking of migrating all our documents and spreadsheets infrastructure from an old standalone Samba server to one of the popular self-hosted solutions, and I'm trying to build a long-lasting, error-resistant setup that's easy to (re)install. The first installation was easy: I wrote my own docker-compose file based on examples.
It spins up a multitude of containers, including a database, a web server, and a certificate generator/validator.
That looks very complex to back up for a Docker newbie like me, especially since some websites say I shouldn't even touch /var/lib/docker/volumes/ directly, and I'm afraid package managers or Docker installs/updates could break it.
To me the fastest and easiest way looks like simply:
- systemctl stop docker
- docker save / docker export the images and containers to tars
- Clonezilla /dev/sdb1 to an image or a disk of the same size (assuming /var/lib/docker/volumes/ is mounted on /dev/sdb1)
and then, on the new server or a future machine, when needed:
- restore the Clonezilla image/device and mount it at /var/lib/docker/volumes
- install and start Docker
- docker load / docker import the tars
My fear is that the db container might not reconnect to its volume and I'd lose logins and versioning; losing the files themselves seems less likely.
From what I understand, a sector-by-sector partition copy takes minutes where a cp -R of millions of files would take hours. Those steps would also make me feel much safer about re-running docker-compose to update the db and web engines, since facing the web means it has to stay secure and patched.
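By "re-running docker-compose to update" I mean pinning explicit image tags in the compose file, so an update is just editing the tag and running `docker compose pull && docker compose up -d`, which recreates the containers while keeping the named volumes. A made-up fragment of what I have:

```yaml
# Hypothetical docker-compose.yml fragment: explicit tags instead of
# "latest", so updates only happen when I change the tag and re-run
#   docker compose pull && docker compose up -d
services:
  db:
    image: mariadb:10.11          # made-up version tag
    volumes:
      - db_data:/var/lib/mysql    # named volume survives recreation
volumes:
  db_data:
```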
What do you think is safer? Fast would be good, but isn't really necessary. Or am I safe enough already?
Thank you!