
We want to take a backup of everything on our Debian server, which is running remotely on the other side of the world (hosted by Linode), without shutting it down.

This system is running shell, email, XMPP/prosody and web, with a couple of simple nginx setups.
We want to back up files related to those things, just to be safe; for example, files users have stored in their home directories.

We don't need an exact copy of the existing setup with respect to every single /etc file; the reason we're doing the backup in the first place is so we can move everything to a new setup (a newer version of Debian, still on Linode).

I see that Linode offers a backup service. But in the long term we also need backups of our own, here, in case they go under or something else weird happens.

The reason this question exists is that when I've tried making backups in the past, I've kept making one of these two mistakes:

  • I've gone "OK, I'll just copy / and everything under it" and then gotten stuck in some weird infinite loop, either because the drive I was copying to was mounted under /media/backup and the copy recursed into itself [obviously that specific problem doesn't apply here, since we're going to back up over rsync or similar], or because it got stuck trying to copy "alive" stuff in /proc or /var, like trying to keep up with ever-changing logs, or
  • I've gone "OK, I'll just grab the bare minimum of what we need… hmm, everyone's home directories, our web server directories (all under /var), a copy of /etc, and all the old mail under /var/vmail" and then I've invariably messed up file permissions or timestamps (I'm going to make sure I don't back up Unix files to a FAT drive this time) or forgotten something ("oh, shoot, I had some custom scripts in /usr/local/bin that I never stored anywhere else, I forgot to get those, guess they're gone now").

So obviously copying the entire drive wholesale has led to pitfalls, and copying directories selectively has led to pitfalls. I want to know how to do it right.

The Server Fault question What's needed for a complete backup system? covers philosophy and good practices, but I am looking for more specific details:

  • Which directories do I need to copy and which do I exclude (given that it's a system that's currently running and serving up a wiki, XMPP chat, and email, with new messages coming in while the copy job is running)?
  • What file attributes such as timestamps, owner, and group do I need to preserve, and how do I do that? ← I think I can answer this half of the question myself with something like… um… rsync -HXaz? The -z obviously isn't really related to the question of "what do I preserve".

A lot of the backup advice I see, like using dd, seems to presuppose that the drive is unmounted and not in use. But aren't I supposed to exclude "living" directories like /proc and some of the subdirectories under /var (though some of the stuff under /var I know we definitely do need to keep) and /mnt? What else do I need to think about in this situation? Then I guess I can just snarf it with rsync and a bunch of --exclude flags, something like the sketch below.
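
This is roughly what I have in mind, though the exclude list and the destination are just my guesses rather than a vetted recipe:

rsync -aHXz \
    --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
    --exclude='/run/*' --exclude='/tmp/*' --exclude='/mnt/*' --exclude='/media/*' \
    / backuphost:/srv/backups/server/

(As far as I understand, preserving owners and groups requires the receiving rsync to run as root.)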

Or are there better ideas, especially FOSS friendly ones?

200_success
Sandra
  • I get that this question seems extremely basic, but having run these kinds of systems for such a long time, I've messed this up again and again and I never really grokked how to do it right – Sandra Apr 28 '19 at 02:48
  • https://serverfault.com/q/475849/126632 – Michael Hampton Apr 28 '19 at 03:40
  • For what it's worth, `cp -r -a` will preserve as many file attributes as possible when copying files (based on what the target filesystem supports). The `-a` flag instructs `cp` to preserve attributes. For copying over a network or via a filesystem that doesn't support the required attributes, `tar -c` has always worked for me although I believe there are some edge cases that it doesn't cover and in particular I believe `tar` is by default dependent on usernames matching on both systems. That said, I have copied an entire (unmounted) Linux system using `tar` without any apparent issues. – Micheal Johnson Apr 28 '19 at 14:31
  • Also is there any particular reason why it is necessary to copy the system live? – Micheal Johnson Apr 28 '19 at 14:31
  • Use linode's snapshot service? – ivanivan Apr 28 '19 at 17:50
  • @ivanivan, we’re gonna do that, as I said; we're also gonna make our own backup as well. Just in case♥ – Sandra Apr 28 '19 at 19:13
  • Just to add: if you are using AWS or Azure, you can just do it from the console – Nigel Fds Apr 29 '19 at 00:24
  • The simple answer is back up _data_, not _configuration_. Configuration should be stored externally and applied - use something like Terraform or Ansible - this way you can _test_ the configuration in an automated way. As you're moving servers you have a chance to get it right this time - take the opportunity that presents itself with both hands. Write code to provision the server. Adopt a golden rule: there should be no human interaction to restore a server from bare metal. Your own question explains why. – Boris the Spider Apr 29 '19 at 06:46
  • Great edits, thanks!♥ We are interested in preserving data rather than configuration because we're kind of interested in starting over clean, applying some of the lessons we've learned since we first started this server, and then getting all of our web pages, wiki pages & emails in there. I've learned from past mistakes that even if you intend to start over with a new configuration, it can be good to have a copy of /etc to be able to see "oh, man, how *did* I solve this problem last time, maybe I could do something similar now, but better" – Sandra Apr 30 '19 at 05:01

7 Answers


So you want to back up your whole drive without all those nasty mistakes, and also filter out /proc and the other temporary folders?

One option is to bind-mount the root filesystem onto another directory, like this:

$ cd /mnt
$ mkdir drive
$ mount --bind / drive

This gives you a view of all the files on your drive without the virtual or temporary filesystems (like /proc or /sys) that are mounted on top of it.

Now that you have a clean view of your root folder, you can just copy it to your backup drive using standard cp or rsync; the -a flag keeps permissions, ownership, and timestamps intact. Something along the lines of:

cp -a /mnt/drive /mnt/backupdrive
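
If the backup target is a remote machine, as in the question, a rough rsync equivalent of the same idea would be something like this (the hostname and destination path are placeholders):

rsync -aHX /mnt/drive/ backuphost:/srv/backup/server-root/

The trailing slash on /mnt/drive/ tells rsync to copy the directory's contents rather than the directory itself.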

This solves both your mentioned problems:

  • You don't get into recursion, because the backup disk is not visible within the bind-mounted view of the drive
  • You don't miss any important files, because you are taking them all

See also: man mount(8)

rollstuhlfahrer
  • Watch out: with this solution you may copy files that are being written to, such as databases. I recommend running a script to dump the database to a separate file before copying the files. For example, for MySQL you can use mysqldump. – Marco Martinelli Apr 28 '19 at 20:47

In Linux everything is a file, so this is possible via rsync, but there are things to be aware of that are (at best) difficult to get around.

You should think about replication first, especially for databases. It is also a good idea to set up a proxy or load balancer in front of your primary server, so you can easily switch back and forth between your primary and mirror servers during the transition.

At the hardware level, the best situation is to have a mirror-like server on the other side, with the same number of Ethernet ports, the same HDD layout, and so on. Everything that differs implies a need for system configuration changes.

For example, if you have two Ethernet ports, you want to make sure that the network configuration, firewall rules, and so on match the interface names on both servers; where they differ, you either need to change the configuration after the rsync or change the device names on the second (destination) server.

The same goes for the partition layout. You should create the same partitions as on your primary server, but if you create them from scratch you'll end up with different UUIDs, so you will need to change fstab, grub, mdadm (if soft RAID is involved), and so on.

But there are also many things that may go wrong, like databases, which can end up inconsistent if they are not stopped before doing the rsync.
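
For example, for MySQL/MariaDB you can take a consistent dump beforehand instead of stopping the server entirely; the output path here is just an example:

mysqldump --all-databases --single-transaction > /root/pre-migration-dump.sql

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking the database for the duration of the dump.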

The best strategy is to first prepare the hardware and filesystem (partitions) to match the primary server's configuration. Then mount the empty partitions via an intermediary system (like a live CD with an SSH server temporarily installed). Create empty /proc, /dev and /sys directories and then rsync the rest, like so:

rsync -avz -H --delete /etc /bin (...and so on) destserver:/mnt/yourrootfs/

Then you need to install grub on the device and work on the configuration to make it bootable: change the network configuration, fstab, and the other things mentioned earlier.
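
A rough sketch of that grub step, run from the live CD and assuming the new root is mounted at /mnt/yourrootfs and the boot disk is /dev/sda (both placeholders):

mount --bind /dev /mnt/yourrootfs/dev
mount --bind /proc /mnt/yourrootfs/proc
mount --bind /sys /mnt/yourrootfs/sys
chroot /mnt/yourrootfs grub-install /dev/sda
chroot /mnt/yourrootfs update-grub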

You may also try installing a fresh system (the same version you're using on your primary server), then powering it off, mounting its disk via another, temporary system (like a live CD), and then replacing everything other than /proc, /sys, /dev and /boot with rsync.

But this is only the general idea. Things may get complicated depending on what you actually have on the server and what your configuration, network, and hardware setup look like. And at the end of the day it may be really hard, or impossible, to do without noticeable downtime.

Comar
  • Re databases: If you have the appropriate filesystem abstractions in place (e.g. an LVM), you may be able to take a consistent snapshot of the drive without going for full DB replication. However, this requires that your database is `kill -9` safe, or else it might fail to recover. A good database *should* handle that situation, but a surprising number of products don't (or worse, they nearly always recover, but fail once in a blue moon when you really need them to work). So in practice, replication is probably more reliable anyway. – Kevin Apr 29 '19 at 06:19

What you actually want is restores. Whatever you do, you must test your restores regularly.


Linode has a backup service. Snapshots can be taken on a limited pre-defined schedule or via an API.
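
For example, a snapshot can be triggered with a plain HTTP call; this sketch is from memory, so treat the endpoint and payload as assumptions and verify against Linode's current v4 API docs:

curl -X POST \
    -H "Authorization: Bearer $LINODE_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"label": "pre-migration-snapshot"}' \
    https://api.linode.com/v4/linode/instances/$LINODE_ID/backups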

An advantage of snapshot-based backups is that they capture a sharp point in time: data isn't changing while the copy is made. Snapshots can also easily be restored to a different host, a new Linode in this case.

John Mahowald
  • I'm not seeing anything about ensuring those backups still work if, e.g., Linode goes bankrupt. – Mark Apr 29 '19 at 21:55
  • I found out about Linode's backup service while typing up one of the edits to my question and I talked it over with my colleague and we went for it. It solved our immediate crisis but we're going to try to find a way to store the data in our own homes as well. So + for bringing up that they have that service, I was unaware when I first posted. But restores have the following problem: If our server is misconfigured, a ball of chewing gum and wire hangers, we don’t necessarily want to restore it to exactly that same misconfigured state. We do want our favorite data though. – Sandra Apr 30 '19 at 05:07
  • I had written some more about also exporting this backup to another storage if it suited your recovery point objective and failure domains. But I left that out to be brief. A good business continuity plan, which backups are merely a part of, identifies and deals with such risks. – John Mahowald Apr 30 '19 at 12:20

I'm using BackupPC for my small virtual private server, and it works reasonably well. BackupPC can use rsync under the hood and supports full and incremental backups. Have a look at it and see whether it covers your requirements.


Run your system on ZFS. Then you can take an instantaneous, atomic snapshot using something akin to:

# zfs snap -r tank@name-of-backup

where tank is whatever your ZFS pool is named. The snapshot is guaranteed to be an atomic, moment-in-time image of the filesystem and all of its child filesystems.

Once you've created the snapshot, you can transfer it to another host using zfs send and ssh.
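
For example, something along these lines, where the destination pool and host names are placeholders:

zfs send -R tank@name-of-backup | ssh backuphost zfs receive -d backuppool

The -R flag sends the snapshot along with all child filesystems, matching the recursive snapshot taken above.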

Jim L.

In my opinion it depends on what you are running and where. With internal Linux commands alone it is not really possible: you have to mirror/pipe the complete data and libraries. If you are running on VMware and it is configured well, it provides live migration. Otherwise you have to use third-party tools. Hope this helps. Some more references: How do I make a backup of a live server?

Rsync is a good command to sync the data between servers.

asktyagi

There are two solutions available where you no longer need to rely on an incomplete checklist, or risk missing an item off your list because something got overlooked.

Firstly, if you move this onto a platform that gives you more control over the underlying hardware, you can take disk snapshots of all files while the server is running. For example, on AWS you can snapshot an EBS disk and even pay only for the differences when you take another snapshot later.

Secondly, I recommend scripting the setup of your complete server with a configuration management system, such as Ansible (see the sketch after this list). This will:

  • document everything you have configured in source control

  • allow you to test recreating the server from backup or bare metal, to make sure your scripts are up to date

  • allow you to rerun the script on a newer operating system, usually with quite minor changes.
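
A sketch of what that looks like in practice, with made-up playbook and inventory names:

ansible-playbook -i inventory/production site.yml --check --diff

Here --check does a dry run and --diff shows what would change, so you can regularly verify that the playbook still describes the real server before you need it as a restore path.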

jdog
  • It turns out you can do snapshots on Linode too. I'm gonna check out Ansible! That's kind of a side topic to what I originally wanted to know, but something like that—and I had never heard of it [I mean, I had heard of the fictional device from the wonderful Hainish books]—sounds wonderful! – Sandra Apr 28 '19 at 19:11