13

When asking GitLab support how to do a 3 TB backup of one's on-premise GitLab, they reply: use our tool, which produces a tarball.

This just seems wrong to me on all levels. The tarball contains the Postgres dump, Docker images, repo data, Git LFS data, configuration and so on. Backing up terabytes of static data together with kilobytes of very dynamic data doesn't seem right. And then there is the issue that we want to do a backup every hour.

Question

I'd really like to know from others how they do it in order to get a consistent backup.

ZFS on Linux would be fine with me, if that is part of the solution.

Sandra
  • 3
    Why is this wrong? You back up your Gitlab completely to restore it completely. I don't think this is wrong. Of course it uses much more space than say, incremental backups, but...I wouldn't care about backup size. – Lenniey Feb 05 '19 at 14:05
  • 3
    Having a backup every hour is not unheard of, but it is impossible to make a 3 TB backup in less than an hour with their approach. And the backups for just one day would be ~100 TB, when there might only be 10 MB of changes to the data. – Sandra Feb 05 '19 at 14:11
  • OK, this is a different question, not about the backup in general but about frequent backups. – Lenniey Feb 05 '19 at 14:13
  • 5
    In their [official docs](https://docs.gitlab.com/ee/raketasks/backup_restore.html#alternative-backup-strategies) they even mention their method as being slow and suggest alternatives: `If your GitLab server contains a lot of Git repository data you may find the GitLab backup script to be too slow. In this case you can consider using filesystem snapshots as part of your backup strategy.` I can't speak from experience, though. But I may have to include something like this soon... – Lenniey Feb 05 '19 at 14:19
  • Gitlab has options in the config file and backup flags that will allow you to exclude sections, or go so far as to store images and artifacts on an object store – ssube Feb 05 '19 at 19:28
  • @Lenniey, backing up a Gitlab install this way is much like backing up a virtual machine by copying the disk image: yes, it works, but it's an incredibly inefficient way to go about it. – Mark Feb 05 '19 at 22:07
  • @Mark The filesystem snapshots should only copy changes so it won't be too inefficient – Qwertie Feb 05 '19 at 23:37
  • Have you considered specifying the backup skip options? This way you can back up the source code and ignore everything else for hourly backups, then do a big ole backup once a day or something. – tobyd Feb 06 '19 at 14:15

2 Answers

14

I would review what you are backing up and possibly use a "multi-path" approach. For example, you could back up the Git repositories by continually running Git pulls on a backup server. That would copy only the diff and leave you with a second copy of all Git repositories. Presumably you could detect new repos with the API.
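
As a rough sketch of that pull-based mirror idea (the token, URL and destination path below are placeholders, and it assumes the backup host can authenticate for clones), something like this could run from cron on the backup server:

```bash
#!/usr/bin/env bash
# Sketch: keep bare mirrors of every project on a backup host.
# GITLAB_URL, TOKEN and DEST are placeholders; clone authentication (SSH key
# or token-in-URL) is assumed to be set up separately.
set -euo pipefail

GITLAB_URL="https://gitlab.example.com"
TOKEN="REDACTED"          # personal access token with read access to the API
DEST="/backup/git"

page=1
while :; do
  # List projects page by page via the REST API (per_page max is 100).
  projects=$(curl -sf --header "PRIVATE-TOKEN: ${TOKEN}" \
    "${GITLAB_URL}/api/v4/projects?simple=true&per_page=100&page=${page}" \
    | jq -r '.[].path_with_namespace')
  [ -z "$projects" ] && break

  for p in $projects; do
    dir="${DEST}/${p}.git"
    if [ -d "$dir" ]; then
      # Existing mirror: fetch only what changed since the last run.
      git -C "$dir" remote update --prune
    else
      # New project: create a bare mirror clone.
      mkdir -p "$(dirname "$dir")"
      git clone --mirror "${GITLAB_URL}/${p}.git" "$dir"
    fi
  done
  page=$((page + 1))
done
```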

And use the "built-in" backup procedures to back up the issues, etc. I doubt that the 3 TB comes from that part, so you would be able to do backups very often at very little cost. You could also set up the PostgreSQL database as a warm standby with replication.
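
For the "built-in" part, the bundled backup task accepts a `SKIP` variable, so a sketch of a split schedule (check the exact component names against the docs for your GitLab version) could look like:

```bash
# Hourly: back up only the small, fast-changing data (database, uploads, ...),
# skipping the multi-TB components that are covered by other means.
# Newer Omnibus releases (12.2+) also offer `gitlab-backup create` as a shortcut.
sudo gitlab-rake gitlab:backup:create SKIP=repositories,registry,artifacts,lfs

# Daily/weekly: a full backup including everything.
sudo gitlab-rake gitlab:backup:create
```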

Possibly your 3TB comes from container images in the Docker registry. Do you need to back those up? If so, then there may be a better approach just for that.

Basically, I would recommend really looking at what it is that makes up your backup, and backing up the data in separate parts.

Even the backup tool from GitLab has options to include/exclude certain parts of the system, such as the Docker Registry.

ETL
  • 1
    Git pulls are not a perfect incremental backup. `git push --force` will either break the backups or erase history from them, depending on how it's implemented. – user371366 Feb 06 '19 at 03:23
  • @dn3s that's why you always disable `git push --force` on the main repository. If someone wants to change history, they can make their own fork and accept all the risks it brings. – charlie_pl Feb 06 '19 at 06:39
  • 2
    that might be fine for _replication_, but you don't want your backups' integrity to rely on correct application behavior. what happens if there's a bug in the application, or it's misconfigured down the road? what if your server is compromised by a malicious user? if your application has the ability to remove content from the backup host, much of the value of incremental remote backups is lost. – user371366 Feb 06 '19 at 07:46
10

For such a short time between backups (1 h), your best bet is to rely on filesystem-level snapshots and send/recv support.

If using ZoL is not a problem in your environment, I would strongly advise using it. ZFS is a very robust filesystem and you will really like all the extras (e.g., compression) it offers. When coupled with sanoid/syncoid, it can provide a very strong backup strategy. The main disadvantage is that it is not included in the mainline kernel, so you need to install/update it separately.
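
For illustration, the raw ZFS operations that sanoid/syncoid automate look roughly like this (the dataset and host names are placeholders):

```bash
# Assumes GitLab's data lives on its own dataset, e.g. tank/gitlab.

# Hourly snapshot: cheap and nearly instantaneous.
zfs snapshot tank/gitlab@hourly-2019-02-05-1400

# First replication to the backup host: full send.
zfs send tank/gitlab@hourly-2019-02-05-1400 | ssh backuphost zfs receive -u backuppool/gitlab

# Later runs: incremental send of only the blocks changed since the previous snapshot.
zfs send -i tank/gitlab@hourly-2019-02-05-1300 tank/gitlab@hourly-2019-02-05-1400 \
  | ssh backuphost zfs receive -u backuppool/gitlab

# syncoid wraps exactly this snapshot/incremental-send/receive logic, e.g.:
#   syncoid tank/gitlab root@backuphost:backuppool/gitlab
```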

Alternatively, if you really need to restrict yourself to mainline-included stuff, you can use BTRFS. But be sure to understand its (many) drawbacks and pain points.

Finally, an alternative solution is to use lvmthin to take regular snapshots (e.g., with snapper), relying on third-party tools (e.g., bdsync, blocksync, etc.) to copy/ship deltas only.
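
A minimal lvmthin sketch (volume group and LV names are placeholders, and the bdsync/blocksync invocation is left out since it depends on the tool):

```bash
# Assumes GitLab's data sits on a thin LV, e.g. vg0/gitlab.

# Thin snapshots need no pre-allocated size; they share the thin pool with the origin.
lvcreate --snapshot --name gitlab-hourly vg0/gitlab

# Thin snapshots carry the activation-skip flag, so -K is needed to activate them.
lvchange -ay -K vg0/gitlab-hourly

# Ship only the changed blocks to the backup host (bdsync, blocksync, ...),
# reading directly from the snapshot's block device: /dev/vg0/gitlab-hourly

# Drop the snapshot once the delta has been copied.
lvremove -y vg0/gitlab-hourly
```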

A different approach would be to have two replicated machines (via DRBD) where you take independent snapshots via lvmthin.

shodanshok
  • What about Postgres? Would it work to stop GitLab and Postgres for a minute, so a consistent snapshot could be made? Ideally it would be great if Postgres could be put in a read-only mode while the snapshot is made. – Sandra Feb 05 '19 at 15:07
  • 4
    @Sandra restoring from a filesystem snapshot should appear to PostgreSQL (and any other properly written database) as a generic "host crash" scenario, triggering its own recovery procedure (i.e., committing to the main database any partially written page). In other words, you do not need to put Postgres into read-only mode when taking snapshots. – shodanshok Feb 05 '19 at 16:01