
I have read how important it is to keep an up-to-date catalog backup for Bacula.

This is my solution, added while setting up a full/incremental/differential archive backup job:

RunAfterJob  = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
RunAfterJob  = "mv /var/lib/bacula/bacula.sql /mnt/arch2/bacula-database/%c-%n-job%i.sql"

This way I keep a historical series of dumps of my database.
Is this an acceptable solution, or is there some flaw in it?

P.S. I already have a job to back up the catalog, which looks like this:

# Backup the catalog database
Job {
  Name = "catalog backup"
  Schedule = "catalog schedule"
  Enabled = yes
  Priority = 30 # run after main backup
  Type = Backup
  Level = Full
  Client = my-client
  Storage = my-storage
  FileSet="catalog set"
  Accurate = yes
  Pool = CatalogPool
  # create ASCII copy of the catalog
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # create ASCII copy of catalog and store it
  RunAfterJob  = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  RunAfterJob  = "mv /var/lib/bacula/bacula.sql /mnt/arch2/bacula-database/%c-%n-job%i.sql"
  # don't use variable substitutions in the file name, as data is appended to the file when the backup level is not Full
  Write Bootstrap = "/mnt/arch2/bacula-bootstrap/%c-%n.bsr"
  Messages = telegram
}
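
Should the catalog ever need to be rebuilt from one of these archived dumps, the restore is just a matter of feeding the SQL file back into the database engine. A minimal sketch for a PostgreSQL catalog (the service name, database name, user and dump file name are all assumptions; a MySQL catalog would use the mysql client instead):

# stop the director so nothing writes to the catalog during the restore
systemctl stop bacula-dir
# reload the catalog from one of the archived dumps (file name is a placeholder)
psql -U bacula -d bacula -f /mnt/arch2/bacula-database/my-client-archive-job1234.sql
systemctl start bacula-dir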

# file set of the catalog
FileSet {
  Name = "catalog set"
  Include {
    Options {
      signature = SHA1
      compression = GZIP
      verify = pins1
      onefs = yes
    }
    File = "/var/lib/bacula/bacula.sql" # working dir specified in /etc/bacula/scripts/make_catalog_backup.pl and /etc/bacula/scripts/delete_catalog_backup
  }
}
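
For context, make_catalog_backup.pl takes the name of a Catalog resource (MyCatalog here), reads its connection settings, and writes a plain SQL dump named bacula.sql into the working directory, which is why the FileSet backs up exactly that path. A rough shell equivalent for a MySQL catalog would be something like this (the database name and credentials are assumptions):

mysqldump --user=bacula --password=secret bacula > /var/lib/bacula/bacula.sql

The stock BackupCatalog job normally pairs the dump with /etc/bacula/scripts/delete_catalog_backup in a RunAfterJob to remove bacula.sql once it is safely on a volume; in my setup the mv effectively takes over that cleanup role.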

As you can see, I back up the catalog as advised everywhere: before the job, dump the database, then back the resulting file up to the volume pool.

My additional step is to dump the database again (after the job) and store the dump elsewhere, so that I always keep the very latest catalog data (plus a history of older dumps).

gekigek99
