
I am using MySQL.

How often do you back up your database?

How do you normally back it up?

Do you export all the data into SQL or CSV format and keep it in a folder?

pauska

9 Answers


If you want to do MySQL backups properly, without any downtime, you should replicate the database to a spare server. It doesn't need to be hugely powerful, it just needs to cope with the write load of your master database. You shouldn't use this server in production. If the replication never keeps up, you need a more powerful server. You can check by comparing the log file and position from the output of

 > SHOW MASTER STATUS\G

on the master and

 > SHOW SLAVE STATUS\G

on the slave. I think MySQL5 will show the lag from SHOW SLAVE STATUS.
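As a quick check, the lag can be read straight out of the slave status. A minimal sketch; the helper name and the 60-second threshold in the usage comment are illustrative, not from the answer:

```shell
# Extract Seconds_Behind_Master from `SHOW SLAVE STATUS\G` output on stdin.
lag_from_status() {
    awk -F': ' '/Seconds_Behind_Master/ { gsub(/ /, "", $2); print $2 }'
}

# On a real slave (uncomment to use):
# LAG=$(mysql -e 'SHOW SLAVE STATUS\G' | lag_from_status)
# [ "$LAG" -lt 60 ] && echo "slave is keeping up"
```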

When you're happy that your slave is keeping up, you can do your backups by doing:

  1. Stop replication with STOP SLAVE; on the slave.
  2. Do a mysqldump --opt on the slave server.
  3. Start replication again with START SLAVE; on the slave.

If you do this, you will have a consistent backup of your databases. This method prevents different databases, or even worse different tables in the same database, from being out of sync, and it avoids the downtime you'd otherwise incur by locking tables for writes while you do the backup.
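The three steps above might look like this as a script run on the slave. The dump path and the --all-databases choice are illustrative assumptions, not from the original answer:

```shell
#!/bin/sh
# Sketch: consistent dump from a replication slave.
# STOP SLAVE freezes the slave's data, so mysqldump sees a stable snapshot.
backup_slave() {
    dump="/var/backups/all-databases-$(date +%Y%m%d).sql"
    mysql -e 'STOP SLAVE;'                      # 1. pause replication
    mysqldump --opt --all-databases > "$dump"   # 2. dump the frozen data
    mysql -e 'START SLAVE;'                     # 3. resume; the slave catches up
    echo "$dump"
}
```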

A nice benefit of this setup is that you have a copy of your database you can use to run long expensive queries on that won't affect your live service.

Couple of random tips:

  • Don't be tempted to do a file based backup of the mysql data files. It's more hassle than it's worth and MySQL dumps are more flexible.
  • Beware of mysqldump locking tables during the dump.
  • Beware of inconsistencies in dumps unless you lock every table during the dump.
  • Use mysqldump --opt, as it's usually the fastest way to import the resulting SQL.
  • Dump as often as you can. We dump daily because we have 40GB+ of data.
  • Test your dumps on a spare server occasionally to make sure they work.
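The last tip can be scripted. This sketch assumes the dump is gzipped and that a scratch database name (restore_test) is free; both are assumptions, not part of the answer:

```shell
# Replay a gzipped dump into a throwaway database and count the tables.
verify_dump() {
    dump="$1"
    mysql -e 'CREATE DATABASE restore_test;'
    gunzip -c "$dump" | mysql restore_test
    tables=$(mysql -N -e 'SHOW TABLES;' restore_test | wc -l)
    mysql -e 'DROP DATABASE restore_test;'
    echo "$tables tables restored"
}
```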
David Pashley
    Seconds_Behind_Master from SHOW SLAVE STATUS is available in 4.1.9 and later. – Dan Carley Jun 23 '09 at 09:05
  • +1 on replication and regular dumps. I found it relatively painless to get working the first time (without much previous mysql experience). I don't fully agree on the file backups, If you have the space to do both then do, the file based backup will be easier to rebuild a new server with if your database server asplodes, individual sql dumps for each database will be more useful if you need to restore data that someone has lost. – theotherreceive Jun 23 '09 at 20:03

I use a script that uses mysqldump to extract the data and schema to a file for each database. The data is backed up to tape by the normal NetBackup run. You can obviously add further bells and whistles, but this does a simple, basic dump.

#!/bin/sh
# Find out what databases are in mysql and back them up
# Delete old backups


STARTTIME=` date +%Y%m%d-%H%M `

#BACKUP_DIR="/usr/local/db_backups"
BACKUP_DIR="/var/local/db_backups"
LOGFILE="/var/log/db_backups.log"
USER="root"
PASSWD="<password>"
KEEP="7"

(
echo
echo " ---MySQL backups start ${STARTTIME} ---"
#delete any backup written more than ${KEEP} days ago
echo "Removing files over ${KEEP} days old from ${BACKUP_DIR}:"
/usr/bin/find  ${BACKUP_DIR} -mindepth 1 -mtime +${KEEP} -print -delete



echo
echo "Performing today's dumps"
#find each database running in this instance of mysql
for DB in `echo "SHOW DATABASES;" | mysql -u${USER} -p${PASSWD} | awk 'NR>1 {print $1}'`
do
        #generate a backup file name based on the data base name
        BACKUP_FILE="${BACKUP_DIR}/${DB}-${STARTTIME}.sql"
        echo "Processing database ${DB} into file ${BACKUP_FILE}"
        # dump the database data/schema into the backup file
        mysqldump -u${USER} -p${PASSWD} --add-drop-table ${DB} > ${BACKUP_FILE}
        gzip ${BACKUP_FILE}
done

ENDTIME=` date +%Y%m%d-%H%M `
echo
echo " ---MySQL backups complete ${ENDTIME} ---"
echo
) >>  ${LOGFILE} 2>&1
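A script like this would typically be driven by cron. A hypothetical crontab entry (the install path is made up):

```shell
# Run the dump script nightly at 02:30; adjust the path to wherever
# you install the script.
30 2 * * * /usr/local/sbin/db_backup.sh
```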
Jason Tan

Usually databases are backed up once a day if they have to be stopped; the backups are then transferred to a storage area for consolidation before going to tape.

Database backups are done with the native tools provided with the database engine most of the time.

Backups should not be kept on the same server as the data, in case of hardware failure.

It's generally recommended to have up-to-date replicas of your database servers when possible, and better still a failover mechanism for production databases.

For software you can, for example, take a look at Bacula or Zmanda.

Maxwell

Our standard setup is an HA cluster with two databases, one replicating to the other, which is read-only.

We do a full backup once a day and then have a per-customer policy for weeding out old backups; generally we keep the last 4 daily backups (to survive the weekend ;), the last 4 Sundays, and the last 4 first Sundays of a month. After that, one or two dumps a year go to the archive to be kept forever.

We also keep the replication logs for as long as we can afford the disk space. They are also pretty useful as a debugging tool, as they record exactly who changed what and when.

Theoretically, all you need is one full backup plus all the replication logs to do a point-in-time restore, but more frequent full backups will speed up the restore.

One neat trick with backups is to use InnoDB tables and the --single-transaction parameter for mysqldump; that way the backup won't block the database while it runs.
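With InnoDB tables, that trick is essentially a one-liner; the wrapper function and database argument here are illustrative:

```shell
# --single-transaction takes a consistent snapshot inside one transaction,
# so InnoDB reads proceed without locking the tables (note this does NOT
# protect MyISAM tables).
hot_dump() {
    db="$1"
    mysqldump --single-transaction --quick "$db" > "$db-$(date +%Y%m%d).sql"
}
```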

Aleksandar Ivanisevic

I'm using Percona's XtraBackup. It's a non-blocking backup solution for InnoDB/XtraDB.

SaveTheRbtz

The whole purpose of backing up is to be able to restore.

I wouldn't advocate a CSV dump as a backup solution; all it gives you is the raw data. There is a lot more to a database besides that: table definitions, views, stored procedures, you name it. If you don't have these as well, you won't be able to restore successfully.

There is also the RDBMS application and its configuration to consider. You might have a large number of patches applied, which you'll also need on your recovery environment to bring it to the same level. You might be running a special configuration dictated by the requirements of your applications. You might even have a specific set of OS settings required for your database to run optimally. All of these need to come back too, and unless your backup solution can handle them, they mean further delays in your recovery time, not to mention no guarantee that you'll get it back the same as it was.

For database backups (and backups in general) I would always prefer to use "real" backup software that can handle all of these.

Maximus Minimus

Most recently, I've managed MySQL servers in EC2. We set up EBS snapshots on a 15 minute cron job, 3-5 snapshots were kept.

When we did 'traditional' MySQL servers, we backed up daily via MySQL-ZRM. The backups are essentially mysqldumps, which get sent off to tape, SAN, etc., depending on the customer's needs.

Both methods can be done without stopping the database.

jtimberman

For MySQL I use automysqlbackup (http://sourceforge.net/projects/automysqlbackup/), since my backup software (Backup Exec) doesn't support snapshots on Linux systems.

It works OK, but I'm going to monitor this thread for suggestions :)

pauska

We do twice-daily backups and also run log backups every 10-15 minutes.

The advantage of this method is that you can restore from one of the twice-daily backups and then apply the log files up to, at worst, the last 15 minutes. This way you minimise the amount of data you can lose.

How often you back up your data is up to you, however. How much data are you comfortable with losing? If you can afford to lose a day's worth of data, then back up once a day. Data never changes? Then you only need one copy!
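For MySQL, applying log backups on top of a full dump can be sketched with mysqlbinlog; the file names and cut-off timestamp below are hypothetical placeholders:

```shell
# Point-in-time restore: load the last full dump, then replay the binary
# log up to just before the change you want to undo.
restore_to_point() {
    mysql < /var/backups/full-20090623.sql
    mysqlbinlog --stop-datetime="2009-06-23 09:00:00" \
        /var/backups/mysql-bin.000042 | mysql
}
```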

mjallday