Right now I am able to do the backup using mysqldump, but I have to take down the web server and it takes around 5 minutes to do the backup. If I don't take down the web server, it takes forever and never finishes, and the website becomes inaccessible during the backup.

Is there a quicker/better way to back up my 22 GB and growing database?

All the tables are MyISAM.

7 Answers


Yes.

Set up replication to a second machine. When you need to do a backup, you can lock the secondary machine, perform mysqlhotcopy or mysqldump, and then unlock it. It will catch back up with your master, and you never have to take the master offline.
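
For example, the nightly "lock, dump, unlock" run on the secondary could be something like this sketch (it pauses the replication SQL thread rather than taking an explicit lock, since a FLUSH TABLES WITH READ LOCK is released as soon as the client disconnects; the path is just a placeholder):

# Run on the replica: stop applying replicated changes, dump, then catch up again.
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases > /backups/nightly-$(date +%F).sql
mysql -e "START SLAVE;"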

You could even do this on the same machine, if you don't mind doubling the write I/O, but ideally you should back it up in real-time to a second physical server, and take your snapshot backups as often as you need without disturbing your production server.

It's also theoretically possible to restore a database using a known state and binlogs. I've never done it, so please investigate first, but you could backup a known state of your database, then just backup all new binlogs and replay them if you ever needed to restore. Since binlogs are written linearly, rsyncing new binlogs to a remote computer would be very fast.
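
As a rough sketch of that approach (binlog file names and paths depend on your log-bin setting; these are just examples):

# On the master: rotate the binlog so all completed log files can be copied safely.
mysql -e "FLUSH LOGS;"
rsync -av /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/

# To restore: load the last known-good dump, then replay the binlogs taken since, e.g.
#   mysql < full_dump.sql
#   mysqlbinlog /backups/binlogs/mysql-bin.000042 | mysql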

Edit: Indeed, it looks like using binlogs for backup is documented.

This question is highly related

  • Yes, this works great and was going to be my answer. The only major downside is that you need to **make sure** that the replication is working correctly on a daily basis. We take snapshots of our slave database during business hours and a nightly backup from the master. –  May 16 '09 at 05:15

Pardon me for assuming the OS is Linux. If you are not using LVM, you should be. If you are, here is a very simple way to make backups via snapshot.

# Define these vars to meet your needs. These meet mine.
lvol="/dev/VolGroup03/lvol0"
snap_name="nightly_snapshot"
snap_vol="$(dirname $lvol)/$snap_name"
snap_path="/mnt/$snap_name"
snap_size="10g" # Not the size of your data, just room for changes written during the backup
backup_path="/backups/$snap_name"

mkdir -p "$snap_path" "$backup_path" # make sure the mount point and backup target exist

# The read lock is held only for as long as it takes lvcreate to run.
/usr/bin/time -f 'Databases were locked for %E' -- \
mysql <<- MYSQL
# based on http://pointyhair.com/tiki-view_blog_post.php?blogId=1&postId=5
FLUSH TABLES WITH READ LOCK;
\! lvcreate --size $snap_size --snapshot --name $snap_name $lvol
UNLOCK TABLES;
MYSQL
mount "$snap_vol" "$snap_path"
rsync -av --delete "$snap_path/mysql" "$backup_path/"
umount "$snap_path"
lvremove -f "$snap_vol"

This will allow you to make nightly backups without having to add a slave server. I am very much in favor of having a slave server for high availability, but I don't want you to think that you are stuck until you can create that slave.

Bruno Bronosky

FLUSH TABLES WITH READ LOCK is not something you want to do on a regular (or even semi-regular) basis on a production system. It should be a last resort only.

Set up at least two replication slaves (This will require a FLUSH TABLES WITH READ LOCK, of course). Once they are set up, you can take a backup off one while the other stays in sync as a spare master.
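
Seeding each slave is roughly like this sketch (host, user, and password are placeholders; it assumes the master has log-bin enabled and a user with the REPLICATION SLAVE privilege):

# On the master: dump everything with the binlog coordinates recorded in the dump.
# --master-data is what takes the one-off FLUSH TABLES WITH READ LOCK mentioned above.
mysqldump --all-databases --master-data=1 > seed.sql

# On the new slave: load the dump, point it at the master, and start replicating.
mysql < seed.sql
mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
          MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"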

Also, if one of your slaves fails, you can then use a snapshot from that to rebuild a second (or third) slave. If all your slaves fail, you're back to FLUSH TABLES WITH READ LOCK.

Remember to always have a process which regularly checks that the data are in sync - use something like mk-table-checksum to do this (it is nontrivial to set up; consult the Maatkit documentation for details).

As 22 GB is relatively small, you will have no problems doing this. Doing it with a large database could be more problematic.

MarkR

The solution here, as described above, comes in a few parts:

  1. Set up replication of your server to a slave which you can take offline. The best way to do this is to take a dump of the database using mysqldump and the --master-data parameter.
  2. Set up nightly backups on the slave. You can either use mysqldump with the --master-data, --flush-logs and --single-transaction flags, or you can stop the mysql server, copy the data files on-disk, and restart it (replication will pick up where it left off).
  3. Run a script on the slave every few minutes (e.g. 5, 10, or 20) to check that mysql is still replicating. I wrote a simple python script to do it, which you're welcome to use; a rough shell equivalent is sketched below.
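
That Python script isn't included here, but a minimal shell stand-in could look like this (the 300-second lag threshold and the mail alert are placeholders to adapt):

#!/bin/sh
# Cron this on the slave every few minutes; it alerts if either replication
# thread has stopped or the slave has fallen too far behind the master.
STATUS=$(mysql -e "SHOW SLAVE STATUS\G")
IO=$(echo "$STATUS"  | awk '/Slave_IO_Running:/  {print $2}')
SQL=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
LAG=$(echo "$STATUS" | awk '/Seconds_Behind_Master:/ {print $2}')

if [ "$IO" != "Yes" ] || [ "$SQL" != "Yes" ] || [ "$LAG" = "NULL" ] || [ "$LAG" -gt 300 ]; then
    echo "Replication problem: IO=$IO SQL=$SQL lag=$LAG" \
        | mail -s "MySQL slave alert" admin@example.com
fi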

Note that if you are using InnoDB for your tables, you can use the --single-transaction flag to avoid having to do any table locks and still get a consistent dump of the database, even on the master, and thus do your backups without taking down the server. The above solution, however, is a better one.

Also, if you are using LVM on Linux, you can take an LVM snapshot of the partition, then back that up. LVM snapshots are atomic, so if you do 'flush tables with read lock' and then take your snapshot and unlock, you'll get a consistent snapshot.

If you're concerned about I/O contention making the dump take too long, you can add a third machine and run mysqldump on that over the network to avoid thrashing your disks.
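
For example, run from that third machine (host and credentials are placeholders; keep whichever locking or --single-transaction flags you would use locally):

# The dump streams over the network and is compressed on the backup host,
# so the database server's disks only see reads.
mysqldump -h db-master.example.com -u backup -p --all-databases \
    | gzip > /backups/full-$(date +%F).sql.gz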

Dan Udey

Have a look at "Best practice for backing up a production MySQL database?". It is a similar post on Stack Overflow.

Charles Faiga

Depending on your environment, snapshots are usually an excellent way to go. Particularly if you have to backup the master for some reason. We run master and slave pairs, and use snapshot backups on both.

  1. FLUSH TABLES WITH READ LOCK;
  2. Pull a snapshot on the database and log filesystems.
  3. UNLOCK TABLES;
  4. Copy your data files off the snapshot at your leisure.

With InnoDB tables, you'll want to run SET AUTOCOMMIT=0; before you execute the read lock.

jasonjwwilliams

You could do a gradual backup: back up 1/24th of the records every hour. The only problem with this approach is that if the server crashes during the first few hours of the day, you'll lose everything from that point up to the time of the crash. Either way, it's less than 24 hours of records lost (I don't know how important that is to you).
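
A minimal sketch of that idea, assuming every table has a numeric auto-increment id column (the column, database, and path names are made up):

# Run hourly from cron; each run dumps the slice of rows whose id falls in this hour's bucket.
HOUR=$(date +%-H)   # 0-23 (the %-H "no padding" form is a GNU date extension)
mysqldump --where="id % 24 = $HOUR" mydb mytable > /backups/mytable-slice-$HOUR.sql

Keep in mind the 24 slices are taken at different times, so together they don't form a consistent point-in-time snapshot.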