8

Currently we have one beefy MySQL database server that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables.

Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups.

At the moment I simply use mysqldump with a few arguments, and it had proven to be fine... until now. Obviously mysqldump is a quick-and-dirty but slow method, and I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution.
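For context, the current command amounts to something along these lines (an illustrative sketch; the user, paths, and exact flags here are placeholders rather than our real command):

    # Illustrative sketch -- user, password, and paths are placeholders.
    # --single-transaction gives a consistent view of InnoDB tables, but
    # MyISAM tables are only as consistent as the moment they are read.
    mysqldump --user=backup --password=secret --single-transaction --quick \
        --routines --triggers --all-databases | gzip > /backups/all-databases.sql.gz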

Short version:

  • I have some fairly large MySQL databases I need to back up
  • The current mysqldump method is inefficient and slow (and causing issues)
  • I'm looking into something such as mk-parallel-dump or LVM snapshots

Any recommendations or ideas would be appreciated. Since I have to redo how we're doing things, I'd rather have it done properly and as efficiently as possible :).

Zoredache
WinkyWolly

6 Answers

5

I've had good success with MySQL replication and nightly tarballs. For smaller databases, the mysql system database, and schemas, I use a combination of scripts built around mysqlhotcopy and mysqldump.
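A minimal sketch of that combination (credentials, database names, and paths are placeholders):

    # Sketch: mysqlhotcopy raw-copies MyISAM tables under a read lock --
    # fine for small databases and the mysql system database.
    mysqlhotcopy --user=backup --password=secret mysql /backups/hotcopy/
    # mysqldump for schema-only dumps of the application databases.
    mysqldump --user=backup --password=secret --no-data app_db > /backups/app_db-schema.sql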

InnoDB Hot Backup is a great commercial product, but I'm not sure how it handles mixed tables in the same database. pQd's recommendation of XtraBackup may be a good one to compare against it.

Others like LVM snapshots, and I'd say that's definitely something to consider. Ultimately, a combination of solutions would probably be best.
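For the LVM route, the usual dance is a brief global read lock around the snapshot. A rough sketch, with placeholder volume and path names (Lenz Grimmer's mylvmbackup automates exactly this, more robustly):

    # Sketch: the \! shell escape runs lvcreate from inside the same mysql
    # session, so FLUSH TABLES WITH READ LOCK is still held when the
    # snapshot is taken.
    mysql -u root -p -e "FLUSH TABLES WITH READ LOCK; \! lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql; UNLOCK TABLES;"
    # Copy the data files off the snapshot, then drop it.
    mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
    rsync -a /mnt/mysql-snap/ /backups/mysql/
    umount /mnt/mysql-snap
    lvremove -f /dev/vg0/mysql-snap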

It's also worth noting this is an old topic. Between the High Performance MySQL book, the MySQL manual, and previous Server Fault questions, it has been covered exhaustively in general terms.

Warner
  • +1; with MyISAM you never know if you have a logically consistent backup [taken by LVM / mysqldump / from a slave] or not. Maybe if your application changes data only during business hours you can safely dump it at night; otherwise you are never sure, and no method will help. – pQd May 07 '10 at 15:18
  • I think you're right about mixing solutions. As mentioned under pQd's reply, I'll likely take the LVM snapshots and look into the xtrabackup utility (it says it can handle mixed tables). I looked into InnoDB's hot backup, however I'm always one for open source projects. Thanks for the references; I looked into two of them, but the replies are pretty generic, don't address the issues I'm having, and refer to more "normal" and "mundane" databases. – WinkyWolly May 07 '10 at 17:49
4

xtrabackup - at least for innodb.
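A minimal sketch of the basic flow (credentials and paths are placeholders; the timestamped directory stands in for whatever innobackupex creates):

    # Sketch: copy a running server's data without blocking InnoDB writes.
    innobackupex --user=backup --password=secret /backups/xtra/
    # Replay the InnoDB log so the copy is consistent before any restore.
    innobackupex --apply-log /backups/xtra/2010-05-07_15-00-00/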

pQd
  • Interesting, however I'd like a more graceful solution so I don't have to worry about my mix of InnoDB / MyISAM tables. – WinkyWolly May 07 '10 at 14:25
  • xtrabackup comes with a script that can help you take backups of MyISAM tables too, but check my comment under Warner's post. – pQd May 07 '10 at 15:19
  • Thanks, looks like I might go this direction and mix in LVM snapshots for good measure. It does state it can "handle" MyISAM tables as well via the "innobackupex" script. I suppose I'll give that a whirl and see exactly what happens. – WinkyWolly May 07 '10 at 17:46
3

The most common way to solve this issue is to set up another MySQL server, which can even be on the same machine, and run master/slave replication. You can then perform the backup on the slave, with zero impact on the master.
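A sketch of the backup step on the slave (paths are placeholders). Stopping only the SQL thread means nothing changes under the dump, while the I/O thread keeps collecting relay logs:

    # Sketch: pause replication apply, dump, resume.
    mysql -e "STOP SLAVE SQL_THREAD;"
    mysqldump --all-databases --quick > /backups/slave-dump.sql
    mysql -e "START SLAVE;"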

John Gardeniers
0

On EC2 with EBS, I'm currently using xfs_freeze. I'm looking into possibly switching to xtrabackup at some point, but when I gave it a first test run it was very, very CPU-hungry.
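For reference, the xfs_freeze dance amounts to roughly this (a sketch; the mount point and volume ID are placeholders, and tools like ec2-consistent-snapshot wrap the same steps more robustly):

    # Sketch: lock MySQL, freeze XFS, snapshot the EBS volume, thaw.
    # The \! shell escapes run inside the same client session, so the
    # read lock is held for the duration.
    mysql -e "FLUSH TABLES WITH READ LOCK; \! xfs_freeze -f /vol; \! ec2-create-snapshot vol-xxxxxxxx; \! xfs_freeze -u /vol; UNLOCK TABLES;"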

user5336
  • Unfortunately I only use XFS on media servers right now, so that's not an option. What sort of experience (aside from it being CPU-hungry -- maybe more detail?) did you have with xtrabackup? Were you backing up pure InnoDB tables or a mix? – WinkyWolly May 07 '10 at 18:33
  • My biggest hesitation was that it chewed up CPU for about 30 minutes as it completed (backing up a DB of about 35GB of data), rendering the DB server only marginally functional -- certainly not something I'd like to be running on a production master. I had actually already converted my remaining few MyISAM tables to InnoDB, partially for this purpose. I think it'd probably be OK to run on a slave as long as it doesn't cause replication to fall substantially behind. – user5336 May 07 '10 at 19:17
0

If you're running replication of a database that is shared across applications, it seems there is an obvious question of whether you can improve performance of many things, including backups, by dedicating database servers to apps. Shared is nice, until it's not.

0

If you're keeping your MyISAM tables for legacy reasons only (i.e., you haven't bothered altering them), here's what I use to easily fix that:

    mysql -u root --password=<password> --database=db_name -B -N -e "SHOW TABLES" \
      | awk '!/not_this_db/ && !/or_this_one/ && /^[a-z]/ {print "ALTER TABLE", $1, "ENGINE=INNODB;"}' \
      | mysql -u root --password=<password> --database=db_name

You can exclude and include tables with the awk regexes, such as matching only tables that start with a lowercase letter in my example above. This will, of course, lock each table during the ALTER.
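To check what's still on MyISAM before or after running that (the mysql system schema should stay MyISAM on 5.0/5.1, so it's excluded here):

    mysql -u root --password=<password> -B -N -e \
      "SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.TABLES WHERE ENGINE = 'MyISAM' AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema');"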

Then use xtrabackup to copy the entire database straight to another server, without locking any tables or using too much disk I/O (after setting up SSH RSA keys):

    innobackupex --throttle=500 --compress --stream=xbstream /doesntneedtoexist | ssh user@otherhost "xbstream -x -C /root/backup/"

and then you can do the apply-log step completely separately, saving disk space, I/O, and CPU on the production server.
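On the receiving host, that separate pass looks something like this (assuming qpress is installed, since the stream above was compressed):

    # Decompress the .qp files produced by --compress, then replay the
    # InnoDB log so the backup is consistent and ready to restore.
    innobackupex --decompress /root/backup/
    innobackupex --apply-log /root/backup/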

Percona's HOWTOs for using xtrabackup

Brad