4

Currently I'm using mysqldump to make backups, which is slow but OK-ish. The big problem is restoring the database, which takes a few days. The dump is around 7 GB gzipped, so it's not a tiny database, but it shouldn't be outside the range of what's reasonable for MySQL.

So what are my other options? Something like mysqlhotcopy would be perfect.

Nick Kavadias
taw
  • I'm far from an expert, just sharing my experience here; apologies for any technical inaccuracies. As Jarod Elliott indicates, this may be a problem with MySQL using a very slow method for recreating the indexes ("Repair by keycache"). We're having the same problem. According to our host, the relevant MySQL variable here is myisam_max_sort_file_size, though they note that it defaults to 2GB, which is probably fine for my purposes. They then indicated that the tmp dir used by MySQL on our server only has a size of 500MB, which is the cause of our problem (resulting in the use of "Repair by keycache"). – Oliver Coleman Jul 30 '12 at 00:03

7 Answers

8

Take a look at Percona's Xtrabackup, which allows hot backups and is totally free.
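For a rough idea of the workflow, here is a minimal sketch using the innobackupex wrapper that ships with XtraBackup (the paths and credentials are assumptions; check Percona's documentation for the exact invocation on your version):

    # take a hot backup into /backups/full while MySQL keeps serving traffic
    innobackupex --user=root --password=$MYSQL_ROOT_PASSWORD --no-timestamp /backups/full
    # apply the transaction log so the backup is consistent
    innobackupex --apply-log /backups/full
    # to restore: stop mysqld, copy the files back into the datadir, fix ownership
    innobackupex --copy-back /backups/full
    chown -R mysql:mysql /var/lib/mysql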

Nick Kavadias
3

If you have --innodb_file_per_table enabled, then you can move an .ibd file and its associated table from one database to another using a RENAME TABLE statement:

RENAME TABLE db1.tbl_name TO db2.tbl_name;

If you have a “clean” backup of an .ibd file, you can restore it to the MySQL installation from which it originated as follows:

Issue this ALTER TABLE statement to delete the current .ibd file:

ALTER TABLE tbl_name DISCARD TABLESPACE;

Copy the backup .ibd file to the proper database directory.

Issue this ALTER TABLE statement to tell InnoDB to use the new .ibd file for the table:

ALTER TABLE tbl_name IMPORT TABLESPACE;
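Putting those steps together, a minimal end-to-end sketch (the db1/tbl_name names and the /var/lib/mysql datadir are assumptions):

    # drop the current .ibd file for the table
    mysql -u root -p db1 -e "ALTER TABLE tbl_name DISCARD TABLESPACE;"
    # copy the clean backup .ibd into the database directory and fix ownership
    cp /backups/tbl_name.ibd /var/lib/mysql/db1/
    chown mysql:mysql /var/lib/mysql/db1/tbl_name.ibd
    # tell InnoDB to adopt the restored file
    mysql -u root -p db1 -e "ALTER TABLE tbl_name IMPORT TABLESPACE;"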

In this context, a “clean” .ibd file backup is one for which the following requirements are satisfied:

There are no uncommitted modifications by transactions in the .ibd file.

There are no unmerged insert buffer entries in the .ibd file.

Purge has removed all delete-marked index records from the .ibd file.

mysqld has flushed all modified pages of the .ibd file from the buffer pool to the file.

You can make a clean backup .ibd file using the following method:

Stop all activity from the mysqld server and commit all transactions.

Wait until SHOW ENGINE INNODB STATUS shows that there are no active transactions in the database, and the main thread status of InnoDB is Waiting for server activity. Then you can make a copy of the .ibd file.
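For example, a quick way to eyeball the main thread state from the shell (a sketch; credentials assumed):

    # look for a main thread state of "waiting for server activity" before copying .ibd files
    mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -i "main thread"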

Another method for making a clean copy of an .ibd file is to use the commercial InnoDB Hot Backup tool.

JohnZ
  • you could have mentioned that this is a verbatim copy of the docs. – Jan 22 '11 at 23:37
1

There's the InnoDB/Oracle Hot Backup tool, which costs money. I've never used it, but I'd presume it does what it says on the tin.

Personally, I take an LVM snapshot of my InnoDB tables and back up from the snapshot -- on restore, it looks to InnoDB as if a system crash has occurred, and it goes through its normal log replay process on startup. I'm fond of the belt-and-braces approach to backups, so I combine the regular snapshot-and-rsyncs with less frequent mysqldumps (and sometimes it's handy to have a fairly recent SQL dump hanging around anyway).
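For concreteness, a minimal sketch of that snapshot-and-rsync cycle (the volume group, snapshot size, mount point, and backup destination are all assumptions):

    # snapshot the volume holding the MySQL datadir; on restore InnoDB crash-recovers from it
    lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
    mkdir -p /mnt/mysql-snap
    mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
    # copy the frozen datadir off-box, then drop the snapshot to avoid the write penalty
    rsync -a /mnt/mysql-snap/ backuphost:/backups/mysql/
    umount /mnt/mysql-snap
    lvremove -f /dev/vg0/mysql-snap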

Stephen Veiss
  • Just an additional note: This works well but might not be suitable in all cases as LVM snapshots have a big performance penalty. Whenever an LVM volume has an active snapshot tied to it, write performance becomes pretty bad. – snap Jul 30 '12 at 04:27
0

If you have any MyISAM tables, I would suggest altering them to InnoDB. Here's what I use:

    mysql -u root --password=<password> --database=db_name -B -N -e "SHOW TABLES" \
      | awk '!/not_this_db/ && !/or_this_one/ && /^[a-z]/ {print "ALTER TABLE", $1, "ENGINE=INNODB;"}' \
      | mysql -u root --password=<password> --database=db_name

You can exclude and include tables with the awk regexes; in the example above, only tables whose names start with a lowercase letter are converted. This will of course lock each table while it is being altered.

Then use xtrabackup to copy the entire database straight to another server without locking any tables or using too much disk IO (after setting up ssh rsa keys):

innobackupex --throttle=500 --compress --stream=xbstream /doesntneedtoexist | ssh user@otherhost "xbstream -x -C /root/backup/"

You can then run the apply-log step entirely on the destination, saving disk space, I/O, and CPU on the production server.
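On the destination host that might continue roughly as follows (a sketch: --decompress requires qpress, and the exact flags depend on your XtraBackup version):

    # decompress the qpress-compressed files produced by --compress
    innobackupex --decompress /root/backup/
    # replay the transaction log so the backup is consistent and ready to copy back
    innobackupex --apply-log /root/backup/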

Percona's HOWTOs for using xtrabackup

Brad
0

If you are using InnoDB then there are many parameters you can tune for better performance. It is quite evident in your case that your innodb_buffer_pool_size is too small. Increase this parameter to at least 50% of your total RAM and check the time taken for the restore.

Apart from this, you can also adjust the innodb_thread_concurrency parameter.
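As a hedged illustration, a my.cnf fragment along those lines (the 8G figure assumes a dedicated server with roughly 16 GB of RAM; tune both values to your hardware):

    [mysqld]
    innodb_buffer_pool_size = 8G
    # 0 lets InnoDB manage its own thread concurrency
    innodb_thread_concurrency = 0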

Also try Percona XtraBackup. I'm using it for multi-terabyte databases and it works perfectly.

0

Rather than suggesting another dump/restore solution, I suspect I know why the current restore is taking so long.

We have some quite large databases (up to 40GB) and I've restored ones your size in well under an hour.

One thing that mysqldump will do to make the inserts faster during the restore is to turn off the indexes, run all the inserts and then apply the indexes to your tables.

If a couple of MySQL variables aren't big enough, MySQL will say it's "repairing by key cache" or something similar ... this is very, very, very slow.

If the right variables are set large enough, it will say "repair by sorting", which is much, much quicker.

I can't remember the exact variables off the top of my head, but if this sounds like it may be the case, let me know in a comment and I'll see if I can track down the details.
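Going by the comment under the question, the variables involved are most likely the MyISAM sort settings; a hedged sketch of checking and raising them (the values are assumptions, size them to your free disk and RAM):

    # raise the cap on MyISAM temporary sort files (10 GB) and the sort buffer (256 MB)
    mysql -u root -p -e "SET GLOBAL myisam_max_sort_file_size = 10737418240;"
    mysql -u root -p -e "SET GLOBAL myisam_sort_buffer_size = 268435456;"
    mysql -u root -p -e "SHOW VARIABLES LIKE 'myisam%sort%';"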

As Nick mentioned, Percona's Xtrabackup is definitely worth a look as well.

Jarod Elliott
  • Did you ever track down those variables? I'd be curious to know how to improve this. – A B Oct 19 '10 at 20:33
0

I've added support for hotcopies to the mysql_manager rubygem, but it only supports hotcopies of the entire MySQL data directory. It accomplishes this by first doing repeated rsyncs without table locks until the sync duration is tolerable, then taking a lock with FLUSH TABLES WITH READ LOCK before doing a final rsync. Since it uses rsync, it also supports remote hosts.
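For intuition, a rough manual equivalent of what the gem automates (a sketch only; the paths and the lock-holding trick are assumptions, not how the gem is implemented):

    # pre-seed the destination without any locks; repeat until a pass finishes quickly
    rsync -a /var/lib/mysql/ user@remote.host.com:/tmp/mysql/
    rsync -a /var/lib/mysql/ user@remote.host.com:/tmp/mysql/
    # hold a global read lock in a background session while the final pass runs
    mysql -u root --password=$MYSQL_ROOT_PASSWORD -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(600);" &
    sleep 5   # give the lock a moment to be acquired
    rsync -a /var/lib/mysql/ user@remote.host.com:/tmp/mysql/
    kill $!   # closing the session releases the lock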

Usage is pretty easy.

First install the gem.

gem install mysql_manager

Here's how you would run it.

mysql-manager --hotcopy \
              --hotcopy:data-dir /var/lib/mysql/ \
              --hotcopy:backup-dir user@remote.host.com:/tmp/mysql/ \
              --hotcopy:rsync-args "-av --exclude=*.err" \
              --hotcopy:rsync-ttl 30 \
              --db:user root \
              --db:pass $MYSQL_ROOT_PASSWORD