6

I just spent hours building a new database server to replace 2 failed servers, restoring from a mysqldump file created a month ago and then replaying my server's bin-logs to bring that data up to date. Now that I have this new database server online, and have had it running for a few days, I need to complete the server set-up and re-implement a backup strategy.

I need to bring (at least) one MySQL Replication Slave online for the new server, as well as start creating usable backups for when things fail again.

Before I move further:

  • I forgot (eep) to enable the bin-log on the new MySQL server before importing the old logical backup and executing the old bin logs (blame it on being awake for 24 hours at that point, I guess).

  • My data consists of several databases, all of which have a mix of MyISAM tables AND InnoDB tables.

  • Also, along the same lines as enabling the bin logs on the new server, I forgot to turn on the innodb_file_per_table option in my.cnf before importing all the data. The binary log is now enabled on the new MySQL server, but the file-per-table option is pretty much a non-fix until I can import another logical backup (I will probably end up bringing up a second slave, promoting the first slave to master, and ditching the current master, as its config just isn't where I need it to be). The two settings in question are sketched just after this list.
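For reference, this is roughly what those two my.cnf settings look like (a sketch only; the server-id and log path are placeholders based on a stock Ubuntu layout):

```
[mysqld]
# enable the binary log so a slave can replicate from this server
log-bin          = /var/log/mysql/mysql-bin.log
server-id        = 1

# store each InnoDB table in its own .ibd file; this only affects
# tables created or rebuilt after it is enabled, hence the re-import
innodb_file_per_table
```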

So, the problem:
I need to take a backup of the MySQL server, but I cannot take the new MySQL server offline. I need to continue taking writes and serving reads with little to no downtime. This "little-to-no downtime" requirement is defined as less than 10 minutes. My data currently uses about 100GB of space on the server (MySQL data files), and the logical backups are about 50GB (that's a lot of indexes..haha). I don't care whether this backup is logical or a copy of the data files from the MySQL data directory; I can create a logical backup off the slave after I get it online.

And, the question:
How would you create this needed backup? I know it's not easy, and a lot of people will say it's not possible. But I refuse to believe that it isn't; there has to be a way to get it done.

A note about the server: It's running Ubuntu 10.04, MySQL 5.1.41, and the filesystem that the data is stored on is ext3. The server is running on the Rackspace Cloud, so the filesystem pretty much "is what it is" unless I can re-partition the root device and rebuild it with another FS (XFS maybe?) that can do a snapshot.

I've read about Percona's XtraBackup tool, but it only works for InnoDB tables. They have a MyISAM tool as well, but I don't really understand how it could (or even whether it does) work in conjunction with XtraBackup to create a completely consistent backup.

I've read about mysqlhotcopy but it only works with MyISAM tables.

Obviously I know about mysqldump, but (also obviously) the problem here is creating a consistent backup without locking the entire database server for the hours it will take to write out that entire SQL backup file.
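For reference, the closest I can get with mysqldump alone looks something like this (the flag combination is just one reasonable example, and the output file name is a placeholder):

```
mysqldump --all-databases --single-transaction --master-data=2 \
          --quick --routines --triggers > full-backup.sql
```

The catch is that --single-transaction only gives a consistent view of the InnoDB tables; to get the MyISAM tables consistent as well I'd have to use --lock-all-tables instead, which is exactly the hours-long global lock I'm trying to avoid.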

EEAA
Jim Rubenstein

3 Answers

4

With regard to XtraBackup, or more specifically innobackupex, check out this explanation

Excerpt:

The innobackupex program adds more convenience and functionality by also permitting you to back up MyISAM tables and .frm files. It starts xtrabackup, waits until it finishes copying files, and then issues FLUSH TABLES WITH READ LOCK to prevent further changes to MySQL's data and flush all MyISAM tables to disk. It holds this lock, copies the MyISAM files, and then releases the lock.

The backed-up MyISAM and InnoDB tables will eventually be consistent with each other, because after the prepare (recovery) process, InnoDB's data is rolled forward to the point at which the backup completed, not rolled back to the point at which it started. This point in time matches where the FLUSH TABLES WITH READ LOCK was taken, so the MyISAM data and the prepared InnoDB data are in sync.
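A run might look roughly like this (the paths and credentials are placeholders, and the exact options can vary between XtraBackup releases, so check the docs for your version):

```
# take the backup: InnoDB files are copied while the server keeps running,
# then a short FLUSH TABLES WITH READ LOCK covers the MyISAM/.frm copy
innobackupex --user=root --password=XXXX /backups/

# "prepare" the backup so the InnoDB data is rolled forward to the point
# where the lock was taken; the result is ready to copy to a new server
innobackupex --apply-log /backups/2011-06-01_00-00-00/
```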

Hope this helps.

Cheers

HTTP500
  • So, I guess the lock is only kept for as long as it takes to copy MyISAM tables. Which might be fast in my situation, because the size of my MyISAM tables isn't that big relative to my InnoDB tables (I don't think...). I'll have to check that out – Jim Rubenstein Jun 01 '11 at 16:37
  • 1
    I used the XtraBackup and innobackupex tools last night. They were awesome. Made a complete backup of the server in less than 10 minutes, with a total locked-table time of less than 3 minutes. Granted, I didn't have much MyISAM data (maybe 1GB cumulatively). Copied the files to a new server (took about an hour, using `tar` and `netcat`), started MySQL, issued the CHANGE MASTER TO command, and voila, done. – Jim Rubenstein Jun 02 '11 at 20:30
1

This "little-to-no downtime" requirement is defined as less than 10 minutes

OK, two obvious solutions:

1) Set up a mirrored filesystem to hold the MySQL data files. When you want to do a backup, shut down mysqld, remove one of the mirrors from the set, then start up mysqld again. Then mount the drive you've just taken out of the mirror somewhere else and either copy the raw files off or start up a second instance of mysqld (different port, different path to files) and run mysqldump there. When finished, shut down the second mysqld instance and put the drive back into the mirror set (the RAID system should handle updating the drive before it's read from as part of the mirror, so no need to shut down the first instance at this point).
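As a rough sketch with Linux software RAID (the mdadm device names and mount points are just examples, and this assumes a metadata format such as 0.90/1.0 that leaves a single member directly mountable; with 1.2 metadata you'd assemble the removed disk as a degraded array first):

```
# stop mysqld briefly so the files on disk are consistent
service mysql stop

# drop one half of the mirror out of the array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# bring mysqld straight back up on the remaining half
service mysql start

# mount the removed half read-only and copy the files at leisure
mount -o ro /dev/sdb1 /mnt/mysql-backup
tar czf /backups/mysql-$(date +%F).tar.gz -C /mnt/mysql-backup .

# when done, put the disk back and let the RAID resync
umount /mnt/mysql-backup
mdadm /dev/md0 --add /dev/sdb1
```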

2) Similar to 1, but implement the mirroring at the application level: set up your current mysqld instance as a master and configure a second instance as a slave. When you want to create a backup, stop replication on the slave, do the backup, then start replication again (zero downtime on the master).
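With that in place, the backup step itself is just something along these lines, run on the slave (the dump file name is an example; you could equally copy the raw data files while replication is stopped):

```
# pause replication so nothing changes underneath the dump
mysql -e "STOP SLAVE;"

# take the backup while the slave sits idle
mysqldump --all-databases --routines --triggers > slave-backup.sql

# resume replication; the slave catches up from where it left off
mysql -e "START SLAVE;"
```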

symcbean
  • So the glaring issue here is setting all of this up without taking down the server. Changing the filesystem is going to take a long time, I think. Also, I don't really have multiple devices available, unless I can create virtual devices or re-partition the Rackspace Cloud Server device into multiple partitions / devices. – Jim Rubenstein Jun 01 '11 at 16:25
  • Because slaves naturally lag behind the master, when the backup is made there is the possibility that it may not be "complete" in the sense of being fully current. The only way I know to avoid that is a Master/Master configuration, which would also provide a fail-over solution. – daemonofchaos Jun 01 '11 at 16:25
  • No - master/master replication does not resolve the problem of update lag. The methods I describe provide a mechanism for getting a consistent file image (one NOT being written to by MySQL) without interfering with queries - it does not address the problem of a split transaction where the extent of the transaction exists outwith the DBMS - that's just impossible by **any** means. – symcbean Jun 06 '11 at 12:44
0

Since it is cloud infrastructure and you can't do many of the things you can on a dedicated server (like repartitioning, changing filesystems, or using LVM), look for a snapshot/backup solution from your cloud service provider.

Quick googling pointed me to this: http://www.rackspace.com/cloud/blog/2010/06/16/introducing-cloud-servers-snapshots-to-cloud-files/

Check it out; it might be the best solution for your problem.

rvs
  • This would be perfect. EXCEPT! Rackspace has put an 80GB limit on the size of the snapshot you're allowed to create. This renders the feature useless for me, because I have much more than 80GB of data that has to be backed up. – Jim Rubenstein Jun 01 '11 at 17:35
  • Oh... Rackspace, too bad. :( – rvs Jun 02 '11 at 07:40