8

I have an Ubuntu 10.04 production MySQL database server. The total size of the databases is 260 GB, while the root partition where they are stored is only 300 GB, which means around 96% of / is full and there is no space left to store a dump or backup. No other disk is attached to the server at the moment.

My task is to migrate this database to another server sitting in a different datacenter. The question is: how do I do that efficiently, with minimum downtime?

I'm thinking along the lines of:

  • Request that an extra drive be attached to the server and take a dump onto that drive. [EDIT: This is not possible now.]
  • Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
  • When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB (see the sketch below).

What are your suggestions to improve this, or is there a better alternative approach for this task?
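
For reference, here is a minimal sketch of the cutover commands implied by the plan above, assuming classic master-slave replication is already running; hostnames and credentials are placeholders:

# On the old master: refuse new writes (does not affect accounts with SUPER)
mysql -u... -p... -e "SET GLOBAL read_only = 1;"

# On the new server: wait until it has caught up, then promote it
mysql -u... -p... -e "SHOW SLAVE STATUS\G"    # wait for Seconds_Behind_Master = 0
mysql -u... -p... -e "STOP SLAVE;"
mysql -u... -p... -e "SET GLOBAL read_only = 0;"
# Finally, point the application at the new server's IP address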

Jagbir

4 Answers

9

If you are considering migrating to another DB Server with the exact same version of MySQL, you may want to rsync the datadir from the old server to the new server.

This will work regardless of InnoDB file layout or even the presence of MyISAM tables.

  1. Install the same version of MySQL on ServerB that ServerA has
  2. On ServerA, run RESET MASTER; to erase all binary logs before the rsync process. If binary logging is not enabled, you can skip this step.
  3. On ServerA, run SET GLOBAL innodb_max_dirty_pages_pct = 0; from mysql and wait about 10 minutes (this purges dirty pages from the InnoDB buffer pool, which also helps mysql shut down faster). If your database is all MyISAM, you can skip this step.
  4. rsync /var/lib/mysql of ServerA to /var/lib/mysql on ServerB
  5. Repeat Step 3 until an rsync takes less than 1 minute
  6. service mysql stop on ServerA
  7. Perform one more rsync
  8. scp ServerA:/etc/my.cnf to ServerB:/etc/.
  9. service mysql start on ServerB
  10. service mysql start on ServerA (optional)

Essentially, here is what such a script would look like:

# Placeholder credentials (-u... -p...) as in the steps above
mysql -u... -p... -e "RESET MASTER;"
mysql -u... -p... -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
RSYNCSTOTRY=10
cd /var/lib/mysql
# Rsync repeatedly while mysqld is still running; each pass has less to copy
X=0
while [ ${X} -lt ${RSYNCSTOTRY} ]
do
    X=$((X+1))
    rsync -r * targetserver:/var/lib/mysql/.
    sleep 60
done
# Final pass with mysqld stopped so the copied datadir is consistent
service mysql stop
rsync -r * targetserver:/var/lib/mysql/.
service mysql start

A fellow member of the DBA StackExchange said I should stay away from FLUSH TABLES WITH READ LOCK; based on something in mysqlperformanceblog.com

I read through and learned that SELECTs against InnoDB tables in the middle of a FLUSH TABLES WITH READ LOCK; can still allow writes to occur in some way. As pointed out in the comment by Arlukin, LVM would work with FLUSH TABLES WITH READ LOCK on InnoDB just fine (+1 for his comment).

For all non-LVM users, an all-MyISAM database is fine to use with FLUSH TABLES WITH READ LOCK;. For InnoDB, please stick to --single-transaction in your mysqldumps.
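
For an InnoDB database, such a dump would look something like this (credentials and paths are placeholders; --master-data=2 assumes binary logging is enabled on the source, so the binlog position gets recorded for setting up replication later):

# Consistent InnoDB dump streamed straight to the new server (no local disk space needed)
mysqldump -u... -p... --single-transaction --master-data=2 --all-databases | gzip | ssh user@newserver 'cat > /backup/alldb.sql.gz'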

RolandoMySQLDBA
  • That is how we do it when we need to manually sync a master-slave setup. It works really well. But on our latest servers we are using LVM snapshots, so we don't need to stop the server. Just before taking the LVM snapshot we execute "FLUSH TABLES WITH READ LOCK", so the files are OK to copy. The LVM snapshot is also really useful if you would like to copy all the files for backup purposes. So set up your new server with LVM, and add some extra space to be able to do snapshots. – Arlukin Nov 06 '12 at 19:15
  • Thanks for the detailed answer. What if the MySQL versions are different? The old one is 5.1 on Ubuntu 10.04, while on the new one I'm willing to use 12.04 with the default 5.5. What approach do you recommend in that case? Also, attaching an extra drive is out of the question now, so the data needs to be sent remotely, whether by dump, rsync, or any other way. – Jagbir Nov 07 '12 at 07:45
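
For anyone following Arlukin's LVM suggestion above, a rough sketch of the snapshot step (the volume group and logical volume names are assumptions):

# In one mysql session, take the global read lock and keep that session open
mysql> FLUSH TABLES WITH READ LOCK;

# From another shell, snapshot the volume holding /var/lib/mysql
$ lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql

# Back in the first mysql session, release the lock
mysql> UNLOCK TABLES;

# Mount the snapshot read-only and copy it to the new server
$ mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap
$ rsync -a /mnt/mysql_snap/ targetserver:/var/lib/mysql/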
1

A dump and restore of a database that size would take hours. Depending on the versions of MySQL, as long as the version number only increments and there are no jumps in the major revision number, you should be able to take the raw database files in /var/lib/mysql and put them on the new server, set the permissions, and fire up the server with the --skip-grant-tables switch. Then add the necessary grants for users, reflecting the new IP address, and restart normally.
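
A rough sketch of that approach, assuming the datadir layout is compatible between the two versions (the hostname, database, user, and password below are placeholders):

# On the old server, with mysqld stopped, copy the raw datadir across
service mysql stop
rsync -a /var/lib/mysql/ newserver:/var/lib/mysql/

# On the new server, fix ownership and start without the grant tables
chown -R mysql:mysql /var/lib/mysql
mysqld_safe --skip-grant-tables &

# Reload the grant tables so account statements work, then adjust grants for the new app host
mysql -e "FLUSH PRIVILEGES;"
mysql -e "GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'10.0.0.%' IDENTIFIED BY 'secret';"

# Shut down the temporary instance and restart normally
mysqladmin shutdown
service mysql start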

I would address the size of your database as it is too big to be efficient.

James Park-Watt
  • There are 4 databases of around 50-90 GB each, making the overall size 260 GB. It may be inefficient, but I have to live with it for now. Also, the MySQL versions are different; what do you suggest in that case? – Jagbir Nov 07 '12 at 07:47
  • If the MySQL versions are different, do a dry run first and test thoroughly; as long as the version number on the new server is higher than that on the old one, you should be OK. If it is unsuccessful then you may be stuck using a mysqldump && restore. – James Park-Watt Nov 07 '12 at 09:32
1

You can follow these steps to migrate this huge InnoDB database.

  • Install SSHFS and mount the remote server's relevant partition on the production server
  • Use Percona XtraBackup to get a hotcopy of the InnoDB database and directly save it to the SSHFS mounted directory
  • This task will take several hours. To minimize the effect of the hotcopy script on the live server, set a low priority on it using renice

    $ renice -n 5 -p <SCRIPT-PID>

  • Make sure that both servers are running the same release of MySQL server.
  • Once the hotcopy is complete, you can restore it on the new server and start the replication process

You may experience some slowness during this process, but definitely no downtime. Percona XtraBackup creates a hotcopy that is quicker and less resource-consuming than mysqldump. This is ideal for a huge InnoDB database.

Depending upon your usage patterns and stats, you may want to run this process when there is minimal traffic on the server. Perhaps doing this over the weekend is a good idea? The above is just an outline of the process; you may need to go through the Percona XtraBackup and SSHFS documentation.
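
As a rough outline of those steps (the mount point and backup directory are placeholders, and the innobackupex invocation assumes the XtraBackup 2.x wrapper; check the documentation for your version):

    $ sshfs user@newserver:/backup /mnt/remote-backup               # mount the remote directory locally over SSH
    $ innobackupex --user=root --password=... /mnt/remote-backup &  # hotcopy written straight to the mount
    $ renice -n 5 -p <SCRIPT-PID>                                   # lower its priority to spare the live workload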

vagarwal
1

You could just dump the database straight to the remote server ...

$ mysqldump | ssh user@server 'cat - > dumpfile.sql'

... SQL should compress well, so you should get this done a lot quicker with one of the following options, although it will also depend on the amount of RAM you have in the box ...

$ mysqldump | ssh -C user@server 'cat - > dumpfile.sql'
$ mysqldump | gzip -c | ssh user@server 'cat - > dumpfile.sql.gz'
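
On the receiving end, a plain restore would then look something like this (credentials are placeholders):

$ mysql -u root -p < dumpfile.sql
$ gunzip < dumpfile.sql.gz | mysql -u root -p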
Dogsbody