
I am trying to migrate a 50GB MySQL InnoDB database to Google Cloud. The binlog is active on my current database. So I set my DB to read-only, created a snapshot, and wrote down the binlog position. Then I turned read-only mode off again, and the original MySQL DB could be used as usual.
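
For context, the snapshot step looked roughly like this (user, dump file name and the exact mysqldump options are placeholders, not my exact commands):

    mysql -u root -p -e "SET GLOBAL read_only = ON"
    mysql -u root -p -e "SHOW MASTER STATUS"    # noted the binlog file name and position here
    mysqldump -u root -p --single-transaction --routines --all-databases > my_tables.sql
    mysql -u root -p -e "SET GLOBAL read_only = OFF"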

It then took my system around 2 weeks to load that snapshot into Google Cloud (mysql -hhost -u.... < my_tables.sql). That took very long, but it worked well.
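
In full, the import I ran against the Cloud SQL instance looked roughly like this (host and user are placeholders):

    mysql -h <cloudsql-instance-ip> -u root -p < my_tables.sql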

Since that is done now, I wanted to import the difference between NOW and BINLOG POSITION X into Google Cloud, so that Google catches up (while putting the original DB into read-only mode again). However, I have around 5000 binlog files of 100MB each.
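
I am replaying the binlogs roughly like this, one file at a time (binlog file name, start position and host are placeholders):

    mysqlbinlog --start-position=<POSITION_X> mysql-bin.000123 | mysql -h <cloudsql-instance-ip> -u root -p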

Problem: it takes around a day per 100MB file to import into Google, so at that rate the roughly 5000 files would take years.

So what is the best practice to get 50GB+ into Google Cloud SQL 2nd generation? I need my database available in production the whole time, so shutting it down is not an option. Also, with Google Cloud SQL 2nd generation I cannot set up the Google instance as a MySQL slave, so that is not an option either....

Thanks, Florian
