14

I worked with a Linux pro years ago who did this nifty trick. He could do a mysqldump but redirect the output over an scp/ssh connection rather than writing it to disk. We used this quite a bit where I used to work, but I cannot remember how to do it anymore.

I'm now faced with the issue where the hard drive in my server is on its last legs and is, for all intents and purposes, permanently mounted as read-only.

I was hoping to use this command-line trickery to still be able to back up my databases onto a new server, since writing the dump to the local disk and transferring it is clearly out of the question.

Is this little trick actually possible? If so, what is the syntax?

I've since realised that, due to the read-only file system issue I'm experiencing, I cannot even connect to MySQL to do the dump. But your feedback was great and will hopefully be of use to someone else in the future.

xzyfer
  • 323
  • 1
  • 5
  • 12

3 Answers

13

mysqldump dbname | ssh root@remoteserver.com "mysql -D dbname"

that should work :-)

Set up keys between the systems as well so you can do so w/o needing a login/pass :-)
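A minimal sketch of the key setup mentioned above; the host name and key file are placeholders, and `-N ""` (empty passphrase) is only appropriate for unattended backup jobs:

```shell
# Generate a key pair; -N "" sets an empty passphrase for unattended use.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Install the public key into the remote account's authorized_keys.
ssh-copy-id root@remoteserver.com

# The dump from the answer now runs without a password prompt:
mysqldump dbname | ssh root@remoteserver.com "mysql -D dbname"
```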

Glenn Kelley
  • 1,294
  • 6
  • 10
  • This is a really nifty one, piping the dump directly into mysql on the remote server, essentially cloning the database in one command. Am I correct? – xzyfer Jan 28 '11 at 05:17
  • 1
    You got it ;-) Of course you could always simply push to a file, but I like the dumps as they allow you to roll over to a backup if needed :-) in real time, just by changing an IP in your config files ... HUGE benefit. We do this on our Linode stuff all the time – Glenn Kelley Jan 28 '11 at 05:21
  • 1
    I feel like there could maybe be some pipe buffering issues here? – Phil Hollenback Jan 28 '11 at 05:24
11
mysqldump ... | ssh ... "cat > out.dmp"
Ignacio Vazquez-Abrams
  • 45,019
  • 5
  • 78
  • 84
  • This one worked great for me. I set up public key authentication on my Mac OS X laptop and ran `mysqldump -p | ssh @ "cat > .sql"` Exactly what I needed. – harperville Jun 09 '14 at 15:40
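A variant of the `cat > out.dmp` approach above (my own sketch, not from the answer): compressing the stream in transit cuts transfer time over a slow link. The remote host and paths are placeholders:

```shell
# Pipe the dump through gzip before it crosses the wire; nothing is
# written to the local disk.
mysqldump dbname | gzip -c | ssh user@newserver "cat > /backups/dbname.sql.gz"

# Restore on the new server later:
gunzip -c /backups/dbname.sql.gz | mysql dbname
```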
4

I like @GlennKelley's answer, but wanted to point out the following:

We had a database that was 450GB, but only 500GB provisioned on the host. We could not export locally, so we exported remotely. We validated the output and there were hundreds of instances where the output was corrupt due to pipe buffering issues.

The best solution here is to run mysqldump from the target host where you want the dump to end up, using the -h option to point at the MySQL server, and redirect the output with >.

mysqldump -u root -p -h 10.1.1.199 --all-databases ...<more options>... > dump.sql
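A quick sanity check to go with this approach (my suggestion, assuming the `dump.sql` file name from the command above): unless comments are disabled, mysqldump writes a `-- Dump completed` trailer as its last line, so its absence is a cheap way to catch a truncated dump.

```shell
# Verify the dump ran to completion by checking for mysqldump's
# standard "-- Dump completed" trailer on the final line.
tail -n 1 dump.sql | grep -q "Dump completed" && echo "dump looks complete"
```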
Mike Mackintosh
  • 272
  • 3
  • 12