98

Usually, after dumping a MySQL database with the mysqldump command, I immediately tar/gzip the resulting file. I'm looking for a way to do this in one command:

So from this:

mysqldump dbname -u root -p > dbname.sql
tar czvf dbname.sql.tgz dbname.sql
rm dbname.sql

To something like this:

mysqldump dbname -u root -p > some wizardry > dbname.sql.tgz

Or even better (since I'm usually scp'ing the dump file to another server):

mysqldump dbname -u root -p > send dbname.sql.tgz to user@host

I'm running bash on Debian.

pygorex1

9 Answers

112
mysqldump --opt <database> | gzip -c | ssh user@wherever 'cat > /tmp/yourfile.sql.gz'

You can't use tar in a pipe like this, and you don't need it anyway, as you're only outputting a single file. tar is only useful if you have multiple files.

James
  • 6
    You're right about not needing tar, but you could use it in the pipeline if you did, with `mysqldump | tar cf - | gzip -c | ssh ... 'cat > file.tgz'` – Darren Chamberlain Jan 27 '10 at 00:15
  • Does that actually work? I'm pretty sure tar needs a list of filenames to work on. – James Jan 27 '10 at 10:25
  • 3
    I updated this to work locally (not on a remote ssh server) oh, and I use a dynamic name based on date, thanks to original poster & answerer! `mysqldump --opt | gzip -c | cat > $(date +%Y-%m-%d-%H.%M.%S).sql.gz` – electblake Apr 11 '11 at 13:41
  • 6
    @electblake: you don't need to be using 'cat' if it's local. Just `gzip -c > $(date +%Y-%m-%d-%H.%M.%S).sql.gz` – James Jan 03 '12 at 18:58
  • Just for fun, you could use `netcat` instead of piping to ssh. You would save a little on the encryption overhead of ssh, if it is being transferred over a secure network (or you don't care about security). Nowadays you might also consider using `xz` instead of `gzip`. – James Nov 05 '12 at 02:10
  • The `tar` command is not working for me. I thought it would work after `ssh` logged in, but when I tried to make it work on one of my remote servers, it failed with the error `tar: Cowardly refusing to create an empty archive. Try 'tar --help' or 'tar --usage' for more information.` followed by `Warning: Using a password on the command line interface can be insecure. mysqldump: Got errno 32 on write` – Vicky Dev Jul 27 '16 at 06:26
  • This answer creates a `.sql.gz` not `.sql.tgz`. – Christia Dec 08 '17 at 02:23
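Whichever extension you settle on, it's worth verifying the streamed archive before deleting anything locally. Below is a minimal sketch of the same pipeline that runs without a database: `printf` stands in for `mysqldump`, and the `ssh user@host 'cat > …'` hop is elided so the file lands in a temp directory instead.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Stand-in for: mysqldump --opt dbname | gzip -c | ssh user@host 'cat > dump.sql.gz'
printf 'CREATE TABLE t (id INT);' | gzip -c > "$tmp/dump.sql.gz"

# Check archive integrity before removing any source data
gzip -t "$tmp/dump.sql.gz" && echo 'archive OK'

# Restoring reverses the pipe: gunzip -c dump.sql.gz | mysql dbname
gunzip -c "$tmp/dump.sql.gz"
```

The same `gzip -t` check works on the remote side after the `ssh … 'cat > …'` transfer finishes.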
51

If you are running this locally, just use the following command to back up your database and compress it with gzip:

mysqldump -u userName -p (passwordPrompt) yourDatabaseName | gzip -c > output.gz 

(Edit: fixed the -c flag)

Dax
18

Use a named pipe.

mkfifo mysql_pipe
gzip -9 -c < mysql_pipe > name_of_dump.gz &
mysqldump database > mysql_pipe 
rm mysql_pipe

I use it all the time; it's awesome.

http://en.wikipedia.org/wiki/Named_pipe
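The flow above can be tried end-to-end without a database. In this sketch `printf` stands in for `mysqldump`, the fifo lives in a temp directory, and `wait` makes sure the background `gzip` has flushed before the pipe is removed:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkfifo "$tmp/mysql_pipe"

# Reader: compress whatever comes through the pipe, in the background
gzip -9 -c < "$tmp/mysql_pipe" > "$tmp/name_of_dump.gz" &

# Writer: stand-in for `mysqldump database > mysql_pipe`
printf 'INSERT INTO t VALUES (1);\n' > "$tmp/mysql_pipe"

wait                       # let the background gzip finish writing
rm "$tmp/mysql_pipe"

gunzip -c "$tmp/name_of_dump.gz"
```

With a real server, the `printf` line becomes `mysqldump database > "$tmp/mysql_pipe"` and everything else stays the same.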

Jon Haddad
18

I wrote a quick script to suck down a remote MySQL database. It uses MySQL compression, gzip, and ssh compression. It pulled down a multi-GB database at an incredible rate.

    ssh -C user@host "mysqldump --opt --compress database <table> | gzip -9 -c" > outputfile.sql.gz

A side benefit is that it requires no free space on the source database server, so you can use it to back up a database on a server with zero free disk space before going in and pruning your data.

Hope it helps somebody.

Tony Dillon
  • I've created a simple shell script:

        #!/bin/bash
        if [ -z "$1" ]; then
            echo "Usage: ${0} [host] [user] [database] [outputFile]"
            exit
        else
            HOST=$1
        fi
        if [ -z "$2" ]; then
            echo "Usage: ${0} ${1} [user] [database] [outputFile]"
            exit
        else
            USER=$2
        fi
        if [ -z "$3" ]; then
            echo "Usage: ${0} ${1} ${2} [database] [outputFile]"
            exit
        else
            DB=$3
        fi
        if [ -z "$4" ]; then
            OUTFILE="${DB}.sql.gz"
        else
            OUTFILE=$4
        fi
        ssh -C ${USER}@${HOST} "mysqldump --opt ${DB} | gzip -9 -c" > ${OUTFILE}

    – Tony Dillon Sep 12 '11 at 18:52
  • Two of those compressions are useless: the --compress option to mysqldump compresses the data in the server process and immediately decompresses it again (if mysqldump is run on the DB server itself). The -C option to ssh activates gzip compression, which wastes further CPU cycles because the data is already gzipped at that point. – Matthias Winkelmann Jan 10 '17 at 19:11
5

Use pv and monitor rate!

mysqldump prod_db -h dbslave | pv | gzip -c > prod_2012_08_20.dump.gz

Or, if you know the size (3GB), get an accurate estimate:

mysqldump prod_db -h dbslave | pv -s 3g | gzip -c > prod_2012_08_20.dump.gz
New Alexandria
4

Try this:

mysqldump --all-databases --password=dbpassword | gzip -c | ssh user@servername "cat >/tmp/filename_of_your_choice.gz"

Please note that I am in no way good at these things; I just combined two options I found on the web into one.

It may very well be better in some other way but this is a one-liner that works for me.

It does, however, require SSH keys to be installed and accepted if you want to use it from scripts, crontab, or the like.
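Generating such a key for unattended use is a one-time step. A sketch, assuming `ssh-keygen` from OpenSSH is installed; the key path is illustrative, and the `ssh-copy-id` step is commented out because it needs the actual remote host:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Create a passphrase-less key pair for cron use (path is illustrative;
# in practice you'd use something like ~/.ssh/id_ed25519)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/backup_key"

# One-time, interactive: install the public key on the target
#   ssh-copy-id -i "$tmp/backup_key.pub" user@servername
# Then the cron job can run non-interactively:
#   mysqldump --all-databases | gzip -c | ssh -i "$tmp/backup_key" user@servername "cat > /tmp/backup.gz"
ls "$tmp"
```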

chicks
2

You can do like:

mysqldump --add-drop-table -h dbhost -u dbuser -p dbname (tablename tablename ... ) | gzip -c > wp.sql.gz

e.g.

mysqldump --add-drop-table -h localhost -u root -p wordpress | gzip -c > wp.sql.gz

quanta
Min He
1

I've been working on the bash script below, which tries to bring together all the good advice I've seen on dumping/restoring with MySQL. It is targeted at remote operations.

Just reconfigure the vars and give it a try. :)

Features are:

  • you can pass a list of tables to dump (selective dump)
  • you can be prompted for passwords (MySQL/SSH) or set them in variables
  • network transmission is gzipped
  • you can opt to save gzipped dump to remote server
  • you can reimport dump to remote server on-the-fly (no temp files on local/remote server)
  • you have visual feedback of what is happening (thanks to echo and pv)
  • you can set mysql variables before and after the dump process

What needs improvement:

  • you need to pass a list of tables (can't dump all tables)
  • the MySQL password is the same for source and target
  • you need to GRANT PRIVILEGES manually (it looks like MySQL doesn't allow doing it remotely)
  • you need to have installed sshpass
  • some innodb huge compressed tables are slow to dump (may be mysqldump's fault)

I share this script here hoping it can be improved by the community. (Best viewed with nano or another editor that colors the code.)

--------------------------------- cut here ----------------------------------

#!/bin/bash
#set -x

#REQUIRED VARS
SOURCE_USER=root   #MySQL user
SOURCE_HOST=localhost
SOURCE_PASSWORD=yourmysqlpass  #optional
SOURCE_DBNAME=yourdbname
TARGET_HOST=192.168.1.2
TARGET_DBNAME=yourdbname
TARGET_SSHUSER=root
TARGET_SSHPASSWORD=yoursshpass  #optional
TABLES='table1 table2 table3 table4'
TARGET_DIR="/data/dumpfiles"
EXEC_ACTION_TEXT[0]='Reimport TABLES directly into remote MySQL database'
EXEC_ACTION_TEXT[1]='Backup gzipped data to TARGET_DIR on remote TARGET_HOST'
EXEC_ACTION=0

#print config
echo "---------------------------------"
echo " SOURCE_USER:    $SOURCE_USER (MySQL)"
if [ "$SOURCE_PASSWORD" != "" ]; then
echo " SOURCE_PASSWORD:<present>        "; else
echo " SOURCE_PASSWORD:<to be asked>    "
fi
echo " SOURCE_HOST:    $SOURCE_HOST     "
echo " SOURCE_DBNAME:  $SOURCE_DBNAME   "
echo " TARGET_HOST:    $TARGET_HOST     "
echo " TARGET_DBNAME:  $TARGET_DBNAME   "
echo " TARGET_SSHUSER: $TARGET_SSHUSER  "
if [ "$TARGET_SSHPASSWORD" != "" ]; then
echo " TARGET_SSHPASS: <present>     "; else
echo " TARGET_SSHPASS: <to be asked>    "
fi
echo " TABLES:         $TABLES          "
echo " EXEC_ACTION:    $EXEC_ACTION - ${EXEC_ACTION_TEXT[$EXEC_ACTION]}"
echo " TARGET_DIR:     $TARGET_DIR (only for action 1)"
echo "---------------------------------"
echo "PRESS <ENTER> to continue...";  read;  echo

#read the mysql password from command-line (SOURCE and TARGET uses the same password)
if [ "$SOURCE_PASSWORD" == "" ]; then
     echo -n "Type $SOURCE_USER password for MySQL servers: "; read -s SOURCE_PASSWORD; echo
fi
echo "Creating database $TARGET_DBNAME on $TARGET_HOST if not exists ... "
mysql \
--user=$SOURCE_USER \
--password=$SOURCE_PASSWORD \
--host=$TARGET_HOST \
--execute "create database if not exists $TARGET_DBNAME;"

echo '--------------------------------------------------------------------------------------'
echo "**** ATTENTION ****: execute this command on mysql server at  $TARGET_HOST :"
echo "GRANT ALL PRIVILEGES ON $TARGET_DBNAME.* TO '$SOURCE_USER'@'%' IDENTIFIED BY 'yourpass';"
echo '--------------------------------------------------------------------------------------'
echo "PRESS <ENTER> to continue...";  read;  echo

#read the password from command-line
if [ "$TARGET_SSHPASSWORD" == "" ]; then
     echo -n "Type the password for remote SSH Server (TARGET) ['$TARGET_SSHUSER'@'$TARGET_HOST']: "; read -s TARGET_SSHPASSWORD; echo
fi

for thistable in $TABLES
do
     case "$EXEC_ACTION" in
         0)
         thisaction="gunzip | mysql --user=$SOURCE_USER --password=$SOURCE_PASSWORD -D $TARGET_DBNAME"
         endmessage='remote reimporting has finished'
         ;;
         1)
         thisaction="cat > $TARGET_DIR/`date +%Y.%m.%d`-"$thistable".gz"
         endmessage="$thisaction has finished"
         ;;
         *)   echo "EXEC_ACTION=$EXEC_ACTION not supported" && exit 1
     esac

     echo "---------------------------------------------------------------------"
     echo "-- table $thistable"
     echo "---------------------------------------------------------------------"
     (
       echo -n "-- setting variables... " > /dev/stderr  #talk to user via stderr
       echo "SET AUTOCOMMIT=0; SET UNIQUE_CHECKS=0; SET FOREIGN_KEY_CHECKS=0;"
       echo -n "starting mysqldump... " > /dev/stderr
       mysqldump --opt --user=$SOURCE_USER --password=$SOURCE_PASSWORD --host=$SOURCE_HOST $SOURCE_DBNAME $thistable
       echo -n "done mysqldump, resetting variables... " > /dev/stderr
       echo "SET FOREIGN_KEY_CHECKS=1; SET UNIQUE_CHECKS=1; SET AUTOCOMMIT=1;"
       echo -n "committing... " > /dev/stderr
       echo "COMMIT;"
       echo "done!" > /dev/stderr
     ) | \
     gzip -c -2 | \
     pv | \
     sshpass -p $TARGET_SSHPASSWORD ssh $TARGET_SSHUSER'@'$TARGET_HOST $thisaction
     echo $endmessage ' with exit status '$?
done
0

You can also store your password in a config file and use the --defaults-extra-file option:

mysqldump --defaults-extra-file=mysqldump.cnf DataBaseName | gzip -c > DBOutputName.sql.gz

The config file can look like this:

[mysqldump]
host = localhost
user = username
password = "password"
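Since this file holds a password in clear text, it should be readable only by you. A sketch of creating it (the `mktemp` path is illustrative; in practice you'd keep the file somewhere stable like your home directory):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Write the credentials file used by --defaults-extra-file
cat > "$tmp/mysqldump.cnf" <<'EOF'
[mysqldump]
host = localhost
user = username
password = "password"
EOF

chmod 600 "$tmp/mysqldump.cnf"   # keep the password private

# Then: mysqldump --defaults-extra-file="$tmp/mysqldump.cnf" DataBaseName | gzip -c > DBOutputName.sql.gz
ls -l "$tmp/mysqldump.cnf"
```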
linstar