8

I want to transfer a mysql dump, compressed, to s3.

I tried:

mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put s3://bucket/sql/databases.sql.gz

but then I get:

ERROR: Not enough paramters for command 'put'

How can I do this (in one line)?

zensys

5 Answers

21

This is possible with s3cmd 1.5+:
$ mysqldump ... | s3cmd put - s3://bucket/file-name.sql

daharon
    As an added note - it would probably be better in practice to 'double pipe' via a compressor: `mysqldump ... | gzip | s3cmd put - s3://bucket/file.sql.gz`, saves 80+% on S3 storage costs. Using this construct successfully to upload terabytes of vsphere backups to S3 without local storage :) – Niels Keurentjes Dec 11 '15 at 09:56
    @NielsKeurentjes How do you do the reverse? `s3cmd get s3://bucket/file.sql.gz - | gzip -d | mysql -u root -p` gives the error 'gzip: stdin: not in gzip format' – KalenGi Mar 10 '16 at 11:59
2

This now appears to be possible with s3cmd v1.6.1:

curl -v "http://remote-server/file.mp4" | 
s3cmd [-c .s3cfg-aws] put - [-P] s3://my-bucket/[folder/]filename.mp4
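
Adapted to the question's one-liner (the same idea with a different source; the bucket path is a placeholder):

mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz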
Andrew Schulman
1

To start with, you're missing the actual file you want to back up.

s3cmd put /backup_dir/somefile.sql.gz s3://bucket/sql/

s3cmd takes two basic arguments: the file, and the bucket to back up to.

Secondly, I can't take credit for the following, but it basically does what you want with an intermediate script. Create a bak.sh file with the contents below, and that shell script can then be run via bash; a scheduling sketch follows the script. (Credit: http://www.wong101.com/tech-cloud/configure-s3cmd-cron-automated-mysql-backup)

S3BUCKET="<bucketname>"

# Array of Databases
DBS=("<db1>" "<db2>" "<db3>" "<db4>")

for i in "${DBS[@]}"
do
    DBNAME=$i
    FILE=$DBNAME-`date "+%Y%m%d-%H%M"`.sql.gz
    mysqldump $DBNAME -u[uname] -p[password] | gzip -9 > /home/$FILE
    #location of s3cmd may vary, modify if needed
    /usr/bin/s3cmd --config /root/.s3cfg put /home/$FILE s3://$S3BUCKET/DBBackup-$DBNAME/ >> /var/log/mysqlback.log
    sleep 5
    rm /home/$FILE

done
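
If you go this route, the script can be scheduled with cron. A minimal sketch, assuming the script was saved as /root/bak.sh and a nightly 02:30 run is wanted (both are assumptions, adjust to your setup):

# crontab -e
30 2 * * * /bin/bash /root/bak.sh >/dev/null 2>&1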
Travis Stoll
  • -1 This answer specifically excludes the piping requested in the question. The piping can be critically important if you're concerned about local disk space. – beporter Jun 02 '17 at 14:33
0

You could try using - to indicate to s3cmd that it should read stdin for the source parameter; it may work. Failing that, you could do it with an intermediate step:

mysqldump -u root -ppassword --all-databases | gzip -9 > tmp.file && s3cmd put tmp.file s3://bucket/sql/databases.sql.gz ... && rm tmp.file

This compresses the output to tmp.file; if that succeeds (&&), it puts the file to S3, and if that in turn succeeds, it deletes the temporary file.
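
If you do go the intermediate-file route, a slightly safer sketch of the same idea, assuming mktemp is available and s3cmd is already configured (credentials and bucket path are placeholders):

TMP=$(mktemp)                                            # unique temporary file, avoids clobbering
mysqldump -u root -ppassword --all-databases | gzip -9 > "$TMP" &&
  s3cmd put "$TMP" s3://bucket/sql/databases.sql.gz
rm -f "$TMP"                                             # remove the temp file whether or not the upload succeeded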

user9517
  • https://github.com/s3tools/s3cmd/issues/270 suggests that only a very recent version of s3cmd will accept stdin for the upload. OP probably should be running 1.5 (which is still in beta) if he wants to do this. Alternatively, an intermediate file is needed. – cjc Jun 17 '14 at 16:33
  • @cjc Like I said `you could try...` but I was guessing completely based on $experience. I've never used s3cmd but it's good to know I was close and correct. – user9517 Jun 17 '14 at 16:35
0

Create mysql.sh to stream the mysqldump directly to S3 via a pipeline. No local files will be stored.

#!/bin/bash

#/home/ec2-user/s3/mysql.sh

#logging
    LOG_ROOT="/var/www/html/laravel/storage/logs/laravel-$(date +%Y-%m-%d).log"

# Basic variables

# Database credentials
USER=root
PASSWORD=password
HOST=localhost
DB_NAME=dbname

#Backup_Directory_Locations

S3BUCKET=s3://bucket/database/$(date +%a).sql.gz

# Timestamp (sortable AND readable)
TSTAMP=$(date +"%Y-%m-%d %H:%M:%S")

#Dump of MySQL database into S3
echo "[$TSTAMP] MYSQL_Dump.WARNING: MySQL Dump for DB_NAME started from cron s3/mysql.sh" >> "$LOG_ROOT"
echo "Stack trace:" >> "$LOG_ROOT"
echo "#wil be stored as s3://bucket/database/$(date +%a).sql.gz" >> "$LOG_ROOT"

mysqldump --host=$HOST --user=$USER --password=$PASSWORD  $DB_NAME  --routines  --single-transaction | gzip -9 | aws s3 cp - $S3BUCKET

TSTAMPEND=$(date +"%Y-%m-%d %H:%M:%S")
echo "[$TSTAMPEND] MYSQL_Dump.INFO: MySQL Dump finished." >> "$LOG_ROOT"
echo "Stack trace:" >> "$LOG_ROOT"
echo "#Backup stored at s3://bucket/database/$(date +%a).sql.gz" >> "$LOG_ROOT"

Create a cron job for the daily backup

#crontab -e
0 0 * * *  bash /home/ec2-user/s3/mysql.sh >/dev/null 2>&1
# runs daily at midnight
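
For completeness, the same pipeline can be reversed to restore without a local file. A sketch assuming the AWS CLI and the bucket path used above (the day name, credentials and database are placeholders):

aws s3 cp s3://bucket/database/Mon.sql.gz - | gunzip | mysql --host=localhost --user=root --password=password dbname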