
To do backups I created a script that makes an archive of all the folders I need to back up, sends it to S3 (through s3cmd), and then deletes it once the upload has completed.

I'm looking for a way to avoid creating the archive and then deleting it, because I don't have enough space to store the archive temporarily! Is that possible?

Here's my script:

# List all databases except the information_schema and performance_schema system schemas
DBLIST=`mysql -uMYSQL_USERNAME -pMYSQL_PASSWORD --events -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uMYSQL_USERNAME -pMYSQL_PASSWORD --single-transaction --routines --triggers"
BACKUP_DEST="/home/backup/db"

# Dump and compress each database in parallel, then wait for all dumps to finish
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip -f > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

# Archive the dumps, upload the archive to S3, then remove everything locally
tar -czvf /home/backup/db2/`date +\%G-\%m-\%d`_db.tar.gz ${BACKUP_DEST}
s3cmd --reduced-redundancy put -r /home/backup/db2/ s3://MY-S3-BUCKET/ --no-encrypt
find /home/backup -type f -delete

On a side note, I bet it's not best practice to store usernames/passwords in plain text in a crontab file... how can I solve this?

Thanks in advance :)

MultiformeIngegno

1 Answer


It looks like s3cmd can accept input from stdin, at least according to the resolution of this bug on 2/6/2014. If your s3cmd is newer than that, you should be able to do:

tar -czvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar.gz --no-encrypt

Most utilities use - as a filename to indicate writing to stdout or reading from stdin. That will eliminate having the .tar.gz file on your drive.
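
In the context of the script above, that turns the last three lines into something like this (a sketch reusing the question's paths and bucket name; the per-database dumps under ${BACKUP_DEST} still land on disk and need to be cleaned up afterwards):

# Stream the archive straight to S3; nothing is written to /home/backup/db2
tar -czvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar.gz --no-encrypt
# Only the per-database .sql.gz dumps are left to remove locally
find ${BACKUP_DEST} -type f -delete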

As far as passwords/keys/etc. go, it looks like you can specify a configuration file to s3cmd with -c FILENAME; presumably you'd create that file from the output produced by adding --dump-config to a complete s3cmd command line. You'd still need to protect that file, though. Likewise, MySQL has its ~/.my.cnf file (see here for an example) where you can store connection information.
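
For MySQL, a minimal sketch of that approach (the file location and 0600 permissions are just the usual convention):

# ~/.my.cnf -- picked up automatically by mysql and mysqldump
[client]
user=MYSQL_USERNAME
password=MYSQL_PASSWORD

Restrict it with chmod 600 ~/.my.cnf and the -uMYSQL_USERNAME -pMYSQL_PASSWORD options can then be dropped from the script. For s3cmd, the cron job can point at a protected config file with -c, or you can let s3cmd --configure write ~/.s3cfg for the user the job runs as.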

Also, since you are already gzipping the individual database dumps, I suspect that gzipping the tar again won't compress the data much further and will only make the whole process take longer. Consider just using -cvf - and .tar for the filename.
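
With that change the streaming command would look roughly like this (same assumptions as above):

tar -cvf - ${BACKUP_DEST} | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_db.tar --no-encrypt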

DerfK
  • Thanks for the detailed answer! :) I'm trying it right now and will let you know – MultiformeIngegno Jul 23 '14 at 21:02
  • If I'm not wrong this should be the final script...? http://pastebin.com/HhwXnctx I'm getting "mysqldump: Got errno 32 on write"; I'm checking my s3cmd version – MultiformeIngegno Jul 23 '14 at 21:38
  • @MultiformeIngegno you won't be able to dump the databases straight to tar; they have to be saved in the `${BACKUP_DEST}` folder for tar to read. If you don't have any local drive space at all, what you COULD do is `mysqldump | gzip | s3cmd` (with appropriate flags for each) and save each database as its own separate compressed `.sql.gz` file, not using tar at all (sketched after these comments). Not sure if Amazon charges per file or anything like that. – DerfK Jul 23 '14 at 22:14
  • I tried with this: http://pastebin.com/sJW3JeDf but for every db I got: `gzip: /home/backup/db/db1.sql.gz: No such file or directory`, `mysqldump: Got errno 32 on write`, `ERROR: S3 error: 400 (MalformedXML): The XML you provided was not well-formed or did not validate against our published schema`. P.S.: I installed the latest version of s3cmd – MultiformeIngegno Jul 23 '14 at 22:40
  • WOW! I did it! http://pastebin.com/8r2pQFiN :D If it can be improved/optimized please tell me – MultiformeIngegno Jul 23 '14 at 22:59
  • If you use an older version of s3cmd you'll see these errors. http://stackoverflow.com/questions/22288271/mysqldump-got-errno-32-on-write/35906334#35906334 – Ryan Mar 10 '16 at 02:10
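
For reference, here is a minimal sketch of the per-database pipeline DerfK describes in the comments, assuming the same DBLIST/MYSQLDUMP_OPTIONS variables and bucket name as the question and a stdin-capable s3cmd; each database becomes its own dated .sql.gz object in S3, with no tar archive and no local files:

for DB in ${DBLIST}
do
    # Dump, compress, and upload in one pipeline; nothing is written to local disk
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip -c | s3cmd --reduced-redundancy put - s3://MY-S3-BUCKET/`date +\%G-\%m-\%d`_${DB}.sql.gz --no-encrypt
done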