For backups I created a script that archives all the folders I need to back up, sends the archive to S3 (through s3cmd), and then deletes it once the upload has completed.
I'm looking for a way to avoid creating the archive and then deleting it, because I don't have enough disk space to temporarily store it. Is this possible?
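Ideally I'd stream the tarball straight to S3 instead of writing it to disk first. I was imagining something like the sketch below, but I'm not sure whether my version of s3cmd actually supports reading from stdin via "put -" (the object key name here is just made up):

# Sketch only: pipe tar's output directly into s3cmd, no temporary file.
# Assumes s3cmd accepts "-" as a stdin source; the key name is an example.
tar -czf - /home/backup/db | s3cmd put - s3://MY-S3-BUCKET/db_backup.tar.gz --no-encrypt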
Here's my current script:
#!/bin/bash

DBLIST=$(mysql -uMYSQL_USERNAME -pMYSQL_PASSWORD --events -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g')
MYSQLDUMP_OPTIONS="-uMYSQL_USERNAME -pMYSQL_PASSWORD --single-transaction --routines --triggers"
BACKUP_DEST="/home/backup/db"

# Dump and gzip every database in parallel, then wait for all dumps to finish
for DB in ${DBLIST}
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip -f > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

# Bundle all the dumps into one dated tarball (the \% escaping is needed when this runs from crontab)
tar -czvf /home/backup/db2/$(date +\%Y-\%m-\%d)_db.tar.gz ${BACKUP_DEST}

# Upload to S3, then delete all the local files
s3cmd --reduced-redundancy put -r /home/backup/db2/ s3://MY-S3-BUCKET/ --no-encrypt
find /home/backup -type f -delete
On a side note, I'd bet it's not best practice to store usernames and passwords in plain text in a crontab file. How can I solve this?
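For the MySQL side at least, I've read that the mysql and mysqldump clients automatically read credentials from the [client] section of ~/.my.cnf, so something like this might work (the path and permissions are my assumption, I haven't tested it):

# Put the credentials in an option file only my user can read,
# then drop -u/-p from the mysql and mysqldump calls above.
cat > ~/.my.cnf <<'EOF'
[client]
user = MYSQL_USERNAME
password = MYSQL_PASSWORD
EOF
chmod 600 ~/.my.cnf

Is that the recommended approach?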
Thanks in advance :)