
Situation:

1. MySQL is backed up to disk with mysqldump (~250 GB)
2. The dump is compressed with pbzip2
3. The dump is moved to another DC
4. Disk usage is back in a good state

Problem: Filesystem usage peaks. For example, I need 1 TB of space to fit the data while dumping. I need to get rid of this peak, because I'm paying for disk space that is unused the rest of the time.

I tried piping the dump directly into pbzip2, but it's slow (high compression is needed), and I want to avoid table locks. The pipe buffer cannot easily be changed from bash (if it's possible at all); from what I've read, it may be possible in C or Python.

Question: Is there a way to handle these peaks? Any ideas will be appreciated.
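One way to avoid the on-disk peak entirely is to never write the uncompressed dump locally: stream mysqldump through the compressor and over SSH to the other DC in a single pipeline. A minimal sketch follows; the host, user, and remote path are placeholder assumptions, and the runnable part below demonstrates the same no-temp-file pattern with gzip on synthetic data:

```shell
# Real pipeline (sketch, placeholder host/user/path - not run here):
#   mysqldump --single-transaction "$db" | pbzip2 -p2 \
#     | ssh backup@other-dc "cat > /backups/$db.sql.bz2"

# Same pattern on synthetic data: nothing touches the local disk,
# the stream goes straight through compress -> decompress.
src="line1
line2"
out=$(printf '%s\n' "$src" | gzip | gzip -d)
printf '%s\n' "$out"
```

Only the compressed stream crosses the network, and the uncompressed data never exists as a file on either side.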

Damian

1 Answer


--single-transaction tested; it works as expected (thanks to Alexander Tolkachev):

/usr/bin/mysqldump -v --single-transaction --skip-add-drop-table -u'user' -p'password' -h 'host' ${db} 2>/var/log/dump/${db}.log | pbzip2 -p2 > "$sql"

I had heard that parallel bzip2 could have problems with piping, but perhaps only in older versions, because here it works as expected. It's also faster: it took only 3/4 of the original time.

I was worried about piping ~250 GB, in case the file ended up corrupted, but no errors were found while testing. (I haven't tried a restore; there is more info about piping in the following link.) https://stackoverflow.com/questions/2715324/can-a-pipe-in-linux-ever-lose-data
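If corruption somewhere in the pipeline is a concern, a cheap check is to checksum the stream on both sides of the compressor: hash the data going in, then hash the decompressed output and compare. A small sketch on synthetic data, where gzip stands in for pbzip2 and the string stands in for the mysqldump stream:

```shell
# Assumption: gzip as a stand-in for pbzip2, a literal string as a
# stand-in for the mysqldump stream.
data="some dump content"
sum_in=$(printf '%s' "$data" | md5sum | cut -d' ' -f1)
sum_out=$(printf '%s' "$data" | gzip | gzip -d | md5sum | cut -d' ' -f1)
if [ "$sum_in" = "$sum_out" ]; then
  echo "checksums match"
else
  echo "checksum mismatch"
fi
```

On a real backup, the second checksum would be computed on the receiving side after decompressing the archive; matching sums confirm the bytes survived the pipe and the compressor round-trip.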
