I want to back up my systems to split tar archives, with a script uploading the parts one by one. The process has to create one part of the split archive, then run a script that uploads that part and deletes it. This is to keep backups from using too much space on the system: I could create all the split archives first and upload them afterwards, but then I'd need 50% of the disk free. So the parts have to be created one at a time. I'm looking for advice on the best approach; I have a couple in mind, but feel free to suggest a better one.
Approach one: split the archive with tar itself, using --new-volume-script. The problem is that I'd have to calculate in advance how big the backup is going to be: tar seems to require explicit directions for how many parts there will be and how big each one is, so my script would have to work that out and generate the parameters for tar.
tar -c -M -L 102400 --file=disk1.tar --file=disk2.tar --file=disk3.tar largefile.tgz
This creates up to three 100 MiB volumes, one file per part. If there is a way to do this dynamically, with tar naming the files automatically and creating as many as it needs, I'd like to know, because that would make this approach workable.
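From reading the GNU tar manual, it looks like the volume script might make this possible: tar exports TAR_ARCHIVE, TAR_VOLUME and TAR_FD to the script, and the script can pick the next volume's name by writing it to the TAR_FD descriptor. A minimal sketch of what I have in mind, where upload-part is a hypothetical stand-in for my uploader:

    #!/bin/bash
    # next-volume.sh -- run by GNU tar via -F/--new-volume-script whenever a
    # volume fills. tar exports TAR_ARCHIVE (the volume just written),
    # TAR_VOLUME (the number of the volume about to be created) and TAR_FD
    # (a descriptor on which tar expects the next volume's name).

    # Upload the finished volume and delete it; abort tar if the upload fails.
    upload-part "$TAR_ARCHIVE" && rm -f "$TAR_ARCHIVE" || exit 1

    # Strip any trailing -N from the current name, append the next volume
    # number, and hand the new name back to tar on TAR_FD.
    name=$(expr "$TAR_ARCHIVE" : '\(.*\)-[0-9]*$')
    echo "${name:-$TAR_ARCHIVE}-$TAR_VOLUME" >&"$TAR_FD"

The invocation would then be something like:

    tar -c -M -L 102400 -F ./next-volume.sh -f backup.tar /data

One caveat I can see: tar only runs the script when it needs a new volume, so the last volume would still have to be uploaded separately after tar exits.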
Approach two: write my own script that behaves like split. Tar's output is piped to it on stdin; the script cuts the stream into parts, uploading each part (and making tar wait on the pipe) before reading the next. This would probably be the easiest solution.
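Something along these lines, as a minimal sketch (it assumes GNU head's -c option for chunking the stream, and the same hypothetical upload-part command):

    #!/bin/bash
    # upload-split.sh -- reads a tar stream on stdin, cuts it into fixed-size
    # parts, and uploads each part before reading the next. "upload-part" is
    # a placeholder for whatever uploader is actually used.

    size=$((100 * 1024 * 1024))   # bytes per part (100 MiB)
    n=0
    while :; do
        part="backup.tar.part$n"
        head -c "$size" > "$part"                     # copy at most $size bytes from stdin
        [ -s "$part" ] || { rm -f "$part"; break; }   # empty part: stream is done
        upload-part "$part" && rm -f "$part" || exit 1
        n=$((n + 1))
    done

The whole backup would then be:

    tar -c /data | ./upload-split.sh

While the script is busy uploading it isn't reading from the pipe, so tar blocks as soon as the pipe buffer fills, and only one part ever sits on disk.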