2

Currently, I use tar to write my backups (ntbackup files) to a tape drive fed by an autoloader.

Ex: `tar -F /root/advancetape -cvf /dev/st0 *.bkf` (`/root/advancetape` just has the logic to advance to the next tape if there is one available, or to notify me to swap the tapes out)

I was recently handed the requirement to encrypt our tape backups. I can easily encrypt the data with no problems using GPG. The problem I'm having is how do I write this to multiple tapes with the same logic that tar uses to advance the tapes once the current one is filled? I cannot write the encrypted file to disk first (2+TB). As far as I can tell, tar will not accept binary input from stdin (it's looking for file names). Any ideas? :(

Dan
  • 1,278
  • 18
  • 27
  • Encryption is easy. Key management is not. Make sure you've figured that out before you start. – duffbeer703 Oct 26 '09 at 20:28
  • If it is so easy, would you mind sharing a solution to the above problem? :-) – Dan Oct 26 '09 at 21:09
  • What type of Tape drive? I ask because LTO4 drives have built in hardware encryption – Zypher Dec 15 '09 at 04:44
  • IBM LTO2 :( We may be purchasing a new drive with hardware encryption soon to backup our new storage array... would still be nice to know how to get this done though. – Dan Jan 25 '10 at 16:49
  • Why not just run the bkf files through encryption before writing them to tape? – Chris S Aug 16 '10 at 18:26

3 Answers

3

I'm using this script:

#!/bin/sh

TAPE="/dev/nst0"
mt-st -f $TAPE setblk 0
mt-st -f $TAPE status
totalsize=$(du -csb . | tail -1 | cut -f1)
tar cf - . | \
        gpg --encrypt --recipient target@key.here --compress-algo none | \
        pipemeter -s $totalsize -a -b 256K -l | \
        mbuffer -m 3G -P 95% -s 256k -f -o $TAPE \
                -A "echo next tape; mt-st -f $TAPE eject ; read a < /dev/tty"

To adapt it for your needs, here are the main points:

  • tar reads from the current directory and outputs to stdout. This way tar doesn't deal with changing tapes or encryption.
  • gpg has compression switched off, as compressing slows the process considerably (from 100 MB/s+ down to about 5 MB/s)
  • pipemeter is used to monitor the process and give an estimated time until all the data has been written to tape - this can be removed if it is not needed
  • mbuffer buffers the data into memory - this example uses a 3GB buffer, adjust as needed - to allow the tape drive to run for longer before running out of data, reducing "shoe shining" of the tape.
  • The -A option of mbuffer handles multiple tapes by ejecting a tape once the end has been reached and waiting for the Enter key to be pressed after the next tape has been loaded. This is where your /root/advancetape script can go.

One issue to be aware of when using this with LTO tapes:

  • The tape block size is set to variable, and mbuffer writes in 256k blocks. This works well for me with an LTO3 drive, but tar prefers a different block size. That, combined with the fact that mbuffer (not tar) handles the spanning across tapes, means you will need to read the data back off the tape through mbuffer, then pass it through gpg and on to tar. If you try to extract directly off the tape with tar (even if you skipped encryption), it will likely not work, and it will certainly break once it reaches the end of the first tape, without giving you a chance to change to the next one.
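To make that concrete, a restore would have to mirror the write pipeline above. This is an untested sketch (the block size, buffer size, and tape-change command must match whatever was used when writing); it is saved to a file here rather than run, since it only makes sense with the drive attached:

```shell
#!/bin/sh
# Sketch of the read-back pipeline: mbuffer handles the tape spanning
# (as it did on the write side), gpg decrypts, and tar extracts.
cat > restore.sh <<'EOF'
#!/bin/sh
TAPE="/dev/nst0"
mt-st -f "$TAPE" setblk 0
mbuffer -i "$TAPE" -m 3G -s 256k \
        -A "echo next tape; mt-st -f $TAPE eject ; read a < /dev/tty" | \
    gpg --decrypt | \
    tar xvf -
EOF
chmod +x restore.sh
```

The key point is that mbuffer reads the 256k blocks and requests the next tape, so tar only ever sees one continuous decrypted stream on stdin.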
Malvineous
  • 955
  • 7
  • 27
1

I would suggest you look at this option:

 -I, --use-compress-program PROG
       filter through PROG (must accept -d)

You might need to write a small script that reads from stdin and encrypts to stdout, but it should work. The -d flag is passed for decompression, in which case your script would need to decrypt its input instead.
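As an untested sketch, such a wrapper could look like this (the recipient key is a placeholder; the script is written out to a file since tar needs it as an executable program):

```shell
#!/bin/sh
# Hypothetical wrapper for tar's -I/--use-compress-program option.
# tar runs it plain when creating and with -d when extracting.
cat > /tmp/gpgfilter <<'EOF'
#!/bin/sh
if [ "$1" = "-d" ]; then
    exec gpg --decrypt                  # extraction: decrypt stdin to stdout
else
    exec gpg --encrypt --recipient target@key.here --compress-algo none
fi
EOF
chmod +x /tmp/gpgfilter
# Usage sketch: tar -I /tmp/gpgfilter -cvf /dev/st0 *.bkf
```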

David Pashley
  • 23,151
  • 2
  • 41
  • 71
  • Almost... but.. :( `# tar -F /root/advancetape --use-compress-program=aespipe -cvf /dev/st0 /mnt/array/ tar: Cannot use multi-volume compressed archives Try `tar --help' or `tar --usage' for more information.` – Dan Oct 26 '09 at 21:07
  • Bah, so near, yet so far. It looked so promising too. – David Pashley Oct 26 '09 at 21:20
0

You could potentially implement this in your -F script. Instead of having tar write directly to /dev/st0, point it at a temporary staging area, and make sure you specify the volume size explicitly with -L. Tar will write up to that many bytes of data to the staging file and then call your -F script. Your script can then run gpg on the file and send the result to tape (and then delete the archive part from your staging area).

This only requires that you have roughly two tapes' worth of space available on your filesystem (the staging file plus its encrypted copy).

See http://www.gnu.org/software/tar/manual/html_node/Multi_002dVolume-Archives.html#SEC162 for more information on variables available to your -F script.

EDIT: Also note that this is a completely untested idea! I've been thinking of doing something like this in order to provide compression to multivolume archives, but I haven't actually implemented it.
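A rough sketch of the idea, equally untested: the staging path, script name, and recipient key are all placeholders, and GNU tar exports TAR_ARCHIVE and TAR_VOLUME to the info script.

```shell
#!/bin/sh
# Hypothetical -F (--info-script) for the staging-area approach: tar has
# just filled one volume in the staging file; encrypt it, send it to tape,
# and delete it so tar can start the next volume.
cat > ./encrypt-and-ship <<'EOF'
#!/bin/sh
# tar sets TAR_ARCHIVE (the staging file) and TAR_VOLUME for this script.
gpg --encrypt --recipient target@key.here --compress-algo none \
    < "$TAR_ARCHIVE" > /dev/st0    # tape advance/swap logic would go here
rm -f "$TAR_ARCHIVE"               # tar recreates it for the next volume
EOF
chmod +x ./encrypt-and-ship
# Invocation sketch (multi-volume mode, explicit volume size):
#   tar -cM -L <volume-size> -F ./encrypt-and-ship -f /var/tmp/volume.tar *.bkf
```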

larsks
  • 41,276
  • 13
  • 117
  • 170