Basically, what you need is a way to pipe the file into tar and "lop off" the front as you go.
On Stack Overflow, somebody asked how to truncate a file at the front, but it seems it isn't possible. You could still fill the beginning of the file with zeroes in a special way so that it becomes a sparse file, but I don't know how to do that. We can truncate the end of a file, though, but tar needs to read the archive forwards, not backwards.
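(A hedged aside, not needed for the solutions below: on Linux, the util-linux fallocate tool can punch a hole at the front of a file, deallocating those blocks without changing the file's apparent size. It requires a filesystem that supports hole punching, e.g. ext4 or XFS.)

# Deallocate the first 1 MiB of archive.tar, making that range sparse.
# Only works on filesystems with hole-punching support (ext4, XFS, ...).
fallocate --punch-hole --offset 0 --length 1048576 archive.tar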
Solution 1
A level of indirection solves every problem. First reverse the file in place, then read it backwards (which amounts to reading the original file forwards) and truncate the end of the reversed file as you go.
You'll need to write a program (C, Python, whatever) to exchange the beginning and the end of the file chunk by chunk, then pipe those chunks to tar while truncating the file one chunk at a time. This is the basis for solution 2, which is perhaps simpler to implement.
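If you do want to go this route, here is a minimal sketch of the in-place reversal half, assuming GNU dd and, for simplicity, a file whose size is an exact multiple of the chunk size (a partial last chunk would need special handling). Note it reverses the order of the chunks, not the bytes within them, which is exactly what the backwards reader needs:

file="archive.tar"
chunksize=1048576
totalsize=$(wc -c < "$file")
chunks=$((totalsize / chunksize))   # assumes totalsize % chunksize == 0
i=0
j=$((chunks - 1))
while [ $i -lt $j ]; do
    # Read chunk i and chunk j into temporary files...
    dd if="$file" of=tmp_i bs=$chunksize skip=$i count=1 2>/dev/null
    dd if="$file" of=tmp_j bs=$chunksize skip=$j count=1 2>/dev/null
    # ...then write them back swapped, without truncating the file.
    dd if=tmp_j of="$file" bs=$chunksize seek=$i count=1 conv=notrunc 2>/dev/null
    dd if=tmp_i of="$file" bs=$chunksize seek=$j count=1 conv=notrunc 2>/dev/null
    i=$((i+1))
    j=$((j-1))
done
rm -f tmp_i tmp_j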
Solution 2
Another method is to split the file into small chunks in place, then delete those chunks as we extract them. The code below uses a chunk size of one megabyte; adjust it to your needs. Bigger chunks are faster but take more intermediate space during splitting and extraction.
Split the file archive.tar:
archive="archive.tar"
chunkprefix="chunk_"
# 1-Mb chunks :
chunksize=1048576
totalsize=$(wc -c "$archive" | cut -d ' ' -f 1)
currentchunk=$(((totalsize-1)/chunksize))
while [ $currentchunk -ge 0 ]; do
# Print current chunk number, so we know it is still running.
echo -n "$currentchunk "
offset=$((currentchunk*chunksize))
# Copy end of $archive to new file
tail -c +$((offset+1)) "$archive" > "$chunkprefix$currentchunk"
# Chop end of $archive
truncate -s $offset "$archive"
currentchunk=$((currentchunk-1))
done
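Before doing this for real, you might rehearse on a copy and confirm that the chunks reassemble into the original archive. A minimal sketch, assuming sha256sum is available (any checksum tool works):

# Record the checksum BEFORE splitting (the split destroys the original):
sha256sum "$archive" > archive.sha256
# ...run the split loop above, then reassemble in order and hash the result:
n=0
while [ -e "$chunkprefix$n" ]; do n=$((n+1)); done
for i in $(seq 0 $((n-1))); do cat "$chunkprefix$i"; done | sha256sum
# The hash printed here should match the one saved in archive.sha256.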
Pipe those files into tar (note we need the chunkprefix variable in the second terminal):
mkfifo fifo
# In one terminal:
(while true; do cat fifo; done) | tar -xf -
# In another terminal:
chunkprefix="chunk_"
currentchunk=0
while [ -e "$chunkprefix$currentchunk" ]; do
    cat "$chunkprefix$currentchunk" && rm -f "$chunkprefix$currentchunk"
    currentchunk=$((currentchunk+1))
done > fifo
# When the second terminal has finished:
# Flush caches to disk:
sync
# Wait 5 minutes so we're sure tar has consumed everything from the fifo.
sleep 300
rm fifo
# And kill (Ctrl-C) the tar command in the other terminal.
Since we use a named pipe (mkfifo fifo), you don't have to pipe all the chunks at once. This can be useful if you're really tight on space. You can proceed as follows:
- Move, say, the last 10 GB worth of chunks to another disk (see the sketch after this list),
- Start the extraction with the chunks you still have,
- When the while [ -e … ]; do cat "$chunk…; done loop has finished (second terminal):
- do NOT stop the tar command, do NOT remove the fifo (first terminal), but you can run sync, just in case,
- Move some extracted files that you know are complete (tar isn't stalled waiting for data to finish extracting them) to another disk,
- Move the remaining chunks back,
- Resume extraction by running the while [ -e … ]; do cat "$chunk…; done loop again.
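A hedged sketch of that first step, with /mnt/other standing in as a hypothetical mount point for the spare disk:

# /mnt/other is a hypothetical mount point; adjust to your spare disk.
mkdir -p /mnt/other/stash
# The highest-numbered chunks are consumed last, so those are the ones to
# move aside. With 1 MiB chunks, 10240 chunks is about 10 GB.
# (Assumes at least that many chunks exist.)
last=$(ls chunk_* | sed 's/^chunk_//' | sort -n | tail -n 1)
for i in $(seq $((last - 10239)) "$last"); do
    mv "chunk_$i" /mnt/other/stash/
done
# ...later, move them back before resuming the extraction:
mv /mnt/other/stash/chunk_* .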
Of course, this is all a high-wire act; you'll want to check that everything is OK on a dummy archive first, because if you make a mistake then it's goodbye data.
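A minimal sketch of such a rehearsal, using a small throwaway archive (all names are illustrative):

# Build a small throwaway archive to rehearse the whole procedure on.
mkdir -p testdir
dd if=/dev/urandom of=testdir/data.bin bs=1M count=8 2>/dev/null
tar -cf archive.tar testdir
sha256sum testdir/data.bin > before.sha256
rm -r testdir          # the extraction should bring it back
# ...run the split and extraction steps above, then verify:
sha256sum -c before.sha256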
You'll never know whether the first terminal (tar) has actually finished processing the contents of the fifo, so if you prefer you can run this instead, though you lose the ability to seamlessly exchange chunks with another disk:
chunkprefix="chunk_"
currentchunk=0
while [ -e "$chunkprefix$currentchunk" ]; do
cat "$chunkprefix$currentchunk" && rm -f "$chunkprefix$currentchunk"
currentchunk=$((currentchunk+1))
done | tar -xf -
Disclaimer
Note that for all this to work, your shell, tail, and truncate must handle 64-bit integers correctly (you don't need a 64-bit computer or operating system for that). Mine do, but if you run the above script on a system without these capabilities, you'll lose all the data in archive.tar.
And if anything else goes wrong, you'll lose all the data in archive.tar anyway, so make sure you have a backup of your data.