31

I am using tar to archive a group of very large (multi-GB) bz2 files.

If I use tar -tf file.tar to list the files within the archive, this takes a very long time to complete (~10-15 minutes).

Likewise, cpio -t < file.cpio takes just as long to complete, plus or minus a few seconds.

Accordingly, retrieving a single file from an archive (via tar -xf file.tar myFileOfInterest.bz2, for example) is just as slow.

Is there an archival method out there that keeps a readily available "catalog" with the archive, so that an individual file within the archive can be retrieved quickly?

For example, some kind of catalog that stores a pointer to a particular byte in the archive and the size of the file to be retrieved (along with any other filesystem-specific particulars).

Is there a tool (or argument to tar or cpio) that allows efficient retrieval of a file within the archive?

Alex Reynolds
  • 453
  • 2
  • 9
  • 20
  • As others have said most archive formats other than tar use an index, you can also make an external index for uncompressed tar-s; https://serverfault.com/a/1023249/254756 – user1133275 Jun 28 '20 at 16:58

8 Answers

20

tar (and cpio, afio, pax, and similar programs) use stream-oriented formats - they are intended to be streamed directly to a tape or piped into another process. While, in theory, it would be possible to add an index at the end of the file/stream, I don't know of any version that does (it would be a useful enhancement, though).

It won't help with your existing tar or cpio archives, but there is another tool, dar ("disk archive"), that does create archive files containing such an index and can give you fast direct access to individual files within the archive.

If dar isn't included with your Unix/Linux distribution, you can find it at:

http://dar.linux.free.fr/
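
Roughly, usage looks like this (a sketch only; the archive basename and paths here are placeholders, and exact options can vary between dar versions, so check the man page):

$ dar -c /tmp/mybackup -R /path/to/backup                   # creates /tmp/mybackup.1.dar with an embedded catalogue
$ dar -l /tmp/mybackup                                      # list the contents from the catalogue
$ dar -x /tmp/mybackup -g some/dir/myFileOfInterest.bz2     # extract just that one file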

cas
  • 6,653
  • 31
  • 34
  • Is there a way to pipe an extraction to standard output? It looks like there's a way to make an archive from standard input, but not a way (at least not directly) to extract to standard output. It's not clear from the documentation if there is a way to do this. Do you know how this might be accomplished? – Alex Reynolds Aug 28 '09 at 05:06
  • 1
    Nope, don't know. I don't actually use dar myself... I just know that it exists. I'm happy enough with tar, and tend to just create text files listing the contents of large tar files that I might want to search later. You can do this at the same time as creating the tar archive by using the v option twice (e.g. "tar cvvjf /tmp/foo.tar.bz2 /path/to/backup > /tmp/foo.txt") – cas Aug 28 '09 at 05:32
12

You could use SquashFS for such archives. It is

  • designed to be accessed using a fuse driver (although a traditional interface exists)
  • compressed (the larger the block size, the more efficient)
  • included in the Linux kernel
  • stores UIDs/GIDs and creation time
  • endianness-aware, therefore quite portable

The only drawback I know of is that it is read-only.

http://squashfs.sourceforge.net/ http://www.tldp.org/HOWTO/SquashFS-HOWTO/whatis.html
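
For example (a sketch; the image name and paths are placeholders, and available options depend on your squashfs-tools version):

$ mksquashfs /path/to/backup backup.sqsh                            # build the compressed image
$ unsquashfs -l backup.sqsh                                         # list contents using the embedded index
$ unsquashfs -d extracted backup.sqsh dir/myFileOfInterest.bz2      # extract a single file into ./extracted
$ sudo mount -t squashfs -o loop backup.sqsh /mnt                   # or mount it and copy files out directly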

MauganRa
  • 221
  • 2
  • 4
8

While it doesn't store an index, star is purported to be faster than tar. Plus it supports longer filenames and has better support for file attributes.

As I'm sure you're aware, decompressing the file takes time and would likely be a factor in the speed of extraction even if there were an index.

Edit: You might also want to take a look at xar. It has an XML header that contains information about the files in the archive.

From the referenced page:

Xar's XML header allows it to contain arbitrary metadata about files contained within the archive. In addition to the standard unix file metadata such as the size of the file and it's modification and creation times, xar can store information such as ext2fs and hfs file bits, unix flags, references to extended attributes, Mac OS X Finder information, Mac OS X resource forks, and hashes of the file data.
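
Basic usage is roughly as follows (a sketch; the archive name and paths are placeholders, see the xar man page for details):

$ xar -cf backup.xar /path/to/backup                  # create an archive with an XML table of contents
$ xar -tf backup.xar                                  # list contents from the header
$ xar -xf backup.xar path/to/myFileOfInterest.bz2     # extract only the named file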

ricmarques
  • 1,112
  • 1
  • 13
  • 23
Dennis Williamson
  • 60,515
  • 14
  • 113
  • 148
6

The only archive format I know of that stores an index is ZIP, because I've had to reconstruct corrupted indexes more than once.
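
That index (the "central directory") sits at the end of the archive, so a single member can be listed or extracted without scanning the whole file. With the Info-ZIP tools, roughly (a sketch; the names here are placeholders, and for multi-GB members you need Zip64-capable zip/unzip builds):

$ zip -r backup.zip /path/to/backup                   # the central directory (index) is written at the end
$ unzip -l backup.zip                                 # list entries from the central directory
$ unzip backup.zip path/to/myFileOfInterest.bz2       # extract just that member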

womble
  • 95,029
  • 29
  • 173
  • 228
5

Thorbjørn Ravn Andersen is right. GNU tar creates "seekable" archives by default, but it does not use that information when reading them unless the -n option is given. With -n, I just extracted a 7 GB file from a 300 GB archive in roughly the time required to read/write 7 GB. Without -n, it took more than an hour and produced no result.

I'm not sure how compression affects this. My archive was not compressed. Compressed archives are not "seekable" because current (1.26) GNU tar offloads compression to an external program.
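
In practice that looks something like this (a sketch; -n is GNU tar's short form of --seek in recent releases, the archive must be uncompressed, and it must live on a seekable device rather than a tape or pipe):

$ tar -cf huge.tar /path/to/backup                      # plain, uncompressed archive
$ tar -n -xf huge.tar path/to/myFileOfInterest.bz2      # skips over other members instead of reading their data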

  • according to the tar man page http://man7.org/linux/man-pages/man1/tar.1.html, GNU tar will by default use the seekable format when writing, and if the archive is seekable, will use it when reading (for list or extract). If you are using GNU tar and still seeing the issue, you should file a bug report with GNU. – Brian Minton Dec 22 '14 at 18:20
  • 10
    If I read the manual correctly, it never says it has any sort of index and can jump to any file within the archive given the file name. --seek just means the underlying media is seekable, so that when it reads from the beginning, it can skip reading file contents, but it still needs to read entry headers from beginning. That said, if you have an archive with 1M files, and you try to extract the last one, with --no-seek, you need to read contents of all files; with --seek, you only need to read 1M headers, one for each file, but it is still super slow. – icando Feb 01 '15 at 05:36
2

It doesn't keep an index that I know of, but I use dump & restore with large files, and navigating the restore tree in interactive mode to select random files is VERY fast.
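
For example (a sketch; the dump file and path names are placeholders):

$ dump -0 -f /backups/home.dump /home        # level-0 dump of the filesystem
$ restore -i -f /backups/home.dump           # interactive mode: browse the dump like a directory tree
restore > cd alice/project
restore > add myFileOfInterest.bz2
restore > extract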

MediaManNJ
  • 131
  • 1
2

You can use the 7z (7zip) archive/compression format if you have access to the p7zip-full package.

On Ubuntu you can use this command to install it:

$ sudo apt-get install p7zip-full

To create an archive you can use 7z a <archive_name> <file_or_directory>, and if you do not want to compress the files and just want to "store" them as-is, you can use the -mx0 option, like:

$ 7z a -mx0 myarchive.7z myfile.txt

Creating archive myarchive.7z

You can then extract the files using 7z e:

$ 7z e myarchive.7z

Processing archive: myarchive.7z
Extracting  myfile.txt

Or you can list the index of the archive with 7z l, which is handy for searching with grep:

$ 7z l myarchive.7z | grep myfile

2014-07-08 12:13:39 ....A            0            0  myfile.txt

There is also the t option to test integrity, u to add/update a file in the archive, and d to delete a file.
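
For example (the file and path names here are just placeholders):

$ 7z x myarchive.7z dir/myfile.txt       # extract only the named entry, preserving its path
$ 7z t myarchive.7z                      # test archive integrity
$ 7z u myarchive.7z newfile.txt          # add or update a file
$ 7z d myarchive.7z oldfile.txt          # delete a file from the archive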

IMPORTANT NOTE
Do not use the 7zip format for Linux filesystem backups, as it does not store the owner and group of the contained files.

complistic
  • 141
  • 4
1

I believe GNU tar is capable of doing what you want, but I cannot locate a definitive resource saying so.

In any case, you need an archive format with an index (since that is what will allow you to do what you want). Unfortunately, I do not believe ZIP files can grow that big.

  • ZIP files can grow **big**. – Pacerier Jan 03 '15 at 08:30
  • 1
    If I read the manual correctly, it never says it has any sort of index and can jump to any file within the archive given the file name. --seek just means the underlying media is seekable, so that when it reads from the beginning, it can skip reading file contents, but it still needs to read entry headers from beginning. That said, if you have an archive with 1M files, and you try to extract the last one, with --no-seek, you need to read contents of all files; with --seek, you only need to read 1M headers, one for each file, but it is still super slow. – icando Feb 01 '15 at 05:36
  • 2
    @Pacerier To my understanding the ZIP64 format allows for very large files, but the original ZIP format doesn't. – Thorbjørn Ravn Andersen Mar 10 '15 at 17:45
  • @ThorbjørnRavnAndersen, A single [4 GB](https://en.wikipedia.org/wiki/Zip_(file_format)#Limits) file is **big** dude. – Pacerier Mar 11 '15 at 10:47
  • 3
    @Pacerier 4GB hasn't been big since DVD ISOs came on the scene almost twenty years ago. Terabytes is big nowadays. – oligofren Dec 18 '18 at 22:44