
Is there a way to compact a very large directory under EXT2/EXT3 without simply remaking the directory?

I recall that perlfunc cautions that the OS implementations of seekdir and telldir run the risk of directory compaction, which sounds like what I want in this case, but I'm unfamiliar with those semantics in practice.

Background: I have a few directories that are themselves many MB in size -- they were overrun with a zillion small files in the past:

$ ls -lh
drwxr-x--- 2 root root 1.3M Oct  5 12:49 big
drwxr-x--- 2 root root 2.3M Oct  5 12:49 this_one_is_empty_now
drwxr-x--- 2 root root 6.1M Oct  5 12:49 yikes
pilcrow

2 Answers


Directories cannot be compacted online precisely because of the requirements of seekdir/telldir: any program must be able to hold a position within the directory for an indefinite time and still read each entry exactly once, so the entries cannot be moved around while the filesystem is mounted.

You can compact the directory offline with e2fsck -D.
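
A minimal sketch of the offline run, assuming the directory lives on /dev/sdb1 mounted at /mnt/data (both names are hypothetical):

$ umount /mnt/data        # e2fsck must not run on a mounted filesystem
$ e2fsck -f -D /dev/sdb1  # -f forces a full check; -D optimizes (compacts) directories
$ mount /mnt/data

For the root filesystem you'd have to do this from a rescue environment or live CD instead.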

psusi
  • I think those are rather soft requirements, as the GNU docs for `dirent` caution that seekdir/telldir may not be reliable. The point about e2fsck is well taken. +1 – pilcrow Oct 05 '11 at 13:55

Have you looked into pigz? It's a parallel implementation of gzip that can use multiple cores.

John Allspaw talks about it on his blog here:
http://www.kitchensoap.com/2010/04/02/pigz-parallel-gzip-omg/
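
For example, a minimal usage sketch (the file name is hypothetical):

$ pigz -p 4 bigfile   # compress with 4 threads, producing bigfile.gz
$ pigz -d bigfile.gz  # decompress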

gWaldo
  • I think the OP wants to reduce the size of the directory file within the filesystem rather than compress the actual files – user9517 Oct 05 '11 at 13:16