
I have a very large directory that I'm deleting with rm -rf. It is taking very long, so I opened another terminal and started deleting the same directory with rm -rf in parallel.

Is this expected to speed up deleting the directory? Or is there a chance that this actually ends up slowing things down?

Abhimanyu

1 Answer


Try it. Exact performance depends on many variables, so results are specific to your environment. It probably will not be as good as the 2x you might expect, and it may even be worse.
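If you want to measure rather than guess, a quick comparison along these lines shows whether splitting the tree between two processes helps on your storage. The paths and the a/ and b/ subdirectory names are placeholders for your layout; note that two rm -rf runs over the exact same tree largely compete for the same entries rather than splitting the work cleanly.

    # Single process over one copy of the tree:
    time rm -rf /srv/testcopy1

    # Two processes, each taking half of an identical copy:
    time ( rm -rf /srv/testcopy2/a & rm -rf /srv/testcopy2/b & wait )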

Metadata-heavy file system operations issue a large number of I/Os and hit contention in the file system's concurrency mechanisms. To sum up a related USENIX paper, Understanding Manycore Scalability of File Systems:

  • Fine-grained locks often impede scalability
  • Subtle contentions matter
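A quick way to see whether contention is what you are hitting is to check if both rm processes spend their time blocked in the kernel (D state, with the wait channel showing what they are sleeping on) rather than actually running:

    # Show state and kernel wait channel for all running rm processes:
    ps -o pid,stat,wchan:30,cmd -C rm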

Also consider some storage tricks to get the system usable again faster.

Mount the volume containing the directory to be deleted a second time somewhere else, say /mnt/manyfiles, and mount an empty directory on top of the current location, say /srv/whatever. You can now delete at your convenience while the application sees an empty directory.
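A minimal sketch of that, assuming the data sits on /dev/vg0/srv mounted at /srv and the directory to remove is /srv/whatever (all device and path names here are assumptions):

    mkdir -p /mnt/manyfiles
    # Linux allows mounting the same filesystem at a second mount point:
    mount /dev/vg0/srv /mnt/manyfiles
    # Hide the full directory from the application behind an empty tmpfs:
    mount -t tmpfs tmpfs /srv/whatever
    # Delete the old contents through the second mount at your convenience,
    # keeping the (now hidden) top-level directory itself:
    find /mnt/manyfiles/whatever -mindepth 1 -delete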

Or create a new file system with its mount point at the problem directory /srv/whatever. The next time it needs to be blown away, deleting and recreating an LVM volume is much faster than deleting thousands or millions of files.
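If that directory gets rebuilt regularly, a sketch of the dedicated-volume approach (the volume group name, size, and file system type are assumptions):

    # Drop and recreate the logical volume instead of deleting millions of files:
    umount /srv/whatever
    lvremove -y /dev/vg0/scratch
    lvcreate -y -L 50G -n scratch vg0
    mkfs.ext4 -q /dev/vg0/scratch
    mount /dev/vg0/scratch /srv/whatever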

John Mahowald