
I have been trying to delete a directory from a CentOS server with rm -Rf /root/FFDC for the past 15 hours and am having great difficulty. I can't get a directory listing because it hangs the system (too many files?), but what I can see is that the directory's own size is not the usual 4096 bytes but 488 MB:

[root@IS-11034 ~]# ls -al
total 11760008
drwxr-x--- 31 root root        4096 Aug 10 18:28 .
drwxr-xr-x 25 root root        4096 Aug 10 16:50 ..
drwxr-xr-x  2 root root   488701952 Aug 11 12:20 FFDC

I've checked the inodes and everything seems fine. I've checked top, and rm is still using the CPU at 0.7% after 15 hours. The filesystem type is ext3.

I'm clueless where to go now, apart from backup and format.

James

4 Answers


Have you considered unmounting the filesystem and then running e2fsck to check it for errors? I would try this before a backup, format and restore.
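For what it's worth, the mechanics can be tried out safely on a scratch image file first (no root needed; the image path and size here are made up for the demo). On the real server you would aim e2fsck at the actual unmounted device instead, e.g. the one df /root reports:

```shell
# Build a small ext3 image and fsck it -- the same commands you would run
# against the real (unmounted!) device holding /root/FFDC.
truncate -s 64M /tmp/demo.img        # scratch image file, not a real device
mke2fs -F -q -t ext3 /tmp/demo.img   # -F: allow operating on a regular file
e2fsck -f -p /tmp/demo.img           # -f: force a check even if marked clean
```

On the server itself, unmount the filesystem first; running e2fsck on a mounted filesystem can corrupt it.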

jftuga

Is even ls -1f /root/FFDC slow? With -1f the output won't be sorted and file details are left out.

If that ls runs fast, perhaps something like find /root/FFDC | xargs rm -vf would be faster? A normal rm -rf might do all kinds of recursion which find MIGHT be able to skip. Or then again, maybe not.
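A throwaway-directory sketch of that approach (the path and file count are invented for the demo; the real target would be /root/FFDC):

```shell
# Create a scratch directory full of files, then delete them the find/xargs way.
demo=/tmp/ffdc-find-demo
mkdir -p "$demo"
for i in $(seq 1 200); do : > "$demo/junk-$i"; done
# -print0/-0 keeps odd filenames safe; rm -vf shows progress as it goes
find "$demo" -type f -print0 | xargs -0 rm -vf > /dev/null
rmdir "$demo"
```

With GNU find you can also use the built-in -delete action, which skips spawning rm entirely.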

Is your filesystem mounted with the sync option? If it is, write/delete performance is horribly slower than it could be with async. If in doubt, you might try mount -o remount,async / (or mount -o remount,async /root if that's a separate filesystem for you).
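To see which options are actually in effect, you can read /proc/mounts; this sketch checks / (adjust the mount point if /root is a separate filesystem):

```shell
# Print the mount options for /; 'sync' in the list means synchronous writes,
# its absence implies the async default.
opts=$(awk '$2 == "/" {print $4}' /proc/mounts)
echo "mount options for /: $opts"
```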

Janne Pikkarainen

Running fsck on the filesystem will fix the problem. This usually occurs when a directory used to contain many files but no longer does: the directory size is reported as a huge number and performance suffers.

Sirex

I'm not sure fsck(8) will reorganize directories; you might try the -D flag (as described in e2fsck(8)). If that doesn't help, and if there really aren't millions of files in that directory, perhaps something like the following gives a reasonably sized directory:

cd /root
mv FFDC FFDC-old
mkdir FFDC
# Adjust permissions on FFDC
mv FFDC-old/* FFDC
# Check/move any .xxx files/directories left in FFDC-old
rmdir FFDC-old

At least several bash versions get globs like .[^.]* wrong and include . and .. anyway; otherwise you could do the next-to-last step as mv FFDC-old/.* FFDC.
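One way to sidestep the dot-glob pitfall altogether is to let find enumerate the entries; the directory names below are stand-ins for the demo:

```shell
# Move everything, dotfiles included, without relying on .* globbing.
mkdir -p /tmp/ffdc-old /tmp/ffdc-new
touch /tmp/ffdc-old/.hidden /tmp/ffdc-old/visible
find /tmp/ffdc-old -mindepth 1 -maxdepth 1 -exec mv -t /tmp/ffdc-new {} +
rmdir /tmp/ffdc-old   # succeeds only if everything really moved
```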

Filesystems like ext3/ext4 handle directories essentially as linked lists of unmovable nodes with space for a filename + inode number; when a file is unlinked, the space for its name is freed (and should coalesce with neighboring free entries, if any). So a gigantic directory with few files can be created, but it isn't easy. Perhaps creating millions of hard links to the same few files? Creating/deleting millions of links with carefully crafted names? Whatever happened to create this merits investigating; is it a prank, a filesystem malfunction of some sort, ...?
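As a small illustration of the directory file growing and never shrinking back (the sizes printed are only meaningful on an ext-family filesystem; on tmpfs and others the numbers will differ):

```shell
# Grow a directory with many long-named entries, delete them, and compare sizes.
d=/tmp/bigdir-demo
mkdir -p "$d"
stat -c 'empty dir: %s bytes' "$d"
for i in $(seq 1 2000); do : > "$d/some-fairly-long-file-name-number-$i"; done
stat -c 'full dir:  %s bytes' "$d"
rm -f "$d"/some-fairly-long-file-name-number-*
stat -c 'after rm:  %s bytes' "$d"   # stays large on ext3/ext4
rmdir "$d"                           # works: the directory is empty, just big
```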

vonbrand