No.
There is no quicker way, apart from a soft format of the disk. The files are handed to rm in large batches (up to the command-line limit; the batch size can also be tuned for xargs), which is much better than calling rm on each file individually. So no, there is definitely no faster way.
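For a rough illustration of that limit (a sketch, not part of the original answer; the *.gif pattern is just an example):
getconf ARG_MAX                               # the kernel's limit on the total size of one command line
find . -name "*.gif" -print0 | xargs -0 rm    # xargs packs as many names as fit under that limit
# (add -n 100 or -s <bytes> to force smaller batches)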
Using nice (or renice on a running process) helps only partially, because those schedule the CPU, not the disk, and the CPU usage here will be very low. This is a Linux weakness: if one process "eats up" the disk (i.e. works with it heavily), the whole machine gets stuck. A kernel modified for real-time use could be a solution.
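To make the distinction concrete (a minimal sketch; the directory name and PID are placeholders), nice and renice only lower CPU priority, which is why they barely help an I/O-bound delete:
nice -n 19 rm -rf dir_old      # start the delete with the lowest CPU priority
renice -n 19 -p 12345          # lower the CPU priority of an already-running process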
What I would do on the server is manually let other processes do their job, by including pauses so the server can "breathe":
find . -name "*.gif" > files
split -l 100 files files.
for F in files.*; do
    cat "$F" | xargs rm
    sleep 5
done
This will wait 5 seconds after every 100 files. It will take much longer but your customers shouldn't notice any delays.
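A variant of the same idea, without the temporary list files (a hedged sketch; batch size and pause match the loop above, and NUL-separation copes with odd filenames):
find . -name "*.gif" -print0 |
  xargs -0 -n 100 sh -c 'rm -- "$@"; sleep 5' _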
I"m at about 6 gb/hr deletion rate using the "nice find" command below. Probably will take 48 hrs straight to get rid of all the files.
The reason this happened was because a scour script failed. I had surpassed the "event horizon" with the rm command, and then it ran away. – None – 2013-11-23T18:12:27.160
Would removing the whole dir not be substantially quicker? Just take out the "good" files before nuking the remaining ones... – tucuxi – 2013-11-23T18:42:41.683
Well, every file is bad right now, because it was moved to /dir_old, and I remade /dir. But won't rmdir run into the same limitation as rm *? – None – 2013-11-23T19:43:08.847
@Corepuncher: I would expect that removing the entire directory (as with rm -rf) would be faster. It's worth a try. – Jason R – 2013-11-23T19:44:56.710
I'm currently running "rm -rf" on the dir. It's been running for over 20 min now... no change in disk size yet. But it didn't automatically return "argument list too long" yet either. The only problem is, it's really hammering my machine and making other things slow/fail. Not sure how long to let it go. – None – 2013-11-23T20:01:15.577
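For reference, the move-aside approach described in this thread (a sketch using the /dir and /dir_old names mentioned above, not a tested recipe) lets the application see an empty directory immediately while the slow delete runs out of the way:
mv /dir /dir_old && mkdir /dir                    # swap in a fresh, empty directory right away
nohup nice rm -rf /dir_old > /dev/null 2>&1 &     # remove the old tree in the background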
This is another way, using rsync:
http://stackoverflow.com/questions/505289/using-rsync-to-delete-a-single-file
http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
http://www.pixelstech.net/article/1352825068-Use-rsync-to-delete-mass-files-quickly-in-Linux
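The trick those links describe is roughly the following (a sketch; paths are placeholders): sync an empty directory onto the full one with --delete, which removes everything in the target.
mkdir /tmp/empty
rsync -a --delete /tmp/empty/ /dir_old/    # rsync removes everything not present in the (empty) source
rmdir /dir_old                             # the now-empty directory can be removed normally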
@JasonR: “worth a try” – I suppose, but I wouldn't expect rm -rf to be any better than find … -delete, and they would be only marginally better than find … -exec rm {} + or find … | xargs rm. The difference is that the first two commands do everything in one process, while the latter two fork and exec rm tens of thousands of times. But fork/exec'ing rm isn't the bottleneck; the resource hog is the removal of the files, and even rm -rf has to remove each file individually. – Scott – 2013-11-26T23:08:34.503
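For reference, the four commands Scott is comparing look roughly like this (a sketch; the pattern and paths are placeholders):
rm -rf dir_old                        # one process walks and unlinks the whole tree
find . -name "*.gif" -delete          # find unlinks the matches itself, also one process
find . -name "*.gif" -exec rm {} +    # find collects names and hands them to rm
find . -name "*.gif" | xargs rm       # xargs builds rm command lines from find's output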