
First question here, I'm usually on StackOverflow.

I need to delete all the contents of a directory, but not the directory itself as I don't have permission to delete that actual directory. This seems simple, but I'm unable to find the command for this. Thanks in advance.

Marc

8 Answers

rm -r /path/to/directory/*

or

rm -rv /path/to/directory/*

if you want to see what is happening.

David Spillett
  • DOH! Of course, thanks, it's not often I'm in the Linux terminal. – Marc Jun 23 '09 at 17:20
  • Note that this won't get .directories, and you definitely do not want to do .* (that will match parent directory). – chaos Jun 23 '09 at 17:54
  • Yes, the above will skip files/directories starting with "." in the specified directory. It will remove such files/dirs from subdirectories under that directory, but those in the directory itself will need to be done individually. You are most likely to run into this issue with a home directory (which will often contain quite a few "." files and dirs) and directories Apache serves files/scripts from (which are not unlikely to have a .htaccess file in them). – David Spillett Jun 23 '09 at 18:31
  • to get the dotfiles in bash: set -o dotglob – Ian Kelling Jun 23 '09 at 20:08
  • That is something I once knew and since forgot. Thanks for the reminder Ian. – David Spillett Jun 23 '09 at 21:36
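
A minimal sketch of the dotfile case raised in the comments above, assuming bash (where the relevant shell option is dotglob, set with the shopt builtin; the path is a placeholder):

shopt -s dotglob      # make * also match names starting with "." (never . or ..)
rm -rf /path/to/directory/*
shopt -u dotglob      # restore the default globbing behaviour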

Chaos -- you are incorrect to worry that rm will ever delete ..

I did a quick search and found the man page for rm from the 7th edition unix manual at http://plan9.bell-labs.com/7thEdMan/vol1/man1.bun, where it says:

DIAGNOSTICS

   Generally  self-explanatory.   It  is forbidden to remove the file ..
   merely to avoid the  antisocial  consequences  of  inadvertently  doing
   something like rm -r .*

Given that 7th edition unix is the parent of all modern unixes, and was released in 1979, I would say that it is an emphatically safe thing to do: the attempted rm of .. doesn't do anything, and it causes no harm whatsoever.

Now, there are other programs like chown that will happily "descend" into .. and cause all sorts of chaos if you do wacky things like "chown -Rh user .*" but rm is not chown.

chris

Easy version if current directory is fine to work with:

find . ! \( -name . \) -print0 | xargs -0 rm -rf

Harder version if current directory is no good:

find /some/dir ! \( -samefile /some/dir \) -print0 | xargs -0 rm -rf
chaos

find /path/dir/ -type f -print|xargs rm

Yordan
  • If you are going to use find+xargs, use the "null as separator" options or you'll have trouble if any of the filenames contain spaces: "find /path/dir/ -type f -print0 | xargs -0 rm" – David Spillett Jun 23 '09 at 17:00
  • That will delete only files inside /path/dir – Karolis T. Jun 23 '09 at 17:01
  • Also, the above will only delete files (including files in sub-directories), but will leave any sub-directories in place (but empty). – David Spillett Jun 23 '09 at 17:01
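
A sketch of the null-separated variant the first comment asks for, which also removes subdirectories rather than only plain files (assumes GNU or BSD find/xargs for -print0/-0 and -mindepth; the path is a placeholder):

# -mindepth 1 keeps /path/dir itself out of the list, -depth lists contents
# before their parent directory, and -print0 / -0 survive spaces and newlines
find /path/dir/ -mindepth 1 -depth -print0 | xargs -0 rm -rf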

This is one of those dark areas of unix that can get sticky fast.

Each of the above examples tickles a long-standing bug in unix that these days people just regard as a cute little personality quirk.

find . | xargs rm

won't work if there are wacky filenames in the directory like newlines or white space. You may even start deleting other files not in the directory if there is a filename with a ; in it. Who knows what happens if there is a filename with a ` in it. Just ask little bobby drop tables. Things can get exciting quickly.

Bill Weiss's comment correctly points out that modern versions of find and xargs properly use nulls as the separator between the things find finds, if you use -print0 with find and -0 with xargs. Not being a trusting sort, and having cut my teeth on older, randomly broken versions of unix, I tend to be wary of these newfangled gnuisms, even though they work quite well and in this case are the correct answer to this specific problem.

rm -r /path/to/directory/* won't work if you've got 10,000 files in that directory.

Now -- mostly I just don't bother to do this right, so I'll use rm -rf and look at the error if there is an error. If I'm 100% sure there aren't wacky files, I might use find and xargs, though I don't really trust those.

If I'm doing it in a script that runs automatically, and I have no idea how long this is going to be used or who is going to use it, I try to do it the right way.

I can't really think of a quick, tidy, and reliable way to do this, but I think I could do it with a Bourne shell script like:

for a in * .*          # covers dotfiles too; rm refuses to act on . and ..
do
  rm -rf "$a"          # quoting protects against spaces and other metacharacters
done

Now -- this is safe because the for loop protects the command line of "rm" from having a billion inputs and the double quotes around the variable protect it from wacky things like escape characters or semicolons or other meta garbage. It is also far slower than the find + xargs.

So I guess the right answer is "There is no program to do that. You have to write a program to do that reliably." I guess that's what Stallman et al. did with find and xargs...

chris
  • find . -print0 | xargs -0 rm – Bill Weiss Jun 23 '09 at 17:51
  • Your shell script is emphatically not safe, and will blow away the parent directory of the current directory. – chaos Jun 23 '09 at 17:56
  • Really? rm -rf .* expands to .. and ., but which versions of rm allow you to unlink .. ? (I will freely admit that I don't have access to any exotic unixes right now to test this). – chris Jun 23 '09 at 18:01
  • Ah, you're right. Forgot that rm protects against that case. You're not in Arie Karhendana's world of hurt, then. :) – chaos Jun 23 '09 at 18:05
  • I've seen all kinds of horrible things go wrong with .. and recursive commands (chown -Rh user .* and now all home directories and their contents are owned by user). rm can't unlink .. because the directory isn't empty and it hasn't been told to enumerate the contents of .. Regardless, Bill Weiss's find + xargs is better (if you trust find and xargs and although I should I don't because I've seen it explode in older buggy versions that aren't likely to be used anywhere anymore but superstition dies hard). – chris Jun 23 '09 at 18:50
  • My system beats 10,000 :-) kbrandt@kbrandt-opadmin:~/scrap/lots$ for i in {1..50000}; do touch $i; done kbrandt@kbrandt-opadmin:~/scrap/lots$ cd .. kbrandt@kbrandt-opadmin:~/scrap$ rm -R lots/* kbrandt@kbrandt-opadmin:~/scrap$ See: http://www.in-ulm.de/~mascheck/various/argmax/ – Kyle Brandt Jun 23 '09 at 19:59
  • Well, the kicker is to do some combination of: while mkdir -p $RANDOM/$RANDOM/$RANDOM/$RANDOM/$RANDOM ; do : ; done & for a in /usr/bin/* ; do mkdir -p ./"$(head -90 < "$a" | tail -7)" ; done Now you've got a real mess on your hands... Oh, and let the while run until it fails (you'll be out of disk space or inodes or you will have hit some other interesting limit). Don't run this as root... – chris Jun 23 '09 at 21:45
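
For the curious, the argument-list ceiling being debated in these comments can be checked directly with the POSIX getconf utility (the exact figure, and how much of it a glob expansion consumes, varies by system; see the argmax link above):

getconf ARG_MAX      # upper bound, in bytes, on the arguments plus environment one exec can carry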

Why not just:

rm -rf directory

You'll get an error message since you don't have permission to remove the directory. But it will also remove everything in the directory, including those troublesome hidden files.

Keith Smith
  • That would do the trick in an interactive session (and avoid the "." file/dir problem mentioned in other comments) but you'd need to be careful using it in scripts where you don't want the extra error reporting (you could redirect stderr to /dev/null in that case, though we are getting messy there). – David Spillett Jun 23 '09 at 18:34
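
A sketch of the script-friendly variant hinted at in that comment (the path is a placeholder; note that 2>/dev/null hides every rm error, not just the expected complaint about the directory itself):

rm -rf /path/to/directory 2>/dev/null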

I believe one could use: rm -rf /test/*

Rajat

You can try

# rm -rf [dir]/* [dir]/.*

Arie K