
I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

alias rm='rm -i' and --preserve-root by default didn't save me (with the star, the shell expands /* into /bin /boot /dev and so on, so rm never sees / itself and --preserve-root never triggers), so are there any automatic safeguards for this?


I wasn't root and I cancelled the command immediately, but some files must have had relaxed permissions somewhere, because I noticed that my Bash prompt was already broken. I don't want to rely on permissions and on not being root (I could make the same mistake with sudo), and I don't want to hunt for mysterious bugs because of one missing file somewhere in the system. Backups and sudo are good, but I would like something better for this specific case.


About thinking twice and using the brain: I am using it, actually! But I'm using it to solve some complex programming task involving ten different things. I'm immersed in that task deeply enough that there isn't any brain power left for checking flags and paths; I don't even think in terms of commands and arguments, I think in terms of actions like 'empty current dir', and a different part of my brain translates them into commands. Sometimes it makes mistakes, and I want the computer to correct them, at least the dangerous ones.

Valentin Nemcev
    FYI, you can also do `rm -rf . /mydir` instead of `rm -rf ./mydir` and kill whatever directory you were in. I find this happens more often. – user606723 Dec 02 '11 at 18:01
    To use a gun analogy, this question says please make the gun recognize that I am aiming at my foot and not fire, but I don't want to have any responsibility for not aiming the gun at my foot in the first place. Guns, and computers, are stupid and if you do a stupid thing then you will get these results. Following along the gun analogy, nothing will keep you from hurting yourself except vigilance and practice. – slillibri Dec 02 '11 at 19:01
    @slillibri Except that `rm` is not a gun, it is a computer program, it *could* be smart enough to determine that the user is going to delete some important files and issue a warning (like it actually does if you try to do `rm -rf /` without star). – Valentin Nemcev Dec 02 '11 at 19:40
    @slillibri Guns have safeties. Asking how to put better safeties on the `rm` command is a perfectly legitimate sysadmin question. – Gilles 'SO- stop being evil' Dec 02 '11 at 20:13
    @slillibri this is less akin to asking how not to shoot myself... and more akin to asking how to protect anyone from getting shot. *YOU* may know how not to shoot your own foot... but what about your stupid use... rs... coworke... ers... I mean... 8 year old kid who is acting out a video game? If you have a gun in the house, it best have a two locks and an alarm to alert you... This is no different. Protect your assets (Family and priceless data). – WernerCD Dec 02 '11 at 21:27
    Maybe this is a silly suggestion, but why not use a tool like mc (midnight commander)? With mc you are always asked for confirmation when you want to delete a directory. – Giorgio Dec 02 '11 at 21:35
  • @Giorgio I'll try to use vim file-manager more often :) – Valentin Nemcev Dec 02 '11 at 22:56
    sudo rm /bin/rm *not recommended, but will prevent most rm's* :-) – Paul Dec 03 '11 at 04:58
    @Gilles rm had safeties, by adding -r and -f, those safeties were removed. (-r allows rm to use readdir/rmdir, -f allows rm to use chmod). Using the gun analogy and adding a dash of hyperbole: this is (in my opinion) akin to asking how to avoid shooting somebody when pointing the gun at them (arguments to rm), turning off the safety (-rf) and pulling the trigger (rm). – Kjetil Joergensen Dec 04 '11 at 02:32
    @ValentinNemcev Doing an accidental `rm -rf /*` is an age-old Unix rite of passage! Now it's time for you to [learn the `find` command](http://serverfault.com/a/363816/93109) to save yourself from this kind of grief in the future. Certainly you should avoid some of the bad advice that is found in the answers to this question. Suggestions such as using specially-named `-i` file are akin to telling a kid learning to ride a bike to never pedal, just push on the ground with your feet, oh and also make sure to hold in the brake lever all the time. If you want to ride with the big boys, use `find`. – aculich Feb 26 '12 at 16:07
    This `rm` & `gun` analogy is horrible. `rm` is something you use a dozen times a day - with & without safety in your regular programming life even if you aren't from Texas. Please don't make it a gun debate ;) – user Jul 01 '13 at 07:23
  • @Paul but what if you do `/path/to/rm -rf /bin/ rm`. The `/path/to` bit is to stop some idiot running it and trying to report me - it had happened! –  Mar 01 '15 at 09:12
  • what do you mean by "--preserve-root by default didn't save me". it should? what went wrong with --preserve-root ? – meso_2600 Apr 11 '16 at 13:44
    Limit root and sudo to folks that are cautious. Make backups of your data. Always use `set -u` in your bash scripts. If you are working with folks that blow away `/` often, then consider nfs diskless or initrd ram disk diskless booting. There are other ways to make `/` read-only but it gets tricky depending on your setup. – Aaron Nov 22 '16 at 05:08
    type in the command without pressing Enter, check, breath-in, check again, breath-out, check once more, Enter. – InQβ Apr 12 '19 at 08:07
    For what it's worth, many (including myself) consider `alias rm="rm -i"` to be a _dangerous_ practice, rather than a safe one. Here's why: it causes a person to _expect_ that rm will always ask them first whether they really want to do the thing. If they're then on some other system, or logged into a different account (perhaps root!), or whatever, and the alias isn't there... they expect it, don't get it, and a catastrophic removal very likely ensues. instead, use `echo rm` or *type* `rm -i` commands. Making these into *habits* is, IMHO, the best way to prevent these sorts of things. – lindes Feb 21 '21 at 00:38
    @Paul rm is often a shell built-in, so effects are going to be surprisingly limited. – val is still with Monica Jul 18 '21 at 13:06

31 Answers


One of the tricks I follow is to put # at the beginning while using the rm command.

root@localhost:~# #rm -rf /

This prevents accidental execution of rm on the wrong file/directory. Once verified, remove # from the beginning. This trick works, because in Bash a word beginning with # causes that word and all remaining characters on that line to be ignored. So the command is simply ignored.

OR

If you want to protect an important directory, there is one more trick.

Create a file named -i in that directory. How can such an odd file be created? Using touch -- -i or touch ./-i

Now try rm -rf *:

sachin@sachin-ThinkPad-T420:~$ touch {1..4}
sachin@sachin-ThinkPad-T420:~$ touch -- -i
sachin@sachin-ThinkPad-T420:~$ ls
1  2  3  4  -i
sachin@sachin-ThinkPad-T420:~$ rm -rf *
rm: remove regular empty file `1'? n
rm: remove regular empty file `2'? 

Here the * will expand -i onto the command line, so your command ultimately becomes rm -rf -i (when both -f and -i are given, the one appearing later on the command line wins, so the expanded -i takes effect). Thus the command will prompt before removal. You can put this file in your /, /home/, /etc/, etc.

OR

Use --preserve-root as an option to rm. In the rm included in newer coreutils packages, this option is the default.

--preserve-root
              do not remove `/' (default)

OR

Use safe-rm

Excerpt from the web site:

Safe-rm is a safety tool intended to prevent the accidental deletion of important files by replacing /bin/rm with a wrapper, which checks the given arguments against a configurable blacklist of files and directories that should never be removed.

Users who attempt to delete one of these protected files or directories will not be able to do so and will be shown a warning message instead:

$ rm -rf /usr
Skipping /usr
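
The protected list is configurable. A hedged sketch of adding an entry (the protected path is made up; /etc/safe-rm.conf is the configuration file documented by the project):

$ echo /home/valentin/important-project | sudo tee -a /etc/safe-rm.conf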
Sachin Divekar
    safe-rm looks very good, looking into it now... – Valentin Nemcev Dec 02 '11 at 17:52
    safe-rm is neat. Also that's a nifty trick with the `-i` file. Hah. Silly bash. – EricR Dec 02 '11 at 20:00
    Amazing what kinda trickery is done in unix. – WernerCD Dec 02 '11 at 21:28
  • I use `Alt` + `#` to comment out commands. Use it a couple of times and it becomes second nature to you. – wnrph Dec 02 '11 at 21:53
    The creating file named -i is absolutely pure genius. I could've used that about a year ago when I accidentally ran an rm -rf /etc/* on VPS... (fortunately, I take nightly snapshots, so was able to restore in under 45 minutes). – David W Dec 03 '11 at 04:42
    @SachinDivekar: What you call a "regex" is in fact a glob. If "/dir/*.conf" were a regex, it would match "/dir///.conf" and "/dirxconf" but not "/dir/myfile.conf". – bukzor Dec 03 '11 at 07:28
    Sachin, at the risk of sounding peevish, it's a bit lame to come back and edit a copy of someone else's answer into your own. Your answer was a very good one without needing to harvest other people's ideas to bulk it out - have the confidence to let it stand on its own merits! – MadHatter Dec 03 '11 at 08:26
    @MadHatter sorry and thanks for opening my eyes. I got my lesson. – Sachin Divekar Dec 03 '11 at 08:47
  • Not to worry, and thanks for taking the criticism so well. I look forward to reading lots more of *your* wise answers on SF in the future! – MadHatter Dec 03 '11 at 10:30
  • It's strange there's no recycle bin feature to get around all of this, even if it was a command line that just moved the folder recursively to a ~/rubbish folder – Chris S Dec 03 '11 at 14:50
    The `#rm` idea doesn't work that well on `zsh` which will helpfully offer to spell-correct that to `rm` for you! But you do get that one extra chance to hit `^C` – Ben Jackson Dec 03 '11 at 19:56
  • @bukzor +1 you are right. while on command-line, * is not regex, its a glob, used by bash for pathname expansion. – Sachin Divekar Dec 04 '11 at 05:10
    a file named "- i"? Genius or Sorcery I'm still trying to decide. – Dark Star1 Dec 04 '11 at 20:32
    It is genius. Sorcery would be `touch -- -rf` – Mircea Vutcovici Dec 05 '11 at 19:03
  • While safe-rm is a great idea, it's bound to fail. It works fine on your own systems where you know it's installed, but start managing another system where you assume it is and it is not, and you're in trouble. You effectively train yourself that safe-rm will save you, and you become less careful. So, be careful with all of these tricks. – apgwoz Dec 13 '11 at 13:10
    @SachinDivekar how did this answer get voted so high? `safe-rm` and `---preserve-root` are okay suggestions, but prepending `#` doesn't really seem to be all that effective. My real gripe, though, is with this special `-i` file business which is, quite simply, **bad advice**! Littering your filesystem with odd-named files really is not helpful, especially when there is a [simple, effective, general solution using the `find` command](http://serverfault.com/a/363816/93109) to preview and then delete: `find | less` and then `find -delete`. – aculich Feb 26 '12 at 15:57
    @aculich, I have +1 your answer, its very simple and effective. I just provided possibilities of what can be done to prevent firing `rm -rf` accidently on wrong files. Somebody can use these tricks somewhere else. – Sachin Divekar Feb 27 '12 at 10:59
  • Simple solution here: http://superuser.com/a/765214/144242. It uses `safe-rm` and asks you before deleting each file. I hope it helps someone :) – Paschalis Jun 08 '14 at 17:16
  • @apgwoz and everyone: you can alias rm (safe-rm, -i, --perserve-root, whatever) to something like "myrm". That way, when you're on another system, you won't be depending on your rm customizations. – Hawkeye Parker Aug 27 '14 at 06:27
  • It's true, you could do that. The suggestion of using `find` instead of `-r` and making sure it's selecting the files you're after is better advice in my opinion though. – apgwoz Aug 27 '14 at 17:01
  • Listen to http://www.bsdnow.tv/episodes/2015_08_19-ubuntu_slaughters_kittens around 1:21:05 for an interesting and fun discussion about `rm -rf /` – user454322 Sep 22 '15 at 08:05
    Moreover to this discussion (Linux) -- I also use 'trash-cli' alongside with safe-rm; this give me another layer of protection since any files that are removed via command-line are "trashed" into 'trash can' on the desktop-gui. Indeed - `rm` is the command that must never be underestimated. – Faron Sep 23 '15 at 04:17
  • Is there any way to create `touch ./-i` as a hidden file under all folders? – alper Jul 11 '20 at 22:46

Your problem:

I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).

The solution: Don't do that! As a matter of practice, don't use ./ at the beginning of a path. The slashes add no value to the command and will only cause confusion.

./* means the same thing as *, so the above command is better written as:

rm -rf *

Here's a related problem. I see the following expression often, where someone assumed that FOO is set to something like /home/puppies. I saw this just today actually, in the documentation from a major software vendor.

rm -rf $FOO/

But if FOO is not set, this will evaluate to rm -rf /, which will attempt to remove all files on your system. The trailing slash is unnecessary, so as a matter of practice don't use it.

The following will do the same thing, and is less likely to corrupt your system:

rm -rf $FOO
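
As several comments below point out, Bash can also catch the unset variable itself; a minimal sketch:

set -u                         # abort on expansion of any unset variable
rm -rf "${FOO:?FOO is unset}"  # :? aborts the command instead of letting it become rm -rf /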

I've learned these tips the hard way. When I had my first superuser account 14 years ago, I accidentally ran rm -rf $FOO/ from within a shell script and destroyed a system. The 4 other sysadmins looked at this and said, 'Yup. Everyone does that once. Now here's your install media (36 floppy disks). Go fix it.'

Other people here recommend solutions like --preserve-root and safe-rm. However, these solutions are not present on all Un*x variants and may not work on Solaris, FreeBSD and Mac OS X. In addition, safe-rm requires that you install additional packages on every single Linux system that you use. If you rely on safe-rm, what happens when you start a new job and they don't have safe-rm installed? These tools are a crutch; it's much better to rely on known defaults and improve your work habits.

Stefan Lasiewski
    My friend told me he never uses `rm -rf *`. He always changes the directory first, and uses a specific target. The reason is that he uses the shell's history a lot, and he is worried that having such a command in his history might pop up at the wrong time. – haggai_e Dec 04 '11 at 15:56
  • @haggai_e: Good tip. When I was new to Unix, I once ran into a bug where `rm -rf *` also removed `.` and `..`. I was root, and this traversed into lower directories like `../../..`, and was quite destructive. I try to be very careful with `rm -rf *` ever since. – Stefan Lasiewski Dec 06 '11 at 19:38
    `rm -rf $FOO` won't help if you need to `rm -rf $FOO/$BAR`. `cd $FOO && rm -rf $BAR` will help, though it's way longer. – Victor Sergienko Dec 26 '13 at 17:26
    @VictorSergienko, with bash, how about specifying `${FOO:?}`, as in `rm -rf ${FOO:?}/` and `rm -rf ${FOO:?}/${BAR:?}`. It will prevent it from ever translating into `rm -rf /`. I have some more info about this in my answer [here](http://stackoverflow.com/a/22843897/832230). – Asclepius Feb 10 '15 at 20:48
    @haggai_e: I find this one of the best advises on this topic. I burned my finger by using `rm -rf *` in a for loop which changed to the wrong directory by mistake and ended up deleting something else. If I would have used a specific target it would have had a lot smaller chance to delete the wrong thing. – richk Mar 30 '15 at 14:29
  • "if `FOO` is not set, this will evaluate to empty string" - this is just not true in some cases. `Bash` and `Zsh` have some really useful options. One of them is `-u` (Treat unset variables as an error when substituting.). This one can be set by simply calling `set -u`. This way you can make yourself safe in case when you try to use undefined variable. – Victor Yarema Jan 05 '18 at 01:49
  • @VictorYarema `set -u` is a must have, but you'll still need `${FOO:?}` if there was a previous `FOO=`. – André Werlang Apr 13 '20 at 22:37
    @VictorSergienko if `$FOO` is empty and `$BAR` is either `.` or `/`, congratulations on your new empty home directory. – André Werlang Apr 13 '20 at 23:46
    I think it's the most common case when rm -rf * is triggered in a script when some folder-variable didn't exist or its value wasn't calculated correctly. Happened to me today. Fortunately I recognized it soon enough, killed the script and the most important content wasn't deleted from my system. Only a regular backup is the solution in my opinion. Most important content - every day, less important - once a week as example. – ka3ak Jul 04 '20 at 18:43
  • This should be the correct answer: don't use `/` at all when doing `rm -rf`. I realized this after 2 years and did it wrong the whole time. There is no need for a trailing or prepended slash (with or without dot) in 90% of cases. – Alex Jan 05 '22 at 18:48

Since this is on "Serverfault", I'd like to say this:

If you have dozens or more servers, with a largish team of admins/users, someone is going to rm -rf or chown the wrong directory.

You should have a plan for getting the affected service back up with the least possible MTTR (mean time to recovery).

Not Now
  • And you should use a VM or spare box to practice recoveries - find out what didn't work and refine said plan. We are getting into a fortnightly reboot - because there have been power outages in our building, and every time it has been painful. By doing a few planned shutdowns of all the racks, we've cut it from a few days of running around to about 3 hours now - each time we learn which bits to automate/fix init.d scripts for etc. – Danny Staple Dec 03 '11 at 10:19
    And try this command on a VM. It's interesting! But take a snapshot first. – Stefan Lasiewski Dec 04 '11 at 01:57
  • a great case for snapshottable filesystems, with roll back. ZFS has this feature... https://docs.oracle.com/cd/E19253-01/819-5461/gbcxk/index.html – The Unix Janitor Jul 13 '20 at 14:21

The best solutions involve changing your habits not to use rm directly.

One approach is to run echo rm -rf /stuff/with/wildcards* first. Check that the output from the wildcards looks reasonable, then use the shell's history to execute the previous command without the echo.
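
For example, in Bash (the history designator !* expands to every word of the previous command except the first, i.e. everything after the echo; output is illustrative):

$ echo rm -rf /stuff/with/wildcards*
rm -rf /stuff/with/wildcards1 /stuff/with/wildcards2
$ !*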

Another approach is to limit the rm command to cases where it's blindingly obvious what you'll be deleting. Rather than remove all the files in a directory, remove the directory and create a new one. A good method is to rename the existing directory to DELETE-foo, then create a new directory foo with appropriate permissions, and finally remove DELETE-foo. A side benefit of this method is that the command that's entered in your history is rm -rf DELETE-foo.

cd ..
mv somedir DELETE-somedir
mkdir somedir                 # or rsync -dgop DELETE-somedir somedir to preserve permissions
ls DELETE-somedir             # just to make sure we're deleting the right thing
rm -rf DELETE-somedir

If you really insist on deleting a bunch of files because you need the directory to remain (because it must always exist, or because you wouldn't have the permission to recreate it), move the files to a different directory, and delete that directory.

mkdir ../DELETE_ME
mv * ../DELETE_ME
ls ../DELETE_ME
rm -rf ../DELETE_ME

(Hit that Alt+. key.)

Deleting a directory from inside would be attractive, because rm -rf . is short and hence has a low risk of typos. Typical systems don't let you do that, unfortunately. You can use rm -rf -- "$PWD" instead, with a higher risk of typos, but most of them lead to removing nothing. Beware that this leaves a dangerous command in your shell history.

Whenever you can, use version control. You don't rm, you cvs rm or whatever, and that can be undone.

Zsh has options to prompt you before running rm with an argument that lists all files in a directory: rm_star_silent (on by default) prompts before executing rm whatever/*, and rm_star_wait (off by default) adds a 10-second delay during which you cannot confirm. This is of limited use if you intended to remove all the files in some directory, because you'll be expecting the prompt already. It can help prevent typos like rm foo * for rm foo*.
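
Enabling the stricter option is a one-liner in ~/.zshrc (a sketch):

setopt RM_STAR_WAIT    # wait 10 seconds before accepting confirmation of rm with *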

There are many more solutions floating around that involve changing the rm command. A limitation of this approach is that one day you'll be on a machine with the real rm and you'll automatically call rm, safe in your expectation of a confirmation… and next thing you'll be restoring backups.

  • `mv -t DELETE_ME -- *` is a bit more foolproof. – Tobu Dec 04 '11 at 14:57
  • @Giles Not using `rm` directly is good advice! An even better alternative is to [use the `find` command](http://serverfault.com/a/363816/93109). – aculich Feb 26 '12 at 06:57
    And if you need the directory to remain you can do that quite simply by using `find somedir -type f -delete` which will delete all files in `somedir` but will leave the directory and all subdirectories. – aculich Feb 26 '12 at 15:45
  • Would it be safe to write a script for `echo rm -rf /stuff/with/wildcards*` to continue if `y` pressed and perform `rm -rf /stuff/with/wildcards*`? @Gilles 'SO- stop being evil' – alper Jul 11 '20 at 23:35
    @alper It wouldn't cause additional harm, but it wouldn't help either, because typing `y` would become a reflex. A failsafe is only useful if it adds a safety check, not if it just adds an automatic step. – Gilles 'SO- stop being evil' Jul 12 '20 at 22:26
  • You are right, I never think in the perspective off automatic reflex. Maybe just copying previous command into clipboard, and paste it right away seems like a better option. @Gilles 'SO- stop being evil' – alper Jul 13 '20 at 14:19

You could always do an alias, as you mentioned:

what_the_hell_am_i_thinking() {
   echo "Stop." >&2
   echo "Seriously." >&2
   echo "You almost blew up your computer." >&2
   echo 'WHAT WERE YOU THINKING!?!?!' >&2
   echo "Please provide an excuse for yourself below: " 
   read 
   echo "I'm sorry, that's a pathetic excuse. You're fired."
   sleep 2
   telnet nyancat.dakko.us
}

# an alias name can't contain spaces or slashes, so use a wrapper function instead
rm() {
    case "$*" in
        "-fr /"*|"-rf /"*) what_the_hell_am_i_thinking ;;  # crude: fires on any rm -fr /<anything>
        *) command rm "$@" ;;
    esac
}

You could also integrate it with a commandline twitter client to alert your friends about how you almost humiliated yourself by wiping your hard disk with rm -fr /* as root.

Naftuli Kay

There's some really bad advice in this thread, luckily most of it has been voted down.

First of all, when you need to be root, become root - sudo and the various alias tricks will make you weak. And worse, they'll make you careless. Learn to do things the right way, stop depending on aliases to protect you. One day you'll get root on a box which doesn't have your training wheels and blow something up.

Second - when you have root, think of yourself as driving a bus full of school children. Sometimes you can rock out to the song on the radio, but other times you need to look both ways, slow things down, and double check all your mirrors.

Third - You hardly ever really have to rm -rf - more likely you want to mv something something.bak or mkdir _trash && mv something _trash/

Fourth - always ls your wildcard before rm - There's nothing crazy about looking at something before destroying it forever.
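
A minimal illustration of that fourth habit (paths made up):

$ ls -d /srv/app/cache/*     # eyeball exactly what the glob matches
$ rm -rf /srv/app/cache/*    # then delete using the very same glob

The -d flag keeps ls from descending into matched directories, so you see the match list itself.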

eventi
  • @eventi I agree that there is some terrible advice and ugly hacks in this thread. And it's definitely a good idea to look at something before destroying it, but there is an [even better way to do that using the `find` command](http://serverfault.com/a/363816/93109). – aculich Feb 26 '12 at 06:55
    I fail to see how find is simpler or safer, but I like your `find . -name '*~'` example. My point is that `ls` will list the same glob that `rm` will use. – eventi Mar 08 '12 at 00:14

The simplest way to prevent accidental rm -rf /* is to avoid all use of the rm command! In fact, I have always been tempted to run rm /bin/rm to get rid of the command completely! No, I'm not being facetious.

Instead use the -delete option of the find command, but first before deleting the files I recommend previewing what files you'll be deleting:

find | less

Note, in modern versions of find if you leave out the name of a directory, it will implicitly use the current directory, so the above is the equivalent of:

find . | less

Once you're sure these are the files you want to delete you can then add the -delete option:

find path/to/files -delete

So, not only is find safer to use, it is also more expressive, so if you want to delete only certain files in a directory hierarchy that match a particular pattern you could use an expression like this to preview, then delete the files:

find path/to/files -name '*~' | less
find path/to/files -name '*~' -delete

There are lots of good reasons to learn and use find besides just a safer rm, so you'll thank yourself later if you take the time to learn to use find.

aculich
  • Very interesting discussion. I like your approach and made a little snippet. It is super inefficient, since it calls find at most 3 times, but for me this is a nice start: https://github.com/der-Daniel/fdel – Daniel Hitzel Mar 09 '18 at 18:33

Yes: Don't work as root and always think twice before acting.

Also, have a look at something like https://launchpad.net/safe-rm.

Sven

The solution to this problem is to take regular backups. Anytime you produce something you don't want to risk losing, back it up. If you find backing up regularly is too painful, then simplify the process so that it's not painful.

For example, if you work on source code, use a tool like git to mirror the code and keep history on another machine. If you work on documents, have a script that rsyncs your documents to another machine.
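
For example, a one-line sketch of such a script (host name and paths are made up):

rsync -a ~/Documents/ backuphost:backups/documents/

Deliberately run without --delete, so a file you remove locally by mistake still survives on the mirror until you notice.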

David Schwartz
  • A copy-on-write filesystem such as btrfs can help as well. You can easily set up a simple automated snapshot rotation that runs locally (in addition to external backup). – malthe Jan 17 '17 at 15:34

This is a standard of mine specifically for regexps in the context of rm, and it would have saved you in this case.

I always do echo foo*/[0-9]*{bar,baz}* first, to see what the regexp is going to match. Once I have the output, I then go back with command-line editing and change echo to rm -rf. I never, ever use rm -rf on an untested regexp.

MadHatter
    Please compare: http://linuxmanpages.com/man3/regex.3.php http://linuxmanpages.com/man3/glob.3.php – bukzor Dec 03 '11 at 07:32
    OK, what am I looking for? Are you making the point that the regexp syntax for file-matching is different (and sometimes called by a different name) from that used in eg perl? Or some other point that I've missed? I apologise for my slowness of thought, it's first thing Saturday morning here! – MadHatter Dec 03 '11 at 07:52
    These things that you're calling "regexp" are in fact globs. It's not a different regex syntax; it's not a regex. – bukzor Dec 03 '11 at 16:59
    That argument could certainly be made; however, from the wikipedia article on regular expressions, I find that "Many modern computing systems provide wildcard characters in matching filenames from a file system. This is a core capability of many command-line shells and is also known as globbing" - note the use of "also known as", which seems to me to indicate that calling tokens containing metacharacters to match one or more file names regexps isn't wrong. I agree that globbing is a better term because it doesn't mean anything other than the use of regular expressions in filename matching. – MadHatter Dec 04 '11 at 15:11
  • @MadHatter Checking to see what files match before you delete them is good advice, but there is a [safer and more expressive way to do it with the `find` command](http://serverfault.com/a/363816/93109). – aculich Feb 26 '12 at 07:00
    @MadHatter Also, globbing, though visually somewhat similar, is very different semantically from regular expressions. In a regex the meaning of `*` has a very precise definition called the [Kleene Star](http://en.wikipedia.org/wiki/Kleene_star) which is a unary operator that matches zero or more elements of the set to which it is applied (in the case of regular expressesions, the character or set of characters preceding the Kleene Star), whereas in globbing the `*` matches anything in the pattern that follows. They are semantically very different even if they seem to have a similar syntax. – aculich Feb 26 '12 at 07:09
    @MadHatter The distinction between regex and globs is not merely a matter of which character means what, or whether globs are a "flavor" of regex; standard globs in fact fail the [formal definition of regex](https://en.wikipedia.org/wiki/Regular_expression#Formal_language_theory). – Kyle Strand Feb 05 '15 at 02:06

It seems like the best way to reduce this risk is to have a two-stage delete like most GUIs. That is, replace rm with something that moves things to a trash directory (on the same volume). Then clean that trash after enough time has gone by to notice any mistake.

One such utility, trash-cli, is discussed on the Unix StackExchange, here.
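
With trash-cli installed, everyday use looks roughly like this (command names as shipped in recent trash-cli releases; the directory is made up):

$ trash-put old-builds/    # instead of rm -rf old-builds/
$ trash-list               # review what is currently in the trash
$ trash-restore            # interactively put something back
$ trash-empty 30           # purge entries older than 30 days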

nnutter
  • It's the first thing I install on every machine. It should be the default removal tool, with rm being only used when you need to absolutely remove something right now. I'm sad that it has not yet taken off, but one day it will. Probably after a very public instance of rm causing a huge problem which could not have been addressed by backups. Probably something where the time taken to recover plays a huge factor. – Gerry May 30 '16 at 06:49
  • +1 After using linux for 20 years, I still think there should be some kind of trash-can behaviour for `rm`. – Shovas Mar 07 '18 at 16:35

One important factor in avoiding this type of mistake is not to log in using the root account. When you log in as a normal non-privileged user, you need to use sudo for each command, so you tend to be more careful.

Khaled
    I'm not convinced sudo would prevent something like this. You can make the same typo as the OP, even if you type "sudo" before the "rm". – cjc Dec 02 '11 at 17:29
    Mentioned working as root in edit – Valentin Nemcev Dec 02 '11 at 17:32
  • If you are still not convinced about using `sudo` and backups. Have a look at this page: http://forum.synology.com/wiki/index.php/How_to_create_a_Recycle_Bin_%28or_Trash_can%29_for_the_CLI_rm_command. It talks about creating a recycle bin. Hope this helps! – Khaled Dec 02 '11 at 17:44
    @Khaled I'm using sudo and backups, I just want something better for this specific problem – Valentin Nemcev Dec 02 '11 at 17:54

When I delete a directory recursively, I put the -r, and -f if applicable, at the end of the command, e.g. rm /foo/bar -rf. That way, if I accidentally press Enter too early, without having typed the whole path yet, the command isn't recursive so it's likely harmless. If I bump Enter while trying to type the slash after /foo, I've written rm /foo rather than rm -rf /foo.

That works nicely on systems using the GNU coreutils, but the utilities on some other Unixes don't allow options to be placed at the end like that. Fortunately, I don't use such systems very often.

Wyzard

I like the windows approach of the recycle bin.

I usually create a directory named "/tmp/recyclebin" for everything I need to delete:

mkdir /tmp/recyclebin

And never use rm -rf, I always use:

mv target_folder /tmp/recyclebin

Then later on, I empty the recycle bin using a script or manually.
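
Emptying it can be automated; an illustrative cron entry that purges anything older than a week (keeping in mind that many systems clear /tmp on reboot anyway, which here is arguably a feature):

0 3 * * * find /tmp/recyclebin -mindepth 1 -mtime +7 -delete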

Basil A

It may be complicated, but you can set up roles within SELinux so that even if the user becomes root via sudo su - (or plain su), the ability to delete files can be limited (you have to log in directly as root in order to remove files). If you are using AppArmor, you may be able to do something similar.

Of course, the other solution would be to make sure that you have backups. :)

Rilindo

My deletion process on Unix based machines is as follows.

  • Type ls /path/to/intended/file_or_directory in the terminal window and then hit Return (or Tab, as desired), to see the list of files.

If everything looks good,

  • press the up arrow key to bring ls /path/to/intended/file_or_directory back from the terminal history.

  • replace ls with rm or rm -r or rm -rf, as required. I personally don't like to use the -f flag.

This process of validation also prevents the premature execution of the rm command, something which has happened to me, before I started following this process.

Peter Mortensen
  • Previewing the files first before deleting them is a good idea, and there is an even safer and more expressive way to do it using the `find` [as I explain in my answer](http://serverfault.com/a/363816/93109). – aculich Feb 26 '12 at 06:53

Avoid using globbing. In Bash, you can set noglob. But again, when you move to a system where noglob is not set, you may forget that and proceed as if it were.

Set noclobber to prevent output redirection from destroying files too (it makes > refuse to overwrite an existing file); for mv and cp, use the -i flag to get a prompt instead.
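
In Bash that looks like this (a sketch):

set -o noglob      # same as set -f: leave * and ? unexpanded
set -o noclobber   # same as set -C: > refuses to overwrite an existing file
alias cp='cp -i' mv='mv -i'   # prompt before overwriting anything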

Use a file browser for deletion. Some file browsers offer a trashcan (for example, Konqueror).

Another way of avoiding globbing is as follows. At the command line, I echo filenamepattern >> xxx. Then I edit the file with Vim or vi to check which files are to be deleted (watch out for filename pattern characters in the file names), and then use :%s/^/rm -f / to turn the list into a delete command. Source xxx. This way you see every file that is going to be deleted before doing it.
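
Concretely (the pattern is illustrative):

$ echo /var/log/myapp/*.log >> xxx
$ vi xxx      # verify the expanded list, then :%s/^/rm -f / and save
$ . xxx       # source the file to run the resulting rm command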

Move files to an 'attic' directory or tarball. Or use version control (as said before me).

Peter Mortensen
  • +1 for using some method of previewing your files before you delete them, however there are [simpler and safer ways to do that using the `find` command](http://serverfault.com/a/363816/93109). – aculich Feb 26 '12 at 07:14

ZSH asks me (by default) before performing rm -rf *.

And ZSH also provides a plugin (zsh-safe-rm) to add safe-rm functionality, so that rm puts files in your OS's trash instead of permanently deleting them.

math

Outside of chattr, there aren't a whole lot of safeguards against letting root run such a command. That's why proper groups and careful commands are important when running privileged.

Next time, scope out the files you plan on deleting - omit the 'f' from rm -rf, or use find and pass the results to xargs rm.
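
For the chattr route, a brief sketch (file name made up; works on ext-family filesystems):

# chattr +i precious.conf    # immutable: now even root's rm fails
# rm -f precious.conf
rm: cannot remove 'precious.conf': Operation not permitted
# chattr -i precious.conf    # lift the flag only when you really mean it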

thinice
  • It is good you suggest using `find`, but [I recommend a safer way of using it in my answer](http://serverfault.com/a/363816/93109). There is no need to use `xargs rm` since all modern versions of `find` have the [`-delete` option](http://www.gnu.org/software/findutils/manual/html_mono/find.html#Delete-Files). Also, to safely use `xargs rm` you also need to use `find -print0` and `xargs -0 rm` otherwise you'll have problems when you encounter things like filenames with spaces. – aculich Feb 26 '12 at 05:54
  • My point wasn't about the nuances about xargs but rather using find first, without deleting files and then continuing.. – thinice Feb 26 '12 at 07:33
  • Yes, I think that scoping out files using `find` is a good suggestion, however the nuances of `xargs` are important if you suggest using it, otherwise it leads to confusion and frustration when encountering files with spaces (which is avoided by using the `-delete` option). – aculich Feb 26 '12 at 08:05

Some safety aliases for other commands, to prevent similar disasters, found here:

# safety features
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -I'                    # 'rm -i' prompts for every file
alias ln='ln -i'
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

Notice the uppercase -I, it is different from -i:

prompt once before removing more than three files, or when removing recursively. Less intrusive than -i, while still giving protection against most mistakes

Valentin Nemcev

Just use ZFS to store the files you need to resist accidental removal and have a daemon that:

  • regularly makes snapshots of this file system
  • removes older/unnecessary snapshots.

Should files be removed, overwritten, corrupted, whatever - just roll back your file system to the last good snapshot and you are done.
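
The moving parts are just two commands (pool and dataset names are made up):

zfs snapshot tank/home@before-cleanup    # what the daemon runs on a schedule
zfs rollback tank/home@before-cleanup    # undo the accidental rm in one step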

jlliagre

If you're not in the mood to acquire new habits right now, .bashrc/.profile is a good place to add some tests to check if you are about to do something stupid. I figured in a Bash function I could grep for a pattern that might ruin my day and came up with this:

alias rm='set -f; myrm' # set -f turns off wildcard expansion; it must happen in the alias,
                        # outside the function, so that myrm gets the "raw" string
myrm() {
    ARGV="$*"
    set +f # opposite of set -f: re-enable expansion for the real rm below
    if echo "$ARGV" | grep -q -e '-rf /\*' \
                              -e 'another scary pattern'
    then
        echo "Do Not Operate Heavy Machinery while under the influence of this medication"
        return 1
    else
        /bin/rm $@ # deliberately unquoted: the raw globs saved in $@ expand here, after the check
    fi
}

The good thing about it is that it's only Bash.

It's clearly not generic enough in that form, but I think it has potential, so please post some ideas or comments.

kln
  • It's good you're trying to preview your files before deleting them, however this solution is overly-complicated. You can instead accomplish this very simply in a more generic way [using the `find` command](http://serverfault.com/a/363816/93109). Also, I don't understand why you say "the good thing about it is that it's only Bash"? It is recommended to [avoid bash-isms in scripts](http://unix.stackexchange.com/questions/24146/avoiding-bash-isms-in-shell-scripts). – aculich Feb 26 '12 at 07:17
  • To prevent us from "rm -rf /*" or "rm -rf dir/ *" when we mean "rm -rf ./*" and "rm -rf dir/*", we have to detect the patterns " /*" and " *" (simplistically). But we can't just pass all the command line arguments through grep looking for some harmful pattern, because bash expands the wildcard arguments before passing them on (star will be expanded to all the contents of a folder). We need the "raw" argument string. That's done with set -f before we invoke the "myrm" function, which is then passed the raw argument string, and grep looks for predefined patterns. – kln Feb 26 '12 at 17:57
  • I understand what you are trying to do with `set -f` which is equivalently `set -o noglob` in Bash, but that still doesn't explain your statement that "The good thing about it is that it's only Bash". Instead you can eliminate the problem entirely and in a generic way for any shell by not using `rm` at all, but rather [using the `find` command](http://serverfault.com/a/363816/93109). Have you actually tried that suggestion to see how it compares with what you suggest here? – aculich Feb 26 '12 at 18:27
  • @aculich by only bash I mean no python or perl dependencies, everything can be done in bash. Once I amend my .bashrc I can continue working without having to break old habits. Every time I invoke rm bash will make sure I don't do something stupid. I just have to define some patterns that I want to be alerted of. Like " *" which would remove everything in the current folder. Every now and again that will be exactly what I want, but with a bit more work interactivity can be added to "myrm". – kln Feb 26 '12 at 18:29
  • @aculich OK gotcha. No, I haven't tried it. I think it requires a significant change in workflow. Just checked here on Mac OS X: my .bash_history is 500 and 27 of those commands are rm. And these days I don't use a terminal very often. – kln Feb 26 '12 at 18:40
  • How is `find -delete` or `find dir -delete` a significant change in workflow? It accomplishes exactly the same thing as `rm -rf ./*` and `rm -rf dir/*` without being prone to globbing errors or needing rubegoldberg-esque functions defined in `.profile`, plus if you want to preview the list of files before you delete them, just remove the `-delete` from the `find` command. – aculich Feb 26 '12 at 19:02
  • It's not true that those are equivalent. BSD find doesn't default to the current dir, and * doesn't expand to dot files, so find -delete is not equivalent to rm -rf ./*, and find dir -delete is equivalent to rm -rf dir and not rm -rf dir/*. Those are very minute differences, but I still need to adjust to a new way of thinking about things. Another thing: let's say I'm editing a big project tree with thousands of files and dozens of levels in the file hierarchy; "find dir1/dir1/dir2 dir1" is pretty close to "find dir1 dir1/dir/2 dir/1" – kln Feb 26 '12 at 20:00
  • I don't preview a list of files anywhere in my solution. If there is no \* after a space on the command line (any other pattern can be defined) the user won't notice anything different. – kln Feb 26 '12 at 20:09
  • It's clear you don't preview a list of files in your solution... that is exactly the point I'm making with `find` that you can do that easily simply by leaving off the `-delete`. – aculich Feb 26 '12 at 20:21
  • Sure, so BSD `find` doesn't allow you to omit the directory, so you have `find . -delete` instead of `find -delete`. Also, the '*' glob may or may not expand dot files... it depends on a setting, which by default matches the behavior you describe, but if the system or someone has set `shopt -s dotglob` then it will expand dot files, too. If you are actually dealing with thousands of files you may also run up against the "Argument list too long" error, but that's another one you can avoid by using `find`. – aculich Feb 26 '12 at 20:33
  • Aha, I think now I understand what you mean. In essence you want to get rid of globbing errors by removing the need to use the *. Is that right? You build removing, which is a potentially dangerous thing, to be inconsistent with the rest of the system, which relies on globbing, thus telling the user to be careful with it. – kln Feb 26 '12 at 22:48
  • My intent is to provide a safe, general, effective, extensible answer to the original question: "How do I prevent accidental rm -rf /*?" Using `find . -delete` is safe(r) because it avoids this very common accidental mistake. It is also safer because makes it easy to preview the file list before deleting. It is a general method that works on any unix system and is not shell-dependent. It is effective because it accomplishes the same thing as `rm -rf ./*` but is more extensible, for example adding `-iname '*~' makes it easy to delete all *~ files in all subdirectories. How would rm do that? – aculich Feb 26 '12 at 23:28
  • well, if you just want to prevent an accidental "rm -rf /*", all you need to do is to tell the shell to look for the pattern '-rf /*' (line 6 above). After that you NEVER EVER have to worry about it. And you won't need to re-accustom yourself to some totally new way of doing things. And as for previewing the file list, you could easily do that with less typing. But if you're going to bother to do that, why use find. Your solution is not a straight drop-in replacement for rm. Yeah, find can find all '*~' but what if you mess up the regex. You still have to check. Take another look at my answer. It's very extendable – kln Feb 27 '12 at 13:02

In addition to all the other solutions here, when doing a large rm I usually use the -v flag to see what is being deleted, giving me a chance to hit ^C quickly if I have the slightest doubt. It's not really a way to prevent bad rm's, but it can be useful to limit the damage in case something goes wrong.
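
For example, with GNU rm (paths illustrative):

$ rm -rfv build/
removed 'build/app.o'
removed directory 'build'

Each name scrolls past as it is deleted, so a wrong path tends to become obvious within the first few lines.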

a3nm

Sadly, I cannot leave a comment above due to insufficient karma, but wanted to warn others that safe-rm is not a panacea for accidental mass-deletion nightmares.

The following was tested in a Linux Mint 17.1 virtual machine (warning to those unfamiliar with these commands: DON'T DO THIS! Actually, even those familiar with these commands should/would probably never do this to avoid catastrophic data loss):

Text version (condensed):

$ cd /
$ sudo safe-rm -rf *
$ ls
bash: /bin/ls: No such file or directory


Miles Wolbe

I think this is a powerful prevention tip, using the shell's * expansion shortcut:

First, type rm -rf * or rm -rf your/path/*, but DON'T press Enter yet. (Of course, you should be in the habit of taking care not to press Enter prematurely/accidentally when typing an rm -rf command.)

Then, press Alt-Shift-8 (i.e. Alt-*) to expand the "*" wildcard explicitly in bash. This also avoids re-entering an "rm -rf *" command when navigating the history.

Finally, after checking that the expansion contains the right files/directories, press Enter.

Done.

Johnny Wong

In case this helps someone out there:

1. Use rmsafe:

It moves files to a "trash" folder and you always have the chance to bring them back with a simple mv:

$ rmsafe /path/to/important/files

Source: https://github.com/pendashteh/rmsafe

2. Use safe:

You can set an alias for rm using safe:

$ alias rm="safe rm"

Now if you run rm /* you get this in response:

$ rm /*
Are you sure you want to 'rm /bin /boot /data /dev /etc /home /initrd.img /lib /lib64 /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var'? [y/n]

and I believe you won't type y!

Source: https://github.com/pendashteh/safe

Alexar

If you really are that careless at the shell prompt, or just having a bad day.. then a shell alias of rm to mv can save you from time to time.

https://unix.stackexchange.com/questions/379138/aliasing-rm-to-create-a-cli-recycle-bin

The Unix Janitor

Hehe (untested and somewhat facetious!):

$ cat /usr/local/bin/saferm

#! /bin/bash

/bin/ls -- "$@"

echo "Those be the files you're about to delete."
echo "Do you want to proceed (y/N)?"

read userresponse

if [ "$userresponse" -eq "y" ]; then

  echo "Executing...."
  /bin/rm -- "$@"

fi

And then:

alias rm="/usr/local/bin/saferm"

Realistically, you should have a mental pause before executing that sort of operation with a glob, whether you're running as root, prepending "sudo" to it, etc. You can run an "ls" on the same glob, etc., but, mentally, you should stop for a sec, make sure you've typed what you wanted, make sure what you want is actually what you want, etc. I suppose this is something that's mainly learned by destroying something in the first year as a Unix SA, in the same way that the hot burner is a good teacher in telling you that something on the stove may be hot.

And make sure you have good backups!

cjc
  • I try thinking twice before doing dangerous things, but somehow it doesn't always work, I've destroyed thing in the past because of inattention like this. – Valentin Nemcev Dec 02 '11 at 17:57

Also, not as a safeguard but as a way to find out what files were deleted before you hit ^C, you can use the locate database (of course, only if it was installed and survived the rm).

I've learned about it from this blog post
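
For example, if /data/projects was just wiped (path illustrative), the stale index can still tell you what lived there:

$ locate /data/projects/ | less    # reads yesterday's database, not the now-empty disk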

Valentin Nemcev

Not so much an answer as a tip: I always do rm (dir) -rf, not rm -rf (dir). In other words: don't go nuclear until the last possible moment.

It helps mitigate situations in which you fat-finger the dir name in such a way that it's still a valid deletion, such as slipping and hitting the Enter key.

Sirex
  • Smart, but would not work on BSD `rm`, where options must come before file names. – gnucchi Jun 18 '18 at 11:44
  • yeah. i found that out using apple's recently. The fix is to install the gnu tools and set aliases for everything :) and/or preferably throw the apple in a trashcan. :) – Sirex Jun 18 '18 at 21:55
  • [Why throw it when you can hack it? ;)](https://penguindreams.org/images/macbook-linux/gentoo-mac.jpg) – gnucchi Jun 22 '18 at 14:47
    if i was allowed to i would, in a nanosecond. It's garbage compared to linux. – Sirex Jun 24 '18 at 21:23

I simply made my own script that warns and asks for confirmation in various situations; it's up to you how to improve it:

#!/bin/bash
# install: sudo mv /usr/bin/rm to /usr/bin/rm-original, then put this script in its place

path=${!#} # last command-line argument (a bashism, hence the bash shebang)

color_yel="\x1b[1;33m"
color_rst="\x1b[0m"

get_yn() {
    read -r i
    if [ "$i" != "y" ]; then echo "aborted."; exit 1; fi
}

if [ -d "${path}" ]; then
    echo -e "${color_yel}You are deleting a folder, it's potentially dangerous, are you sure (y/n) ?${color_rst}"
    get_yn
fi
/usr/bin/rm-original "$@" # run the real rm once confirmed (or when the target isn't a directory)

Further ideas to improve it:

  • check particular flags and paths
  • move to a trash folder and remove
  • empty old stuff in the trash after some time

So a simple customizable script lets you have any special behavior you like, without installing any additional package.

With some Bash knowledge, you can restrict trashing to certain kinds of things.