34

The command line and scripting are dangerous. Make a little typo with rm -rf and you are in a world of hurt. Confuse prod with stage in the name of the database while running an import script and you are boned (if they are on the same server, which is not good, but happens). Same goes for noticing too late that the server you sshed into is not the one you thought it was, after running some commands. You have to respect the Hole Hawg.

I have a few little rituals before running risky commands - like doing a triple take check of the server I'm on. Here's an interesting article on rm safety.

What little rituals, tools and tricks keep you safe on the command line? And I mean objective things, like "first run ls foo*, look at the output of that and then substitute ls with rm -rf to avoid running rm -rf foo * or something like that", not "make sure you know what the command will do".

deadprogrammer
  • 1,661
  • 7
  • 24
  • 25

30 Answers

45

One that works well is using different background colors on your shell for prod/staging/test servers.
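
For bash, a minimal sketch of the idea that could live in ~/.bashrc (the hostname patterns and colours are assumptions - adjust them to your own naming scheme):

# Colour the prompt by environment; the patterns below are hypothetical examples
case "$(hostname -s)" in
    *prod*)  PS1='\[\e[41;97m\]\u@\h:\w\$\[\e[0m\] ' ;;  # white on red for production
    *stage*) PS1='\[\e[43;30m\]\u@\h:\w\$\[\e[0m\] ' ;;  # black on yellow for staging
    *)       PS1='\u@\h:\w\$ ' ;;                        # plain prompt everywhere else
esac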

andyhky
  • 2,652
  • 1
  • 25
  • 26
  • 6
    Yes, and also use bright screaming red or orange whenever you have root privs. – Adam D'Amico May 22 '09 at 16:35
  • 1
    Is there any way to automatically set the colour of remote machine terminals to be different from yours at login time? Using Gnome - maybe this should be a separate question. – Jona May 22 '09 at 17:30
  • 2
    Just have a switch statement that changes your PS1 variable depending on the hostname of the machine. – Neil May 22 '09 at 22:06
  • 2
    For any Windows folks, there's this gem from Sysinternals that will display host info prominently on the wallpaper. http://technet.microsoft.com/en-us/sysinternals/bb897557.aspx?wt.svl=related – squillman May 23 '09 at 14:03
  • Yes yes yes - my production iSeries session is now white on red to prevent me from: pwrdwnsys option(*IMMED) restart(*YES) – Peter T. LaComb Jr. Jun 21 '09 at 02:00
  • helpful advice yes, but a command line trick? no – ericslaw Jun 24 '09 at 17:54
14

Have a back out plan in mind before you start.

  • ZIP up a file/directory instead of deleting it right away (see the sketch after this list)
  • set the (cisco) router to reboot in 'x' number of minutes and don't 'wr' right away
  • make sure the interface you are changing is not the one you entered the system on. This could be the router interface you telnet'd to or the ethernet port you VNC'd to.
  • never login as 'root'
  • make a backup. check that it is good. make another one.
  • ask someone you trust 'Am I about to do something dumb here?'
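
A minimal sketch of "archive first, verify the archive, only then delete" (the paths are hypothetical):

# Create the archive, check that it reads back cleanly, and only then remove the original
backup=/root/olddata-$(date +%F).tar.gz
tar czf "$backup" /srv/olddata && tar tzf "$backup" > /dev/null && rm -rf /srv/olddata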
Peter
  • 5,403
  • 1
  • 25
  • 32
  • 3
    +1 cisco ios don't save until sure it works. Damn I remember the Amiga days when all OS dialogs had "Use", "Save" and "Cancel" - where "Use" would only apply the settings but not save them for the next reboot. That was extremely useful! – Oskar Duveborn May 22 '09 at 23:27
  • An even better solution today I guess is having unlimited undo for all system changes instead - that way you're a lot safer. Of course, if the setting you changed made the system undo feature unusable, you'd be screwed anyway.. hmm ^^ – Oskar Duveborn May 22 '09 at 23:29
  • 1
    +1 for "reload in 5". Saved my butt more than a few times when an ACL change locked me out of a remote router/switch. – Greg Work May 27 '09 at 12:33
  • +1 for the last point - sanity check. Easy to do, and then at least you've got two people with vested interest to fix any problems that occur ;) – Ashley Nov 09 '11 at 08:27
10

I have a low-tech solution to some of these.

I have developed an innate habit of doing the following (when planning to work as root):

  • First, logging in as a normal user, then using sudo su - root to switch to root. I do this as a mental preparation, a reminder to me that I have mentally walked into a very dangerous area and that I should be alert and on my guard at all times. Funny as it sounds, this little ritual alone has saved me a ton of grief by simply reinforcing that I cannot be careless.
  • Each command is typed but the [Return] key is never pressed. Never.
  • No command is ever executed without understanding exactly what it does. If you are doing this without knowing what it does, you are playing Russian roulette with your system.
  • Before pressing the [Return] key, the command that was banged out on the CLI is carefully examined by eye. If there is any hesitation, any hint of a potential issue, it is examined again. If that hesitation persists, the command is left on the line and I alt-F2 to another console to consult man pages, etc. If in a graphical session, I launch a browser and do some searching.
  • No common user is ever handed sudo on my systems, not because I'm a BOFH, but because without preparation and training, this is like giving a loaded gun to a monkey. It's amusing and fun at first, until the monkey looks down the barrel and squeezes...
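
On that last point: if you do have to hand something out, a narrowly scoped sudoers entry limits the blast radius. A minimal sketch (the group name and command are hypothetical; always edit with visudo):

# /etc/sudoers.d/webdev -- lets the webdev group reload Apache and nothing else
%webdev ALL=(root) NOPASSWD: /usr/sbin/service apache2 reload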

When using rm, I always cd to the directory first, then use a prefix of ./ to ensure that the directory is correct, i.e.

cd /usr/some/directory ; rm ./targetfile

or I specify the entire path of the file

rm /usr/some/directory/targetfile

which is a PITA but...better safe than sorry.

Avery Payne
  • 14,326
  • 1
  • 48
  • 87
  • 1
    I only give out sudo for a preselected list of commands, like apache2 reload. Otherwise, users have to go through me. It's a pain in the ass but it's the best defense for running a devbox for 15 people. – Artem Russakovskii May 23 '09 at 18:29
  • 2
    Quote: "but because without preparation and training, this is like giving a loaded gun to a monkey. It's amusing and fun at first, until the monkey looks down the barrel and squeezes..." Actually, it's still funny after that point... just rather messy – Mikeage May 27 '09 at 06:33
  • You should use sudo -i instead of sudo su, and generally using sudo to run specific commands is a fair bit safer. – LapTop006 May 27 '09 at 13:22
  • How do you execute the command if you NEVER hit the return key? – g . Jun 23 '09 at 07:40
  • Surety. You will at some point press it. The idea is to use this mental process as a "safety switch". Only after clearing these hurdles in your head do you press enter. – Avery Payne Jun 23 '09 at 12:44
  • 1
    && is your friend! Rather than doing cd /usr/some/directory ; rm ./targetfile you should cd /usr/some/directory && rm ./targetfile That way you're never going to wind up rm'ing targetfile in your original directory if the cd failed. Doing the full path rm is better, though. – Mike G. Jun 23 '09 at 15:50
  • && is nice, but there are times when you have commands that do not return "standard" error codes, and as such, you are not guaranteed to have a zero result as a successful command completion. I usually do use it when doing other work, but when it's something dangerous, I want to be sure. – Avery Payne Jun 24 '09 at 05:34
10

This one is specific to Windows Powershell.

As a policy we add the following to the machine profile.ps1 on each server. This ensures that the following are true:

  1. Admin PowerShell console windows have a dark red background color
  2. The message "Warning: PowerShell is running as an Administrator." is written at startup
  3. The window title shows whether the shell is x64 or x86 and, for admins, is prefixed with "Administrator: "
  4. Standard utilities (like corporate shell scripts, vim and infozip) are in the path.
$currentPrincipal = New-Object Security.Principal.WindowsPrincipal( [Security.Principal.WindowsIdentity]::GetCurrent() )
& {
    if ($currentPrincipal.IsInRole( [Security.Principal.WindowsBuiltInRole]::Administrator ))
    {
        (get-host).UI.RawUI.Backgroundcolor="DarkRed"
        clear-host
        write-host "Warning: PowerShell is running as an Administrator.`n"
    }

    $utilities = $null
    if( [IntPtr]::size * 8 -eq 64 )
    {
        $host.UI.RawUI.WindowTitle = "Windows PowerShell (x64)" 
        $utilities = "${env:programfiles(x86)}\Utilities"
    }
    else
    {
        $host.UI.RawUI.WindowTitle = "Windows PowerShell (x86)"
        $utilities = "${env:programfiles}\Utilities"
    }
    if( (Test-Path $utilities) -and !($env:path -match [regex]::Escape($utilities)) )
    {
        $env:path = "$utilities;${env:path}"
    }
}

function Prompt
{
    if ($currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator))
    {
        if( !$host.UI.RawUI.WindowTitle.StartsWith( "Administrator: " ) )
        { $Host.UI.RawUI.WindowTitle = "Administrator: " + $host.UI.RawUI.WindowTitle }
    }
    'PS' + $(if ($nestedpromptlevel -ge 1) { '>>' }) + '> '
}
Brian Reiter
  • 860
  • 5
  • 8
  • That's cool - I wish you could easily do something like that in linux across all your servers. – Jason Tan Jun 21 '09 at 02:16
  • 1
    In powershell this is done by editing $pshome/profile.ps1 (machine profile). Why can't you do something equivalent on Linux in /etc/profile? – Brian Reiter Jun 21 '09 at 14:02
  • 2
    Also useful to change $ConfirmPreference to "medium" (defaults to high), and more things will prompt for confirmation. – Richard Jun 22 '09 at 09:15
6

I can agree with all the above answers but I have to stress this very, very important tip:

Know when to avoid multitasking.

ojblass
  • 636
  • 1
  • 9
  • 17
5

There are a few important things to be aware of before making a server change:

  • Make sure I'm on the right server

  • Be aware of **how many people will be affected by this action** (whether you make a mistake or not)

  • Before hitting the 'enter' key, know whether the change can be undone

  • Ask yourself whether this command could disconnect your session (fw rule, bad shutdown, etc...). Make sure there's a fallback way to get back in (especially if you're offsite)

l0c0b0x
  • 11,697
  • 6
  • 46
  • 76
5

I make sure the hostname of the system I'm on is in the bash (or other shell) prompt. If I'm chrooted, I make sure that makes it in there somehow, too.

I was once installing a Gentoo system from within another live Linux distro and accidentally ran a rather destructive command (can't recall what it was ATM - some variant of rm) in the wrong shell, causing a bunch of stuff on the live system to be deleted, rather than stuff from within the chroot. From then on, I always did

export PS1="(chroot) $PS1"

whenever I was working within a chroot.

Tim
  • 1,148
  • 1
  • 14
  • 23
  • 1
    +1 - also, I find it helpful to have the current working directory (or the last n layers of it, if you're working in deeply nested filesystems) in the prompt. – Murali Suriar May 22 '09 at 17:19
  • the official Gentoo handbook suggests exactly that, when chrooting from the live CD to the newly created Gentoo! – cd1 Jun 21 '09 at 03:49
  • CD1: yes, but the x86 quick install guide (http://www.gentoo.org/doc/en/gentoo-x86-quickinstall.xml) doesn't, and that's what I was using at the time. But now I do it reflexively :) – Tim Jun 21 '09 at 04:46
4

If you haven't done it already, alias rm to rm -i

trent
  • 3,094
  • 18
  • 17
  • 7
    No, no, no, no. This is one of the worst things you can do. One day you will find yourself on a box that doesn't have it aliased, or you'll have somehow trashed your env. Learn to use rm -i instead. – olle Jun 22 '09 at 20:37
  • No, no, no, no. Do do this. One day you'll accidentally do the wrong thing and the alias will save you - more often than you'll forget to put -i on the line and delete the wrong thing. – Jerub Jun 25 '09 at 03:41
  • I wouldn't do it if I ever worked on more than one machine... Working with a new machine before it is set up is too big a challenge. – slovon Aug 17 '09 at 09:27
  • The terrible thing is that both @olle and @Jerub are right. Maybe it would be clever to put some, possibly colored, flag in the PS1 that indicates 'safety off'/'safety on'... – ikso May 10 '11 at 10:59
4

Rule 1 - make backups

Rule 2 - NEVER add "molly guard" wrappers to standard commands. Make your own version, sure, but don't take over the name; it'll just bite you when you're on a system you didn't set up.

Prompt tricks like different colour for root and (partial) directory tree are great helpers, but again, ensure you can work without them.

LapTop006
  • 6,466
  • 19
  • 26
4

This may seem counter-intuitive and less 'ardkore, but the best command-line safety tip I have is: If a GUI-mode alternative is available and practical, then USE IT.

Why? Quite simple. GUI-mode usually has a built-in safety net, in the form of "warning - you are about to snargle the freeblefrop, are you sure you want to do this?" Even if not, it slows you down, giving more room for think time. It lets you double-check options more easily before committing to them; you can screenshot before and after states; and it protects you from typos. All good, useful and helpful stuff.

In the classic case of the dreaded "rm -rf", do you think it's easier to accidentally issue it from a GUI or a CLI?

In the end, there's no shame in resorting to a GUI. It won't infallibly prevent major disasters; it's just as possible to be trigger-happy in a GUI as it is in a CLI; but if it saves you even once, it's proved itself worthwhile.

Maximus Minimus
  • 8,937
  • 1
  • 22
  • 36
3

Use common sense, and don't run commands you don't understand. That's all good advice. If you feel like paining yourself with writing out the absolute path of everything you pass to rm, or running anything via sudo, feel free. I'd prefer su -c then. At least it doesn't cache the password. I wouldn't feel comfortable with any regular user being allowed to run things with root privileges without password verification.

There are a few things you can put in your ~/.bashrc to make things a bit safer, such as:

alias srm='rm -i'

allowing you to have a safer alternative to rm...

But in the end, you can and will always screw up. The other day I had a defunct configure script chown my entire /usr/bin folder, breaking several things. Yes, a simple 'make install' of any piece of software with a bug in it may break your system. You are NEVER safe, whatever you do. What I'm getting at is the MOST important thing:

Keep regular backups.

jns
  • 514
  • 4
  • 7
  • 2
    Again- DO NOT EVER ALIAS "rm". You WILL be screwed by it, eventually, when you work on a system that doesn't have it aliased. – SilentW Jun 24 '09 at 18:59
  • A _very_ bad idea. If you ever find yourself on Mac OS X (maybe other platforms?) `srm` is **secure remove**! – morgant May 02 '12 at 19:15
3

Instead of aliasing rm to rm -i, wouldn't it be better to alias, say, remove or saferemove (and use those as your preferred deletion tool)? Then when you use a box that hasn't had this set up, no damage is done.
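
Something like this in ~/.bashrc, for example (the name is only a suggestion):

# A wrapper name you own; on a box that lacks it you just get "command not found"
alias saferemove='rm -i --'
# saferemove foo*   prompts for each match instead of deleting silently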

DBMarcos99
  • 41
  • 4
2

Make sure that you never run commands you find online unless you fully understand what they are doing.

Your system may be different from the poster's, and that could cause a world of hurt.

jjnguy
  • 261
  • 4
  • 11
2

An obvious one for command-line safety from a Unix/Linux perspective is the proper use of the root account.
An rm -rf as root is generally more dangerous than as a user, and using built-in things like sudo rather than logging in as root is vital. A nice simple whoami will usually help with the multiple-personality problem.

That, and prepending echo to any file-changing command, especially if you want to make sure that you got a glob or regex match right.
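
A minimal sketch of both habits (the glob is hypothetical):

whoami                      # confirm which account you are about to do this as
echo rm -rf /var/www/old_*  # let the shell expand the glob and inspect the result
rm -rf /var/www/old_*       # run the real command only once the echo looked right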

Andy
  • 1,493
  • 14
  • 14
2

Having a secondary connection into the machine you're working on can be handy in case you kill your primary session, or do something silly that locks it up... heavy processing etc.

That way you still have access into the machine and can kill your primary session.

Most of the above comments refer to rm, but I've done some stupid things with other commands too...

ifconfig to take down the network - ouch, that requires physical presence to fix.
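
A hedged variant of the "schedule a revert first" trick from the back-out-plan answer can help here; the interface name and addresses below are hypothetical:

# Schedule a command that restores the old config in 5 minutes, then make the change
nohup sh -c 'sleep 300 && ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up' >/dev/null 2>&1 &
ifconfig eth0 192.0.2.20 netmask 255.255.255.0   # the risky change
kill %1   # still connected afterwards? cancel the scheduled revert (assuming it is job %1)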

As for scripting, I generally work in two windows: the first to write the script, the second to test out each line as I write it. Going slowly and carefully, I can make sure that every line works as I expect while I write the code, taking care to keep the variables consistent, etc.

Personally, I don't find the extra prompts for things like rm -i really help. I make most of my mistakes when v. tired, stressed etc, which are the times when I'll just be banging out y and ignoring the prompt anyway. Bad practice perhaps.

Alex
  • 1,103
  • 6
  • 12
2
# Allow only UPDATE and DELETE statements that specify key values
alias mysql="mysql --safe-updates"

A highly recommended alias to have around if you ever use the mysql CLI.

jldugger
  • 14,122
  • 19
  • 73
  • 129
2

If you use bash, try this:

TMOUT=600

in your /root/.bashrc or similar. It logs you out automatically after 10 minutes of inactivity, reducing the chance that you'll flip to a root terminal you've accidentally left open and type something stupid.

Yes, I know you should use sudo to execute root commands - this is just an extra safety net in case you decide to play it risky one day.

Andrew Ferrier
  • 864
  • 9
  • 21
1

Instead of ls I use echo so I can see the full command after the shell has expanded everything. Also, always double quote variables that represent files so your stuff works with file names that might have tabs or spaces.
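
A minimal sketch of both habits (the path is hypothetical):

target="/var/tmp/old reports"   # note the space in the name
echo rm -rf "$target"           # preview the fully expanded command
rm -rf "$target"                # unquoted, this would have hit /var/tmp/old and ./reports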

Kyle Brandt
  • 82,107
  • 71
  • 302
  • 444
1

I avoid the * glob as its own argument whenever possible. Even if I really do mean "remove everything in this directory" I try to be more specific, e.g. rm *.php. It's preemptive damage control in case I accidentally run the same command out of history in another directory.

  • I learned to never "cd dir; rm -rf *", but instead always "rm -rf dir", as specific as possible. – slovon Aug 17 '09 at 09:25
1

If you use multiple variants of an operating system, be very aware of the differences in syntax; what is reasonably safe on one unix variant is extremely dangerous in another.

Example: killall

Linux/FreeBSD/OSX - kills all processes matching the name passed. e.g. "killall apache" kills all apaches, leaving all other processes alone.

Solaris - kills all processes. No, really. From the man page: killall is used by shutdown(1M) to kill all active processes not directly related to the shutdown procedure.
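
One portable habit that sidesteps this particular trap (my suggestion, not from the original answer): pgrep/pkill behave the same way on Linux, the BSDs and Solaris.

pgrep -l httpd   # list what would match first ('httpd' is just an example target)
pkill httpd      # then signal only those processes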

alanc
  • 1,500
  • 9
  • 12
Greg Work
  • 1,956
  • 12
  • 11
1

A great way to make you think about what you're doing is to add something like this to root's bashrc (cshrc, whatever):

unset PATH

That way, you've got to do /bin/rm instead of just "rm". Those extra characters might make you think.

Bill Weiss
  • 10,782
  • 3
  • 37
  • 65
1

For complex globs or regexes, especially with 'find' commands, put echo in front and capture the output to a file. Then you can check that you are really deleting/moving/etc. exactly what you think you are before executing the file with 'source'.

It's also handy for manually adding those edge cases that the regex didn't pick up.
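
A minimal sketch of the approach (the path and pattern are hypothetical):

find /var/log/myapp -name '*.old' -exec echo rm {} \; > /tmp/cleanup.sh
vi /tmp/cleanup.sh       # review the list, prune it, add the edge cases the pattern missed
source /tmp/cleanup.sh   # execute only what survived the review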

Martin Beckett
  • 317
  • 1
  • 2
  • 11
1

A little bit meta to some of the other posts: I use the usual echo/ls steps suggested first, to make sure that the command is selecting the set of files I want or otherwise being interpreted by the shell as intended.

But then I use the command history editing features of the shell in order to retrieve the previous command, and modify only the parts that need to vary.

It doesn't help at all to type each of these commands independently ...

$ ls *.bak
$ echo rm *.bak
$ rm * .bak

... because I accidentally typed a space in the last line and deleted all the files. Instead, I always retrieve the previous line and simply remove the 'echo'.

Zac Thompson
  • 1,023
  • 10
  • 10
1

root user:
Don't be root unless you have to.
If the vendor says it needs to run as root, tell them you are the customer and that you want to run it as non-root.
How many off-the-shelf software packages want root 'just because it's easier'?

habit:
Never, ever use '*' with remove without looking at it three times. It is best to build the habit of using ls -l TargetPattern, then 'rm !$'. The biggest threat is not being where you think you are. I type 'hostname' almost as often as 'ls'!

crutches:
A standard prompt helps a lot, as do aliases like "alias rm='rm -i'", but I often do not have full control over the machines I'm on, so I use an expect wrapper script merely to set my path, prompt, and aliases with '-i'.

find issues:
Using a full path helps, but where that is not possible, cd into a safer location and ALSO use '&&' to ensure that the 'cd' succeeds before you do your find, remove, tar, untar, etc:
example: cd /filesystema && tar cf - . | ( cd /filesystemb && tar xvf - )
Use of '&&' can prevent a tar file from being extracted on top of itself in this case (though here 'rsync' would be better).

removes:
Never remove recursively if you can help it, especially in a script; find and remove with -type f and -name 'pattern' instead. I still live in fear of feeding 'nothing' to xargs... and of using tar and untar to move stuff around (use rsync instead).
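
A minimal sketch of that non-recursive find-and-remove habit (the path and pattern are hypothetical; xargs -r is a GNU extension that guards against the empty-input case):

find /var/tmp/myapp -maxdepth 1 -type f -name '*.tmp' -print    # review the list first
find /var/tmp/myapp -maxdepth 1 -type f -name '*.tmp' -print0 | xargs -0 -r rm --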

ericslaw
  • 1,562
  • 2
  • 13
  • 15
0

Instead of

rm foo*

use

rm -i foo*

This is practical with a handful of files, but not with, say, a whole tarball. That's why aliasing rm will get in your way.

gbarry
  • 615
  • 5
  • 11
  • that's what the -f switch is for: overriding any previous -i switches. If you put this into your .bash_profile or similar shell-initialization script, you won't have to worry about it. Just make sure you want to do that, but it's been said before, and more eloquently. – Kevin M Jun 22 '09 at 18:20
0

Running the command with echo first is a good idea, but it's still prone to typos.

Try using it with an expansion such as !$.

echo foo*
rm -rf !$

The !$ expands to the last word of the last command, so it's equivalent to

echo foo*
rm -rf foo*

There is also !*, which expands to all arguments to the last command.

Indeed, you could do it this way if you prefer

echo rm -rf foo*
!*

(In ksh - and in bash's emacs editing mode - you can also type Esc+period to insert the last word of the previous command.)

Mikel
  • 3,727
  • 2
  • 19
  • 16
0

Use bash and set PS1='\u@\h:\w> '. This expands to username@hostname:/full/working/directory/path>. As mentioned in other answers, you can use expect to set up the environment whenever you log in if you can't update the .profile or .bash_profile files. Changing background colours is easily the best answer though :-)

dr-jan
  • 434
  • 7
  • 16
0

In the case of something like:

ls *.php
echo rm *.php
rm * .php

You could use the substitution operator, like this:
$ ls *.php
<dir listing>

$ ^ls^echo rm (this replaces ls in the previous command with echo rm, keeping the rest of the command line the same)

$ ^echo rm^rm (replace echo rm with just rm, thus you don't have to retype *.php and throw in a space at the wrong time)

^ = shift-6, for those not familiar with it.

Red Five
  • 11
  • 2
  • Ick. I'd rather use my arrow keys and edit the previous line. That way I can see what command is about to run when I press Enter, instead of blindly trusting myself to get the substitution pattern right. – Marius Gedminas May 10 '11 at 12:32
  • For example, after "echo rm *.php" you press the Up arrow, delete the echo, and press Enter. – Marius Gedminas May 10 '11 at 12:32
-1

Ah yes. That age-old trick of sending someone a file through IRC named "-rf" so it ends up in their ~ directory. One little "rm -rf" later (instead of "rm -- -rf") and much laughter ensued as they learned a harsh lesson about not running IRC as root.

x0n
  • 339
  • 2
  • 7
-1

Instead of using rm -rf <dir>, put the -rf at the end, like so: rm <dir> -rf. Think of it as removing the safety after you have aimed and before you fire. That way you are protected if you have a slip of the enter key while typing the directory name (or using tab completion) and have similarly named directories. (Note that this relies on GNU-style option reordering; a BSD rm will treat the trailing -rf as a filename.)

Starfish
  • 2,716
  • 24
  • 28