33

After administering Unix or Unix-like servers, what tools (command-line preferably) do you feel you cannot live without?

HopelessN00b
John T

42 Answers

51

GNU screen - essential when you're managing large numbers of systems and don't want to have a dozen terminal windows open.

Murali Suriar
34

Some I know that I cannot live without...

  • tee - allows simultaneous writing to STDOUT (standard output) and a file. Great for viewing information and logging it for later.

  • top - the task manager of UNIX, gives a great overview of the system.

  • tail -f - allows you to view appended data as a file grows, great for monitoring log files on a server.

  • grep - Global Regular Expression Print, great for searching the system for data in files.

  • df - reports free and used disk space on mounted filesystems.

  • du - reports disk usage of a certain file/directory.

  • less - needed to view man pages! Also useful for viewing command output in an easily seekable manner.

  • vim/Emacs/nano/pico/ed - whatever your text editor of choice may be; self-explanatory why it's needed.
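To make the tee entry concrete, here's a minimal sketch: tee duplicates a stream, so you can watch output live and keep a copy for later (the sample input is made up):

```shell
# tee writes its stdin both to stdout and to a file, so you can
# watch a stream live while logging it for later.
printf 'ok\nERROR disk full\nok\n' | tee /tmp/demo.log | grep ERROR

# The full, unfiltered stream is still in the file:
grep -c . /tmp/demo.log    # counts all three lines
```

The same shape works with live sources, e.g. `some_command | tee run.log` while you watch the terminal.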

Peter Mortensen
John T
  • For those who don't know, 'less' is an updated version of 'more'. more was limiting because you could only move forwards over a file whereas less can scroll backwards too. Ah the humour... :-) – Iain Holder May 04 '09 at 11:18
  • Another useful but little known feature of less is that you can always use the 'v' command to start editing the file you're currently looking at. Mnemonic is 'v' for 'vi'. – dr-jan Jun 01 '09 at 14:20
  • And if you don't like `more` or `less`, you can always try `most`. – drybjed Jun 01 '09 at 14:41
  • My preferred PAGER now is actually w3m. It has all the features of less, and can double as a text mode web browser :-) I have also used a bash function to use vim in read only mode (this way I get nicely colored diffs for instance). The bash function is used just to decide whether to provide '-' (for read the stdin) or not (in case we are paging a file). Works like a charm, except for man pages where nroff does overstrike... – njsf Jun 04 '09 at 20:21
  • hrm. Cannot edit stdin. Damn! That would be /excellent/ – Matt Simmons Jul 15 '09 at 20:40
  • `htop` is a "better" version of `top`. – Alexander Bird Nov 10 '11 at 05:49
26

lsof to determine which processes are using a file or directory (useful when trying to figure out what is preventing a device from being umount'd)

netstat to determine which processes are using network connections (especially useful when trying to figure out which daemon is bound to a certain port)

erichui
19

Learn all the basic tools, but learn Perl.

Perl is ideal for manipulating text, and since Un*x administration revolves around text files, pipes, input and output, Perl is a great fit.

The added bonus is that Perl is cross-platform, and if you have to do some work on a Windows box you have an easily installed language (just drop a Perl directory on the server) that you already know.

And on that train of thought, get Cygwin as well. If you are a Un*x admin and have to work on a Windows box (even your desktop), having ls, rm, grep, sed, tail, etc. saves you a lot of time when switching OSes.

Mark Nold
18
  • sed
  • awk

The forgotten grandfathers of modern systems scripting. I know Perl gets most of the love (along with Bash scripting, Python, Ruby, and [insert your favorite scripting language here]), and don't get me wrong, I love Perl. I make use of it almost daily.

But sed and awk should not be forgotten, overlooked, or ignored. In a lot of cases, sed and awk are the best tools for the job. Quick examples are command-line filtering with sed, and quick-and-dirty log processing with awk. Both could be done in Perl, but would require more work and development time.
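To illustrate both points, two minimal sketches with made-up input: a comment-stripping filter in sed, and a column sum in awk.

```shell
# sed as a command-line filter: drop comment lines from a stream.
printf '# a comment\nvalue=1\n' | sed '/^#/d'    # prints: value=1

# awk for quick-and-dirty log processing: sum the bytes column
# of a fake access log.
printf '200 512\n200 1024\n404 0\n' | awk '{ total += $2 } END { print total }'
# prints: 1536
```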

Christopher Cashell
13

rsync, especially in concert with ssh. It allows simple efficient copying of files from host to host. How did we ever cope without ssh and rsync? :-)

pQd
dr-jan
12

Face it - sooner or later you'll deal with the network as well. mtr, tcpdump and tshark are really useful for seeing what's happening.

Peter Mortensen
pQd
12

Netcat.

  • Test if TCP services are listening.
  • Perform transactions against plaintext protocols such as SMTP.
  • Quick insecure data transfers between machines.
  • Telnet client emulation.

The network Swiss Army knife, as they say.

Dan Carley
  • I've recently started using socat as netcat's replacement and I've been amazed at the number of options it gives you. Definitely worth checking out, despite scary amount of switches and weird syntax. – Marcin Nov 23 '10 at 14:54
9

I use most of the tools already listed, but here's one no one has touched on yet:

Puppet - a system for automating system administration tasks.

Shaun Hess
9

For quick scripts, automation, etc:

  • bash
  • perl

To connect to your *NIX server:

  • OpenSSH (Linux client)
  • PuTTY (Windows client)
Paul
6

A couple of handy tools I haven't seen mentioned yet:

  • dstat --nocolor (overview of cpu-, disk-, net-usage)
  • iftop (nice dynamic overview of network traffic)
  • ccze (colour logfiles nicely)
  • ssh tunnels (can be useful once in a while; see the manual; -R)
  • expect (automate interactive, chatty dialog-driven interfaces; nice if you're in a pinch)
asjo
6

For scripting:

Pablo Santa Cruz
6

Most of the standard ones are included in other answers, so I'll go for non-standard ones:

  • htop — great for process management;
  • pinfo — lynx-like browser for info and man pages.
vartec
5

ClusterSSH

ClusterSSH controls a number of xterm windows via a single graphical console window to allow commands to be interactively run on multiple servers over an ssh connection.

Tom Feiner
4

ssh, Vim, htop, su, Python, ls, cd, screen, du, tar :)

Peter Mortensen
4

pv: Displays the progress of long operations that can be redirected. http://www.ivarch.com/programs/pv.shtml

Useful when you want to monitor something that is going to take ages, like copying/compressing a raw block device over the network (which is how I take paranoid backups of my 8 GB netbook before tinkering with anything major, like tweaking file system settings).

Also: I'll second the votes for ssh, rsync, screen, htop and netcat as mentioned by people above - all of which are more important than pv, but pv had not been mentioned yet. In fact, pv is often a useful addition when piping stuff to or from netcat.

David Spillett
  • `pv` can be a handy tool, but beware of overusing it. Passing the data through it absolutely has a performance impact (all the data has to pass through **another** program). At my last job, we did a lot of log processing. One of the guys decided to start putting pv in all the log processing scripts, until we discovered that it added about 15% to the processing time for them. Now it's only used for jobs that take less than a few minutes, or that have an existing resource limitation (such as crossing a slow network connection). – Christopher Cashell Jan 25 '12 at 23:09
  • Good point Christopher, though I've never seen it cause as much as a 15% performance change (then again, most of what I use `pv` for is disk or network I/O bound rather than CPU/memory bound). The same argument is the key one against excess use of `cat` too (I sometimes use cat when not actually needed just to make things read nicely left-to-right, but the extra in-memory data copying via the pipe and context switching can have a measurable performance impact). – David Spillett Jan 26 '12 at 11:52
  • Yeah, I do the same thing. Most of my excessive `cat` use comes from starting with `cat foo`, followed by hitting the "up" arrow and then adding `| [command]` to my previous line. I know I'm taking a (small) performance hit by keeping the cat in there, but leaving it requires less effort than rewriting/retyping the command to be `[command] < foo`. Not a concern for (most) ad hoc command line work, but not ideal for scripts (same as how I feel towards `pv`, I guess). – Christopher Cashell Jan 30 '12 at 18:39
4

vmstat 1

Gives you a great overview of system behaviour.

3

Most of these tools are made much more powerful by Bash "programmable completion" - you can tab-complete things like command-line options, or the name of a package with "apt-get install". It will also limit tab-completion to relevant files - for example, "unzip" will only complete supported archive files.

It really is the mutt's nuts - if you have never tried it, you probably just need to fiddle with your .bashrc:

if [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
fi

Certainly this is true on Ubuntu and Debian. You may need to get the package on some Linux distributions.

Peter Mortensen
Tom Newton
3

tar pipe!

Piping the output of tar to another utility, to tar running on the same box, or to tar running over SSH is my favorite old-school Unix move for moving files from one place to another.

This also gives you the Windows-style option of copying one folder into another and ending up with a merge of the files from the source and destination directories.
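A sketch of the basic move, using a throwaway directory under /tmp (the paths, and the ssh hostname in the comment, are made up):

```shell
# Copy a tree by piping one tar into another. -C switches directory
# first so the archive holds relative paths; -f - means stdin/stdout.
mkdir -p /tmp/tardemo/src/sub /tmp/tardemo/dst
echo hello > /tmp/tardemo/src/sub/file.txt

tar -C /tmp/tardemo/src -cf - . | tar -C /tmp/tardemo/dst -xf -

cat /tmp/tardemo/dst/sub/file.txt    # prints: hello

# The same idea over SSH (hypothetical host):
#   tar -cf - . | ssh backuphost 'tar -C /srv/copy -xf -'
```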

Jason Luther
3

zsh as a shell

It is especially efficient with grml.org's extensions/setup.

cstamas
3

sudo.

Seriously though, tail -f is useful.

Sophie Alpert
3

iotop is a top-like program to monitor I/O accesses to your disks.

Emilio
3

Some that haven't been mentioned before:

  • head/tail
  • diff
  • pstree
  • tar
  • gzip/bzip
  • watch
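For diff in particular, the unified format (-u) is usually what you want, since it is also what patch consumes; a quick sketch with throwaway files:

```shell
# Two slightly different files...
printf 'alpha\nbeta\n'  > /tmp/before.txt
printf 'alpha\ngamma\n' > /tmp/after.txt

# ...and a unified diff of them. diff exits non-zero when files differ,
# so '|| true' keeps scripts running under 'set -e' happy.
diff -u /tmp/before.txt /tmp/after.txt || true
```

watch pairs well with most of the list too, e.g. `watch df -h` to re-run a command every two seconds.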
CK.
2

A few things overlooked I wanted to mention.

  • vim -d - split-screen console diff that makes it very easy to see the differences between files
  • pdsh - allows you to easily run a command over as many systems as you want, either serially or in parallel. (I am a cluster admin. I can't function without it.)
  • nmon - like top on crack. It gives you a great idea of what is going on on a system on a single screen. You can see disk I/O, network I/O, CPU usage, and memory usage in real time. At the very least, a real fun thing to play with when profiling a system.

Oh, and I forgot to mention: when scripting, I believe you should always use Korn. I hate Korn (not the band; I love the band :-P), but it's literally everywhere. You can take a script and move it between Solaris, AIX, and Linux and not have to worry about whether or not the admin had the decency to install Bash.

Jackalheart
2

The shutdown command.

Peter Mortensen
user237419
2

One tool sometimes very handy is nohup. I use it to run scripts that last for a long time using remote SSH clients.
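A minimal sketch of that pattern (the job here is a stand-in for a real long-running script):

```shell
# nohup makes the command ignore the hangup signal sent when the SSH
# session closes; '&' backgrounds it, and stdout/stderr go to a log.
nohup sh -c 'echo started; echo done' > /tmp/job.log 2>&1 &
wait    # in real use you'd just log out instead of waiting

cat /tmp/job.log
```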

2

I love AWK as well as "for" on the command line.

Especially to build up a list of commands I want to run and then execute them all at once.
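That build-then-execute pattern might look like this (the hostnames are invented): echo the commands first, inspect them, and only then pipe the list to a shell.

```shell
# Step 1: generate the command list and just look at it.
for host in web1 web2 web3; do
    echo "ssh $host uptime"
done

# Step 2: once it reads right, actually run it by appending '| sh':
#   for host in web1 web2 web3; do echo "ssh $host uptime"; done | sh
```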

Peter Mortensen
Brian G
2

  • vi
  • find
  • ssh
  • AWK
  • sed
  • netcat
  • tar
  • ps
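find in particular combines well with the rest of the list; a small sketch on a throwaway tree (paths made up):

```shell
# Build a tiny test tree, then locate *.log files and hand them
# to grep via xargs. -print0/-0 keeps odd filenames safe.
mkdir -p /tmp/finddemo/logs
echo 'ERROR: boom' > /tmp/finddemo/logs/app.log
echo 'all good'    > /tmp/finddemo/notes.txt

find /tmp/finddemo -name '*.log' -print0 | xargs -0 grep -l ERROR
# prints: /tmp/finddemo/logs/app.log
```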
Peter Mortensen
Maxwell
2

man - to read the man pages.

elinks - to check Google, because I sure as hell can't remember everything.

And attention to detail and tenacity, because without them I just waste time.

pjd
2

screen is a must, especially with a good .screenrc file. I have it configured to display visually which window I'm in, and I can move between them with Ctrl+Arrow. For a single SSH session with multiple shells, it is a life saver.

Nasko
2

Some additional answers can be found in this similar question.

Vagnerr
1
  • Bash
  • Vim
  • iostat
  • ps
  • top
  • lsof
  • strace
  • tcpdump
  • netstat
  • find
  • grep
  • Perl
  • sed
  • tail
  • dig
  • traceroute

Where possible, the GNU versions of the above over the proprietary versions.

Peter Mortensen
Jason Tan
1
  • rsync running over ssh to keep things consistent... in multiple directions (-gloptru[n]c)
  • Vim and vimdiff to edit with 'folding' and viewing differences in scripts, logs, etc.
  • Perl and (Ba)sh for scripting and analysis
  • cURL (and maybe Wget) for posting/fetching data from ...
  • Apache to webify them all (or at least create point-n-click admin tools)
Peter Mortensen
ericslaw
1

Perl and Vim. In that order. Anything else, I can use Perl to emulate somehow.

Peter Mortensen
1

All the standard commands and utilities (Bash, grep, sed, AWK, find, xargs, ssh, Vim, etc.)

  • lsof - awesome in so many ways; I love to use it for finding open ports AND the files associated with that process.
  • screen - for multi-session awesome.
  • tcpdump - it's funny how many application problems are really weird network issues.
  • Ruby - makes more sense to me than Perl, and is becoming wildly popular for SA work.
  • Chef - configuration management system.
  • Capistrano - ssh in a for loop, but less crappy. And in Ruby.
  • Rake - more sensible than make.
Peter Mortensen
jtimberman
1

These are the tools I use on a daily basis (as a developer more than a system administrator):

  • zsh
  • lsof
  • ps
  • ack (or grep)
  • find
  • svn
  • Python
  • tar
  • which
  • fortune (a guy has to keep his sanity somehow)
Peter Mortensen
1

Simple, basic, but still essential:

ps - report a snapshot of the current processes.

free - display the amount of free and used memory in the system.

w - show who is logged on and what they are doing.

gimel
1

pkill or ps for killing processes.

If you want to use ps to kill any process with a given name or under a certain directory (or any matching string you require) you can:

kill `ps -ef | grep <blah> | grep -v grep | awk '{print $2}'`
Iain Holder
  • Wouldn't killall(1) do the same thing? – agnul May 04 '09 at 09:29
  • Yes, but this can kill all processes installed under a certain directory too. It's more flexible, I think, and therefore better to use in scripts. I have edited my answer slightly. – Iain Holder May 04 '09 at 11:15
  • Or, better yet, use the full power of awk and lose the greps (also modified to use the newer subprocess notation instead of the older and deprecated backticks): kill $(ps -ef | awk '/program_to_kill/ && !/awk/ {print $2}') – Christopher Cashell May 04 '09 at 21:59
  • To add to my previous comment, rather than explicitly excluding your command from the process listing (done by the original poster with 'grep -v' and by me above with '!/awk/'), you can also pick a letter in your original search pattern and surround it with brackets. That will also cause it to not show up in the results. For example, my above command line modified would be: kill $(ps -ef | awk '/[p]rogram_to_kill/ {print $2}') – Christopher Cashell May 04 '09 at 22:01
1

nmon

Haven't seen anyone mention this yet.

The nmon tool is designed for AIX and Linux performance specialists to use for monitoring and analyzing performance data, including:

  • CPU utilization
  • Memory use
  • Kernel statistics and run queue information
  • Disk I/O rates, transfers, and read/write ratios
  • Free space on file systems
  • Disk adapters
  • Network I/O rates, transfers, and read/write ratios
  • Paging space and paging rates
  • CPU and AIX specification
  • Top processes
  • and more

It can be run in file mode, which generates a big CSV file. IBM also provides an Excel macro for parsing this and turning it into awesome graphs, although you do need a Windows VM for that.

Nagios and Munin for monitoring and graphing.

Kura
0
  • atop - yet another top alternative, great for monitoring changes in processes
  • strace/ltrace - for tracking down those REALLY annoying bugs
  • ldd - track down broken library dependencies
  • cron, logrotate ;)

Of course, beyond the command line, you need Nagios/Cacti/MRTG/etc...

allaryin
0

Learn Vim or Emacs inside and out!

For text editing:

  • grep
  • sed
  • AWK

For network tools:

  • Nmap
  • dig

Peter Mortensen
XTZ
0
    0

munin is a great tool for doing capacity analysis and review, but you need to set it up before you need it. We install it as a standard part of every server install we do.

Sean Reifschneider