How to analyse disk usage on the Linux command line?

99

37

du and df are nice, but I don't know how to filter the data they provide the way I do with SequoiaView. I would like to know which are the largest folders and the largest files at a glance.

Jader Dias

Posted 2011-06-22T12:30:06.673

Reputation: 13 660

Have you tried ncdu? – SDsolar – 2017-09-09T07:44:26.517

Answers

146

You might also want to try NCurses Disk Usage, a.k.a. ncdu.

Use it as ncdu -x -q if you're invoking it remotely (e.g. via SSH) and as ncdu -x otherwise.

ncdu 1.6 ~ Use the arrow keys to navigate, press ? for help
    --- /home/geek -----------------------------------------------------------------
       27.6MiB  /qm test 1 rework
      312.0kiB  /sidebar
       88.0kiB  /rackerhacker-MySQLTuner-perl-6add618
        8.0kiB  /.w3m
        4.0kiB  /.cache
    e   4.0kiB  /.ssh
      160.0kiB   ng.tar.gz
       76.0kiB   plowshare_1~svn1673-1_all.deb
        4.0kiB   .bashrc
        4.0kiB   .bash_history
        4.0kiB   .profile
        4.0kiB   .htoprc
        4.0kiB   .bash_logout
        0.0  B   .lesshst

This is available under Mac OS X too.

The following command-line flags might be helpful:

-q Quiet mode, doesn't update the screen 10 times a second
   while scanning, reduces network bandwidth used

-x Don't cross filesystem borders (don't descend into a
   directory which is a mounted disk)

Thanks to Sorin Sbarnea.

heinrich5991

Posted 2011-06-22T12:30:06.673

Reputation: 1 628

1 Available under OS X too, via brew. It may be a good idea to call it using ncdu -x -q – sorin – 2012-12-13T12:46:48.503

1 Awesome! The best option for me was ncdu -q, even over ssh. – Valter Silva – 2013-04-19T14:36:38.027

47

Use a combination of commands and options:

du --max-depth=1 2> /dev/null | sort -n -r | head -n20

to view only the largest few. If you use it a lot, bind it to an alias, e.g. in bash by adding this to ~/.bashrc:

alias largest='du --max-depth=1 2> /dev/null | sort -n -r | head -n20'

Jaap Eldering

Posted 2011-06-22T12:30:06.673

Reputation: 7 596

3 You can use sort -h to sort values with human-readable suffixes. – allo – 2015-11-30T15:27:50.703

My modified version of this to display values in human readable format: du -h --max-depth=1 2> /dev/null | sort -h -r – Jose B – 2015-12-13T22:29:55.813

And for OSX du -d 1 -xh 2> /dev/null | sort -h -r | head -n20 – Samy Bencherif – 2018-12-02T05:21:58.560

2 To view the largest few, you need the -r option on sort. – RedGrittyBrick – 2011-06-22T13:23:59.213

1 I submitted @RedGrittyBrick's suggestion and an error redirection to /dev/null as an edit subject to approval. – Jader Dias – 2011-06-22T13:39:43.620

I would also use the du -H option, but it breaks the sort behavior – Jader Dias – 2011-06-22T13:54:56.803

What does 2> do? – jumpnett – 2013-06-05T21:40:28.120

2 @jumpnett: it redirects standard error (in this case into the black hole that is /dev/null). – Jaap Eldering – 2013-06-06T21:46:57.753
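A tiny illustration of that redirection (the path is made up): the "No such file" complaint goes to standard error, so redirecting file descriptor 2 discards it while normal output is unaffected.

```shell
# ls complains on stderr because the path does not exist; redirecting
# file descriptor 2 to /dev/null discards the complaint. The || true
# keeps the failing ls from aborting scripts that run under set -e.
ls /this/path/does/not/exist 2> /dev/null || true
echo "no error message was printed"
```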

4

You probably want xdu.

du -ax | xdu -n

There's also the more sophisticated KDE-based Filelight.

Teddy

Posted 2011-06-22T12:30:06.673

Reputation: 5 504

3

I usually use

du -hsc * | sort -h

What each option means for du:

  • h: show sizes in human readable format (1K, 1M, 1G, ...)
  • s: summarize: display only a total for each argument
  • c: also display a grand total

The -h option on sort makes it understand du's human-readable (-h) size format. This option is relatively new in sort, so your system may not support it, forcing you to use du -sc | sort -n instead.

If you run it on a remote machine and the process takes a long time, you probably want to run it in the background or inside a screen session (or similar), so that a lost connection doesn't kill it.
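As a sketch of the background approach (the scanned directory and report path are arbitrary), nohup detaches the scan from the terminal so it survives a dropped SSH session:

```shell
# Scan a directory (here /tmp as an example) detached from the terminal;
# errors are discarded and the sorted report is written to a file that
# can be inspected after logging back in.
nohup sh -c 'du -hsc /tmp/* 2> /dev/null | sort -h > /tmp/du_report.txt' >/dev/null 2>&1 &
wait   # in a real session you would log out here and check back later
cat /tmp/du_report.txt
```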

emi

Posted 2011-06-22T12:30:06.673

Reputation: 163

3

I would like to recommend dutree, which offers a hierarchical visualization.

You can select more or less levels of detail, and exclude paths for better control of visualization. You can also compare different paths.


It is implemented in Rust, so it is fast and efficient.

$ dutree -h
Usage: dutree [options] <path> [<path>..]

Options:
    -d, --depth [DEPTH] show directories up to depth N (def 1)
    -a, --aggr [N[KMG]] aggregate smaller than N B/KiB/MiB/GiB (def 1M)
    -s, --summary       equivalent to -da, or -d1 -a1M
    -u, --usage         report real disk usage instead of file size
    -b, --bytes         print sizes in bytes
    -f, --files-only    skip directories for a fast local overview
    -x, --exclude NAME  exclude matching files or directories
    -H, --no-hidden     exclude hidden files
    -A, --ascii         ASCII characters only, no colors
    -h, --help          show help
    -v, --version       print version number

nachoparker

Posted 2011-06-22T12:30:06.673

Reputation: 131

1

du -h 2> /dev/null | sort -hr | head -n20

  • du -h: estimate disk usage with human-readable sizes
  • 2> /dev/null: suppress errors such as "read access denied"
  • sort -hr: sort the human-readable sizes in reverse (largest-first) order
  • head -n20: limit the list to the top 20 entries

Be aware that directories and files you lack read permission for are excluded from the results.
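A small demonstration of that caveat (the paths are illustrative): as a non-root user, a directory stripped of read permission contributes nothing to the report, and with stderr discarded no error appears either. Running as root bypasses the permission check, so the effect is only visible as an ordinary user.

```shell
# Illustrative paths only. As a non-root user, du cannot descend into
# the chmod-000 directory; with stderr discarded the report shows no
# error and that directory's contents are not counted.
mkdir -p /tmp/du_demo/secret /tmp/du_demo/public
echo data > /tmp/du_demo/public/file.txt
chmod 000 /tmp/du_demo/secret
du -h /tmp/du_demo 2> /dev/null | sort -hr > /tmp/du_demo_report.txt
cat /tmp/du_demo_report.txt
chmod 755 /tmp/du_demo/secret   # restore permissions so cleanup is possible
```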

D-B

Posted 2011-06-22T12:30:06.673

Reputation: 11

0

To see which are the largest folders and files at a glance, you can also use the command-line tool 'Top Disk Usage' (tdu):

https://unix.stackexchange.com/questions/425615/how-to-get-top-immediate-sub-folders-of-folder-consuming-huge-disk-space-in/501089#501089

Joseph Paul

Posted 2011-06-22T12:30:06.673

Reputation: 101