
I have a fileserver where df reports 94% of / full. But according to du, much less is used:

# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             270G  240G   17G  94% /
# du -hxs /
124G    /

I read that open-but-deleted files could be responsible for this, but a reboot did not fix it.

This is Linux, ext3.

regards

Andreas Kuntzagk
  • Combine @TCampbell and @Kyle Brandt's answers - reboot and if that doesn't fix it, boot from a rescue CD and run fsck on the unmounted partition. – Paul Tomblin Aug 21 '09 at 11:49
  • I already rebooted before. Extensive fsck running right now. – Andreas Kuntzagk Aug 21 '09 at 12:08
  • Ok, after an hour of counting, multiplying allocated blocks by block sizes, and counting inodes, the pure truth came out: in my case the huge difference was caused not by a mount, but by a hidden .trash folder! It was right under my nose. –  Jun 01 '13 at 14:21
  • This apparently unwelcome "duplicate question" has MUCH better answers than the "original" it points to :) – Chris Oct 26 '18 at 14:31
  • Two scenarios where df shows more than du: [1] say your block size is 1 KB and you have three files of 100 B, 200 B, and 500 B. They will occupy 3 blocks, so df will report 3 KB used while du will report 800 bytes used. [2] you have 100 1 KB blocks in total, all holding 300 B files: df will report 100% used, but du will report 30 KB used. There is a third scenario where df reports much more than du: that will be either "deleted but open" files, or a hidden mount covering those files! – rajeev Jul 10 '19 at 16:51

7 Answers


Ok, found it.

I had an old backup under /mnt/Backup on the same filesystem, and an external drive was later mounted on that directory, so du didn't see those files. Cleaning this up gave me back my disk space.

It probably happened this way: the external drive was once unmounted while the daily backup script was running.
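Such an overlay mount can be spotted without unmounting anything: a directory with a filesystem mounted on it reports a different device number than its parent. A minimal sketch using GNU coreutils `stat`, with /proc standing in as a known mount point (the bind-mount part needs root and uses paths from this question):

```shell
# A mount point reports a different device number than its parent directory.
stat -c '%d' /        # device number of the root filesystem
stat -c '%d' /proc    # a different number: /proc is a separate mount

# To actually see (and clean up) files shadowed by a mount, bind-mount /
# somewhere else and look under the copy (needs root):
#   mount --bind / /mnt/rootview
#   du -shx /mnt/rootview/mnt/Backup
#   umount /mnt/rootview
```

The bind mount gives a second view of the root filesystem in which nothing is mounted over /mnt/Backup, so du can finally reach the shadowed files.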

Andreas Kuntzagk
  • Interesting, didn't think of that. Mounting a fs on a non-empty directory can do funny things... – sleske Aug 21 '09 at 14:20
  • Ya, that was one of the ones in the link I gave you. That is called an 'overlay mount'. – Kyle Brandt Aug 21 '09 at 15:13
  • You're right, Kyle. I totally missed that in this long page. – Andreas Kuntzagk Aug 21 '09 at 15:35
  • Andreas, it also doesn't make it that clear, I didn't think of it either. – Kyle Brandt Aug 21 '09 at 17:08
  • chmod mountpoints to 000, so you get errors from scripts instead of them silently filling your root partition –  Aug 23 '09 at 00:07
  • @user1686 Let's see: `mkdir x && chmod 000 x && date > x/a` Nope, no warnings for the root user. Backup scripts usually run as root. Actually, the trick is to `mount -t devpts devpts x` underneath your actual mount - it's always read-only, even for root. – kubanczyk Aug 25 '19 at 17:12

I don't think you will find a more thorough explanation than this link for all the reasons it could be off. Some highlights that might help:

  • What is your inode usage, if it is almost at 100% that can mess things up:

    df -i

  • What is your block size? Lots of small files and a large block size could skew it quite a bit.

    sudo tune2fs -l /dev/sda1 | grep 'Block size'

  • Deleted files: you said you investigated this, but to get the total space held by them you could use the following pipeline (I like find instead of lsof just because lsof is a pain to parse):

    sudo find /proc/*/fd -printf "%l\t%s\n" | grep deleted | cut -f2 | (tr '\n' +; echo 0) | bc

However, that is almost a 2x difference. Run fsck on the partition while it is unmounted to be safe.
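The block-size effect from the second bullet can be seen directly: GNU du can report a file's apparent size alongside its allocated size. A quick sketch (the path is arbitrary):

```shell
# A 1-byte file still allocates a whole block; GNU du can show both views.
printf x > /tmp/tinyfile
du -B1 /tmp/tinyfile                   # allocated bytes (one block, often 4096)
du -B1 --apparent-size /tmp/tinyfile   # apparent size: 1 byte
rm /tmp/tinyfile
```

Comparing `du -s` against `du -s --apparent-size` over a whole tree gives a rough bound on how much of the df/du gap is block-size slack.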

Kyle Brandt
  • df -i does not report anything unusual, will go the fsck way now. – Andreas Kuntzagk Aug 21 '09 at 11:58
  • Many files of roughly half the block size can cause this problem: each one occupies a whole block, so the unused remainder of every block adds up to an enormous amount of unavailable space. Do you store lots of small files there? – asdmin Aug 21 '09 at 12:36
  • How would I find out the number of files with such a size? – Andreas Kuntzagk Aug 21 '09 at 13:51
  • First you need to find your block size; if it is 4096, you want files smaller than 4 KB, so: find / -size -4k | wc -l – Kyle Brandt Aug 21 '09 at 14:26
  • We used to fill up volumes with several hundred thousand 3 KB files... changing the filesystem from XFS (on SGI) to ReiserFS helped make disk space more efficient. Not an option for many, but it worked for us. – ericslaw Aug 21 '09 at 21:55
  • If you're using ext to store lots of small files and are running out of inodes before you run out of blocks, then create your filesystems with `mke2fs -i [NUM]`. This flag is "bytes per inode", and if you make it equal your block size then you will always have enough inodes. But you'll have to experiment with the value to see what maximizes use of your space. – ACK_stoverflow May 05 '15 at 18:45

It looks like a case of files being removed while processes still have them open. This disconnect happens because du totals up the space of files that exist in the filesystem, while df reports the blocks actually allocated in the filesystem. The blocks of an open but deleted file are not freed until that file is closed.

You can find which processes have open but deleted files by examining /proc:

find /proc/*/fd -ls | grep deleted
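The effect is easy to reproduce: create a file, keep a descriptor open on it, delete it, and watch /proc report it as deleted while its blocks remain allocated. A minimal sketch:

```shell
# Create a file, keep it open on fd 3, then delete it: the name is gone,
# but the blocks stay allocated until the descriptor is closed.
tmpf=$(mktemp)
dd if=/dev/zero of="$tmpf" bs=1M count=16 status=none
exec 3<"$tmpf"
rm "$tmpf"
readlink /proc/$$/fd/3     # the symlink target ends in " (deleted)"
exec 3<&-                  # closing the fd finally frees the space
```

While fd 3 is open, df still counts those 16 MB as used, but du cannot see them because no directory entry remains.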
TCampbell

I agree that

lsof +L1 /home | grep -i deleted

is a good place to start (note that +L1 takes no space: it lists open files with a link count below one, i.e. deleted). In my case I noticed that I had lots of Perl scripts running and keeping many files alive, even though they were supposed to be deleted.

I killed the Perl processes, and that made du and df almost identical. Case closed.
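If lsof is not available, the same PIDs can be pulled straight from /proc with GNU find (a sketch; inspect each PID with `ls -l /proc/PID/fd` before killing anything):

```shell
# List the unique PIDs that still hold deleted files open, without lsof.
find /proc/[0-9]*/fd -lname '*(deleted)' -printf '%h\n' 2>/dev/null \
    | cut -d/ -f3 | sort -u
```

Each line is a PID whose open file table still pins deleted blocks; restarting or killing those processes releases the space.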

Sverre

The most likely reason in your case is that you have lots of files that are very small (smaller than your block size on the drive). In that case df will report the sum of all used blocks, whereas du will report the actual sum of file sizes.
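To estimate how much of the gap this explains, compare allocated and apparent totals for a directory full of sub-block files. A sketch with GNU du (the directory name is made up):

```shell
# 100 one-byte files hold 100 bytes of data, but each file allocates a
# full block, so the allocated total is hundreds of times larger.
mkdir -p /tmp/smallfiles
for i in $(seq 1 100); do printf x > /tmp/smallfiles/f$i; done
du -sB1 /tmp/smallfiles                  # allocated bytes
du -sB1 --apparent-size /tmp/smallfiles  # sum of file sizes (plus the dir)
rm -r /tmp/smallfiles
```

Running the same pair of commands over the real data would show whether block-size slack accounts for a meaningful share of the missing space.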

wolfgangsz

By default, when you format a filesystem as ext3, 5% of the drive is reserved for root. df accounts for this reserve when it reports what is available, while du shows what is actually in use.

You can view the reserved blocks by running:

tune2fs -l /dev/sda | grep -i reserve

and you will get something like:

Reserved block count:     412825
Reserved GDT blocks:      1022
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)

If you would like to lower that percentage, you can do so with something like:

tune2fs -m 1 /dev/sda

You can reduce it to 0; however, since this is your root filesystem, I would be wary of doing that. If the filesystem ever actually fills up, the maintenance tasks required to clean it up may become difficult.

Alex
  • That is true, but seems beside the point. The "in use" size reported by du and df differs, and that is independent of reserved blocks. – sleske Aug 21 '09 at 12:52
  • The amount of missing space is not comparable to 5% – drAlberT Aug 21 '09 at 12:54
  • Yeah, it still doesn't add up, but I figured that was adding to the discrepancies. – Alex Aug 21 '09 at 13:46
  • In my case the size of used space shown by du and df was comparable and there was a lack of 23-24GB. Setting reserved blocks number to 1% freed those 23GB. Thanks! – mkll Sep 21 '16 at 12:12

Is it possible that du doesn't add in the size of the directories themselves? Still, that seems like a HUGE difference; it can't be responsible for all of it.

Brian Knoblauch