
I'm trying to free up some disk space. If I do a `df -h`, I have a filesystem called /dev/mapper/vg00-var which says it's 4G, with 3.8G used and 205M left.

That corresponds to my /var directory.

If I descend into /var and do `du -kscxh *`, the total is 2.1G.

2.1G + 200M free = 2.3G... So my question is: where is the remaining 1.7G?
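The comparison being made above can be reproduced with the following sketch (/var is the mount point from this question; substitute your own):

```shell
# Compare the filesystem's view (df) with the sum of visible files (du).
df -h /var          # space as the filesystem accounts for it
du -sxh /var        # space as the files you can still see add up to
# A large gap between the two usually means deleted-but-still-open files,
# since du only counts files that still have a directory entry.
```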

Codecraft
  • What does `du -shx /var` say? – Kyle Smith Apr 16 '12 at 11:31
  • You could also have deleted files that have open file handles. The OS won't release the space until the handles are closed, but you won't see those files with `du`. You can run `lsof /var | grep deleted` (or something similar) to see them. This would actually not be a surprising finding in, say, /var/log, if the logs are rotated but the logging process isn't HUP'ed in the right way. – cjc Apr 16 '12 at 11:33
  • I had been deleting some log files that had gone crazy, it seemed as though they hadn't been rotating, but anyway, I had a one word email from a friend 'reboot' - figured he was being sarcastic but apparently not :) I have found my disk space again.... disaster averted (for now). – Codecraft Apr 16 '12 at 11:40
  • @Codecraft Yeah, rebooting will definitely clear any open file handles, although that's sort of like cracking an egg with a hammer. – cjc Apr 16 '12 at 11:48
  • @cjc as long as I get at the yolky goodness...! Any suggestions on how I could clear open file handles without hammering my egg? – Codecraft Apr 16 '12 at 11:56
  • LSOF with sudo/root, then look and see what still has the files open that you deleted. Close or restart those processes. That will release the file handles. – Bart Silverstrim Apr 16 '12 at 12:23
  • @Codecraft Basically what Bart said, though, if you know you're deleting the logs for process foo, it's a good bet that you'll also need to restart/HUP process foo. `lsof` will definitely show you everything, though. – cjc Apr 16 '12 at 13:20

1 Answer


You probably have some big deleted log file, database file, or something similar lying around, waiting for the process holding it to release it.

In Linux, deleting a file simply unlinks its name. The data is actually freed only when there are no file handles connected to that file anymore. So, if you have a 2 GB log file which you delete manually with `rm`, the disk space will not be freed until you restart the syslog daemon (or send it a HUP signal).
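This behaviour is easy to demonstrate with a scratch file (the `tail -f` here is just a stand-in for a logging daemon holding the file open):

```shell
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/app.log" bs=1M count=10 2>/dev/null

tail -f "$tmpdir/app.log" >/dev/null &   # stand-in for a logging daemon
holder=$!
sleep 1                                  # let tail open the file

rm "$tmpdir/app.log"        # the name is gone, du no longer counts it...
ls -l "/proc/$holder/fd" | grep deleted  # ...but the inode is still open

kill "$holder"              # closing the last handle actually frees the blocks
rm -rf "$tmpdir"
```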

Try

lsof -n | grep -i deleted

and see if you have any deleted-but-open zombie files still floating around.
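If restarting the offending process (or the whole machine) isn't an option, the space can also be reclaimed by truncating the still-open file through /proc. The PID and fd numbers below are hypothetical; read the real ones off the `lsof` output:

```shell
sudo lsof -n /var | grep -i deleted
#   rsyslogd  1234 ... 3w ... /var/log/messages (deleted)   <- example line

# Truncate the deleted-but-open file via its /proc fd entry.
# This frees the blocks without restarting the process:
: > /proc/1234/fd/3
```

The process keeps writing to the (now empty) file, so this is best suited to log files where losing the buffered contents is acceptable.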

Janne Pikkarainen
  • I didn't get to running your command, but what you said appeared to be bang on - I had been manually killing some logs-gone-crazy, and in the end, a reboot caused the disk space to be recalculated and show correctly. – Codecraft Apr 16 '12 at 11:42
  • It worked for us with Apache logs filling up the /var/log/apache/ directory. So you may not have to restart your whole server, nor syslog, just the service you'll find in the output of the command above. – Yvan Sep 30 '14 at 14:08
  • Just had this ourselves. We had the tomcat6 catalina.out not get caught by logrotate, so we deleted it when it reached 4 GB and fixed logrotate. Weeks later we wondered why the 4 GB hadn't come back. That `lsof` command showed we had a lot of tomcat files pending deletion. Restarting tomcat and suddenly we have tonnes of space back! – Nick Mar 10 '16 at 09:31
  • Finally an answer that worked for me. PostgreSQL had crashed and left open connections to 400GiB (!!!) of unlinked files on a 1TiB disk. Restarting Postgres fixed it. – sudo Apr 09 '17 at 03:26