
On some of our servers, /var/log, which is a separate ext4 partition, shows 100% of its 4.8G in use, but the files in it only occupy around 200M of disk space. The application can still write logs to the directory. What could be the cause of this?

Other information:
Debian version: 9.9
Inodes use 1%
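For context, the mismatch is typically seen by comparing the filesystem-level view with the sum of visible files; a quick check, assuming the mount point above:

```
# filesystem view: blocks actually allocated on the partition
df -h /var/log

# directory view: total size of the files currently visible
du -sh /var/log

# inode usage, to rule out inode exhaustion
df -i /var/log
```

A large gap between the first two usually means space is being held by something du cannot see.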

aardbol
    It's probably /var/log/lastlog. This file is a sort of database with one record per UID on the machine. If it 'saw' a UID of 1 million or so, it will reserve room for 1 million records through the 'sparse' file mechanism. `ls -s` shows the true allocated size. – Gerrit Apr 03 '20 at 11:50
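To check whether lastlog (or any file) is sparse, compare its apparent size with its allocated size; a small sketch:

```
# apparent (logical) size: can look huge for a sparse file
ls -lh /var/log/lastlog
du -h --apparent-size /var/log/lastlog

# allocated size: usually much smaller for a sparse file
ls -sh /var/log/lastlog
du -h /var/log/lastlog
```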

1 Answer


This usually happens when you delete an open file. Say you have a big file with a process writing to it and you delete it: the space remains occupied until the file is closed, because deleting only removes the directory entry; the process still holds the inode open, so its blocks stay allocated.
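A minimal way to reproduce the effect (a sketch; the file name is made up for the demo):

```
# create a file and keep it open with a background process
dd if=/dev/zero of=/var/log/demo.big bs=1M count=512
tail -f /var/log/demo.big &

# delete it while it is still open
rm /var/log/demo.big

df -h /var/log     # the space is still counted as used
du -sh /var/log    # the file no longer shows up here

# once the holder exits, the space is released
kill %1
df -h /var/log
```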

First you need to find the process which is holding the space: try `lsof | grep deleted`; on a modern Linux, lsof marks unlinked-but-open files as '(deleted)'. If that turns up nothing, use lsof to list open files on the filesystem and look for ones that no longer exist in the directory.
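A couple of concrete invocations (the `+L1` option lists open files whose link count is below one, i.e. already unlinked):

```
# files that are open but already deleted, anywhere on the system
sudo lsof +L1

# the grep variant from above, filtered to the log partition
sudo lsof | grep '(deleted)' | grep /var/log
```

The output includes the process name and PID, which is what the next step needs.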

Second, you need to make the process release the file: usually `kill -HUP` helps, since many daemons reopen their log files on SIGHUP. If not, restart the corresponding service.
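For example, with the PID taken from the lsof output (the PID and service name here are just placeholders):

```
# ask the daemon to reopen its log files
sudo kill -HUP 1234

# if it does not handle SIGHUP, restart it instead
sudo systemctl restart your-logging-service
```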

Next time you need to free space, use `truncate --size 0 aaa.log` or just `> aaa.log` instead of deleting the file. This empties the file but leaves it in place, so the space is released while open file handles remain valid.
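Both forms spelled out (aaa.log stands for whichever log file is eating the space):

```
# shrink the file to zero bytes without unlinking it
truncate --size 0 /var/log/aaa.log

# equivalent shell redirection; ':' is a no-op command
: > /var/log/aaa.log
```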

kab00m