
Somehow my work-in-progress Python server filled the rootfs partition by logging 50 GB of data to /tmp/blabla.log, which I noticed when basic commands started failing like this:

    root@server:/# crontab -e
    /tmp/crontab.FfvjqH: No space left on device

So I ran rm -rf /tmp/blabla.log and the file disappeared; it can no longer be seen with ls or tail... but the insufficient-space errors persist. df -h still reports that rootfs is 100% used and does not reflect that I removed the 50 GB file.

I could free some more space by moving a few files to another partition, and the system is OK for now, but I don't have my 50 GB of free space back.

What could be the problem?


My own answer: After moving some 3 GB of files off the rootfs partition, I dared to restart the server, at the risk of it not restarting due to the disk-space problem, but luckily it rebooted successfully, and after the reboot df -h reported the correct amount of free space. So a system reboot seemed to be the answer.

2 Answers


You have deleted the file, but the file descriptor is still open and in use by the running script, so the kernel cannot release the disk blocks until the last reference to the file goes away. Stop the script and you should get the space back. It is a better idea to truncate the file in place with > /tmp/blabla.log or cp /dev/null /tmp/blabla.log rather than deleting it.
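If stopping the script is not an option, here is a rough sketch of how the space can be reclaimed without a restart; the PID 1234 and fd number 3 below are placeholders, take the real values from the lsof output on your system:

    # Preferred: truncate the log in place. The inode stays, the blocks are freed.
    : > /tmp/blabla.log

    # If the file has already been rm'ed, find the process still holding it open
    # (+L1 lists open files whose link count is below 1, i.e. deleted files):
    lsof -nP +L1 | grep blabla.log

    # Truncate it through the process's open descriptor in /proc.
    # PID 1234 and fd 3 are hypothetical; lsof shows the fd in its FD column (e.g. "3w").
    : > /proc/1234/fd/3

Note that if the script keeps writing at its old file offset (without O_APPEND), the file can grow back as a sparse file, so fixing the logging itself is still the real solution.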

b13n1u

You might want to check with lsof whether the file is still being held open by a process or service. For instance, with Tomcat, if you delete catalina.out while the service is running, the file is "deleted" but the space stays locked by the Tomcat process until you restart it. You can find such files with lsof | grep <filename>; a deleted-but-open file is shown with (deleted) after its path.
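As a sketch of what that looks like, assuming the Python server from the question is still holding the log open (the PID, fd and sizes below are invented for illustration):

    # List open files whose directory entry has been removed (link count < 1):
    lsof +L1
    # COMMAND  PID USER  FD  TYPE DEVICE    SIZE/OFF NLINK   NODE NAME
    # python  1234 root   3w  REG  252,0 53687091200     0 131074 /tmp/blabla.log (deleted)

    # Or grep for the file name directly:
    lsof | grep blabla.log

Once you know the PID, restarting that one process (or truncating the file via /proc/<pid>/fd as in the other answer) releases the space without a full reboot.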