221

I have file servers which are used to store files. Files might reside there for a week, or for a year. Unfortunately, when I remove files from the server, the df command doesn't reflect the freed-up space. So eventually the server fills up (df shows 99%) and my script stops sending files there, even though there may still be a few dozen GB of free space on it.

I have the noatime flag set on the mounted partitions, if that makes any difference.

Aminah Nuraini
  • Is this happening on a single partition or on all partitions? – Khaled Feb 08 '11 at 07:40
  • Well, it's happening on my main data partition, which is the only one I care about, since I only write/remove files on it. –  Feb 08 '11 at 08:09
  • Please enlighten me with the solution, or a link to one. –  Feb 08 '11 at 08:51
  • What filesystem(s)? DF does a stat of the superblock, it may be that your filesystem is not updating the sb inode. Have you tried flushing cache? – beans Feb 08 '11 at 17:32
  • Using ext4. How do you flush caches? –  Feb 08 '11 at 18:22
  • This is a duplicate of ["After deleting a large file, how long does it take `df` to pick up the change?"](http://serverfault.com/questions/229454/after-deleting-a-large-file-how-long-does-it-take-df-to-pick-up-the-change). – JdeBP May 17 '11 at 11:06
  • Joined this Stack today just to upvote this question and its answers because I was stuck and this got me unstuck. – shoover Mar 02 '17 at 19:20
  • Does this answer your question? [df says disk is full, but it is not](https://serverfault.com/questions/315181/df-says-disk-is-full-but-it-is-not) – AncientSwordRage Jan 28 '22 at 11:24

15 Answers

361

Deleting the filename doesn't actually delete the file while some other process is holding it open. Restart or kill that process to release the file and free the space.

Use

lsof +L1

to find out which process is using a deleted (unlinked) file.
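To see the effect end to end, here's a small sketch (throwaway temp file, Linux /proc assumed) showing that the space only comes back when the last descriptor closes:

```shell
#!/bin/sh
# Demo, using throwaway paths: disk blocks from a deleted file are only
# released once the last open file descriptor on it is closed.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=10 2>/dev/null   # a 10 MB file
exec 3<"$tmp"    # hold a descriptor open, like a long-running daemon would
rm "$tmp"        # the name is gone, but the blocks are still allocated
ls -l "/proc/$$/fd/3"   # on Linux the symlink target ends in "(deleted)"
exec 3<&-        # closing the descriptor is what actually frees the space
```

This is exactly the state `lsof +L1` reports: an open file whose link count is zero.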

Tombart
Ignacio Vazquez-Abrams
  • 2
    The files being removed haven't been accessed in over a month, and the only process that accesses them is nginx, so it's doubtful. –  Feb 08 '11 at 08:08
  • 53
    +1. Also, "lsof +L1" will tell you which program is holding the files open. – pehrs Feb 08 '11 at 08:10
  • 4
    as root run "lsof -n | grep file"; you'd be surprised at how long files can stick around due to processes keeping them open for whatever reason. If all else fails, reboot; I feel bad suggesting it, but it will definitely make sure nothing is holding on to the file. Per pehrs, lsof +L1 is probably the better way to go. – ScottZ Feb 08 '11 at 08:11
  • That returns a single file, which is being encoded with ffmpeg. –  Feb 08 '11 at 08:16
  • 5
    You just saved me! Deleted a 93G log file and didn't get the space back and couldn't work out why. Thanks. – Luke Cousins Apr 16 '14 at 14:14
  • 1
    Along the same lines and in case this helps others, I erased a large nginx access.log file but was only able to reclaim the space after restarting nginx: service nginx restart – Nick Jun 20 '14 at 12:47
  • I have the same issue with apache2 access logs. I had even stopped and started Apache, and still no space was returned. I am using ext4. – Ashish Karpe Nov 10 '15 at 11:40
  • Never knew about this. Learning something every day. – kontinuity Sep 04 '16 at 07:47
  • 1
    It helps to know that you need to run this as root/sudo. I could guess and try, but I generally don't try stuff with sudo if I don't know the consequences. – aross Nov 04 '16 at 12:58
  • 1
    I got an error `-bash: lsof: command not found` – Chaminda Bandara Oct 02 '18 at 12:18
  • I got an easier-to-read format with sizes in MB with this command: lsof +L1 | numfmt --field=7 --to=iec --invalid=ignore – chiappa Apr 30 '20 at 15:43
  • ... so how do you close those processes? – J.Ko Jun 05 '20 at 23:49
  • Run `sudo lsof +L1` to see non-user processes, as well, and then `sudo kill [pid]` the relevant PID. – enharmonic Oct 27 '21 at 21:06
48

As Ignacio mentions, deleting the file won't free the space until the processes holding open handles to it release them.

Nevertheless, you can reclaim the space without killing the processes. All you need to do is truncate the file through its open file descriptor.

First, execute lsof | grep deleted to identify the process holding the file:

[hudson@opsynxvm0055 log]$ /usr/sbin/lsof |grep deleted
java       8859   hudson    1w      REG              253,0 3662503356    7578206 /crucible/data/current/var/log/fisheye.out (deleted)

Then execute:

cd /proc/PID/fd

then

[hudson@opsynxvm0055 fd]$ ls -l |grep deleted
total 0
l-wx------ 1 hudson devel 64 Feb  7 11:48 1 -> /crucible/data/current/var/log/fisheye.out (deleted)

The "1" is the file descriptor. Now truncate the file through that descriptor to reclaim the space:

> 1

You might need to repeat the operation if there are other processes holding the file.
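If you'd rather not cd into /proc, the same truncation can be done in one step through the descriptor's /proc path. This is only a sketch: the PID and fd below are the example values from the lsof output above, so substitute your own.

```shell
#!/bin/sh
# Sketch: truncate a deleted-but-open file via /proc, without cd'ing
# into /proc/PID/fd. PID 8859 and fd 1 are the hypothetical values
# from the lsof output above; replace them with yours.
pid=8859
fd=1
: > "/proc/$pid/fd/$fd"   # opening the fd path with O_TRUNC zeroes the file
```

Opening the /proc symlink with `>` truncates the underlying (deleted) file to 0 bytes, which is what frees the blocks.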

Adrián Deccico
  • 2
    what does the `> FD` do? – FilBot3 Jul 06 '15 at 13:19
  • it removes the file descriptor – Adrián Deccico Jul 13 '15 at 01:12
  • 2
    does this `>` command have a name? i had to switch from zsh to bash in order to be able to use it. Is it possible to run it on zsh? – ariera Nov 13 '15 at 17:35
  • 3
    it is an output redirect and therefore truncates the file. The long form would be "echo -n > 1" or "true > 1". It does not really remove the FD; it just points to an empty file afterwards. – eckes May 08 '17 at 10:52
  • +1. Note to people like me: before doing `> 1` it's important that we are in the /proc/<PID>/fd dir, as mentioned in the answer. I usually avoid going deep into dirs and use full paths in commands, which doesn't work in this case. – 0xc0de Jun 13 '22 at 08:23
14

If the partition has been configured to reserve a certain portion of disk space for root usage only, df will not include this space as available.

[root@server]# df -h
Filesystem            Size  Used Avail Use% Mounted on
...
/dev/optvol           625G  607G     0 100% /opt
...

Even after space is reclaimed by deleting files/directories, a non-root user won't be able to write to that partition.

You can easily check whether that's your case by trying to create a file on the device as root and as a non-root user.

Additionally you can check filesystem configuration by running

tune2fs -l <device> | egrep "Block count|Reserved block count"

and calculating the actual percentage on your own.

To change disk % reserved for root-only usage, execute

tune2fs -m <percentage> <device>
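As a sketch (the device path below is a placeholder, and tune2fs needs root), the percentage calculation can be scripted with awk:

```shell
#!/bin/sh
# Sketch: compute the root-reserved percentage from the two tune2fs
# fields. /dev/sda1 is a placeholder device; run as root.
dev=/dev/sda1
tune2fs -l "$dev" | awk -F': *' '
  /^Block count/          { total = $2 }
  /^Reserved block count/ { reserved = $2 }
  END { if (total) printf "reserved for root: %.1f%%\n", 100 * reserved / total }'
```

The default reservation is 5%, which on a large ext4 volume can be tens of GB that df never shows as available to ordinary users.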
luka5z
12

One possibility is that the file(s) you deleted have more references in the filesystem. If you've created hardlinks, several filenames point to the same data, and the data (the actual contents) won't be marked as free/usable until all references to it have been removed. Before you delete files, either stat them (look for the entry named Links) or run ls -l on them (it should be the second column).

If it does turn out that the files are referenced elsewhere, I guess you'll have to run ls -i on the file(s) to find the inode number, and then do a find with -inum <inode-number> to find the other references to that file (you probably also want -mount to stay within the same filesystem).
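A small self-contained demo of that check, on throwaway files:

```shell
#!/bin/sh
# Demo with throwaway files: hardlinked data survives until every
# name pointing at the inode is removed.
dir=$(mktemp -d)
echo data > "$dir/original"
ln "$dir/original" "$dir/copy"   # second name for the same inode
stat -c %h "$dir/original"       # link count: prints 2
inode=$(stat -c %i "$dir/original")
find "$dir" -mount -inum "$inode"  # lists both names
rm "$dir/original"               # blocks NOT freed yet (copy remains)
rm "$dir/copy"                   # link count hits 0: blocks freed
rmdir "$dir"
```

Only after the second rm does the link count reach zero and df reflect the freed space.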

Kjetil Joergensen
12

The file is still held open by the process that opened it. To free up the space, follow these steps:

  1. Run sudo lsof | grep deleted and see which process is holding the file. Example result:

    $ sudo lsof | grep deleted
    COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF      NODE NAME
    cron     1623 root    5u   REG   0,21        0 395919638 /tmp/tmpfPagTZ4 (deleted)
    
  2. Kill the process using sudo kill -9 {PID}. In the sample above, the PID is 1623.

    $ sudo kill -9 1623
    
  3. Run df to check whether the space has been freed. If it's still full, you may need to wait a few seconds and check again.
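The first step can be sketched as a review-before-kill listing; the awk field positions assume lsof's usual column layout, so double-check them on your system before acting on the output:

```shell
#!/bin/sh
# Sketch: list (command, PID, path) for deleted-but-open files so you
# can review them before killing anything. Field positions assume the
# standard lsof column order (COMMAND PID ... NAME).
sudo lsof +L1 2>/dev/null | awk 'NR > 1 { print $1, $2, $NF }' | sort -u
```

Reviewing the list first avoids killing something important; often a daemon reload (e.g. of syslog or nginx) is enough instead of kill -9.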

Aminah Nuraini
2

One reason for missing disk space (a scenario I just ran into myself, because I obviously didn't do what I normally do when creating a new array) is this...

Usually I free all available disk space, including the 5% reserved for root (tune2fs -m0), since it's a single-user file server. This time, however, I was suddenly stuck with 0 bytes free according to df, while the difference between total and used space said otherwise.

Since I was certain I had freed that 5% (reserved by default in at least Fedora), and was looking at empty folders instead of the files I had just copied... or thought I had, I started to sweat and searched desperately for a fix: rebooting, trying lsof, and that kind of stuff.

Then I finally decided to run tune2fs -m0 on the filesystem, even though I was certain that was not the cause - but it was! A little over 400G became available, as it should be. Yeah, I know... my mistake, but nonetheless my comment might be useful to others who forget about this reserved space or strongly believe they've freed it.

Bruno
2

Since I know a ton of you are doing this on Red Hat - gzipping files in /var and expecting the filesystem usage to shrink, only to watch it grow - make sure you run service syslog restart so syslog releases its handles on the old log files.

lsof <file>

would show you this anyhow.

  • 1
    This doesn't really add much; the accepted answer covered the logic behind that in 2011. When you have 50 rep, use comments if you want to add qualifiers to the existing answers. – Andrew B Mar 23 '13 at 17:34
1

The other answers are correct: If you delete a file, and space does not get freed, it's usually either because the file is still kept open, or there are other hardlinks to it.

To help troubleshoot, use a tool that tells you where the drive space is being spent. You can use du to get an overview, or better, a graphical tool like xdiskusage (there are many like it) to hunt down the culprit. xdiskusage and friends let you drill down into the biggest space hogs to find where the space is going.

That way, you'll quickly find files that still occupy space because of a second hardlink. It will also show space occupied by deleted, but open files (as (permission denied), I believe, since it cannot read the file name).

sleske
1

One more option: the disk might be full because a process is continuously creating data: logs, core dumps and the like. It is possible that the space is actually being freed but is immediately filled up again. I have actually seen such a case. df in this situation simply doesn't give the whole picture. Use du to learn more.

Chen Levy
0

Instead of deleting files, we can truncate them, e.g. cat /dev/null > file.log. This makes the file 0 bytes. Other processes may still be holding the file open, but the size is reduced, reclaiming disk space.
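A sketch of both common truncation spellings, on a throwaway file:

```shell
#!/bin/sh
# Sketch on a throwaway file: truncating keeps the name and inode but
# releases the data blocks, even if another process has it open.
f=$(mktemp)
printf 'lots of log data\n' > "$f"
: > "$f"            # POSIX shell: open with O_TRUNC, write nothing
truncate -s 0 "$f"  # GNU coreutils equivalent
stat -c %s "$f"     # prints 0
rm "$f"
```

One caveat: if the writing process did not open the log with O_APPEND, it will keep writing at its old offset, leaving a sparse file, so df drops but ls -l may still show a large size.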

0

I'm using ext2; fsck helped me in this situation. Try shutdown -F now; after some restarts and fscks, I saw only half the space used.

  • 1
    Dear Marcellus, your solution is encompassed by the accepted answer; and sometimes you don't want to do a reboot if you are not forced to... – Deer Hunter Jan 16 '13 at 09:31
-1

To check which deleted files are still occupying disk space, enter the command

 $ sudo lsof | grep deleted

It will show the deleted files that are still holding space.

Then kill the process by PID

$ sudo kill <pid>
$ df -h

and check that the space has now been freed.

If it hasn't, run the commands below to see which files are occupying space

# cd /
# du --threshold=<SIZE>

Give any size; it will show the files larger than that threshold. Delete the offending file and you will find the space reclaimed.

-2

If you have Windows 10 as a dual boot, you can try booting into Windows 10, then go to Disk Cleanup, select the proper drive, then click "Clean up system files". This worked for me. Good luck.

Mythos
-3

One line:

kill -9 $(lsof | grep deleted | cut -d " " -f4)
ZaPa
  • 3
    This will just kill a bunch of programs. You won't be able to know which ones in advance. It's also not clear why you would want to do this. – Michael Hampton Nov 23 '20 at 19:30
  • Script copy-paste is not a very well going thing, I think. I suggest to explain, what is your command doing and how. – peterh Nov 26 '20 at 13:10
-4

Open a terminal and try the command df -Th. Next, use sudo du -h --max-depth=1 / ; this command will show you the disk usage detail. Then, as root, empty root's trash (/root/.local/share/Trash) and delete your file.

rilson