    df -i
    Filesystem     Inodes  IUsed  IFree IUse% Mounted on
    /dev/sda2      732960 727804   5156  100% /

Only these two have the highest inode counts; everything else is much lower. What can be done to free up inodes?

/proc: 10937 inodes

/sys: 22504 inodes

apt-get -f install fails with "no space left on device".

[screenshot: df -i output]

[screenshot: apt-get -f install error]

[screenshot: inode search output]

/var/log is only 26 MB (the largest directory under /var).

anon
    Welcome to Server Fault! Please use [Markdown](http://serverfault.com/editing-help) and/or the formatting options in the edit menu to properly type-set your posts to improve their readability. Also use cut-and-paste for posting console output and format it as "`code`" rather than posting screenshots. That improves readability, attracts better answers and allows indexing by search engines, which may help people with similar questions. – HBruijn May 04 '16 at 06:46
  • Could not copy from vmware console. So took a screenshot. – anon May 04 '16 at 07:09
    You free up inodes by deleting files. That is all. – Michael Hampton May 04 '16 at 07:10
  • There aren't files that can be deleted. Is there any way to increase the inode limit, or any default files that can be deleted? I've deleted a few logs and it did not help. – anon May 04 '16 at 07:19
    You can make a new filesystem with more inodes, then. – Michael Hampton May 04 '16 at 07:32
  • same issue here, can't find any directory occupying 1million files .. – neobie Aug 17 '20 at 16:31

5 Answers


I ran into the same issue some weeks ago, and this procedure solved the problem.

First, find where most of the inodes are being used:

    for i in /*; do echo "$i"; find "$i" | wc -l; done

Pay attention to directories that take noticeably longer to read. In my case it was /var/ that took the most time to search.

So I ran:

    for i in /var/*; do echo "$i"; find "$i" | wc -l; done

After that, I ran the same command against /var/log/* and found a large number of small files among the squid3 logs.

After running rm -rfv /var/log/squid3/access.log* (and restarting squid3) the problem was solved, and IUse% dropped from 100% to 13%.
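If you'd rather not reach for rm -rfv, the same cleanup can be done with find, deleting only regular files that match the rotated-log pattern (access.log.* is the pattern from my setup; adjust the path and pattern to yours):

```shell
# Delete only regular files named access.log.* under squid3's log directory.
# -type f skips directories and symlinks; drop -print to run quietly.
find /var/log/squid3 -name 'access.log.*' -type f -print -delete
```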

Regards.


If you use Docker, remove all images; they can consume a lot of space and inodes. This worked for me:

    #!/bin/bash
    # Stop all containers
    docker stop $(docker ps -a -q)
    # Delete all containers
    docker rm $(docker ps -a -q)
    # Delete all images
    docker rmi $(docker images -q)
omrqs
  • After some testing my inodes were full. It looks like docker rm does not clean up everything. Apart from creating a new drive and mounting `/var/lib/docker` on it, this is the only solution that helps. But be warned that your containers will be gone (since I only use `docker-compose` it was a piece of cake to recreate everything from scratch). – Jürgen Steinblock Aug 28 '17 at 09:14
  • If you need to clean up some space but don't want to recreate your containers, just run `docker rmi $(docker images -q)`. Docker will prevent removal of images that are in use and print `Error response from daemon: conflict: unable to delete ... (cannot be forced) - image is being used by running container ...` – Jürgen Steinblock Aug 28 '17 at 10:03
    You now can also use `docker system prune` to remove unused data. Also see https://stackoverflow.com/a/32723127 – luckydonald Jan 31 '18 at 13:15

One option is to delete files, or move them to another drive. Alternatively, mount a higher-capacity drive on a new directory of the /dev/sda2 filesystem and move the files there.
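A minimal sketch of the second option, assuming the new disk is /dev/sdb1 and /var/log is the inode-heavy tree (both are placeholders; substitute your own device and path):

```shell
# WARNING: mkfs destroys everything on /dev/sdb1.
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/bigdisk
mount /dev/sdb1 /mnt/bigdisk
# Copy the tree, preserving permissions, ownership and timestamps.
cp -a /var/log/. /mnt/bigdisk/
# After verifying the copy, delete the originals to free inodes on /dev/sda2,
# then mount the new filesystem over the old path (and add it to /etc/fstab).
```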

vembutech

I can see two options.

You can back up the whole filesystem, then recreate it with a higher number of inodes.

Or you can mount another drive on the path that holds the many files and move the files to that drive, keeping the directory structure, as mentioned by @vembutech. Sadly, I can't upvote that one yet.
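For the first option, note that the inode count of an ext filesystem is fixed at creation time, so the rebuild is where you raise it. A sketch, assuming ext4 (the -i value is an illustration, not a recommendation): -i sets the bytes-per-inode ratio, and halving it from the usual 16384 roughly doubles the number of inodes.

```shell
# WARNING: this reformats the device; restore from your backup afterwards.
# -i 8192 allocates one inode per 8 KiB of capacity (one per 16 KiB is typical).
mkfs.ext4 -i 8192 /dev/sda2
```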

Petr Chloupek
  • 254
  • 1
  • 6

Just to add a point to the comments above: it's better to use a slightly modified version of the above command, especially in the presence of symlinks.

For example, /bin may be a symlink: /bin -> usr/bin

    for i in /*; do echo $i; find $i |wc -l; done
    /bin
    1

Just append a slash, "find $i/", and find will follow the symlink, giving the real count:

    for i in /*; do echo $i; find $i/ |wc -l; done
    /bin
    865

Similarly, you can observe the same behaviour for "lib" and "lib64".
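Putting both points together, a sketch of a one-liner that quotes the variable, follows top-level symlinks via the trailing slash, and sorts the counts so the worst offenders end up last:

```shell
# Count entries under each top-level directory (following a symlink at the
# top level thanks to the trailing slash) and sort numerically by count.
for i in /*; do printf '%s %s\n' "$i" "$(find "$i"/ 2>/dev/null | wc -l)"; done | sort -k2 -n
```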