4

According to this answer, it is possible to mount at least tmpfs with "infinite" inodes.

Consider this specific situation (numbers chosen for example purposes; I know they're not realistic):

  • The tmpfs partition is 50% used by volume
  • 90% of that data is inodes (i.e. 45% of the disk is used by inodes, and 5% is used by "real" data)
  • tmpfs was mounted with nr_inodes=1000
  • all 1000 of those inodes are taken up by the inodes currently written

This means that the tmpfs is 50% full, but also that any attempt to write to it will result in an out of space error.
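
For example, a smaller version of this can be reproduced like so (a rough sketch; the mount point /mnt/inodetest and the limits are made up for illustration, and mounting needs root):

$ mount -t tmpfs -o size=10M,nr_inodes=1000 tmpfs /mnt/inodetest
$ for i in $(seq 1 1000); do touch /mnt/inodetest/f$i; done
# once the inode limit is hit, touch fails with "No space left on device"
$ df -h /mnt/inodetest    # still shows nearly all blocks free
$ df -i /mnt/inodetest    # but IUse% is at 100%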

It seems to me that setting nr_inodes=0 (aka infinite inodes) would make this situation go away.

  • Is there a reason that infinite inodes is not the default?
  • What reasons are there to limit the number of inodes on a filesystem?
quodlibetor
  • 277
  • 2
  • 4
  • 11
  • 2
    tmpfs is not the default file system because it doesn't survive a reboot; it would be a very bad choice for a default FS. Other FSes don't have infinite inodes because they take up space, so a file system filled to the brim with inodes couldn't actually hold any data. – MadHatter Jul 18 '13 at 14:45
  • I didn't know that the dynamic limit was only available to tmpfs and new filesystems. My question then becomes "why does tmpfs limit the number of inodes it can hold?" I'm going to update the question with a rationale from my comment to @Gregg Leventhal – quodlibetor Jul 18 '13 at 17:58
  • There is no dynamic limit with modern filesystems like ZFS and btrfs, there is just no limit at all (outside of course the physical available disk space). – jlliagre Jul 18 '13 at 21:12

3 Answers

7

Usually (e.g. ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time, so no mount option can work around it.

Some filesystems, like xfs, expose the proportion of space used by inodes as a tunable, so it can be increased at any time.

Modern file systems like ZFS or btrfs have no hardcoded limit on the number of files they can store; inodes (or their equivalent) are created on demand.
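
To make this concrete, a rough sketch (device and mount point names are placeholders; check your tools' man pages before running anything like this):

# ext4: the inode count is fixed when the filesystem is created
$ mkfs.ext4 -N 2000000 /dev/sdX1                 # ask for an explicit inode count
$ mkfs.ext4 -i 16384 /dev/sdX1                   # or one inode per 16 KiB of space
$ tune2fs -l /dev/sdX1 | grep -i 'inode count'   # inspect what you ended up with
# XFS: the share of space usable for inodes can be raised on a mounted filesystem
$ xfs_growfs -m 10 /mountpoint                   # allow up to 10% of space for inodes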


Edit: narrowing the answer to the updated question.

With tmpfs, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this setting wouldn't be optimal is if a large number of empty files are created on the tmpfs. If you are in that case, the best practice is to adjust the nr_inodes parameter to a value large enough for all the files to fit, but not to use 0 (= unlimited). The tmpfs documentation states this shouldn't be the default setting because of the risk of memory exhaustion by non-root users:

if nr_inodes=0, inodes will not be limited.  It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.

However, it is unclear how this could happen, given that tmpfs RAM usage is limited by default to 50% of the RAM:

size:      The limit of allocated bytes for this tmpfs instance. The 
           default is half of your physical RAM without swap. If you
           oversize your tmpfs instances the machine will deadlock
           since the OOM handler will not be able to free that memory.

Many people will be more concerned with adjusting the default size to an amount that matches what their application demands.
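
If you do need different limits, something along these lines would do it (the mount point and values are placeholders to adapt to the workload):

$ mount -o remount,size=2G,nr_inodes=1000000 /tmp   # adjust a live tmpfs mount
$ df -h /tmp; df -i /tmp                            # confirm the new block and inode limits
# to make it persistent, use an /etc/fstab entry such as:
#   tmpfs  /tmp  tmpfs  size=2G,nr_inodes=1000000  0  0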

Florian Heigl
  • 1,440
  • 12
  • 19
jlliagre
  • 8,691
  • 16
  • 36
0

Like MadHatter said, inodes take up some space, and it isn't a trivial amount when talking about using an infinite number of them.
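
A rough way to see that cost on a disk-backed filesystem (assuming ext4; /dev/sda1 is only a placeholder and the numbers in the comment are just an example):

$ tune2fs -l /dev/sda1 | grep -iE 'inode count|inode size'
# multiplying the two reported values gives the bytes reserved for inode tables
# up front, used or not; e.g. 6,553,600 inodes at 256 bytes each is ~1.6 GiB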

  • They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive. Even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data), refusing to write the remaining 30% because I have too much metadata is strange. – quodlibetor Jul 18 '13 at 17:56
  • No, the number is frozen at file system creation time, for most file systems, as jlliagre says. – MadHatter Jul 20 '13 at 15:45
  • @quodlibetor in some filesystems the inode tables are dynamically handled with little to no overhead, e.g. in VxFS. But it's not common for Linux-centric FSes to have that design, so the omission is not as surprising; furthermore, tmpfs tries to be very simplistic, and inode tricks might violate that design goal. – Florian Heigl Mar 26 '20 at 00:17
0

The memory consumption for tmpfs inodes is not counted towards the allocated blocks of the mount. There can be no tmpfs usage that is "90% inodes"; only the "real" data is counted.

A tmpfs mount of any size will continue to appear "empty" as long as none of its files contain any bytes. Total memory consumption for maintaining the mount can greatly exceed any size limit.

$ find /mnt | wc -l
60000
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
tmpfs      4.0K    0  4.0K   0% /mnt

Therefore, to prevent memory exhaustion, both size= and nr_inodes= have to be limited. If a very large or no inode limit were set by default, a runaway process might stall the system without making the source of the issue easy to determine.
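
One way to watch that hidden consumption (a sketch; reading /proc/slabinfo usually needs root, and shmem_inode_cache is where tmpfs inodes normally end up):

$ grep MemAvailable /proc/meminfo               # note the value before
$ seq -f '/mnt/f%g' 1 60000 | xargs touch       # 60000 empty files on the tmpfs
$ df -h /mnt                                    # still reports 0 used
$ sudo grep shmem_inode_cache /proc/slabinfo    # the inode objects pile up in this slab
$ grep MemAvailable /proc/meminfo               # and available memory has dropped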

anx
  • 6,875
  • 4
  • 22
  • 45
  • I have been unable to mount any tmpfs with nr_inodes set below 2 or above 2^31-1. – anx Jun 10 '19 at 09:50
  • 1
    Bingo, this should be the correct answer, especially the words in **bold**. I was having the same doubt as the OP and this solved mine. This can be verified by `touch /tmp/empty_files_0{0000..9999}`: the `df` used size never increases. However, in the case of tmpfs, the memory actually used does increase and can be verified with `free`. In the case of /tmp we tend to allow many processes to write into it (rather loose in security). Even if we limit /tmp usage to only 50% of RAM, an evil/runaway process may use up 100% of RAM by creating empty files. So even if I am using ZFS, it is also wise to set nr_inodes. – midnite Apr 11 '22 at 11:43