
I am using Ubuntu 12.04 and can't write to any file, even as root, or do any other operation that requires writing. Neither can any process that needs to write, so they're all failing. df says I've got plenty of room:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       30G   14G   15G  48% /
udev            984M  4.0K  984M   1% /dev
tmpfs           399M  668K  399M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            997M     0  997M   0% /run/shm

All of the results I find for "can't write to disk" are about legitimately full disks. I don't even know where to start here. The problem appeared out of nowhere this morning.

PHP's last log entry is:

failed: No space left on device (28)

Vim says:

Unable to open (file) for writing

Other applications give similar errors.

After deleting ~1 GB just to be sure, the problem remains. I've also rebooted.

df -i says:

Filesystem      Inodes   IUsed  IFree IUse% Mounted on
/dev/xvda1     1966080 1966080      0  100% /
udev            251890     378 251512    1% /dev
tmpfs           255153     296 254857    1% /run
none            255153       4 255149    1% /run/lock
none            255153       1 255152    1% /run/shm
Giacomo1968
felwithe

2 Answers


You are out of inodes. It's likely that you have a directory somewhere with many very small files.
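To find where the inodes went, counting entries per directory usually turns up the culprit quickly. A sketch (the helper name `inode_hogs` is made up for illustration; on an Ubuntu 12.04 box, PHP session files typically live under `/var/lib/php5`, so that's a likely first suspect):

```shell
# Print the ten directories with the most entries under the given path,
# staying on one filesystem (-xdev). Every file, link, and subdirectory
# costs an inode, so the top of this list is the likely inode hog.
inode_hogs() {
    find "${1:-.}" -xdev -type d 2>/dev/null | while IFS= read -r d; do
        printf '%7d %s\n' "$(ls -A "$d" 2>/dev/null | wc -l)" "$d"
    done | sort -rn | head
}
# run as root against the affected filesystem, e.g.:  inode_hogs /
```

Note this only counts direct entries per directory, which is exactly what you want here: two million session files in one directory show up as one huge number at the top of the list.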

EEAA
  • This looks like it was it, thanks very much. The problem turned out to be runaway PHP session files. There were so many that `rm sess*` wouldn't even work, and I'm deleting them a chunk at a time with `rm sess_a*`, `rm sess_b*`, etc. – felwithe Oct 25 '15 at 16:47
  • Just wanted to add that I didn't even know `rm` _could_ fail. This has been an education. – felwithe Oct 25 '15 at 18:23
  • @felwithe: you could try to do a `rsync -a --delete /yourdirfulloffiles /anemptydir` – WoJ Oct 25 '15 at 19:22
  • @felwithe, I can imagine that `find . -name sess\* -exec rm {} +` would have worked. – Carsten S Oct 25 '15 at 21:51
  • @felwithe What others have suggested. `rm` *probably* worked fine, but the shell expanded the `*` glob into far too much data, and barfed before it even got to the point of *invoking* rm. – user Oct 26 '15 at 08:20
  • @CarstenS: Or `find . -name sess\* -delete` which I find easier to remember, and is generally more efficient. – MSalters Oct 26 '15 at 09:01
  • @MSalters, thanks, you are right of course. I was hunting for that option in the man page but somehow could not find it. (So it is not necessarily easier to remember ;) – Carsten S Oct 26 '15 at 11:55
  • @felwithe It's not rm failing, but bash. The star is being converted by bash into an argument list in memory to pass into rm. The entire argument list is stored in memory at one time, so with two million files, it's likely exceeding a static buffer size. `rm -rf` on the directory will handle it just fine. – Kaslai Oct 26 '15 at 16:08
  • @Kaslai the limit there is not RAM, but the system limit ARG_MAX. The POSIX standard doesn't specify precisely how command line arguments are measured against ARG_MAX unfortunately. Some implementations have no limit and so do not define ARG_MAX, but this is not a popular option as it makes too many programs fail to compile. – James Youngman Oct 26 '15 at 18:18
  • @JamesYoungman I'm sure you stated that for the benefit of others, but to clarify, I did say `exceeding a static buffer size`, which by no means implicates the exhaustion of system memory :) – Kaslai Oct 26 '15 at 18:49
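The comments above boil down to one point: `rm sess_*` fails before rm ever runs, because the shell must expand the glob into a single argument list bounded by the kernel's ARG_MAX, while `find … -delete` unlinks each match as it walks the tree and never builds that list. A small demonstration in a throwaway directory (file count shrunk from millions to 500 so it runs instantly):

```shell
# Create a directory full of fake session files, then delete them without
# any shell glob expansion: find unlinks matches one by one, so no giant
# argument list is ever built and ARG_MAX never comes into play.
demo=$(mktemp -d)
i=1
while [ "$i" -le 500 ]; do touch "$demo/sess_$i"; i=$((i + 1)); done
find "$demo" -name 'sess_*' -delete
left=$(ls -A "$demo" | wc -l)
echo "files remaining: $((left))"    # prints 0
rm -rf "$demo"
```

Where `rm` itself must be used (e.g. an old `find` without `-delete`), `find "$demo" -name 'sess_*' -print0 | xargs -0 rm` achieves the same thing by batching arguments below the ARG_MAX limit.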

Apparently, the OP has an answer for their particular problem. However, for completeness, the OP's symptoms can also occur if the filesystem has been remounted read-only. This has happened to me using a Linux VM whose storage was on a clustered disk system suffering rare intermittent faults. Occasionally, the faults would cause the filesystem(s) to be remounted read-only. The eventual externally observable symptom was various services becoming unresponsive as RAM filled with unflushable disk writes.

At the time, the only resolution was to reboot the system (losing whatever unwritten logs there were). Attempts to remount RW failed. (Unfortunately, I do not recall the error messages returned when attempting these remounts.)

So: not the OP's problem, but someone else arriving at this page may benefit from this information.
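To rule this case in or out, checking the mount flags directly is more reliable than guessing from application error text. A sketch, assuming a Linux guest like the one in the question:

```shell
# A forced read-only remount shows up as "ro" in the root filesystem's
# option list in /proc/mounts, while inode exhaustion leaves the mount
# "rw" but shows IUse% at 100% in df -i.
awk '$2 == "/" { print $1, $4 }' /proc/mounts
df -i /
# ext3/ext4 log the forced remount, so the kernel log is worth a look too:
dmesg 2>/dev/null | grep -i 'read-only' || true
```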

Eric Towers
  • No actually; when the filesystem has been remounted read only you get an error that states the filesystem is read only, not out of space. – psusi Oct 25 '15 at 23:18
  • @psusi : I did not. I got various errors, including "filesystem full". If that has changed in the last two or three years, that would be a good thing. – Eric Towers Oct 26 '15 at 03:53
  • I tried to move a file into a read-only ZFS file system on Linux just the other day. The error quite clearly said "read-only file system". – user Oct 26 '15 at 08:22
  • Nope; been that way for 30+ years. A write to a read only fs returns -EROFS; a write to a full fs returns -ENOSPC. – psusi Oct 26 '15 at 17:43
  • @psusi : Well, if I saw OS return codes, that might matter. As I see userland error messages, the mapping is not reliably bijective. – Eric Towers Oct 26 '15 at 21:56
  • Actually the mapping is quite linear and reliable. Each error code translates to a specific string when handed to perror(), hence, you get the appropriate error message telling you whether the disk is full, or simply mounted read only. – psusi Oct 27 '15 at 01:15
  • @psusi : I see that you live in the fantasy universe where programmers always do the right thing instead of making up their own error messages. I don't seem to live there. – Eric Towers Oct 27 '15 at 04:39
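For what it's worth, the two failure modes debated above are distinct at the errno level: a write to a read-only mount returns EROFS, a write to a full (or inode-exhausted) filesystem returns ENOSPC, and `perror()` maps each to its own standard string. A quick way to see the strings (a sketch; it leans on python3 for `strerror`, since POSIX sh has no built-in equivalent):

```shell
# Print the standard perror()-style message for each errno value. The PHP
# log line in the question, "No space left on device (28)", is ENOSPC.
for e in EROFS ENOSPC; do
    python3 -c "import errno, os; print('$e:', os.strerror(getattr(errno, '$e')))"
done
```

Whether an application surfaces these strings verbatim, as Eric Towers points out, is another matter entirely.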