
On Ubuntu Precise, I'm low on space in /run:

admin@foo:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        19G  6.6G   12G  38% /
udev             10M  8.0K   10M   1% /dev
none             50M   40M   11M  79% /run
none            5.0M     0  5.0M   0% /run/lock
none            249M     0  249M   0% /run/shm

Should I allocate more? How?

EDIT: Here's my fstab:

admin@foo:~$ cat /etc/fstab
proc            /proc       proc    defaults    0 0
/dev/sda1       /           ext3    defaults,errors=remount-ro,noatime    0 1
/dev/sda2       none        swap    sw          0 0
  • [Related answer](http://askubuntu.com/a/183224/28369) on AU that presents a workaround using `mount` in `/etc/rc.local`. – lgarzo Jan 26 '13 at 20:43
  • @lgarzo: While it seems strange to configure the size in that script, the question and answer you posted discuss the relatively small size of /run and one user's way to increase it. Yours is the best answer yet; please make it an answer so I can accept it. – Brian Jan 26 '13 at 21:08

5 Answers


In a post on Ask Ubuntu, korrident suggested a possible workaround:

Adding a mount command to the /etc/rc.local file:

mount -t tmpfs tmpfs /run -o remount,size=85M

Make sure that the script ends with "exit 0" on success, or any other value on error. (That note is quoted from the file itself.)
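Putting it together, /etc/rc.local might end up looking like the sketch below. The 85M value is just the size suggested above; adjust it to your needs. (Requires root to actually run, since it remounts /run.)

```shell
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.

# Enlarge the /run tmpfs in place; the size value is an example.
mount -t tmpfs tmpfs /run -o remount,size=85M

exit 0
```

Remember that /etc/rc.local must be executable (`chmod +x /etc/rc.local`) for it to run at boot.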

lgarzo
  • On Ubuntu or Debian you can use this command to change the size without a reboot: `mount -o remount,size=2G,noatime /run` – James M Jan 03 '18 at 08:54
  • `tmpfs` "_has maximum size limits which can be adjusted on the fly via `mount -o remount ...`_". Also, "_tmpfs has three mount options for sizing_" where **size** is "_The limit of allocated bytes for this tmpfs instance. The default is half of your physical RAM without swap. If you oversize your tmpfs instances the machine will deadlock since the OOM handler will not be able to free that memory._" (https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt) – toraritte Jun 11 '20 at 15:32
  • Hi, I have the same problem, but after resizing, /run immediately fills up again, even though `df -h` shows only 2 MB used. However, `mc` shows usage of 10 GB out of 10 GB, and PostgreSQL is unable to start because it can't create its PID file. – Hayate Sep 15 '22 at 04:17

I do not think that increasing the size of /run is necessary, but if you do need to increase it, try editing your /etc/fstab file; all mount points and most partitions are listed there. If your /run partition is a tmpfs (which it should be, at least according to https://askubuntu.com/questions/57297/why-has-var-run-been-migrated-to-run; I would confirm before following these instructions), then you can simply change the fstab line for your /run mount to something akin to the following:

none /run tmpfs defaults,size=8G 0 0

See how the size is declared right after defaults? Try doing that. You can use megabytes as well by using M:

none /run tmpfs defaults,size=100M 0 0

Reboot the computer after this and the changes should take effect.

Edit: Scratch that; it looks like Ubuntu creates the /run mount using scripts in /etc/init and /etc/init.d, not via fstab. You'd have to look through those files, find the mount command it uses to create /run, and edit it manually. I don't have a box to test this on right now, but try running this:

find /etc/init* -type f | xargs grep "mount"

OR

find /etc/init* -type f | xargs grep "run"

If it's being mounted via a bash script then this should find the file and line that does the mounting.


Temporarily increase the tmpfs filesystem size

1) Open /etc/fstab with vi or any text editor of your choice,

2) Locate the line of /dev/shm and use the tmpfs size option to specify your expected size,

e.g. 512MB:
tmpfs      /dev/shm      tmpfs   defaults,size=512m   0   0

e.g. 2GB:
tmpfs      /dev/shm      tmpfs   defaults,size=2g   0   0

then apply the change with:

mount -o remount /dev/shm
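Because this is a remount of a live tmpfs, the change applies immediately with no reboot; you can confirm the new size with a read-only check:

```shell
# The Size column should now show the value set in fstab.
df -h /dev/shm
```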
stambata
Mansur Ul Hasan

I had this error due to a journald.conf misconfiguration: I had set Storage=volatile and RuntimeMaxUse=1G while /run was only 200M in size. The /run partition is an in-memory partition; Storage=volatile tells journald to store logs in memory under /run/log/journal, and with RuntimeMaxUse=1G the logs easily overflow the 200M available.

The options prefixed with "System" apply to the journal files when stored on a persistent file system, more specifically /var/log/journal. The options prefixed with "Runtime" apply to the journal files when stored on a volatile in-memory file system, more specifically /run/log/journal.

-- From the journald.conf documentation

Solution for me was to configure journald.conf with:

[Journal]
Storage=persistent
RuntimeMaxUse=50M
SystemMaxUse=1G
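For the new limits to take effect, journald has to re-read its configuration. A sketch of the follow-up steps (requires root; the vacuum option needs a reasonably recent systemd):

```shell
# Restart journald so it picks up the edited journald.conf.
sudo systemctl restart systemd-journald

# Optionally shrink the already-written journals down toward the new limit.
sudo journalctl --vacuum-size=50M

# Check how much space the journals occupy now.
journalctl --disk-usage
```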

More details:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           199M  199M    0M 100% /run

$ du -sh /run/log/journal
199.0M  /run/log/journal

$ cat /etc/systemd/journald.conf

[Journal]
Storage=volatile
RuntimeMaxUse=1G

This doesn't strictly answer the question as asked because this feature wasn't in Ubuntu 12.04, but in case it helps people with similar questions, as of Debian buster or Ubuntu 18.10 you can use the initramfs.runsize= boot parameter; the default is initramfs.runsize=10%, but you might use e.g. initramfs.runsize=20% or initramfs.runsize=128M instead.

This feature was added in response to Debian bug #862013.
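On a GRUB-based system, one way to set this parameter persistently is to append it to the kernel command line in /etc/default/grub and regenerate the config. A sketch, reusing the 128M example value from above:

```shell
# /etc/default/grub -- append the parameter to the existing default line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash initramfs.runsize=128M"
```

Then run `sudo update-grub` and reboot for the new /run size to apply.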