
I got the seemingly-common "too many file descriptors" error on nginx. After much searching, the solution is clearly to increase the number of file descriptors available to nginx. But there isn't enough info out there for me to feel comfortable doing this in a meaningful and safe way. Here are the main points that most forum/email threads cover:

  • the OS has its own total file descriptor limit (on my system, cat /proc/sys/fs/file-max outputs "100678"); commands for checking each of these limits are after this list
  • each user can have their own limit too (but on my system, running ulimit as any user outputs "unlimited"; see the update at the bottom for more detail)
  • a few people said something along the lines of what this person said: 'Directive worker_rlimit_nofile doesn't specify "how many", it is the operating system limit which does. Directive worker_rlimit_nofile just allows a quick-and-dirty way to enlarge this limit if it's not enough.' So I guess the implication is that it's "better" to set the limit for the nginx OS user instead of in the config?
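
To see where each of those limits currently stands, here are the standard checks (nothing here is nginx-specific):

#System-wide cap on open files, across all processes
cat /proc/sys/fs/file-max

#Soft and hard per-process limits for the current user's shell
ulimit -Sn
ulimit -Hn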

I can just throw in a worker_rlimit_nofile value greater than the number of connections per worker and call it a day, but I feel I don't really know what's going on here.
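
For reference, here's a minimal sketch of how the relevant directives relate in nginx.conf (the numbers are illustrative, not recommendations):

#/etc/nginx/nginx.conf (illustrative values)
worker_processes 4;

#Per-worker cap on open file descriptors; when set, nginx raises
#the limit itself rather than relying on what it inherited
worker_rlimit_nofile 8192;

events {
    #Max simultaneous connections per worker; each needs at least
    #one file descriptor, and two or more when proxying
    worker_connections 4096;
}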

  • Why would the limit per worker be less than the OS limit?
  • How do I find out what my limit is now?

update: for both root and a normal user, bare ulimit outputs "unlimited", BUT ulimit -Hn and ulimit -Sn both output 1024 (bare ulimit reports the maximum file size, not the open-file limit, which is why it says "unlimited")

John Bachir

2 Answers


worker_rlimit_nofile sets the limit on file descriptors for the worker processes, as opposed to the user running nginx. If other programs running under this user would not be able to gracefully handle running out of file descriptors, you should set this limit slightly lower than the user's limit.

First, what is using your file descriptors?

  1. Each active connection to a client
  2. Using proxy_pass? That will open a socket to the host:port handling those requests (see the rough budget after this list)
  3. Using proxy_pass to a local port? That's another open socket (for the owner of that process)
  4. Static files being served by nginx
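
As a rough back-of-the-envelope, assuming the worst case where every client connection is proxied to an upstream:

#Rough per-worker file descriptor budget (illustrative numbers):
#  1024 client connections          -> 1024 fds
#  1024 proxy_pass upstream sockets -> 1024 fds
#  log files, static files, etc.    -> some headroom
#So when proxying, set the limit comfortably above
#2 x worker_connections:
worker_rlimit_nofile 4096;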

Why would the limit per worker be less than the OS limit?

This is controlled by the OS because the worker is not the only process running on the machine. To change it for the user running nginx, see below. It would be very bad if your workers used up all of the file descriptors available to all processes, so don't set your limits in a way that makes that possible.

#/etc/sysctl.conf
#This sets the value you see when running cat /proc/sys/fs/file-max
fs.file-max = 65536


#/etc/security/limits.conf
#This sets the defaults for all users
* soft nofile 4096
* hard nofile 4096

#This overrides the default for user `usernamehere`
usernamehere soft nofile 10240
usernamehere hard nofile 10240

After those security limit changes, I believe I still had to increase the soft limit for the user using ulimit.
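
Something like this in the shell (or init script) that launches nginx, so the workers inherit the new limit; the init script path is just an example:

#Raise the soft limit for this session (it cannot exceed the hard limit)
ulimit -n 10240

#Restart nginx from the same session so the workers inherit it
/etc/init.d/nginx restart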

How do I find out what my limit is now?

ulimit -a will display all the limits associated with the user you run it as.
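
To check the limits as they apply to the nginx user specifically (assuming the workers run as www-data; substitute your own user):

#Run ulimit as the nginx user; -s /bin/sh is needed when that
#account's login shell is set to nologin
su - www-data -s /bin/sh -c 'ulimit -Hn; ulimit -Sn'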

Dan R
    Thanks -- now that I've upped the file descriptor limit, I'm running out of connections. Maybe you can help me with that too :) http://serverfault.com/questions/209014/how-can-i-observe-what-nginx-is-doing-to-solve-1024-worker-connections-are-no – John Bachir Dec 04 '10 at 00:50
    Note to CentOS / Fedora users, if you have SELinux enabled, you will need to run `setsebool -P httpd_setrlimit 1` so that nginx has permissions to set its rlimit. – Jarrett Apr 02 '15 at 21:03

I'd have to check the source to be honest, but it's fairly low.

I used worker_rlimit_nofile 15000; and had no issues. You can safely increase it, though the chance of running out of file descriptors is minuscule.
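
Whatever value you pick, it's worth confirming that the running workers actually got it (the PID below is a stand-in):

#List nginx processes to find a worker PID
ps -C nginx -o pid=,cmd=

#Inspect that worker's effective limit
grep 'Max open files' /proc/1234/limits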

Martin Fjordvald