
We recently began load testing our application and noticed that it ran out of file descriptors after about 24 hours.

We are running RHEL 5 on a Dell 1955:

CPU: 2 x Dual Core 2.66GHz 4MB 5150 / 1333FSB RAM: 8GB RAM HDD: 2 x 160GB 2.5" SATA Hard Drives

I checked the file descriptor limit and it was set at 1024. Considering that our application could potentially have about 1000 incoming connections as well as 1000 outgoing connections, this seems quite low, not to mention any actual files that need to be opened.
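
In case it matters, this is roughly how I checked; -Sn reports the soft limit and -Hn the hard limit:

ulimit -Sn
ulimit -Hn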

My first thought was to just increase the ulimit -n parameter by a few orders of magnitude and then re-run the test, but I wanted to know whether there are any potential ramifications of setting this value too high.

Are there any best practices for setting this, other than figuring out how many file descriptors our software can theoretically open?

Kevin

5 Answers


These limits came from a time when multiple "normal" users (not apps) would share the server, and we needed ways to protect them from using too many resources.

They are very low for high-performance servers, and we generally set them to a very high number (24k or so). If you need higher numbers, you also need to change the sysctl fs.file-max option (generally limited to 40k on Ubuntu and 70k on RHEL).

Setting ulimit:

# ulimit -n 99999

Sysctl max files:

# sysctl -w fs.file-max=100000
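
To make these survive a reboot (a sketch; "appuser" and the numbers here are placeholders for your own setup), the per-process limit usually goes in /etc/security/limits.conf, which pam_limits applies at login, and the system-wide cap goes in /etc/sysctl.conf, which is read at boot (or reloaded immediately with sysctl -p):

In /etc/security/limits.conf:

appuser soft nofile 24576
appuser hard nofile 32768

In /etc/sysctl.conf:

fs.file-max = 100000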

Also, and very importantly, check whether your application has a memory or file descriptor leak. Use lsof to see everything it has open and whether those descriptors are valid. Don't try to change your system to work around application bugs.
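
A rough way to watch for a leak (the PID here is a placeholder): sample the process's descriptor count over time and see whether it only ever grows or whether it expands and contracts with load.

lsof -p 1234 | wc -l
ls /proc/1234/fd | wc -l

The second form counts only real descriptors; lsof also lists things like memory-mapped files and the working directory, so its number runs a little higher.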

sucuri
  • @sucuri Thanks. We're definitely concerned about resource leaks, but that doesn't seem to be the case. We've been watching both lsof and netstat, and while the numbers are high, they don't keep growing; they expand and contract. I expect that if there were a leak, the number of open sockets or descriptors would continue to grow over time. – Kevin Aug 01 '09 at 13:59
  • The `ulimit` limit is not per user, but per process! See http://unix.stackexchange.com/questions/55319/are-limits-conf-values-applied-on-a-per-process-basis And the `fs.file-max` setting is for the server as a whole (so all processes together). – Læti Mar 28 '14 at 15:44

You could always just

cat /proc/sys/fs/file-nr

during the 'high load' situation to see how many file descriptors are in use.
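
The three numbers in file-nr are, in order, the number of allocated file handles, the number allocated but unused, and the system-wide maximum (the same value as fs.file-max). Assuming `watch` is available, something like this keeps an eye on it while the load test runs:

watch -n 5 cat /proc/sys/fs/file-nr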

As to a maximum - it just depends on what you are doing.

Ben Lessani

If the file descriptors are TCP sockets, etc., then you risk using up a large amount of memory for the socket buffers and other kernel objects; this memory is not going to be swappable.

But otherwise, no, in principle there should be no problem. Consult the kernel documentation to try to work out how much kernel memory it will use, and/or test it.
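
A rough way to see what the kernel is currently spending on sockets, using nothing beyond /proc and sysctl:

cat /proc/net/sockstat
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.ipv4.tcp_mem

The first shows socket counts and TCP memory usage (in pages); the second shows the per-socket buffer bounds and the overall TCP memory thresholds the kernel works within.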

We run database servers with ~10k file descriptors open (mostly on real disk files) without a major problem, but they are 64-bit and have loads of RAM.

The ulimit setting is per-process, but there is a system-wide limit as well (32k, I think, by default).

MarkR

I am not personally aware of any best practices. It's somewhat subjective depending on system function.

Remember that the 1024 you're seeing is a per-user limit and not a system-wide limit. Consider how many applications you run on this system. Is this the only one? Is the user that runs this application doing anything else? (i.e., do you have humans using this account to log in and run scripts which may potentially run away?)

Given that the box is only running this one application and the account running said application is for that purpose only, I see no harm in increasing your limit as you suggest. If it's an in-house dev team, I would ask for their opinion. If it's from a third-party vendor, they may have specific requirements or recommendations.
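
If you want a quick sanity check on what that account is actually holding open besides your application (the user name is a placeholder), something like this groups its open descriptors by command:

lsof -u appuser | awk '{print $1}' | sort | uniq -c | sort -rn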

Grahamux
  • @Grahamux The system is dedicated to this application and the user that runs the application only runs this application. I'm part of the in-house dev team, so no help there. – Kevin Jul 31 '09 at 23:20
  • The limit is not per user, but per process. See http://unix.stackexchange.com/questions/55319/are-limits-conf-values-applied-on-a-per-process-basis – Læti Mar 28 '14 at 15:39

This seems to me one of those questions best answered with "test it in a development environment". I remember years ago Sun got nervous when you messed with this, but not that nervous. Its limit at the time was also 1024, so I'm a little surprised to see that it's the same now for Linux; it seems like it ought to be higher.

I found the following link educational when I googled for answers to your question: http://www.netadmintools.com/art295.html

And this one also: https://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible

Kyle