We have Heroku drains dumping to a single log server. A large number of instances (~100) send their logs to this server over TCP. The server filters the logs by hostname and writes them into a directory named after that hostname. Everything works fine for a while, but when I check the open descriptors with lsof, the count stops growing once it reaches 1047. In addition to the per-hostname log files, every log message is also appended to a syslog.log file, and that file continues to be updated after the 1047th FD is opened. That is why I think the problem is that the process cannot open any more file descriptors. How can I fix this?
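For reference, a quick way to compare the live descriptor count against the per-process limit (assuming the log server's PID is known; <pid> below is a placeholder). Counting /proc/<pid>/fd is a bit more precise than counting lsof lines, since lsof also lists things like memory maps:

# descriptors currently open by the log server process
ls /proc/<pid>/fd | wc -l

# the soft/hard "Max open files" limit applied to that process
grep "open files" /proc/<pid>/limits

# the soft limit that processes started from this shell inherit
ulimit -n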
pmilb
1 Answer
This is a very common problem. The default per-process limits that most systems ship with out of the box don't make much sense for modern hardware.
The man page for limits.conf on your OS should point you in the correct direction.
In many Linux distributions the file for setting these limits is
/etc/security/limits.conf
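As a rough sketch, assuming the log server runs under a dedicated account called logserver (substitute whatever user it actually runs as), the entries would look something like this:

# /etc/security/limits.conf
# <domain>    <type>    <item>     <value>
logserver     soft      nofile     65536
logserver     hard      nofile     65536

These limits are applied by pam_limits when a new session starts, so the log server has to be restarted from a fresh login for them to take effect; ulimit -n in that session will confirm the new soft limit. If the process is launched directly by init rather than through a PAM session, you may instead need to raise the limit in its init script (for example by calling ulimit -n before starting it).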
This question addresses the more general problem.
Practical maximum open file descriptors (ulimit -n) for a high volume system
Fred the Magic Wonder Dog