We've been hitting the max open files limit on a few services recently, and there are a number of other limits in place as well. Is there a way to monitor how close processes are to these limits, so we can be alerted when it's time to either raise the limits or fix the root cause? Along the same lines, is it possible to view a log of these events, so that when a crash occurs we know whether it was caused by hitting one of these limits?
2 Answers
7
Assuming a Linux server.
see global max open files:
cat /proc/sys/fs/file-max
see global current open files:
cat /proc/sys/fs/file-nr
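The first field of file-nr is the number of allocated file handles and the last is the maximum, so a minimal sketch like this (the 90% threshold is only an example value) can warn when global usage gets close to the limit:
# sketch: warn when allocated handles exceed 90% of fs.file-max
read allocated unused max < /proc/sys/fs/file-nr
if [ "$allocated" -ge $(( max * 90 / 100 )) ]; then
    echo "WARNING: $allocated of $max file handles in use"
fi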
change global max open files:
sysctl -w fs.file-max=1000000, or edit /etc/sysctl.conf to make it persistent
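For example, a persistent setting would look like this in /etc/sysctl.conf (the value is only illustrative), reloaded with sysctl -p:
# /etc/sysctl.conf
fs.file-max = 1000000
# then apply it:
sysctl -p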
see limit on current user:
ulimit -Hn (hard limit)
ulimit -Sn (soft limit)
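Note that ulimit reports the limits of the current shell; for an already running process you can read its effective limits straight from /proc (<pid> is a placeholder here):
cat /proc/<pid>/limits
grep 'Max open files' /proc/<pid>/limits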
change limit on user:
edit /etc/security/limits.conf
for example, to set limits for a web server:
apache2 soft nofile 4096
apache2 hard nofile 10000
You can have a script go through /proc
and give you some statistics:
for pid in $(pgrep apache2); do echo "$pid: $(ls /proc/$pid/fd | wc -l)"; done
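To get closer to the alerting the question asks for, a rough sketch along these lines (the apache2 process name and the 80% threshold are only examples) compares each process's open descriptor count against its own soft limit from /proc/<pid>/limits:
# run as root (or the service user) so /proc/$pid/fd is readable
for pid in $(pgrep apache2); do
    open=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
    limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
    if [ "$open" -ge $(( limit * 80 / 100 )) ]; then
        echo "WARNING: pid $pid uses $open of $limit file descriptors"
    fi
done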
5
Yes, you can. /proc/<pid>/fd lists all open file descriptors, and memory usage can be seen in /proc/<pid>/maps and /proc/<pid>/smaps. Just browse through /proc/<pid> a bit and google the files you don't understand :)
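For instance, a rough per-process memory figure can be pulled out of smaps by summing the Pss fields (again, <pid> is a placeholder):
awk '/^Pss:/ {sum += $2} END {print sum " kB"}' /proc/<pid>/smaps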
Dennis Kaarsemaker