
My web server/app server (Apache/Tomcat) running on CentOS hung this morning. At the time of the hang I noticed a large number of sockets in either the TIME_WAIT or CLOSE_WAIT state. I'm trying to figure out how to more definitively determine whether the hang was caused by hitting a maximum number of file descriptors, and if so, whether it was a per-process limit, a per-user limit, or an overall OS limit. What is the best way to make that determination?
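For context, a rough sketch of how one might snapshot the situation at hang time (assuming standard CentOS tools; `<tomcat-pid>` is a placeholder for the actual PID):

```
# Tally sockets by TCP state (TIME_WAIT, CLOSE_WAIT, etc.)
netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn

# Count the file descriptors currently held by the Tomcat process
# (find the PID first, e.g. with `pgrep -f tomcat`)
ls /proc/<tomcat-pid>/fd | wc -l
```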

tvfoodmaps
  • possible duplicate of [Huge amount of TIME_WAIT connections says netstat](http://serverfault.com/questions/23385/huge-amount-of-time-wait-connections-says-netstat) – Michael Hampton Feb 06 '13 at 19:36
  • This question is a bit different: I'm trying to figure out how I can determine IF the file descriptor limit is actually being hit, or if I'm seeing another symptom. – tvfoodmaps Feb 06 '13 at 19:58

1 Answer

```
sysctl fs.file-nr
```

This command returns three numbers: the first is the number of allocated (in-use) file handles, the second is the number of allocated but unused file handles, and the third is the system-wide maximum number of file handles.
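For example (the values below are purely illustrative; on 2.6+ kernels the second number is typically 0):

```
$ sysctl fs.file-nr
fs.file-nr = 1632	0	98560
```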

Another way of checking per-process fd information is:

```
cat /proc/<pid>/limits
```

You can check all kinds of limits for that process there, including the maximum number of open files.
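To see how close a process is to that limit, one approach is to compare the limit against the number of entries in `/proc/<pid>/fd` (a sketch; `<pid>` is a placeholder, and reading the fd directory requires running as root or as the process owner):

```
# The per-process ceiling on open files
grep 'Max open files' /proc/<pid>/limits

# How many fds the process actually has open right now
ls /proc/<pid>/fd | wc -l
```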

This might be a good place to start investigating fd-related issues.

APZ
  • Thanks, the system numbers don't look to be the issue. I'm more concerned with the per-user or per-process numbers. However cat/prod/pid/l – tvfoodmaps Feb 07 '13 at 02:57
  • @tvfoodmaps `/proc/pid/limits` would contain what you're looking for (the per-process limit). Also, normally if you hit a limit and the process gets killed because of it, a log entry gets made (check around in `/var/log`; see the sketch below). – voretaq7 Feb 07 '13 at 17:37
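Following up on the log suggestion, one way to search for fd-exhaustion messages (the paths below are typical defaults and are only a guess; adjust to your installation):

```
# Search the system log and Tomcat's own logs for the classic symptom.
grep -ri 'too many open files' /var/log/messages /usr/share/tomcat*/logs/ 2>/dev/null
```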