35

Okay, this is creeping me out - I see about 1500-2500 of these:

root@wherever:# netstat

Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 localhost:60930         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60934         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60941         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60947         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60962         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60969         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60998         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60802         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60823         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60876         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60886         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60898         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60897         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60905         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60918         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60921         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60673         localhost:sunrpc        TIME_WAIT  
tcp        0      0 localhost:60680         localhost:sunrpc        TIME_WAIT  
[etc...]

root@wherever:# netstat | grep 'TIME_WAIT' |wc -l
1942

That number is changing rapidly.

I do have a pretty tight iptables config, so I have no idea what could be causing this. Any ideas?

Thanks,

Tamas

Edit: Output of 'netstat -anp':

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:60968         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60972         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60976         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60981         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60980         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60983         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60999         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60809         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60834         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60872         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60896         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60919         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60710         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60745         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60765         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60772         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60558         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60564         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60600         127.0.0.1:111           TIME_WAIT   -               
tcp        0      0 127.0.0.1:60624         127.0.0.1:111           TIME_WAIT   -               
KTamas
  • Do you have something NFS mounted on the same machine that is exporting it? – Paul Tomblin Jun 10 '09 at 14:38
  • @Paul Tomblin: No. – KTamas Jun 10 '09 at 14:39
  • Well, you should look at the established connections to find out which program it is. "rpcinfo -p" can also help to find out what is communicating with the portmapper. – cstamas Jun 10 '09 at 17:21
  • For those who find their way here while trying to adjust the delay under Windows, [it can be done](http://publib.boulder.ibm.com/infocenter/cicstg/v6r0m0/index.jsp?topic=%2Fcom.ibm.cicstg600.doc%2Fccllal0264.htm) via a [registry setting](http://msdn.microsoft.com/en-us/library/aa560610.aspx). – Synetech Mar 24 '13 at 00:53

6 Answers

29

EDIT: tcp_fin_timeout DOES NOT control TIME_WAIT duration; that is hardcoded at 60s

As mentioned by others, having some connections in TIME_WAIT is a normal part of the TCP connection lifecycle. You can see the interval by examining /proc/sys/net/ipv4/tcp_fin_timeout:

[root@host ~]# cat /proc/sys/net/ipv4/tcp_fin_timeout
60

And change it by modifying that value:

[root@dev admin]# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Or change it permanently by adding it to /etc/sysctl.conf:

net.ipv4.tcp_fin_timeout=30
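
If you edit /etc/sysctl.conf, the value can also be applied right away without a reboot. A quick sketch, assuming the standard sysctl utility is available:

[root@dev admin]# sysctl -w net.ipv4.tcp_fin_timeout=30
[root@dev admin]# sysctl -p

Here sysctl -w sets a single value directly, and sysctl -p re-reads /etc/sysctl.conf.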

Also, if you don't use the RPC service or NFS, you can just turn it off:

/etc/init.d/nfsd stop

And prevent it from starting at boot:

chkconfig nfsd off
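
In this particular question the churn is on sunrpc (the portmapper) rather than NFS itself, so on a SysV-init system the analogous commands would target the portmap service instead. A sketch; the service may be named portmap or rpcbind depending on the distribution:

/etc/init.d/portmap stop
chkconfig portmap off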
Greg Bray
Brandon
  • Yeah, my ipconfig script already lowers it to 30. I don't have nfsd in /etc/init.d/, but I did have portmap running; I stopped it, and now the TIME_WAITs are down to a few instances (1-5). Thanks. – KTamas Jun 10 '09 at 18:34
  • Uhh, tcp_fin_timeout has nothing to do with sockets in the TIME_WAIT state. That affects FIN_WAIT_2. – diq Jun 10 '09 at 19:16
  • +1 for diq's comment. They're not related. – mcauth Aug 02 '14 at 20:27
  • Correct... you can see the sockets count down from 60 even if tcp_fin_timeout is changed, using `ss --numeric -o state time-wait dst 10.0.0.100` – Greg Bray Dec 06 '18 at 02:25
27

TIME_WAIT is normal. It's a state a socket enters after it has closed, used by the kernel to keep track of packets that may have been lost and turned up late to the party. A high number of TIME_WAIT connections is simply a symptom of lots of short-lived connections, and nothing to worry about.
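
If you want to see where all those short-lived connections are going, a rough one-liner like this (assuming netstat and awk are available) counts TIME_WAIT sockets per foreign address:

# count TIME_WAIT sockets grouped by foreign address
netstat -an | awk '$6 == "TIME_WAIT" {print $5}' | sort | uniq -c | sort -rn | head

In the question's output that would show everything piling up on 127.0.0.1:111, i.e. the local portmapper.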

David Pashley
  • This answer is short and sweet. It helps a lot. The last sentence confused me a bit, but I think the point is that you need to understand why so many connections are being created. If you are writing a client that generates a lot of requests, you probably want to make sure that it is configured to reuse existing connections rather than create new ones for each request. – Brent Bradburn Jul 26 '19 at 18:11
  • Short and sweet, but not complete. TIME_WAITs depend on the context. If you have a lot of them, it might be that someone is attacking your server. – Mindaugas Bernatavičius Sep 24 '19 at 15:11
6

It isn't important. All it signifies is that you're opening and closing a lot of Sun RPC TCP connections (1500-2500 of them every 2-4 minutes). The TIME_WAIT state is what a socket goes into when it closes, to prevent messages from arriving for the wrong application as they might if the socket were reused too quickly, and for a couple of other useful purposes. Don't worry about it.

(Unless, of course, you aren't actually running anything that should be processing that many RPC operations. Then, worry.)
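
If you want to confirm that the count is just churning at a steady level rather than growing without bound, something like this (assuming watch is installed) makes it easy to eyeball:

# refresh the TIME_WAIT count every second
watch -n1 'netstat -an | grep -c TIME_WAIT'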

chaos
4

Something on your system is doing a lot of RPC (Remote Procedure Calls) within the machine itself (notice that both the source and destination are localhost). That's often seen with lockd for NFS mounts, but you might also see it for other RPC services like rpc.statd or rpc.spray.

You could try using "lsof -i" to see which processes have those sockets open and what's doing it. It's probably harmless.
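
For example, to focus on the portmapper port from the question and see what is registered with it (a sketch, assuming lsof and rpcinfo are installed):

# which processes have TCP sockets involving port 111 (sunrpc)?
lsof -iTCP:111
# which RPC services are registered with the local portmapper?
rpcinfo -p localhost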

Paul Tomblin
  • Nothing unusual there; I do see a TCP *:sunrpc (LISTEN) for portmap, but I guess that is normal. – KTamas Jun 10 '09 at 15:02
  • Keep doing it repeatedly until you see who is opening the connection. – Paul Tomblin Jun 10 '09 at 15:31
  • netstat -epn --tcp will show you the same information. Unless you're using NFS, you probably have very little reason for using portmap. You could remove it. – David Pashley Jun 10 '09 at 16:18
  • Indeed I don't use NFS; however, apt-get remove portmap wants to remove 'fam', which was probably automatically installed by libfam0, which was installed by courier-imap. apt-cache says 'fam' is a recommended package for libfam0. – KTamas Jun 10 '09 at 16:53
4

tcp_fin_timeout does NOT control TIME_WAIT delay. You can see this by using ss or netstat with -o to see the countdown timers:

cat /proc/sys/net/ipv4/tcp_fin_timeout
3

# See countdown timer for all TIME_WAIT sockets in 192.168.0.0-255
ss --numeric -o state time-wait dst 192.168.0.0/24

Netid Recv-Q  Send-Q    Local Address:Port    Peer Address:Port
tcp  0       0         192.168.100.1:57516   192.168.0.10:80    timer:(timewait,55sec,0)   
tcp  0       0         192.168.100.1:57356   192.168.0.10:80    timer:(timewait,25sec,0)   
tcp  0       0         192.168.100.1:57334   192.168.0.10:80    timer:(timewait,22sec,0)   
tcp  0       0         192.168.100.1:57282   192.168.0.10:80    timer:(timewait,12sec,0)   
tcp  0       0         192.168.100.1:57418   192.168.0.10:80    timer:(timewait,38sec,0)   
tcp  0       0         192.168.100.1:57458   192.168.0.10:80    timer:(timewait,46sec,0)   
tcp  0       0         192.168.100.1:57252   192.168.0.10:80    timer:(timewait,7.436ms,0) 
tcp  0       0         192.168.100.1:57244   192.168.0.10:80    timer:(timewait,6.536ms,0)

Even with tcp_fin_timeout set to 3, the countdown for TIME_WAIT still starts at 60. However, if you have net.ipv4.tcp_tw_reuse set to 1 (echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse), then the kernel can reuse sockets in TIME_WAIT if it determines there won't be any possible conflicts in TCP sequence numbering.
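
To make that survive a reboot, the same /etc/sysctl.conf approach shown in the other answer applies (a sketch; note that tcp_tw_reuse only helps for outgoing connections):

# /etc/sysctl.conf
net.ipv4.tcp_tw_reuse = 1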

Greg Bray
3

I had the same problem too. It cost me several hours to find out what was going on. In my case, the reason was that netstat tries to look up the hostname corresponding to each IP (I assume it's using the gethostbyaddr API). I was using an embedded Linux installation that had no /etc/nsswitch.conf. To my surprise, the problem only occurs when you actually run netstat -a (I found this out by running portmap in verbose and debug mode).

Now what happened was the following: By default, the lookup functions also try to contact the ypbind daemon (Sun Yellow Pages, also known as NIS) to query for a hostname. To query this service, the portmapper (portmap) has to be contacted to get the port for the service. In my case the portmapper got contacted via TCP. The portmapper then tells the libc function that no such service exists and the TCP connection gets closed. As we know, closed TCP connections enter the TIME_WAIT state for some time. So netstat catches this connection when listing, and the new line with a new IP triggers a new lookup that generates a new connection in TIME_WAIT, and so on...

To solve this issue, create an /etc/nsswitch.conf that does not use the rpc/NIS services, e.g. with the following contents:

passwd:         files
group:          files
hosts:          files dns
networks:       files dns
services:       files
protocols:      files
netmasks:       files
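
Alternatively, as a quick workaround, running netstat with -n disables the name lookups entirely, so no new RPC connections are triggered while listing:

netstat -an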
leecher