
I've been load testing my nginx reverse proxy -> web app setup with wrk, and I noticed that once I get to 1000+ concurrent connections, nginx starts returning 502s and logging the following error:

2015/04/17 20:45:26 [crit] 6068#0: *1116212677 connect() to \
127.0.0.1:3004 failed (99: Cannot assign requested address) \
while connecting to upstream, client: xxx.xxx.xx.165, server: \
foo.bar.com, request: "GET /my/route HTTP/1.1", upstream: \
"http://127.0.0.1:3004/my/route", host: "foo.bar.com"

The wrk command was:

wrk -t10 -c500 -d5m "https://foo.bar.com/my/route" -H "Accept: application/json"

I'm trying to figure out what might have gone wrong here. My web application listens for the requests proxied by nginx on port 3004. Is nginx running out of ports? Is the web application unable to handle this many requests? Are requests timing out? I'm not clear on this and would love to have more insight into it.
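
For reference, the relevant part of the nginx config is essentially a plain proxy_pass to the app (simplified sketch, not the full config):

server {
    listen 443 ssl;
    server_name foo.bar.com;

    location / {
        proxy_pass http://127.0.0.1:3004;
    }
}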

Alexandr Kurilin
  • Seems you've run out of local ports due to sockets in the TIME-WAIT state. You can try using a bigger local port range, setting keepalive for connections, or using Unix sockets to connect to the backends. See http://serverfault.com/questions/649262/high-of-sockets-in-time-wait-state-server-unresponsive-at-load – Federico Sierra Apr 17 '15 at 21:15
  • Consider https://github.com/lebinh/ngxtop for additional insights. NgxTop shows many more metrics based on those logs. – JayMcTee Apr 11 '16 at 09:12

2 Answers


Already answered here: https://stackoverflow.com/questions/14144396/nginx-proxy-connect-to-ip80-failed-99-cannot-assign-requested-address

The message suggests you've run out of local sockets/ports.
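
A quick way to confirm that is to count the sockets stuck in TIME-WAIT while the test is running (assuming ss from iproute2 is available):

# Total sockets in TIME-WAIT
ss -tan state time-wait | wc -l

# Only the ones towards the upstream on port 3004
ss -tan state time-wait '( dport = :3004 )' | wc -l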

Try increasing the networking limits:

# Widen the range of local ports available for outgoing connections
echo "10240 65535" > /proc/sys/net/ipv4/ip_local_port_range

# Allow new outgoing connections to reuse sockets in TIME-WAIT
# (tcp_tw_reuse requires tcp_timestamps; tcp_tw_recycle is best left off,
# as it breaks clients behind NAT and was removed in newer kernels)
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=0
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_max_tw_buckets=10000
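
To keep these settings across reboots, put them in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) and reload with sysctl -p, for example:

# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 10000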

Alternatively, you may try Unix sockets between nginx and the backend to see if that helps.
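
A minimal sketch of what that looks like, assuming the application can listen on a Unix socket (the socket path here is just an example):

upstream app {
    # Unix domain sockets do not consume ephemeral ports
    server unix:/var/run/myapp.sock;
}

server {
    listen 443 ssl;
    server_name foo.bar.com;

    location / {
        proxy_pass http://app;
    }
}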

user2743554

Overview of Network Sockets

When a connection is established over TCP, a socket is created on both the local and the remote host. The remote IP address and port belong to the server side of the connection, and must be determined by the client before it can even initiate the connection. In most cases, the client automatically chooses which local IP address to use for the connection, but sometimes it is chosen by the software establishing the connection. Finally, the local port is randomly selected from a defined range made available by the operating system. The port is associated with the client only for the duration of the connection, and so is referred to as ephemeral. When the connection is terminated, the ephemeral port is available to be reused.
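
For example, on Linux you can see the range ephemeral ports are drawn from, and how many of them are currently tied up towards a backend on port 3004 (a rough check, assuming iproute2's ss):

# Show the ephemeral port range (typically 32768 60999 by default)
cat /proc/sys/net/ipv4/ip_local_port_range

# Count local ports currently in use towards the backend
ss -tan '( dport = :3004 )' | wc -l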

Solution: Enabling Keepalive Connections

Use the keepalive directive to enable keepalive connections from NGINX to upstream servers; it defines the maximum number of idle keepalive connections to upstream servers preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. Without keepalives you add more overhead and are inefficient with both connections and ephemeral ports. The example below also uses proxy_bind together with split_clients to spread upstream connections across several local source IP addresses, which multiplies the number of ephemeral ports available.

http {
    upstream backend {
        server 10.0.0.100:1234;
        server 10.0.0.101:1234;

        # Keep up to 128 idle connections to the upstream open per worker
        keepalive 128;
    }

    server {
        # ...
        location / {
            # ...
            proxy_pass http://backend;

            # Required for keepalive connections to the upstream
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Spread upstream connections across several local source IPs
            proxy_bind $split_ip;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    split_clients "$remote_addr$remote_port" $split_ip {
        10%  10.0.0.210;
        10%  10.0.0.211;
        10%  10.0.0.212;
        10%  10.0.0.213;
        10%  10.0.0.214;
        10%  10.0.0.215;
        10%  10.0.0.216;
        10%  10.0.0.217;
        10%  10.0.0.218;
        *    10.0.0.219;
    }
}

More: https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/

Mont