
Nginx With PHP FPM - Resource Temporarily Unavailable - 502 Error

I am using some code to send just over 160 GET requests asynchronously via curl to my API, which runs Nginx with PHP-FPM on Ubuntu Server 16.04. Each request fetches a different selection of data from the database before returning it as a JSON response. This number of requests is small enough that I believe it should not hit any of the various default limits (number of socket connections, file descriptors, etc.). However, the fact that they are all being sent/received at the same time appears to be causing issues.
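For reference, the kind of burst I am generating can be reproduced with a quick shell one-liner (my actual test code uses PHP's curl_multi; the URL below is a placeholder for my real endpoint):

```shell
# Fire 160 GET requests in parallel and tally the HTTP status codes.
# URL is a placeholder; --max-time keeps failed requests from hanging.
URL='https://my.domain.org/1.0/xxx'
seq 1 160 \
  | xargs -P 160 -I{} curl -s -o /dev/null --max-time 10 -w '%{http_code}\n' "$URL" \
  | sort | uniq -c
```

A healthy run prints a single `200` line with a count of 160; the failing runs show a handful of `502`s alongside it.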

The vast majority of the requests will succeed, but a couple (consistently the same number in sequential tests, but which vary depending on the configuration) will get a "502 Bad Gateway" response.

If I look at the nginx error log (/var/log/nginx/error.log), I see these error messages:

2017/11/21 09:46:43 [error] 29#29: *144 connect() to unix:/run/php/php7.0-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.3.7, server: , request: "GET /1.0/xxx HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.0-fpm.sock:", host: "my.domain.org"

There are always exactly as many of these "Resource temporarily unavailable" messages in the log as the number of "502 Bad Gateway" responses I receive back from the API.

Meanwhile, when watching the FPM log during a run of the test (with tail -100f /var/log/php7.0-fpm.log), nothing happens. It just contains the following:

[21-Nov-2017 11:54:29] NOTICE: fpm is running, pid 329
[21-Nov-2017 11:54:29] NOTICE: ready to handle connections
[21-Nov-2017 11:54:29] NOTICE: systemd monitor interval set to 10000ms

Although my fpm configuration (at /etc/php/7.0/fpm/php-fpm.conf) specifies an error log with error_log = /var/log/php7.0-fpm.log, there doesn't appear to be such a file, suggesting no errors.

A Working Configuration

I have found that I can get the webserver to work (no 502 errors) if I tweak the FPM pool configuration (/etc/php/7.0/fpm/pool.d/www.conf) to use a static pool of 15 child processes, rather than spawning processes dynamically or using a smaller static pool.

pm = static
pm.max_children = 15

I believe this works because ample worker processes are already running and ready to absorb the sudden burst, with no delay incurred by spawning or reaping children. However, this means my webserver will use much more memory than I would like. Ideally, pm.max_children would be roughly 2x the number of vCPUs on the server (so 8 or fewer). I am currently using a quad-core server, but would like the option to scale down to a dual-core instance. Ideally, the server would answer all of the requests in time even if the total time taken is much longer, e.g. via a queue and adjusted timeouts.
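For reference, the kind of pool sizing I am aiming for would look something like this in /etc/php/7.0/fpm/pool.d/www.conf (values illustrative for a quad-core box):

```ini
; Static pool sized at roughly 2x vCPUs (quad-core example)
pm = static
pm.max_children = 8
```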

Configuration Settings

The default php-fpm listen.backlog value is 511, but I set it to 2000 just to eliminate it as a factor:

listen.backlog = 2000
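One thing worth double-checking (this is general Linux behaviour, not specific to my setup): the kernel silently caps any listen() backlog at net.core.somaxconn, which defaults to 128 on Ubuntu 16.04, so a listen.backlog of 2000 may not actually take effect:

```shell
# The kernel caps listen() backlogs at net.core.somaxconn (default 128
# on Ubuntu 16.04), so listen.backlog = 2000 may be silently truncated.
sysctl net.core.somaxconn
# Raise it for the running kernel (persist via /etc/sysctl.conf):
# sudo sysctl -w net.core.somaxconn=2048
```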

For Nginx, I set worker_connections 1024; and worker_processes auto;, which on this quad-core server means 4 worker processes.

I also have the following buffer and timeout settings in place to rule them out as a factor:

##
# Buffer settings
##
client_body_buffer_size 10M;
client_header_buffer_size 1k;
client_max_body_size 512m;
large_client_header_buffers 2 1k;


##
# Timeout settings
##
client_body_timeout 120;
client_header_timeout 120;
keepalive_timeout 120;
send_timeout 120;
fastcgi_connect_timeout 60s;
fastcgi_next_upstream_timeout 40s;
fastcgi_next_upstream_tries 10;
fastcgi_read_timeout 60s;
fastcgi_send_timeout 60s;
fastcgi_cache_lock_timeout 60s;

It is worth noting that all of the responses (including the 502s) come back within about 20 seconds, so we are not hitting these timeouts. Also, even though fastcgi_next_upstream_tries is set to 10, I only get one "Resource temporarily unavailable" message per 502 error, rather than the ten I would expect if nginx were actually retrying.
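One way to check whether the socket backlog is actually overflowing during the test (an assumption worth verifying) is to watch the listen queue on the FPM socket while the requests are in flight:

```shell
# Recv-Q on a listening socket shows connections queued but not yet
# accepted; compare it against the configured backlog while the test runs.
ss -xl | grep -F 'php7.0-fpm.sock'
# Kernel-wide counters for overflowed/dropped listen queues:
netstat -s 2>/dev/null | grep -i -E 'overflow|listen'
```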

Similar / Related Questions

I see that there are many similar questions on Server Fault and Stack Overflow. I am detailing them here so this question doesn't just get marked as a duplicate.

Which you can see lines up with the socket file in the error messages nginx provides.

Question

I believe Nginx is submitting requests faster than PHP-FPM can accept them. At some point FPM simply doesn't respond to the connection attempt, so Nginx gives up and returns a 502 error. Is there a way (probably a configuration variable or two) to fix this, so that FPM queues up the requests, or Nginx retries later (fastcgi_next_upstream_tries doesn't seem to have any effect)? I don't mind how long the webserver takes to serve all the requests (I can increase timeouts); I only want to set my FPM process count to a number appropriate for my CPU and still have all 160 requests served.

Update - Works Fine Using TCP Sockets

I just tried swapping FPM from listening on a unix file socket to TCP sockets as detailed here.

E.g. changing the FPM pool configuration to listen = 127.0.0.1:9000 and updating nginx to use fastcgi_pass 127.0.0.1:9000;
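For completeness, the two edits look like this (stock Ubuntu paths assumed):

```ini
; /etc/php/7.0/fpm/pool.d/www.conf
;listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
```

```nginx
# in the nginx PHP location block
fastcgi_pass 127.0.0.1:9000;
```

After both changes: sudo systemctl restart php7.0-fpm && sudo systemctl reload nginx.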

This seems to have done the trick as a workaround: I don't get any 502 errors, even with a dynamic pool or a static pool of just 2 FPM processes.

However, I would love to know why this works when a local unix socket does not, and whether there is a configuration change that would make the unix-socket setup work, since that is the default and what many people are likely to be using.

Programster
    I just ran some tests and am experiencing the same result as you. Using a unix socket, many requests result in a 502 error (Resource temporarily unavailable). With TCP, there are no 502 errors. I've always assumed that unix sockets are better for Nginx to PHP-FPM connections, as there's no need for a TCP handshake. I am questioning the validity of that theory now. – William Byrne Nov 06 '18 at 19:06
  • It seems like setting net.core.somaxconn=65536 will prevent some 502 errors when using unix sockets, but not all. I believe some timeout needs to be set on unix socket connections; any ideas? – Marek Vavrečan Dec 30 '19 at 17:53

2 Answers


I believe you can use the ngx_http_limit_req_module to achieve this: set rate to the desired requests per second and use burst to set the queue size, with a configuration similar to:

limit_req_zone $binary_remote_addr zone=php:10m rate=2r/s;

server {
    location ~ \.php$ {
        limit_req zone=php burst=10;
    }
}

This example allows 2 requests per second on average, queuing up to 10 excess requests. Anything beyond the burst is rejected with a 503 error (configurable via limit_req_status).

ProT-0-TypE

Go to your php-fpm pool configuration and add listen.backlog = 5000. Note that the hard limit is 65536.

Also make sure it is supported at the system level. Check it like this:

sysctl net.core.somaxconn

If it is less than 5000, do:

echo "net.core.somaxconn=10000" >> /etc/sysctl.conf
sysctl -p

More info here: https://easyengine.io/tutorials/php/fpm-sysctl-tweaking/