Ok so here is the situation: we are migrating an existing server to AWS. The configuration is mostly identical, and we have already run ApacheBench against it, so the PHP-FPM pool is reasonably tuned as far as I can tell. But about an hour after pointing the domain to AWS in DNS, we start getting 502 Bad Gateway responses, and nginx logs this error:
connect() to unix:/var/run/nginx/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: domain.com, request: "GET / HTTP/1.0", upstream: "fastcgi://unix:/var/run/nginx/php-fpm.sock:", host: "domain.com"
Do you have an idea what is wrong here? Or is there a way to trace what is causing the 502 Bad Gateway?
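The closest thing to tracing I have so far is bucketing the nginx errors per minute to see whether they correlate with traffic spikes. A toy sketch (the sample log file and its path are made up here so the snippet is self-contained; in practice I would point the awk at /var/log/nginx/error.log):

```shell
# Write a couple of sample nginx error lines to parse (stand-in for the real log).
cat > /tmp/nginx-error-sample.log <<'EOF'
2017/11/20 10:15:03 [error] 1234#0: *1 connect() to unix:/var/run/nginx/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream
2017/11/20 10:15:41 [error] 1234#0: *2 connect() to unix:/var/run/nginx/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream
2017/11/20 10:16:05 [error] 1234#0: *3 connect() to unix:/var/run/nginx/php-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream
EOF

# Count "Resource temporarily unavailable" errors per minute:
# $1 is the date, $2 is HH:MM:SS; bucket by date + HH:MM.
awk '/Resource temporarily unavailable/ {
  split($2, t, ":")
  count[$1 " " t[1] ":" t[2]]++
}
END { for (m in count) print m, count[m] }' /tmp/nginx-error-sample.log
```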
Resources
- EC2: m4.large
- vCPU: 2
- RAM: 8 GB
- Cloudfront
- ELB
- min: 2 instances
- Memcached (AWS Elasticache) for PHP session handling
Setup
Running in AWS using: CloudFront -> ELB -> NGINX -> PHP-FPM
- NGINX 1.8
- PHP 7.1.11
Configuration
NGINX
worker_processes auto;
worker_connections 4096;
multi_accept on;
use epoll;
send_timeout 3600;
fastcgi_buffers 8 128k;
fastcgi_buffer_size 128k;
fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 3600;
gzip on;
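For completeness, the vhost hands PHP off to the socket roughly like this (reconstructed from the error message above; the exact location block may differ slightly):

```
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/nginx/php-fpm.sock;
}
```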
PHP-FPM
user = nginx
group = nginx
listen = /var/run/nginx/php-fpm.sock
pm = dynamic
pm.max_children = 46
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 5
request_terminate_timeout = 3600
pm.max_requests = 400
process.priority = -19
catch_workers_output = yes
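I have not enabled the FPM status page yet, but I assume exposing it would help confirm whether the pool is saturated when the 502s start (the /fpm-status path below is my own choice, not something already in the config):

```
; in the pool config
pm.status_path = /fpm-status
```

```
# nginx: expose the status page to localhost only (sketch)
location = /fpm-status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_NAME /fpm-status;
    fastcgi_param SCRIPT_FILENAME /fpm-status;
    fastcgi_pass unix:/var/run/nginx/php-fpm.sock;
}
```

With that in place, `active processes`, `listen queue`, and `max children reached` should show whether requests are piling up against pm.max_children.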