I am trying to scale an nginx installation to the best of its ability.
I am running one nginx instance with 6 worker_processes (6 cores) in front of 5 backend servers, each consisting of a uwsgi
setup with 10 workers (50 workers in total).
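For reference, each backend's uwsgi configuration is essentially the following (socket address, module name, and paths are placeholders here, not my exact values):

```ini
[uwsgi]
; hypothetical sketch of one backend pool: 10 workers, logging off
socket = 0.0.0.0:8000
module = app:app
processes = 10
disable-logging = true
```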
However, any benchmark I attempt with different parameters (using ab) for total and concurrent connections seems to top out at around 1000 requests/second.
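A typical run looks like this (the request counts and target URL are examples, not my exact invocation):

```shell
# Hypothetical ab run: 50,000 total requests, 200 concurrent,
# against the nginx front end.
ab -n 50000 -c 200 http://frontend.example.com/
```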
I have disabled all logging for nginx and uwsgi (to rule out disk I/O as a slowdown). I am testing against a Flask python application that merely sends {'status':'ok'}
back. No database access, no calculations, nothing.
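For context, the test application is essentially this (module and route names are my paraphrase, not the exact code):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def status():
    # Trivial JSON response: no DB access, no computation.
    return jsonify(status='ok')
```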
The relevant part of the nginx config looks like this:
user www-data;
worker_processes 6;
worker_rlimit_nofile 100000;
pid /var/run/nginx.pid;
events {
    use epoll;
    worker_connections 2048;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log off; # /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    <...>
}
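The elided part is the upstream/proxy section; a minimal sketch of what that looks like in a setup like mine (server addresses and ports are placeholders, not my real ones) would be:

```nginx
# Hypothetical sketch of the omitted proxy configuration.
upstream uwsgi_backends {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
    # ... 3 more backends ...
}

server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi_backends;
    }
}
```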
I am looking for any tips, anything I may have overlooked, to increase throughput. Looking at the stats for each uwsgi pool (using uwsgitop), they never seem hard pressed to perform, which leads me to believe nginx is the bottleneck. Also, performance was the same with a single pool of workers instead of 10. Additionally, htop shows that I am nowhere near the maximum in terms of memory or CPU.