
I have an nginx / gunicorn / django setup as follows:

Nginx

server {
    listen 80;
    server_name myserver.com;

    root /www/python/apps/pyapp/;

    access_log /var/log/nginx/myserver.com.access.log;
    error_log /var/log/nginx/myserver.com.error.log;


    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8081/;
    }
}

My upstart script for gunicorn

description "pyapp"
start on runlevel [2345]
stop on runlevel [06]

respawn

# start from virtualenv path
chdir /www/python/apps/pyapp/
exec /usr/bin/gunicorn  -w 11 -b 0.0.0.0:8081 --error-logfile=/var/log/nginx/pyapp.log wsgi:application

The server is running fine and responds to requests quickly. However, when I start directing traffic to this setup from my old server, pages start returning 504 Gateway Timeout errors.

The requests only fetch data from the DB and render it with django-rest-framework. Looking at the MySQL process list, there don't seem to be any stuck queries, which makes this odd.

Any recommendations?

Maverick

1 Answer


First, you could test how your backend (Django/Gunicorn) performs without nginx in front. ab (ApacheBench) is a simple tool for this task.

You can run it directly on the server, or from any machine if port 8081 is not firewalled:

ab -c 50 -n 500 http://localhost:8081/path-xyz/

The -c flag sets the concurrency, -n the total number of requests. (ab is available through the apache2-utils package, at least on Debian-based systems.)
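
For comparison, you could run the same test through nginx on port 80. If the direct numbers look fine but the proxied requests time out, note that your config caps proxy_connect_timeout and proxy_read_timeout at 10 seconds, so any response slower than that will surface as a 504 from nginx:

# same test, but through the nginx proxy on port 80
ab -c 50 -n 500 http://localhost/path-xyz/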

If the backend turns out to be the bottleneck, and since you're using nginx anyway, caching in nginx could be an option (I don't know your application, but maybe). If your API exposes data that changes often, you can set the cache time very short, say 1 to 10 seconds: if you receive e.g. 100 requests per second, only one of them has to hit the backend, and the others get the cached response.

For nginx proxy caching, see the ngx_http_proxy_module documentation (proxy_cache and related directives).
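
A minimal microcache sketch, assuming a cache zone named apicache and a 5-second validity; the cache path, zone name, and sizes are placeholders to tune, but the directives themselves are standard ngx_http_proxy_module ones:

# in the http {} context: define where and how responses are cached
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apicache:10m max_size=100m inactive=60s;

server {
    listen 80;
    server_name myserver.com;

    location / {
        proxy_cache apicache;                    # use the zone defined above
        proxy_cache_valid 200 5s;                # keep successful responses for 5 seconds
        proxy_cache_use_stale updating error timeout;  # serve a stale copy while one request refreshes it
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS header for debugging
        proxy_pass http://localhost:8081/;
    }
}

With a 5-second validity, roughly one request per URL every 5 seconds reaches gunicorn; the rest are answered from the cache.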

ohrstrom