
I have a news site that is served by 4 Tornado instances, with nginx as a reverse proxy in front of them.

Pages are rendered and cached in memcached, so the response time is generally under 3 ms according to the Tornado logs.

[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 3.41ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 1.96ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.48ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 4.09ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.49ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.25ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.39ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.93ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.08ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.72ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.02ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.74ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.85ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.60ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.83ms
[I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.65ms
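For context, the rendering path described above is presumably a cache-aside pattern along these lines (a minimal sketch only; a plain dict stands in for the real memcached client, and `render_page` and the key scheme are invented for illustration):

```python
# Cache-aside page serving: return the cached page when possible,
# render and store it on a miss. A dict stands in for memcached here.
cache = {}

def render_page(path):
    # Placeholder for the real template-rendering work.
    return f"<html>rendered {path}</html>"

def get_page(path):
    page = cache.get(path)
    if page is None:            # cache miss: render once, then store
        page = render_page(path)
        cache[path] = page
    return page                 # cache hits explain the ~2 ms responses

print(get_page("/"))
```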


When I test the site with ab at a concurrency level of 1000, I get response times of around 0.8 seconds. Here is the benchmark result:

Document Length:        12036 bytes

Concurrency Level:      1000
Time taken for tests:   7.974 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    10000
Total transferred:      122339941 bytes
HTML transferred:       120549941 bytes
Requests per second:    1254.07 [#/sec] (mean)
Time per request:       797.407 [ms] (mean)
Time per request:       0.797 [ms] (mean, across all concurrent requests)
Transfer rate:          14982.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    7  20.8      0      86
Processing:    57  508 473.9    315    7014
Waiting:       57  508 473.9    315    7014
Total:        143  515 471.5    321    7014

Percentage of the requests served within a certain time (ms)
  50%    321
  66%    371
  75%    455
  80%    497
  90%   1306
  95%   1354
  98%   1405
  99%   3009
 100%   7014 (longest request)


I can handle ~1200 requests/second with 1000 concurrent connections. When I run the same benchmark with 100 concurrent connections, I can still handle around 1200 requests/second, but the response time drops to ~80 ms.

In real life, with 1000 concurrent connections, users would face a 0.8 second response time, which I think is a bad value.

My question is: why do response times increase as the concurrency level increases?
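The benchmark numbers themselves hint at an answer via Little's law: mean concurrency = throughput × mean latency, so if the backend saturates at roughly 1200 requests/second, mean latency must grow in proportion to the number of in-flight requests. A quick sanity check (figures taken from the ab output above):

```python
# Little's law: concurrency = throughput * latency.
# With throughput capped, latency scales linearly with concurrency.
def expected_latency_ms(concurrency, throughput_rps):
    """Mean response time (ms) implied by Little's law."""
    return concurrency / throughput_rps * 1000.0

# ~1254 req/s was measured at both concurrency levels.
print(round(expected_latency_ms(1000, 1254.07)))  # ~797 ms, matching ab's "Time per request"
print(round(expected_latency_ms(100, 1254.07)))   # ~80 ms, matching the c=100 run
```

In other words, the observed 0.8 s at c=1000 and ~80 ms at c=100 are exactly what a throughput-saturated system would produce: extra concurrent requests simply wait in queue.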


And here is my nginx configuration:

user www-data;
worker_processes 1;

pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;

worker_rlimit_nofile 65536;

events {
    worker_connections 65536;
    use epoll;
}

http {
    upstream frontends {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    access_log off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;
    proxy_read_timeout 200;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript
               application/x-javascript application/xml application/atom+xml;
    gzip_disable "msie6";

    proxy_next_upstream error;

    server {
        listen 80;

        client_max_body_size 1M;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }

        location = /favicon.ico {
            rewrite (.*) /static/favicon.ico;
        }
        location = /robots.txt {
            rewrite (.*) /static/robots.txt;
        }

        location ^~ /static/ {
            root /var/www;

            if ($query_string) {
                expires max;
            }
        }
    }
}

1 Answer


Do you get the same results when you run your perf tests against something like this:

location /perftest/ {
    return 200;
}

Also, please add your nginx.conf and your server {} block.

  • I benchmarked perftest as you suggested with 100 & 1000 concurrent connections. With 100 connections nginx can handle 16k reqs/second, and with 1000 connections, 12k reqs/second. – laltin Oct 01 '13 at 08:44
  • @laltin After several years, is this question still open or has it been solved? If it has, please add an answer and accept it. – djdomi Oct 24 '21 at 12:46