5

With a fresh install of nginx, I get results from Apache Benchmark that seem very odd to me. The page fetched is the default static index.html test page installed by nginx. Running ab locally gives very high requests per second, but remotely the figure is drastically lower. I temporarily disabled my firewall for these tests.

ab -n 100 running locally:

Document Path: /
Document Length: 3698 bytes

Concurrency Level: 1
Time taken for tests: 0.21347 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 391000 bytes
HTML transferred: 369800 bytes
Requests per second: 4684.50 [#/sec] (mean)
Time per request: 0.213 [ms] (mean)
Time per request: 0.213 [ms] (mean, across all concurrent requests)
Transfer rate: 17847.94 [Kbytes/sec] received

ab -n 100 running remotely (tried from two different machines, one Windows 7 and the other Mac OS X 10.7):

 
Concurrency Level: 1
Time taken for tests: 12.502 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 391000 bytes
HTML transferred: 369800 bytes
Requests per second: 8.00 [#/sec] (mean)
Time per request: 125.020 [ms] (mean)
Time per request: 125.020 [ms] (mean, across all concurrent requests)
Transfer rate: 30.54 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 38 43 1.6 42 56
Processing: 78 82 2.1 82 97
Waiting: 38 43 1.3 43 49
Total: 121 125 2.6 125 139

All our sites run on Apache, which has this same issue. I installed nginx to see whether it was an Apache configuration issue, but clearly it is not. I am unable to determine why there is such a huge discrepancy between the results, and I'm hoping someone can provide some insight.

Is this normal? Is there something misconfigured on my server?

  • It seems like you're testing over a slow network, which will heavily affect your result. You're basically testing how much data you can transfer per second, not how fast nginx is. – Martin Fjordvald Jan 15 '12 at 18:31

1 Answer

3

Your test slows down so dramatically because you're running it remotely, and the test is bottlenecked on request latency.

Throughput is one concern here (how fast your client's link to the server is), but the main issue I see is that your concurrency is set to 1 - this means ab waits for each request to finish before sending the next one.

As the output says, it's taking Time per request: 125.020 [ms] (mean) for each request. Since HTTP keep-alive is also disabled by default in ab, I'm going to guess that the round-trip time you get when you ping the server is around 60ms?
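
As a rough sanity check on that reasoning, here is a back-of-the-envelope sketch in Python. It takes the ~42 ms connect time and the transferred bytes from the output above, and assumes an effective link rate of roughly 100 KB/s for the remote client (that rate is a guess, and the model is an estimate of where the time goes, not how ab itself measures anything):

    # Rough model of the remote run above (concurrency 1, no keep-alive).
    RTT = 0.042              # seconds, roughly the median connect time above (near the ~35 ms ping)
    PAYLOAD = 391000 / 100   # bytes per response ("Total transferred" divided by 100 requests)
    BANDWIDTH = 100_000      # bytes/sec, a guessed effective rate for the remote client's link
    REQUESTS = 100

    # With keep-alive off, every request pays a TCP handshake (~1 RTT), then the
    # request/first-byte wait (~1 RTT), then the time to pull the body down the link.
    per_request = RTT + RTT + PAYLOAD / BANDWIDTH

    # Concurrency 1 means the requests run strictly one after another.
    total = REQUESTS * per_request

    print(f"~{per_request * 1000:.0f} ms per request, ~{total:.1f} s for the whole test")
    # -> roughly 123 ms per request and ~12.3 s total, in line with the measured
    #    125 ms mean and 12.5 s test time.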

Try ab -n 100 -k -c 10 - it won't get rid of all of the latency delay, but it should cut the time per request in half and the average across all concurrent requests by a factor of 10 - probably completing your test about 20 times faster.
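
For a rough sense of why those flags help, the same back-of-the-envelope model can be extended to -k -c 10. The link rate is still an assumed value, and concurrency is treated as a simple divisor, so this is an optimistic estimate rather than a simulation of ab:

    # Same rough model, extended to the suggested `ab -n 100 -k -c 10`.
    RTT = 0.042              # seconds, as before
    PAYLOAD = 391000 / 100   # bytes per response, from "Total transferred" above
    BANDWIDTH = 100_000      # bytes/sec, same assumed link rate
    REQUESTS = 100
    CONCURRENCY = 10

    # Keep-alive (-k) removes the per-request TCP handshake, so each request costs
    # roughly one RTT of waiting plus the body transfer time.
    per_request = RTT + PAYLOAD / BANDWIDTH

    # With 10 requests in flight (-c 10), their latencies overlap, so wall-clock time
    # is roughly the serial cost divided by the concurrency (contention on the shared
    # link is ignored, which is why this is optimistic).
    total = REQUESTS * per_request / CONCURRENCY

    print(f"~{per_request * 1000:.0f} ms per request, ~{total:.2f} s for the whole test")
    # -> roughly 81 ms per request and ~0.8 s total: the same order as the ~20x speedup
    #    predicted above, and close to the ~1 s run the asker reports in the comment below.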

Shane Madden
  • Ping time is ~35ms. I didn't realize keep-alive was off by default. It also makes sense that it would be slow waiting for each individual request to finish. With your options, I get a much more reasonable measure: Requests per second: 104.49 [#/sec] (mean); Time per request: 95.700 [ms] (mean); Time per request: 9.570 [ms] (mean, across all concurrent requests); Transfer rate: 399.50 [Kbytes/sec] received – Adam Gerbert Jan 16 '12 at 03:46