I need some help with analyzing a log from Apache Bench:

Benchmarking texteli.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:
Server Hostname:        texteli.com
Server Port:            80

Document Path:          /4f84b59c557eb79321000dfa
Document Length:        13400 bytes

Concurrency Level:      200
Time taken for tests:   37.030 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      13524000 bytes
HTML transferred:       13400000 bytes
Requests per second:    27.01 [#/sec] (mean)
Time per request:       7406.024 [ms] (mean)
Time per request:       37.030 [ms] (mean, across all concurrent requests)
Transfer rate:          356.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       27   37  19.5     34     319
Processing:    80 6273 1673.7   6907    8987
Waiting:       47 3436 2085.2   3345    8856
Total:        115 6310 1675.8   6940    9022

Percentage of the requests served within a certain time (ms)
  50%   6940
  66%   6968
  75%   6988
  80%   7007
  90%   7025
  95%   7078
  98%   8410
  99%   8876
 100%   9022 (longest request)

What can these results tell me? Isn't 27 requests per second too slow?
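(For reference, the two "Time per request" figures in the report are consistent with each other: the per-request mean is the across-all-concurrent-requests mean multiplied by the concurrency level. A quick check:)

```shell
# The per-request mean (7406.024 ms) is roughly the across-all-concurrent
# mean (37.030 ms) multiplied by the concurrency level (200):
awk 'BEGIN { printf "%.1f ms\n", 200 * 37.030 }'   # prints 7406.0 ms
```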

Alan Hoffmeister

2 Answers

When running load tests, picking an arbitrary number and hitting your server is generally not a good way to go. All you've proven is that your server can handle 200 concurrent visitors, as long as they don't mind waiting ~7 s for their requests to load. What you probably want to do is:

  1. First, establish a baseline. Use 1 visitor (concurrency of 1).
  2. Second, start ramping up the numbers; for example: 1, 10, 25, 50, 100, 125, 150, 200, etc.
  3. Finally, make sure these requests run for a prolonged period of time (i.e., don't just start the test and then ^C it).
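The ramp-up above could be driven by a small script; a sketch, where the URL and request count are placeholders rather than values from the question:

```shell
#!/bin/sh
# Ramp-up sketch: run ab at increasing concurrency levels and save each
# report for later comparison. URL and request count are placeholders.
URL="http://example.com/"
REQUESTS=1000

for C in 1 10 25 50 100 125 150 200; do
  echo "== concurrency $C =="
  # Saving the full report lets you extract timings afterwards:
  # ab -n "$REQUESTS" -c "$C" "$URL" > "results-$C.txt"
done
```

Keeping one report file per concurrency level makes the comparison step trivial.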

Once you have your results, graph them: number of visitors versus average request time, with min and max bars. Load testing an arbitrary application is only as useful as the tests are relevant; in this case, for example, if it takes a single visitor 6 s to load a page, then 7 s per page for 200 visitors doesn't sound too bad, does it?
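Getting the numbers into graphable form only takes standard tools. A sketch, assuming each run was saved as results-&lt;concurrency&gt;.txt (the filenames and the helper name are made up for illustration):

```shell
# Extract the per-request mean time [ms] from one saved ab report.
mean_ms() {
  # The first "Time per request" line holds the full per-request mean.
  grep -m1 'Time per request' "$1" | awk '{print $4}'
}

# Build a "concurrency mean_ms" table, ready for gnuplot or a spreadsheet:
#   for f in results-*.txt; do
#     c=${f#results-}; c=${c%.txt}
#     echo "$c $(mean_ms "$f")"
#   done
```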

Andrew M.
  • That's a good answer. But how can I measure the real usage? I mean, my app has web sockets and REST endpoints; how can I measure the limit of users my app can handle before thinking about scaling? – Alan Hoffmeister Apr 10 '12 at 23:48
  • For that, you'll likely need to invest in a more customizable load-testing framework. To begin, though, you'll need to track user patterns; for example, I noticed your site has a front page and what sounds like a backend, so I would assume that, on average, 50% of your hits go to the front end and 50% to the backend. With a real load-testing framework (Grinder, multi-mechanize, etc.), you can very easily test this out. – Andrew M. Apr 11 '12 at 00:30
You can start by setting an initial number of requests and a number of concurrent requests, then check results such as:

- Total number of requests per second
- Average time per request
- Average waiting / processing / connect times
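Those figures can be pulled out of a saved report programmatically; a sketch, assuming the run was redirected to a file (the function name and filename are made up for illustration):

```shell
# Summarize the headline figures from one saved ab report.
summarize() {
  awk -F'[: ]+' '
    /Requests per second/ { print "rps", $4 }
    /^Connect:/           { print "connect_mean_ms", $3 }
    /^Processing:/        { print "processing_mean_ms", $3 }
    /^Waiting:/           { print "waiting_mean_ms", $3 }
  ' "$1"
}
# Usage: ab -n 1000 -c 50 http://example.com/ > run.txt && summarize run.txt
```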

Then scale up by increasing the number of concurrent connections until you get close to the expected number of users, and watch how your service responds. Repeat the runs several times, check the variation between them, and take the average.
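Averaging across repeated runs can also be scripted; a sketch, again assuming each run's output was saved to a file (the filenames and helper name are hypothetical):

```shell
# Average the "Requests per second" figure across several saved ab reports.
avg_rps() {
  awk '/Requests per second/ { sum += $4; n++ }
       END { if (n) printf "%.2f\n", sum / n }' "$@"
}
# Usage: avg_rps run1.txt run2.txt run3.txt
```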

Hany Hassan