
I was benchmarking my site with Apache's ab and noticed big differences in response time between running ab on the server itself and running it remotely from a client box.

So what's the biggest difference between running ab on the server and running it remotely? Is the extra time simply spent on network transport?
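
For reference, a minimal version of the comparison looks like this (www.example.com stands in for my real hostname):

[~]$ ab -n 1000 -c 10 http://localhost/          # on the server itself
[~]$ ab -n 1000 -c 10 http://www.example.com/    # from the remote client box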

Mickey Shine

3 Answers


Latency and network capacity.

We wrote a good article about concurrency/load testing with Siege (which is very similar to AB) specifically mentioning local versus remote testing.

You can read the full version here:

http://www.sonassi.com/knowledge-base/magento-kb/why-siege-isnt-an-accurate-test-tool-for-magento-performance/

Testing remote servers is almost pointless for a concurrency test (i.e. how many requests can be satisfied repeatedly), because the immediate bottleneck is the network connection between the two machines. Latency and TCP/IP overheads make testing a remote site misleading: the slightest network congestion at any hop between the two servers will immediately show up as reduced performance. What really comes into play is how fast the TCP 3-way handshake can be completed. The server being tested could be serving a dynamic page or a static 0-byte file and you could see exactly the same rates of performance, because connectivity is the bottleneck.
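
As a rough way to see how much of each request is pure connection setup versus content transfer, curl can report its timing breakdown (a sketch; www.example.com is a placeholder for the site under test):

[~]$ curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' http://www.example.com/

The connect time is roughly one round trip; everything beyond it is the HTTP exchange itself.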

We can show this using a simple ping. Our data-centres are located in Manchester, United Kingdom, so we’ll try pinging a server in the UK, then a server in the USA, and show the difference. Both servers are connected to the internet via 100Mbit connections.

Ping from UK to UK

[~]$ ping www.bytemark.co.uk -c4
PING www.bytemark.co.uk (212.110.161.177) 56(84) bytes of data.
64 bytes from extapp-front.bytemark.co.uk (212.110.161.177): icmp_seq=1 ttl=57 time=2.86 ms
--- www.bytemark.co.uk ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 2.515/2.641/2.869/0.142 ms

Ping from UK to USA

[~]$ ping www.mediatemple.net -c 4
PING www.mediatemple.net (64.207.129.182) 56(84) bytes of data.
64 bytes from mediatemple.net (64.207.129.182): icmp_seq=1 ttl=49 time=158 ms
--- www.mediatemple.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 154.155/155.282/158.321/1.802 ms

You can immediately see the difference in performance. A single round trip to the USA from the UK took about 156 ms – 62 times longer than to a server in the UK. This means that before you even try anything, the maximum throughput Siege can achieve over a single connection is going to be around 6 transactions per second, due to latency alone.
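
That estimate is just the reciprocal of the round-trip time: with one sequential connection, each transaction costs at least one full round trip, so the ceiling is roughly 1000 ms divided by 156 ms:

[~]$ echo "scale=1; 1000/156" | bc
6.4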

Let's put this to the test then …

[~]$ siege http://www.wiredtree.com/images/arrow.gif -c 1 -t 10S -b
** SIEGE 2.66
** Preparing 1 concurrent users for battle.
The server is now under siege...
Lifting the server siege...done.
Transactions:                      50 hits
Availability:                 100.00 %
Elapsed time:                   9.89 secs
Data transferred:               0.00 MB
Response time:                  0.20 secs
Transaction rate:               5.06 trans/sec
Throughput:                     0.00 MB/sec
Concurrency:                    1.00
Successful transactions:          50
Failed transactions:               0
Longest transaction:            0.20
Shortest transaction:           0.19

Just under the predicted figure of 6 TPS. Unfortunately, this is always going to be the case: latency will ruin any concurrency test, even if the remote server is capable of much more. Let's repeat the exact same test from a server in the USA to see how much latency really affected the result. First up, a quick ping:

[~]$ ping www.mediatemple.net -c 4
PING www.mediatemple.net (64.207.129.182) 56(84) bytes of data.
64 bytes from mediatemple.net (64.207.129.182): icmp_seq=1 ttl=52 time=62.8 ms
--- www.mediatemple.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3067ms
rtt min/avg/max/mdev = 62.872/62.922/62.946/0.029 ms

[~]$ siege http://mediatemple.net/_images/searchicon.png -c 1 -t 10S -b
** SIEGE 2.72
** Preparing 1 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                     73 hits
Availability:                 100.00 %
Elapsed time:                   9.62 secs
Data transferred:               0.22 MB
Response time:                  0.13 secs
Transaction rate:               7.59 trans/sec
Throughput:                     0.02 MB/sec
Concurrency:                    0.99
Successful transactions:          73
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.12

So there you have it – we’ve increased our transactions per second by 50%, without any server-side changes, simply by testing from a server closer to the target site – showing how sensitive Siege is to network latency.
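
As a sanity check on that figure, the ratio of the two measured transaction rates:

[~]$ echo "scale=2; 7.59/5.06" | bc
1.50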

Siege is also going to be limited by the bandwidth available on your test server and the remote server, so once you start hitting higher levels of throughput, the amount of content being downloaded starts to matter. In the example above, throughput was just 0.02 MB/sec – a tiny 0.16 Mbps (megabits per second). But when you start to increase the number of concurrent users, things change radically, and it is very easy to saturate the network connection – long before the server itself has reached its capacity.

So if the server you were testing from only had 20 Mbit of usable bandwidth, you would probably see a maximum of about 500 req/s on a 4 KB resource, no matter how fast the web server itself is.
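
That ceiling is simple arithmetic: a 4 KB response is 32 Kbit on the wire, so a 20 Mbit/s link can carry at most about 625 responses per second, before TCP/IP and HTTP overheads eat into it:

[~]$ echo "20 * 1000 / 32" | bc
625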

Content extracted from http://www.sonassi.com/knowledge-base/magento-kb/why-siege-isnt-an-accurate-test-tool-for-magento-performance/

Ben Lessani

Yes, the different network situation is the cause. An HTTP request tends to require 2 round-trips (for a very small request and response):

Client -> Server, SYN
Server -> Client, SYN/ACK
Client -> Server, ACK and HTTP request
Server -> Client, HTTP response

So, ping your server, and double that; that's the time that's being added to each request, on average.

You can enable HTTP keep-alive with -k and drop one of those round-trips out of the equation, but it will still be slower than local requests due to latency.
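
To see the effect (a sketch; www.example.com stands in for the server under test), run the same ab test without and then with keep-alive, and compare the mean "Time per request":

[~]$ ab -n 100 -c 1 http://www.example.com/
[~]$ ab -n 100 -c 1 -k http://www.example.com/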

Shane Madden

As you suggested, the difference is due to the network transfer between the remote client and the web server.

It's always good practice when benchmarking to try to simulate your users' experience. What I do is run different benchmarks based on my visitors' geographic locations, to find out how they actually experience the site. For example, if most of my visitors are from the USA, I launch an EC2 instance there and run the benchmark from it (see the sketch below).

Based on the results, you can decide whether to deploy some kind of CDN.
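
A minimal sketch of that workflow, assuming you already have test instances running near each visitor population (all hostnames here are placeholders):

# Run the same benchmark from each region and compare the results
[~]$ ssh us-east-tester.example.com 'ab -n 500 -c 10 http://www.mysite.example/'
[~]$ ssh eu-west-tester.example.com 'ab -n 500 -c 10 http://www.mysite.example/'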

golja