
I'm getting a server ready for production, and I'm trying to accurately estimate how much it can handle.

For example, on one of the static pages (the whole page is identical every time), if I run Apache Benchmark against it locally, I can get anywhere between 5,000 and 10,000 requests per second (depending on the concurrency). But if I run the same test from a different server (still benchmarking the same server, just running Apache Benchmark on a different machine), I get around 168 requests per second. That's a huge difference.
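
For reference, the kind of runs I mean look like this, varying -c to test different concurrency levels (the URLs here are placeholders):

    # Local run: ab on the same machine as the web server
    ab -n 10000 -c 100 http://localhost/static.html

    # Remote run: the same command from a different machine
    ab -n 10000 -c 100 http://192.168.1.10/static.html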

For another example, on a dynamic page I might get between 100 and 200 requests per second when testing locally, yet only 50 requests per second when testing from a different server.

How accurate are these tests? How can I figure out what the server can really handle?

Matthew

1 Answer


That is not a server issue - it depends a lot on the load you generate. Any realistic sizing has to come from a realistic test of data delivery, for which there are frameworks (commercial, possibly some open source) that run through your web application in a determined way, possibly with many agents hitting the server, to find out the load it can handle.
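
Apache JMeter is one open-source example of such a framework: it replays a scripted walk through your application with many concurrent threads. A non-GUI run might look like this, where the test-plan and log file names are placeholders:

    # Run a scripted test plan in non-GUI mode and log the results
    jmeter -n -t webapp-plan.jmx -l results.jtl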

For example, on one of the static pages (the whole page is identical every time), if I run Apache Benchmark against it locally, I can get anywhere between 5,000 and 10,000 requests per second (depending on the concurrency). But if I run the same test from a different server (still benchmarking the same server, just running Apache Benchmark on a different machine), I get around 168 requests per second. That's a huge difference.

That is pretty much the network latency being introduced. Testing on the same server means essentially zero latency and practically unlimited bandwidth. That changes when you run the test from another server.
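
As a rough back-of-the-envelope check, assuming ab keeps each connection busy with exactly one request at a time, throughput from a remote client is capped near concurrency divided by round-trip time, no matter how fast the server is. The host below is a placeholder, and the 6 ms figure is only illustrative:

    # Measure the round-trip time to the server under test
    ping -c 10 192.168.1.10

    # With concurrency C and round-trip time RTT, ab cannot exceed ~C / RTT.
    # e.g. concurrency 1 at a 6 ms RTT: 1 / 0.006 s = ~166 requests/sec,
    # which is right around the 168 req/s seen from the remote machine.

So if your remote run used low concurrency, a few milliseconds of network round trip alone would explain the numbers you saw.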

TomTom