
I'm a PHP developer; I have always developed on a LAMP stack and it has all gone well until now. At the moment I am developing a PHP web application, still on Apache and without the use of any framework.

PHP-FPM will serve dynamic and static content on the front end, while PHP CLI scripts will run in the back end through cron jobs (the actual application).
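For context, the back-end part is driven by cron entries along these lines (the script paths and schedules below are only placeholders, not my real ones):

```
# Placeholder crontab entries for the php-cli back-end scripts mentioned above
*/5 * * * *  php /var/www/app/cron/import_data.php    >> /var/log/app/import.log 2>&1
0 3 * * *    php /var/www/app/cron/nightly_report.php >> /var/log/app/report.log 2>&1
```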

This time I am facing the concurrency problem, because this web site will be used by multiple users. The estimated number of concurrent logged-in users is 50-75, at least for the start-up period.

I have read almost everywhere that Nginx is better than Apache at this kind of work; I've read a lot of articles explaining the differences between them and plenty of performance stats, but nowhere have I found the actual threshold (of concurrent users) over which you should switch from Apache to Nginx.

I'm wondering this because, as I said above, I have always worked with Apache, and a change of web server is a significant decision, particularly regarding the time I will have to spend on the Nginx documentation to completely understand its behavior and functionality. Indeed, I have tried to set up a LEMP server, but at the moment Nginx still sounds like Arabic to me.

After this "short" introduction, here are my questions:

  1. What is the recommended threshold (number of users) over which I should switch to Nginx?
  2. Assuming my web site will never have more than 300 online users at the same time, do I really have to spend weeks studying Nginx, even if it is a bit faster at serving content?
  3. Are there significant differences in terms of security between Apache/PHP-FPM and Nginx/PHP-FPM?
  4. Last but not least: I've just switched from OVH to Digital Ocean, and Digital Ocean seems fantastic. They provide pre-built images of a LEMP server. How much can I rely on the Nginx security settings in Digital Ocean's image? I ask because I have searched for Nginx hardening tips, and most of them are applied before building Nginx itself (compile-time options) rather than at run time; a sketch of the run-time kind follows this list. Can anyone who is using Digital Ocean's LEMP image help me?
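To be clear, this is the sort of run-time hardening I mean (a rough sketch with placeholder values; these are not Digital Ocean's actual defaults):

```nginx
# Run-time hardening directives commonly suggested for the http/server context;
# the values below are placeholders, not Digital Ocean's shipped configuration
server_tokens off;                         # hide the nginx version in headers and error pages
add_header X-Frame-Options SAMEORIGIN;     # basic clickjacking protection
add_header X-Content-Type-Options nosniff; # disable MIME-type sniffing
client_max_body_size 2m;                   # cap the request body size
```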

NOTE: I imagine this info will be important for a reply: the production server (at launch) will have a 4-core/8-thread Intel CPU, 8 GB of RAM, and an SSD.

Thank you very much.

    The threshold is when your benchmarks run faster on nginx than on apache. –  Jun 25 '15 at 00:10
  • Right. Only way to know for sure is to run performance tests with your expected loads in LAMP and LEMP staging environments. – Andrew Schulman Jun 25 '15 at 11:45

1 Answer


First, 50-75 concurrent users is nothing: either web server can handle that in a default, totally unoptimized configuration, and you don't really need to worry until you add a zero (or two) to the end of that number. If you're having performance problems at that level, your problem is likely somewhere else (the database or the application code).


That said, there is no magic "threshold" value at which to make an architectural change; in fact, there's no reason to ever switch if you don't want to. If you're familiar with Apache, you may want to stay with it for ease of administration and solve your load problems by adding more servers to distribute the workload, as sketched below.
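For illustration only, the "more servers" route with Apache looks roughly like this (a minimal mod_proxy_balancer sketch; the hostnames are made up, and on 2.4 you also need one of the lbmethod modules loaded):

```apache
# Minimal load-balancing sketch: requires mod_proxy, mod_proxy_http and
# mod_proxy_balancer; the backend hostnames are hypothetical
<Proxy "balancer://appcluster">
    BalancerMember "http://app1.internal:80"
    BalancerMember "http://app2.internal:80"
</Proxy>

ProxyPass        "/" "balancer://appcluster/"
ProxyPassReverse "/" "balancer://appcluster/"
```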

If you're looking to optimize performance, base your decision on performance. Tune your servers as best you can (see here for Apache 2.2, here for Apache 2.4, or here for nginx), and then do load testing to determine what sort of performance each design will give you.
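The load test itself doesn't need to be fancy; something along these lines gives comparable numbers for both stacks (a sketch; swap in your own staging URL and adjust the numbers to your expected traffic):

```
# ApacheBench: 5000 keep-alive requests at a concurrency of 75
ab -n 5000 -c 75 -k http://staging.example.com/

# or wrk: 75 connections over 4 threads for 60 seconds
wrk -t4 -c75 -d60s http://staging.example.com/
```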

If the benchmark numbers are close, it doesn't matter which web server you use, because the bottleneck is elsewhere in your design (and your testing should show you where that bottleneck is so you can work on that particular part of the system).

If the numbers are radically different, you can use the faster web server to stave off the need for more hardware, but at some point you will have to add more machines to handle high concurrency; that's just the way of the web.


Finally, remember that every time you prematurely optimize, Knuth kills a kitten. He was speaking of premature optimization in programming, but it is axiomatic for system administration and infrastructure design as well: if you don't need to maximize performance immediately at launch, deploy what you know (and what you've tested), and improve performance incrementally later.

– voretaq7