I am considering limiting the maximum number of httpd processes on a CentOS/Apache web server. I have seen several posts that discuss doing this to reduce memory load (which is why I am considering it), but none of them discuss the consequences for end users. If someone could clarify that point, it would be greatly appreciated.
3 Answers
The consequence for the end user is simply that once the server reaches its limit on simultaneous processes/threads/connections, it will not accept any more connections until an existing connection finishes.
EDIT: as symcbean pointed out in comments, they will at first just get a delayed response, since the server has a backlog listen queue. It's not until that queue is full that they will get Connection refused. In any case, the result is that they will get a slower response or no response.
As a general rule, you should find out how many simultaneous users you usually have at peak periods and make sure that your server can handle at least twice that. And you need to revisit those statistics regularly and change your settings if the patterns of your visitors have changed.
I'd suggest reading How do you do load testing and capacity planning for web sites? for more information about this.
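As a sketch of the knobs involved, the cap and the backlog described above correspond roughly to these directives (hypothetical values; MaxClients is the Apache 2.2-era name, renamed MaxRequestWorkers in 2.4):

```apache
# Hypothetical example: cap simultaneous connections, and size the
# backlog queue that absorbs overflow before "Connection refused"
MaxClients     256   # no more than 256 concurrent connections served
ListenBacklog  511   # pending connections wait here until a slot frees up
```

Users beyond MaxClients sit in the listen backlog (delayed response); only once that queue is also full do new connections get refused outright.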
-
Many thanks for the quick answer, have upvoted. Is it generally one httpd process per user? – MarcF Apr 18 '13 at 10:31
-
It depends on your configuration; generally Apache will use several threads within each process, with one connection per thread. ("Per user" is not a relevant measure, since one user may make many simultaneous connections, e.g. to load different parts of a page.) – Jenny D Apr 18 '13 at 10:32
-
It won't *accept* more connections, but it won't reject them until the listen backlog is full. And Jenny's description is how the worker MPM behaves. The prefork MPM has one connection per process. The event MPM can have many connections per process. And users != connections – symcbean Apr 18 '13 at 12:25
-
Good point, symcbean. – Jenny D Apr 18 '13 at 12:26
The exact effect depends on the OS and the version of Apache.
On your CentOS system you have two values to consider:
- processes (children)
- threads per process
If you allow 100 children with 100 threads per child, up to 10,000 people can access the site at the exact same time.
If you reduce that to 5 children (same ThreadsPerChild), only 500 people can access it at the same time.
The positive effect for you is the memory saving; the possible negative effect is that people beyond those 500 cannot access your site.
It helps to know how many people access your site in a given period, to get a feeling for how many will access it at the same time.
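With the worker MPM, the 5 × 100 example above would correspond to something like the following (hypothetical values; on Apache 2.4 the MaxClients directive is called MaxRequestWorkers):

```apache
# Hypothetical worker-MPM settings for the 5 x 100 example
<IfModule worker.c>
    ServerLimit         5    # at most 5 child processes
    ThreadsPerChild   100    # 100 threads (connections) per child
    MaxClients        500    # 5 x 100 simultaneous connections total
</IfModule>
```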
The cost is that people effectively get told "busy, try again later", versus spending very long periods waiting for a result.
If, for example, the normal service time for a page were 1/10 of a second, a small server will deliver up to 10 responses quite quickly, but beyond that limit it will slow down dramatically: 50 simultaneous users will see individual responses in 4 seconds (40 times slower), while 100 users will see their responses take 9 seconds (90 times slower). This is a horrible consequence of what could be quite a "normal" overload (;-))
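The numbers above can be reproduced with a small sketch (an assumption on my part: the server completes 10 requests per second, and everyone beyond the first 10 waits in a single queue):

```python
# One reading of the arithmetic above: a server that completes
# 10 requests per second, with excess users waiting in a queue.
SERVICE_RATE = 10  # requests completed per second (0.1 s each)

def queue_delay(n_users, rate=SERVICE_RATE):
    """Extra seconds the last of n_users waits once the server is saturated."""
    queued = max(n_users - rate, 0)  # users beyond the first batch of 10
    return queued / rate

for n in (10, 50, 100):
    print(f"{n} users -> last response delayed about {queue_delay(n):.0f} s")
```

Run it and you get the 4-second and 9-second figures quoted above for 50 and 100 users.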
The suggestion about using "admission controls" to keep you from running out of memory is a good one. If you do a trivial load test and find that you run out of memory and start thrashing at 500 users, set the limit to stay just below that.
--dave