
I run a site as a service for the statewide high school swimming community. I do this for love of the sport, so I can't spend a ridiculous sum on hosting, etc. Nor do I need to - except for one day a year.

The coaches all submit their regional entries through my site, and in typical fashion all wait until the due date to do it. That was yesterday, and my site which never fell over all season ground to an unresponsive halt at least 4 times throughout the day. It was never gone for more than an hour, and it always came back. (I issued apache restarts just to be sure).

My question is: how can I tune Apache to handle a one-day surge of users? It makes no sense to pay for more hardware for an entire year when I only need to coax one day of increased performance out of it. I've tried reading about prefork.c and worker.c but I just don't understand it well enough. Here's my current config:

<IfModule prefork.c>
    StartServers         1
    MinSpareServers      1
    MaxSpareServers      5
    ServerLimit          10
    MaxClients           10
    MaxRequestsPerChild  4000
</IfModule>

<IfModule worker.c>
    StartServers         1
    MaxClients           10
    MinSpareThreads      1
    MaxSpareThreads      4
    ThreadsPerChild      25
    MaxRequestsPerChild  0
</IfModule>

The most users I would ever expect at once would be 400. Likely much less than that though. I only had about 70 yesterday and it didn't perform very well. Any suggestions?

UPDATE:

  728 root      20   0  317m  20m 7752 S  0.0  2.7   0:49.58 httpd
19700 webuser   20   0  489m  37m 6792 S  0.0  4.9   0:00.72 httpd
19737 webuser   20   0  493m  42m 6624 S  0.0  5.5   0:00.59 httpd
19756 webuser   20   0  494m  43m 6604 S  0.0  5.7   0:00.58 httpd
19758 webuser   20   0  495m  44m 6780 S  0.0  5.8   0:00.97 httpd
19777 webuser   20   0  493m  42m 6620 S  0.0  5.5   0:01.08 httpd

Took a look at the resources, and I'm not sure what I'm looking at. The process owned by root should be the one that runs Apache. Are those owned by webuser spawned by Apache itself? Why are they so big? The server has about 3/4 GB of RAM; if I figure half of it for Apache, and each process is holding ~40 MB, how can I estimate the number of simultaneous users possible?
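The estimate asked about above can be sketched as a quick back-of-the-envelope calculation. The 768 MB and 45 MB figures are assumptions read off the numbers in this question, not measurements from your server:

```shell
#!/bin/sh
# Rough prefork capacity estimate: with mpm_prefork, each simultaneous
# request ties up one child process, so max children ~= concurrent requests.
total_mb=768                             # ~3/4 GB of RAM on the server
apache_share_mb=$((total_mb / 2))        # reserve half for the OS, DB, etc.
per_child_mb=45                          # worst-case resident size seen in top
max_children=$((apache_share_mb / per_child_mb))
echo "estimated simultaneous requests: $max_children"
```

Note that "simultaneous requests" is not the same as "simultaneous users": most users are reading or typing between clicks, so a handful of children can serve far more than a handful of people, especially if requests finish quickly.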

  • UPDATE: apparently the site stalled again this morning. There shouldn't be much in the way of users on it right now. What can I look for? The only recent change was to the SSL Certificates for curl. – Eddie Rowe Jan 29 '18 at 13:57
  • Cloud hosting somewhere like AWS would allow you to size the server up in advance of these high-traffic bursts. Serving static assets off a CDN or something like S3 would likely help some, as well. Right now, though, you should probably look at that `MaxClients 10` value - unless that's all the server can handle, it should be waaaaay higher. – ceejayoz Jan 29 '18 at 14:02
  • 1
    This also looks as an opening to a success story of any cloud provider out there: you don't need hardware, buy resources as you need them. Another sidenote: apache, with its threaded model, is not the best front-end for a highly loaded scenario. – bohdan_trotsenko Jan 29 '18 at 14:03
  • 1
    Using a cloud provider of some sort - whether AWS or similar, where you can spin up additional servers, or a content delivery network that caches your site. – Jenny D Jan 29 '18 at 14:34
  • Everyone says cloud or cdn without understanding the backend application. I doubt you would see these stalls if you were just serving static html files. Why does this application bring the server to a crawl? – Daniel Widrick Jan 29 '18 at 15:10
  • @DanielWidrick - I would love to be able to answer that question. There is a fair amount of dynamic content, but it's just data and layout, mostly. The only graphics are icons and the site banner. – Eddie Rowe Jan 29 '18 at 15:18
  • 2
    Did you write this application? If not, perhaps it's time to go find the developer and chain him to a desk... – Michael Hampton Jan 29 '18 at 19:49

1 Answer


I'll try to sketch some general rules, given that you've provided very little detail about your environment:

  1. First of all, check which MPM module your Apache is using, so that you know which of the two configuration blocks you posted is the one to tweak:

    apachectl -D DUMP_MODULES | egrep -i "prefork|worker"

  2. If you're in prefork mode and you're running some sort of PHP application via mod_php, the reason your Apache processes grow huge over time is that the module loads more and more PHP code and data into RAM as the application serves new requests. To keep the processes small, try lowering MaxRequestsPerChild to something like 500. Each process is killed after serving MaxRequestsPerChild requests, and a fresh one (which should have a smaller memory footprint) is spawned in its place. This comes at a performance cost, since new processes are re-spawned more frequently, but since you're trying to keep your service running rather than make it lightning-fast, that should be an acceptable tradeoff.
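A minimal sketch of that change in the prefork block; the value 500 is a starting point to experiment with, not a magic number:

```apache
<IfModule prefork.c>
    # Recycle each child after 500 requests so memory accumulated
    # by mod_php is reclaimed; tune based on observed process sizes.
    MaxRequestsPerChild  500
</IfModule>
```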

  3. You don't want your web server to swap: make sure the maximum RAM consumption of each Apache process times ServerLimit or MaxClients never exceeds 75% of the server's available memory.
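Putting points 2 and 3 together, here is one possible prefork block for a server like the one described in the question (~768 MB of RAM, children peaking around 45 MB in top). All numbers are assumptions to be checked against your own measurements, not recommended values:

```apache
<IfModule prefork.c>
    StartServers         2
    MinSpareServers      2
    MaxSpareServers      4
    # ~384 MB budget for Apache / ~45 MB per child = ~8 children
    ServerLimit          8
    MaxClients           8
    # Recycle children to cap mod_php memory growth
    MaxRequestsPerChild  500
</IfModule>
```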

  4. If you can, move to AWS or a similar cloud provider: AWS can automatically scale your server up and down on a predetermined schedule. "Autoscaling" is cloud-ish snake oil in 90% of real-world scenarios, but yours is one of the few cases that could actually benefit from the feature. You shouldn't waste the opportunity.

  5. If you're running into this kind of problem while serving 70 clients on a server with 3/4 GB of RAM, I suspect something is going wrong under the hood; but again, you haven't provided enough detail to make anything more than an educated guess.

MariusPontmercy