
My Ubuntu server had been running wonderfully for six months until last week, when it started being a bit naughty. I've made no changes to the config in the last couple of months, so I'm scratching my head... here's where I'm up to!

I've done some digging in Munin (screenshot) and it looks like Apache is spawning worker processes up to the limit set in the prefork config and then becoming unresponsive (or, from the white gaps in the graph, perhaps going down completely). The server as a whole is still healthy and responds fine to SSH/FTP etc., and memory and CPU usage are well within limits. It's an Ubuntu VPS hosting about 50 sites, all of which are quite low traffic – maybe 500 hits per day across all sites on the server.
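As a quick cross-check on the Munin graph, I've been counting live Apache workers from the shell with the one-liner below (assuming Ubuntu's binary name of apache2):

ps -C apache2 --no-headers | wc -l

With the spare-server settings further down, I'd expect this to idle somewhere between 4 and 8; during a 'ramping up' it climbs towards the limit.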

Unsurprisingly, Apache throws an error around the same time that Munin shows that we've maxed out:

[Tue Mar 10 23:33:57.643098 2015] [mpm_prefork:error] [pid 17764] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

I could, of course, raise MaxRequestWorkers, but I think that's just a bodge that doesn't address the real problem, as this 'ramping up' is happening, at least some of the time, overnight – when the server should be getting virtually no requests.
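To test that 'virtually no requests' assumption, I've been tallying requests per hour from the access logs with a rough one-liner like this (the log paths are a guess based on Ubuntu's default layout – adjust to suit):

# Combined log format: field 4 looks like [10/Mar/2015:23:33:57,
# so characters 2-15 give the date plus the hour
awk '{print substr($4, 2, 14)}' /var/log/apache2/*access*.log | sort | uniq -c | sort -rn | head

If some bot is hammering the server overnight, it should show up as a spike in the hourly counts.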

Here's my prefork module configuration:

<IfModule mpm_prefork_module>
    StartServers            4
    MaxClients              35
    MinSpareServers         4
    MaxSpareServers         8
    MaxRequestWorkers       64
    MaxConnectionsPerChild  100
</IfModule>
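One thing I've since noticed and should flag: as I understand it, MaxClients is just the pre-2.4 name for MaxRequestWorkers, so with both directives present the one parsed last (64 here) should win. If that's right, the config I actually intended would consolidate to something like this (an untested sketch):

<IfModule mpm_prefork_module>
    StartServers            4
    MinSpareServers         4
    MaxSpareServers         8
    MaxRequestWorkers       35
    MaxConnectionsPerChild  100
</IfModule>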

Initially (before I started trying to troubleshoot yesterday), the MaxConnectionsPerChild parameter was 0. I read in various places that leaving it at 0 means child processes are never recycled. I changed it to 100 (I believe), but when I came in this morning after the latest 'ramping up', it was back down to zero (even though my other change, setting MaxClients to 35, stuck).
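In case an include file is overriding my edit (my best guess as to why the value appears to 'reset'), I'm going to search the whole config tree for competing directives, along these lines (Ubuntu keeps everything under /etc/apache2):

# Find every file that sets any of the prefork limits
grep -rin -e MaxConnectionsPerChild -e MaxRequestWorkers -e MaxClients /etc/apache2/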

The only similar report I've come across on ServerFault is this. I tried removing APCu, but it made no difference; there was a similar 'ramping up' last night.

Any ideas, lovely people?

Shankie
  • Check what the actual requests are. Lower the keepalive timeout. – Dan Mar 11 '15 at 10:23
  • Thanks @Dan, but the keepalive timeout is already 5, which seems pretty low. Any idea how to check what the actual requests are? – Shankie Mar 11 '15 at 15:15
  • Have you tried setting LogLevel debug in the Apache config so you can see better what is going on? – yoshiwaan Mar 11 '15 at 21:51
  • Logs :) (see if it's normal requests or some bot hammering you) and mod_status to get a visual map of the state of the Apache processes – Dan Mar 12 '15 at 08:33
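Update: following Dan's mod_status suggestion, I've enabled the status page roughly as below (a minimal sketch – the location path and local-only access rule are my own choices):

<IfModule mod_status.c>
    ExtendedStatus On
    <Location "/server-status">
        SetHandler server-status
        Require local
    </Location>
</IfModule>

I can then watch the worker states live with something like watch -n 5 'curl -s http://localhost/server-status?auto'.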

0 Answers