Apache seems to have no control over how much memory it reserves. With MPM prefork you control the maximum number of processes you want to allow, but whatever number you use, there is always a risk that the existing processes will require more memory than is physically available. If the condition causing the high memory use does not change, the OOM killer will start killing random Apache processes without solving anything, Apache will keep creating new processes, and the server will effectively crash. So, three questions (a rough sizing sketch follows them):

  1. Shouldn't Apache control how much memory it assigns?

  2. Isn't there a way to prevent Apache from creating new processes if there is no more memory available regardless of MaxClients or ServerLimit?

  3. If not, can anyone confirm whether nginx has the same risk? It seems to me that nginx won't drive the OOM killer to the point of a crash as easily as Apache, because I've read it runs a small number of worker processes with more or less stable memory consumption. What I don't know is whether nginx can stop creating processes once memory has been exhausted and just queue those requests until more memory is available.
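
For reference, the per-worker footprint can be measured. A rough sketch, assuming the workers are named apache2 (on RHEL-family systems they are httpd); note that RSS overcounts somewhat because shared pages are charged to every process:

    # average resident set size per Apache worker, in MB
    ps -C apache2 -o rss= | awk '{ sum += $1; n++ } END { if (n) printf "%.1f\n", sum / n / 1024 }'

At, say, 50 MB per worker and 1.5 GB of RAM left over for Apache, anything above MaxClients 30 lets the worst case exceed physical memory.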

For those who downvoted the question: the research effort was done, but it suggested that the answers to questions 1 and 2 were exactly as pointed out in womble's comment: yes to question 1, and no to question 2. Maybe these answers are obvious to many people, but I found them hard to believe, so I decided to ask here.

jacmkno
  • Why the downvotes? Ain't this information useful? – jacmkno Nov 20 '15 at 20:13
  • The tooltip over the downvote arrow says, "This question does not show any research effort; it is unclear or not useful". That's why the downvotes. – womble Nov 21 '15 at 01:27
  • I didn't know about the tooltip... Thanks for pointing it out. I edited the question to show the research effort behind it, and rewrote the malformed question about nginx to make a third question. – jacmkno Nov 21 '15 at 02:13
  • Does this question still seem useless, unclear, or lacking in research effort? I cannot remove it now that it has answers... I'm starting to believe the problem was not the research effort but the clarity. Can you explain which one motivated you to downvote? – jacmkno Nov 21 '15 at 18:21

1 Answer

To answer the direct questions you asked:

Shouldn't Apache control how much memory it assigns?

Yes, it should. And it does.

Isn't there a way to prevent Apache from creating new processes if there is no more memory available regardless of MaxClients or ServerLimit?

No, there is not.

can anyone confirm if nginx has the same risk?

Yes, it does, insofar as there is no way to prevent nginx from creating new processes in the face of memory pressure.
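
For what it's worth, the "small number of running processes" behaviour mentioned in the question is visible directly in the nginx configuration. A minimal sketch (standard nginx directives; the numbers are illustrative):

    # nginx serves all traffic from a fixed pool of worker processes;
    # extra load queues on the listen socket rather than spawning more.
    worker_processes  4;

    events {
        worker_connections  1024;
    }

Total memory stays roughly proportional to worker_processes, but nothing here reacts to memory pressure either: if a worker balloons (large buffers, a leaky third-party module), the OOM killer is still in play.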

A broader answer, which may be of more use to people who come here seeking information on this topic, is below.

Memory isn't consumed at random. Absent a leak, or a halting-problem-compliant sublanguage being executed in the process, the memory that a single Apache worker will use will be well-bounded. So, the solutions to your problem are:

  • Know what your Apache processes are actually doing (a config sketch follows this list).
  • Don't use leaky modules.
  • Don't embed full programming language runtimes in processes you don't want to have go nuts on memory consumption.
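
To make that actionable: once you have measured the per-worker footprint, you can bound the worst case in the MPM configuration. A sketch, assuming Apache 2.4 directive names (on 2.2, MaxRequestWorkers is spelled MaxClients), ~50 MB workers, and ~1.5 GB to spare:

    <IfModule mpm_prefork_module>
        StartServers              5
        MinSpareServers           5
        MaxSpareServers          10
        # worst case is roughly ServerLimit x per-worker RSS:
        # 30 x 50 MB is about 1.5 GB
        ServerLimit              30
        MaxRequestWorkers        30
        # recycle workers periodically so slow leaks cannot accumulate
        MaxConnectionsPerChild 1000
    </IfModule>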

That last point is worth highlighting: Apache memory shenanigans are almost exclusively caused by mod_php, which was a bad idea when it was created and has only gotten worse over time. The thing is, though, that if you move PHP execution to a separate process with (say) php-fpm, you've still got the possibility that those processes could go wild and eat all your RAM, unless you've got appropriate resource controls in place (largely the same knobs you have inside Apache). The only benefit you get, from a memory-consumption point of view, by running php-fpm instead of mod_php is that there are only a few php-fpm workers, relative to the number of Apache workers (in prefork) needed to service a typical workload, so instead of a large number of large processes you get a small number of large processes.
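
For reference, the php-fpm resource controls alluded to here are the pool's process-manager settings. A sketch, assuming a stock pool file (e.g. pool.d/www.conf); the values are illustrative:

    ; cap the number of PHP workers, and therefore total PHP memory
    pm = dynamic
    pm.max_children = 10
    pm.start_servers = 3
    pm.min_spare_servers = 2
    pm.max_spare_servers = 5
    ; recycle each worker after N requests to contain slow leaks
    pm.max_requests = 500

Combined with PHP's own memory_limit, this puts a predictable ceiling on that "small number of large processes".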

womble
  • I don't have a problem with this, and I don't know what led you to believe I do. I just want the answers to those two questions confirmed by someone who knows more about Apache than I do... The risk I mention can be managed, but I think it should not exist, or should be so low as to be irrelevant. – jacmkno Nov 20 '15 at 21:02
  • The answers to the direct questions you asked are "Yes" and "No". Is that more helpful to you? – womble Nov 21 '15 at 01:28
  • And to answer your newly-added direct question: "Yes". – womble Nov 21 '15 at 08:17
  • I would leave this as the correct answer, but it focuses mainly on mod_php and other unrelated subjects. If you compile a simpler, more direct answer focusing on the actual questions I asked, I will flag it as correct. Also, I don't think the actual answers should be in comments... – jacmkno Nov 21 '15 at 18:28
  • Well, you say Apache does control how much memory it assigns, and yet it does not stop creating processes once memory is exhausted. What kind of "control" are you talking about, then? If that control cannot be used to stop creating processes, what is its purpose? – jacmkno Nov 29 '15 at 00:33
  • It controls how much memory it "assigns" by only "assigning" memory when it needs it, in a manner which can be analysed and predicted by the operator, given the server configuration in use. With the "leak, or a halting-problem-compliant sublanguage" caveat I included in my answer. – womble Nov 29 '15 at 00:36