— Yeah, just that simple question. It's too easy to consume a lot of RAM in a brain-dead mod_XxX (say, mod_php) application, so I'm wondering what Apache allows you to do as a countermeasure.
poige
- Which OS are you using? If you're willing to consider OS features instead of Apache features, `ulimit -m` and cgroups might help you. – tomclegg Feb 09 '15 at 07:08
- Thank you, @tomclegg. `ulimit -m` doesn't work in Linux, AFAIK. – poige Feb 09 '15 at 07:10
- Limiting RSS wouldn't necessarily give you what you want anyway: if your apache processes have 50M rss with 9950M swapped out and thrashing, are you really winning? `ulimit -v` might be the closest to what you need, if you inflate the limit to account for memory shared among apache workers. `cgroup` is worth a look, OTOH it could just as well end up arbitrarily killing off small/new workers and letting a few big/old ones run. Also: `MaxRequestsPerChild` is a dumb hammer but can be better than nothing. – tomclegg Feb 09 '15 at 07:31
- Yeah, so finally we're inevitably coming to the conclusion that Apache doesn't have anything to control such an important aspect of resource consumption as memory. ) – poige Feb 09 '15 at 07:50
- Unless you count the sysadmin. :) – tomclegg Feb 09 '15 at 09:25
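To illustrate the cgroup approach tomclegg mentions above, here is a minimal sketch using the cgroup-v1 memory controller (the group name "apache", the 512 MiB cap, and the apache2 process name are illustrative; on a systemd-based box you would more likely set the limit in the service's resource-control settings instead):

# create a memory cgroup for the Apache workers (assumes the v1 memory
# controller is mounted at /sys/fs/cgroup/memory, as on typical
# 2015-era Linux distributions)
mkdir /sys/fs/cgroup/memory/apache

# cap the total RAM of everything placed in this group at 512 MiB
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

# move the running Apache workers into the group
for pid in $(pgrep apache2); do
    echo "$pid" > /sys/fs/cgroup/memory/apache/cgroup.procs
done

As the comment warns, when the group hits its limit the kernel picks a victim to reclaim from or OOM-kill, and it may not be the worker you would have chosen.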
3 Answers
Apache doesn't, but PHP does allow limiting the maximum amount of memory used, in php.ini. For instance:
memory_limit = 128M
Of course, if someone hits this limit, the actual amount of RAM used will be slightly higher, since PHP is embedded into Apache.
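If you want to pin this down per virtual host rather than globally, here is a hedged sketch of the mod_php variant (the vhost name, paths, and the 128M value are illustrative; this only applies when PHP runs embedded as mod_php):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example

    # mod_php only: force the PHP memory limit for this vhost.
    # php_admin_value cannot be overridden by ini_set() or .htaccess.
    php_admin_value memory_limit 128M
</VirtualHost>

Note that this caps each PHP request, not the Apache worker process itself, so a worker's RSS can still exceed the value.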
Michael Hampton
- Sorry, but this is not an answer to my question. mod_php was just an example. IOW I'm looking for a solution that would work for mod_any, not only mod_php. – poige Feb 09 '15 at 00:23
- There isn't one. It depends entirely on the modules you are using. – Michael Hampton Feb 09 '15 at 00:25
- Aha, I guess that's why I still haven't found it. But that only makes it more interesting. ;) – poige Feb 09 '15 at 01:17
- And now you know one reason why everybody switched to nginx. :) – Michael Hampton Feb 09 '15 at 01:37
- Everybody didn't, you see at least one person favorited this question. ;) – poige Feb 09 '15 at 01:49
The `ulimit` shell feature (which uses the `setrlimit` system call) can limit per-process memory use.
On a Debian box, this can be done by adding the following to the bottom of `/etc/default/apache2`:
ulimit -v 1048576
http://feeding.cloud.geek.nz/posts/putting-limit-on-apache-and-php-memory/
See also
- man ulimit
- man setrlimit
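One way to confirm the new limit actually reached the workers (a sketch; the apache2 service/process name assumes a Debian-style setup, and the grep pattern matches the usual /proc/<pid>/limits layout):

# restart Apache so /etc/default/apache2 is re-read
service apache2 restart

# inspect the address-space limit of one of the workers
grep 'Max address space' /proc/$(pgrep -o apache2)/limits

Keep in mind that `ulimit -v` limits virtual address space, so memory-mapped files and shared libraries count against the cap, which is why the comment above suggests inflating it.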
tomclegg
I deem it should be considered answered with exactly this quote from the comments: "There isn't one (a solution). It depends entirely on the modules you are using." © Michael Hampton
poige
- Generally speaking, if the answer is found in another person's answer (or the comments thereto), it's considered polite to approve that answer (and, if necessary, edit it to include the information from the comments). – Jenny D Jun 07 '17 at 06:30
- Generally speaking, this is not the real answer. It's just an indication that there's none given yet. – poige Jun 08 '17 at 09:33