17

We're running a heavy Drupal website that performs financial modeling. We seem to be running into some sort of memory leak: over time the memory used by Apache grows while the number of Apache processes remains stable:

[Munin graphs: memory usage and Apache busy servers over time; memory grows steadily and drops after each httpd reload]

We know the memory problem is coming from Apache/PHP because whenever we issue an /etc/init.d/httpd reload the memory usage drops (see the graphs above and the CLI output below):

Before httpd reload

$ free
             total       used       free     shared    buffers     cached
Mem:      49447692   45926468    3521224          0     191100   22609728
-/+ buffers/cache:   23125640   26322052
Swap:      2097144     536552    1560592

After httpd reload

$ free
             total       used       free     shared    buffers     cached
Mem:      49447692   28905752   20541940          0     191360   22598428
-/+ buffers/cache:    6115964   43331728
Swap:      2097144     536552    1560592

Each Apache thread is assigned a PHP memory_limit of 512 MB, which explains the high memory usage despite the low volume of requests, and a max_execution_time of 120 seconds, which should terminate any request whose execution takes longer and should therefore prevent the constant growth in memory usage we're seeing.

Q: How could we investigate what is causing this memory leak?

Ideally I'm looking for troubleshooting steps I can perform on the system without having to bother the dev team.
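
For example, would something along these lines be a sensible first step to see whether a few httpd processes grow without bound or whether they all creep up together? (The process name and log location below are just assumptions for our RHEL box.)

#!/bin/sh
# Rough sketch: log the resident set size of the largest httpd processes
# every 5 minutes so growth between reloads can be tracked over time.
OUTFILE=/var/tmp/httpd-rss.log            # assumed log location
while true; do
    date >> "$OUTFILE"
    ps -C httpd -o pid,rss,vsz,etime,cmd --sort=-rss | head -n 15 >> "$OUTFILE"
    sleep 300
done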

Additional info:

OS: RHEL 5.6
PHP: 5.3
Drupal: 6.x
MySQL: 5.6

FYI, we're aware of the swapping issue; we're investigating it separately and it has nothing to do with the memory leak, which we had already observed before the swapping started to occur.

Max
  • The last time I hit a severe memory usage problem with LAMP + Drupal was when I had the PHP memcached library in use. After I took it away, memory usage dropped very dramatically. Just a guess. Might type a proper reply for you a bit later. – Janne Pikkarainen Mar 13 '12 at 08:26
  • @JannePikkarainen: we are using the PHP `memcached` library. Based on the memcache admin page `memcache.php`, all we can see is that we have allocated `5GB` to memcache, of which `3.3GB` is being used. Would be great if you can assist us further here. – Max Mar 13 '12 at 08:49
  • Yes, the `memcached` daemon itself probably is just fine. It's the PHP memcache library which might or might not leak memory (and thus grow the Apache processes' memory use). My problem was about 1-2 years ago, so things might have been fixed after that. Anyway, if memcached is not mandatory for you, try to disable it for a while and see if the Apache memory usage still grows. – Janne Pikkarainen Mar 13 '12 at 09:01
  • What is the actual problem? Is performance poor? You're telling us symptoms without explaining what problem we're supposed to be helping you solve. (And what is this swapping issue you're talking about? Are you swapping so much it's impacting performance?) – David Schwartz Mar 13 '12 at 09:03
  • @DavidSchwartz: the problem is that if we don't restart `httpd`, the memory usage keeps growing and the box eventually crashes with out-of-memory kernel messages. Performance is good (until memory usage approaches the memory limit). Please ignore the swapping issue. – Max Mar 13 '12 at 09:26
  • @JannePikkarainen: we really need to use `memcache` otherwise performance would not be acceptable. We do have a workaround in restarting `httpd` but we would like to have a permanent solution. – Max Mar 13 '12 at 09:27
  • Are you using a single Drupal codebase? Or do you have different Drupal projects running different apps each with its own Drupal codebase? – HTTP500 Mar 13 '12 at 14:38
  • @HTTP500: your question would probably be better answered by our dev team but I'll give it my best shot: we only have 1 website which is based on Drupal 6.x core with a lot of contributed modules and custom code. – Max Mar 13 '12 at 14:44
  • @user64204 Can I ask what program you are using to get those graphs? – may saghira Sep 24 '14 at 09:34
  • @maysaghira: Munin – Max Sep 25 '14 at 20:33

4 Answers

10

We know the memory problem is coming from Apache/PHP because whenever we issue an /etc/init.d/httpd reload the memory usage drops

No - that just means it's related to the web traffic. You've gone on to mention that you're running MySQL on the box - presumably managing data for the web server - and it could just as easily be the culprit here, as could other services in your web stack which you've not mentioned.

Each Apache thread is assigned a PHP memory_limit of 512 MB, which explains

No, it doesn't. You're reporting an average of 7 and a peak of 25 busy servers, yet your memory graph shows a delta of around 25 GB; even 25 busy servers at 512 MB each would only account for about 12.5 GB.

Really, you should start again with basic HTTP tuning: you seem to be running a constant 256 httpd processes, yet your peak usage is 25 - this is just plain dumb.
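
As a minimal sketch, assuming the prefork MPM (the usual setup with mod_php) - the numbers below are illustrative, not a recommendation tuned to your traffic:

# httpd.conf (prefork MPM) - illustrative values only
<IfModule prefork.c>
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       20
    MaxClients            50      # comfortably above the observed peak of 25 busy servers
    MaxRequestsPerChild 1000      # recycle children so slowly leaked memory is returned
</IfModule>

Fewer idle children means less memory sitting around doing nothing, and recycling children caps how much any single process can leak.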

and a max_execution_time of 120 seconds, which should terminate any request whose execution takes longer

No - it only applies while the thread of execution is inside the PHP interpreter, not while PHP is blocked (waiting on the database or an external service, for example).

that performs financial modeling

(sigh)

It would have been helpful if you'd provided details of how you have configured Apache (threaded or prefork, and which version), how PHP is invoked (module, CGI, FastCGI), whether you are using persistent connections, and whether you use stored procedures.

I'd suggest you start by moving MySQL onto a separate machine and by dropping persistent connections (if you're currently using them). Set the memory limit much lower and override it on a per-script basis. Make sure you've got the circular-reference garbage collector enabled and configured.
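
As a rough sketch of those last two points on PHP 5.3 (the script name and limits below are made up for illustration):

<?php
// heavy_report.php (hypothetical script) - raise the limit only where it is needed
// instead of handing every request 512 MB via the global php.ini:
ini_set('memory_limit', '512M');

// PHP 5.3 ships the circular-reference collector (zend.enable_gc = On by default);
// in long model-building loops it can also be triggered explicitly:
gc_enable();
for ($i = 0; $i < 100000; $i++) {
    // ... build and discard model objects ...
    if ($i % 10000 === 0) {
        gc_collect_cycles();   // reclaim memory held by reference cycles
    }
}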

symcbean
2

You've probably solved your problem by now. As an interim measure to keep the server from swapping/thrashing, I run the following command every hour from cron:

#!/bin/sh 
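# flush dirty pages to disk, then drop the page cache, dentries and inodes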
sync; echo 3 > /proc/sys/vm/drop_caches
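
Assuming the script is saved somewhere like /usr/local/sbin/drop_caches.sh (path made up for the example), the hourly crontab entry is simply:

0 * * * * /usr/local/sbin/drop_caches.sh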

I am not saying this is a solution, just a way to keep things running and to minimize downtime as you investigate the actual cause of the memory leak.

More details can be found here: http://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/

Orphans
1

Apparently this is just the way PHP works: if you are running long loops in which you allocate objects (and who knows whether you are also passing them around by reference), the only way to deal with it is to stop each PHP process after N requests.

If you run PHP as CGI, every request spawns a fresh process, so there is no memory leak and the performance drop might not be that big. You can also run FastCGI, where for example each php-fcgi process is killed and its memory released after every 1000 requests - again, no memory leak. If you run PHP as the mod_php module, you might try setting MaxRequestsPerChild in httpd.conf to see if it helps.

I would try a low value such as 10: if it works, the performance drop will not be high, but there should be no memory leaks, even under a heavy spike when all 250 httpd processes are in use (10 × 250 = 2,500 requests; at 10 MB each that is 25 GB, so unless you have 128 GB of RAM, also try lowering the number of httpd processes to e.g. 50).
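
For instance, with mod_fcgid or mod_fastcgi, a small wrapper script along these lines (paths assumed) makes each php-cgi worker exit and release its memory after a fixed number of requests:

#!/bin/sh
# php-fastcgi wrapper (illustrative) - recycle each php-cgi worker after 1000 requests
PHP_FCGI_MAX_REQUESTS=1000
export PHP_FCGI_MAX_REQUESTS
exec /usr/bin/php-cgi          # path assumed; adjust for your install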

Andrew Smith
-1

Check the memory limit in the global php.ini file; don't simply declare a value like 1G. I would highly recommend bringing a local php.ini into that account so as not to impact the entire server, and setting the global php.ini limit to around 64M, as this is typically enough for most accounts.

Check your Apache settings too.
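
As a sketch of that split (values are illustrative, and the override assumes PHP runs as mod_php):

; global /etc/php.ini - keep the default low
memory_limit = 64M

# .htaccess or vhost for the one site that needs more (mod_php only)
php_value memory_limit 256M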

Nikhil Babu