
This is my first time configuring a VPS and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured "per the internet" (online research). No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, the page says "Loading..." and spins more or less indefinitely. On the server side, the shell locks up completely. We have to do a hard reboot from the control panel, and then everything is fine.

Watching "top" I see memory usage hovering between 35 - 55% generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30 - 40 Apache processes showing, which pushed memory over the edge. The error_log tells me that MaxClients was reached right before each reboot. I've tried tinkering with that, but to no avail.
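For reference, a quick one-liner to count the httpd workers and their combined resident memory during a spike (this assumes the CentOS process name `httpd`; on Debian-family systems it would be `apache2`):

```shell
# Count live httpd workers and sum their RSS (resident memory, reported
# by ps in KB, converted here to MB):
ps -C httpd -o rss= | awk '{n++; sum+=$1} END {printf "%d procs, %.0f MB\n", n, sum/1024}'
```

Running this once a second while the server degrades makes it easy to see whether process count or per-process size is what pushes memory over the edge.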

I think we'll probably need to bump the server up to the next RAM tier, but with ~120K pageviews per month that seems like overkill, since the site ran fairly well on a shared server before.

Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps.

Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

Edit: quick top snapshot:

top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
hobodave

3 Answers


Check if you are using any swap memory when the lockups happen (`free` and `vmstat`). If MaxClients is set too high, what happens during traffic spikes is that memory usage and server load increase slowly until you run out of RAM and begin to use swap. The Apache clients then start paging to/from swap memory, which just kills performance; the server load skyrockets and the server "locks up".
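A quick way to check this, reading swap figures straight from the kernel (standard Linux; the `vmstat` invocation is left commented so you can run it interactively during a spike):

```shell
# Is swap in play? Compare SwapTotal vs SwapFree:
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Live view during a spike: non-zero si/so columns mean pages are being
# swapped in/out every second, i.e. the box is thrashing:
# vmstat 1
```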

Ideally you want to set MaxClients such that you never begin to use swap memory. The exact amount will depend on your Apache settings and what you are serving. Since you see 30-40 processes during the traffic spikes I would start at around 30 and see if that prevents swap usage (assuming that is the source of the problem).
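A back-of-the-envelope version of that sizing, as a sketch: all three numbers below are assumptions, not measurements. Measure the real per-process figure yourself with something like `ps -ylC httpd --sort=rss` (the RSS column is in KB).

```shell
# Rough MaxClients estimate: RAM left over for Apache divided by the
# average size of one httpd worker (both figures are assumptions):
total_mb=1024      # VPS RAM
reserved_mb=512    # headroom for MySQL, the OS and the page cache
per_proc_mb=25     # assumed average resident size of one httpd worker
echo "MaxClients ~ $(( (total_mb - reserved_mb) / per_proc_mb ))"
```

With these particular assumptions that works out to about 20; plug in your own measurements before trusting the number.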

uesp
  • Thanks for the answer. I'm confused though... if the error_log is telling me that I reached maxclients, wouldn't it hurt to lower that number? – joshcanhelp Feb 28 '11 at 15:20
  • The issue is that too large of a MaxClients will exceed the available RAM and cause a server lock-up. By lowering MaxClients you will prevent the swap from ever being used which will prevent the lock-up. At the same time a lower MaxClients will indeed result in fewer clients being served when you get a traffic spike but this is simply due to limits of the server. If you want more than 30 concurrent clients you'll have to change the hardware/software configuration (add more RAM, use a different web server, use a caching layer, etc...). – uesp Feb 28 '11 at 17:13

If your server cannot handle spinning up 30-40 httpd processes (it can't), then don't let it. I go into a lot of detail regarding LAMP configuration in my answer to this question. The examples I give are for a 512 MiB VPS, so don't just blindly copy the configuration "per the internet". :)

Short version: scale back your httpd MaxClients and ServerLimit directives to prevent 30+ httpd processes from spinning up. I'd start with something like 10 or 15, depending on the average size of your processes and how much room you've given MySQL. Note that httpd's behavior will be to refuse requests once all client processes are busy.
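For concreteness, here's a sketch of what that might look like in the prefork section of httpd.conf. Every number below is an assumption to tune against your measured process sizes, not a recommendation:

```apache
# Prefork MPM sizing sketch for a ~1 GB VPS running PHP
# (all values are assumed starting points, not measurements):
<IfModule prefork.c>
    StartServers        5
    MinSpareServers     3
    MaxSpareServers     8
    ServerLimit        12
    MaxClients         12
    MaxRequestsPerChild 1000   # recycle workers so leaked memory is returned
</IfModule>
```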

hobodave
  • Hobodave: I thought that other answer was excellent, and upvoted it at the time, so I can't upvote it any more. I upvoted this instead. OP, you could do a **lot** worse than to read HD's linked-through answer. – MadHatter Feb 28 '11 at 17:18
  • Thank you for the answer. I didn't have a lot to go on with the config options so I tried to use common sense (failed me this time :) ). I'll read the other answer as well but I think I have a nice, coherent answer to this question. – joshcanhelp Feb 28 '11 at 23:11

It looks like your system is thrashing.

To debug it, I'd first disable swap. That way you'll get out-of-memory errors instead of lock-ups caused by constantly swapping memory pages in and out, and you'll see much more easily what's causing the trouble.

I'd also:

  • limit available PHP memory: set the memory_limit option in php.ini to 64 MB;
  • limit MaxClients to about 10.

This would force Apache to use no more than about 700 MB of memory (10 × 64 MB plus memory for httpd itself). If a script needed more, it would just fail rather than bring down your server.
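The 700 MB figure works out roughly as follows. The 60 MB baseline for httpd itself is an assumption for illustration, and as the comments on this answer point out, real httpd processes can exceed the PHP memory_limit, so treat this as an upper-bound sketch rather than a guarantee:

```shell
# Worst case: every worker hits the PHP memory_limit at once,
# plus httpd's own baseline footprint (assumed at 60 MB):
max_clients=10
php_limit_mb=64
httpd_base_mb=60
echo "$(( max_clients * php_limit_mb + httpd_base_mb )) MB"
```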

When you find out what is causing the trouble, you can re-enable swap, but give it no more than about 1/4 of your RAM. That way unused memory can still be swapped out, but there isn't enough swap to sustain prolonged thrashing.

Tometzky
  • This makes sense and is a much better explanation than I've found for this problem. So what will happen if traffic goes crazy on the site? Will it just slowly load (preferred) or die? Thanks! – joshcanhelp Feb 28 '11 at 16:13
  • I wish configuring httpd worked so seamlessly. Setting the php memory_limit to 64MB in no way limits the size of Apache to 700 MiB. On production servers with 128 MB memory_limits I often see httpd processes in excess of 1 GB. There's a lot more at play here than simply the PHP memory_limit. e.g. the memory consumed by SQL resultsets is not counted in the PHP memory_limit, at least not until they've been _copied_ (duplicating or triplicating their footprint) into a PHP array. – hobodave Feb 28 '11 at 17:00
  • All shared libraries also count for the size of httpd processes so an httpd process with php and other modules and several drivers in php can make the httpd process get quite big. But all that memory is shared with all in-core versions of the same library. – Koos van den Hout Feb 28 '11 at 18:36