
I have another related question but I'm deviating from it since I'm changing my server to use nginx instead of Apache.

Related: Server problems, running out of RAM, really high load average

I'm still having issues, though they're much easier to target now. Here's the situation with as much detail as I can give:

My wife has a WordPress site with quite a few plugins, including WooCommerce. Even with just the two of us surfing around non-stop (me with two browsers open, she with one), we're able to bring the server to a halt.

System specs: Debian 7.7, 512MB RAM, 512MB swap, 2 cores (speed unknown), nginx, PHP5-FPM, MySQL Server.

This screenshot of my terminal window pretty much tells the story:

[screenshot of terminal output]

The vmstat si/so values are mostly 0 during light surfing. As soon as we both browse the site at the same time, the values go way up and htop shows some serious struggling. With my new nginx/php-fpm setup, simply running `sudo service php5-fpm restart` fixes everything. The same problem occurred with Apache: if we browsed the site at the same time, si/so skyrocketed and the site froze; it would either recover on its own after a while, or I had to restart Apache.
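
For reference, here is a rough sketch of pulling the si/so columns out of vmstat's default output (the sample line and its numbers are made up for illustration; on a live box you would just run `vmstat 1 5` and watch the columns directly):

```shell
# Sketch: extract si/so (swap-in/swap-out, KB/s) from a vmstat line.
# In vmstat's default layout they are columns 7 and 8.
# Live usage would be `vmstat 1 5`; a captured sample line is parsed here.
sample=' 1  0 120000  12000   8000  90000  512  768     0     0  300  600 40 10 45  5'

si=$(echo "$sample" | awk '{print $7}')
so=$(echo "$sample" | awk '{print $8}')

if [ "$si" -gt 0 ] || [ "$so" -gt 0 ]; then
  echo "swapping: si=${si} so=${so} (memory pressure)"
else
  echo "no swap activity"
fi
```

Sustained non-zero si/so under normal traffic is the clearest sign the working set no longer fits in RAM.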

I'm at a loss here. I'd rather move forward with my nginx setup and troubleshoot that one. In that case, it seems php-fpm may be the issue. But with just the two of us hitting the server, this is pretty unacceptable. If she suddenly had 20 people visiting the site at once, we'd be screwed.

If it's a lost cause trying to run WordPress sites on a 512MB 2-core server, let me know. I may have to upgrade to a 1024MB/4-core.

CaptSaltyJack
    Once again you have too many php-fpm processes running. Tune downward. – Michael Hampton Oct 27 '14 at 01:41
  • `pm` is set to `ondemand`, `pm.max_children = 20` and `pm.process_idle_timeout = 10s`. Are these not good settings? I've read there may be memory leak issues with `pm = dynamic`. – CaptSaltyJack Oct 27 '14 at 01:45
  • @MichaelHampton PS, if you have suggested `pm` settings, please post as an answer. Thanks! – CaptSaltyJack Oct 27 '14 at 01:46
  • 1
    With 20 children you've already run out of memory and then some! Tune downward or upgrade your RAM. – Michael Hampton Oct 27 '14 at 01:46
  • @MichaelHampton Will do. And what is the symptom I should look for if `pm.max_children` is set too low? – CaptSaltyJack Oct 27 '14 at 01:46
  • @MichaelHampton The fpm log shows: `WARNING: [pool www] server reached max_children setting (4), consider raising it`. Do I just ignore that? I figure it will keep telling me to raise it all day until I'm at a number where it doesn't run out of children. – CaptSaltyJack Oct 27 '14 at 01:57
  • 2
    Then you raise it until you start running out of memory. If you still get the message, then you need more RAM. – Michael Hampton Oct 27 '14 at 02:12
  • @MichaelHampton Gotcha. Then perhaps a 512MB RAM server isn't going to cut it. I'm at 5 `max_children` and when the site is under load, RAM use is about 300-400MB. 20 children was killing the server. – CaptSaltyJack Oct 27 '14 at 02:15
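
For reference, the settings discussed above would live in the pool config. A minimal sketch, assuming a Debian php5-fpm install and roughly 60–80MB per WordPress child on a 512MB box (the numbers are estimates, not prescriptions):

```ini
; /etc/php5/fpm/pool.d/www.conf (sketch)
pm = ondemand
pm.max_children = 5            ; hard cap: ~5 children fit in available RAM
pm.process_idle_timeout = 10s  ; kill idle children, freeing memory
pm.max_requests = 500          ; recycle children to contain PHP memory leaks
```

Measure actual per-child RSS under load (e.g. with `ps`) and divide your free memory by that figure rather than trusting these numbers.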

2 Answers


Separate your application and display logic (front) from your data access logic and storage (back). The database generates a lot of IO activity, and as such will slow down everything else running on the same server.

Add RAM. No, seriously, add RAM. Accessing data in memory is faster than accessing data on a flash drive, which is faster than accessing data on a spinning hard drive.

Add a caching layer between the application and the database, such as memcached or some of the others in the space. Again, it means frequently accessed data can be pulled from memory, rather than off of spinning platters.
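
The pattern a layer like memcached provides is cache-aside: check the cache, and only hit the database on a miss. A minimal sketch using a temp directory as a stand-in cache store (the function names and key are illustrative, not any real API):

```shell
# Sketch of the cache-aside pattern, with a temp directory standing in
# for memcached. Names are illustrative.
CACHE_DIR=$(mktemp -d)

fetch_from_db() {           # stand-in for an expensive database query
  echo "row-for-$1"
}

cached_get() {
  key=$1
  file="$CACHE_DIR/$key"
  if [ -f "$file" ]; then
    cat "$file"             # hit: served from cache, no database work
  else
    val=$(fetch_from_db "$key")
    printf '%s\n' "$val" > "$file"   # miss: populate the cache
    printf '%s\n' "$val"
  fi
}

cached_get user42           # miss: goes to the "database"
cached_get user42           # hit: served from the cache
```

With memcached the file operations become network gets/sets, but the hit/miss logic is the same.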

Add more hard drives. A hard drive can only seek to one physical position at a time. By increasing the spindle count, the other drives can seek while one is reading, dividing the most time-expensive part (seeking) across a larger number of physical devices.

Split the web-head across multiple physical boxes and set up a load balancer to spray requests between them (with the caveat that they must all store session data in a shared store, such as a database or a file on a CIFS/NFS share), so that they can split the load.
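
A sketch of what the shared session store could look like in php.ini, assuming the PHP `memcached` extension is installed (the hostname is illustrative):

```ini
; php.ini sketch: keep sessions in a shared store so any web-head
; can serve any user. Requires the php memcached extension.
session.save_handler = memcached
session.save_path    = "sessions.internal:11211"
```

Any shared backend works (database, NFS path); the point is that no session lives only on one web-head.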

Use a web server that offers connection pooling and connection reuse, since spinning up a connection takes a long time relative to servicing the request, especially if most of the data is in memory.
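
With nginx in front of php-fpm, connection reuse can be sketched like this (the socket path is illustrative; upstream `keepalive` needs a reasonably recent nginx):

```nginx
# Sketch: reuse upstream connections instead of opening one per request.
upstream php_backend {
    server unix:/var/run/php5-fpm.sock;
    keepalive 8;              # pool of idle connections kept open
}
server {
    location ~ \.php$ {
        fastcgi_pass php_backend;
        fastcgi_keep_conn on; # don't close the FastCGI connection per request
        include fastcgi_params;
    }
}
```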

Audit the processes on your server or servers and determine whether any are extraneous. Identify any that use resources (RAM, CPU, disk IO) without building value, and determine whether they can be safely disabled.

Stream your logs (especially HTTP access and error logs) over the BSD syslog protocol to a dedicated log box (running rsyslogd or Splunk or the like) instead of writing to local disk. Again, local disk access is expensive.
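
In nginx this could be a one-line change, assuming nginx 1.7.1 or later (the server address is illustrative):

```nginx
# Sketch: ship logs to a remote syslog box instead of local disk.
access_log syslog:server=192.0.2.10:514,tag=nginx combined;
error_log  syslog:server=192.0.2.10:514 warn;
```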

Instrument your server and observe whether, and how much, data is being swapped out from main memory to disk. If it is more than 5%, or the amount fluctuates a lot, ADD MORE RAM. Seriously, paging to disk is slow, expensive and painful.
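
The percentage in question can be computed from /proc/meminfo. A sketch with made-up sample values (a live check would read /proc/meminfo directly):

```shell
# Sketch: compute percent of swap in use from /proc/meminfo-style input.
# Sample values are illustrative; live usage reads /proc/meminfo.
meminfo='SwapTotal:      524288 kB
SwapFree:       393216 kB'

total=$(echo "$meminfo" | awk '/SwapTotal/ {print $2}')
free=$(echo "$meminfo" | awk '/SwapFree/ {print $2}')
used_pct=$(( (total - free) * 100 / total ))
echo "swap in use: ${used_pct}%"
```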

Instrument your PHP code and find which portions are the busiest, and which use which resources. Assign a cost to RAM usage, CPU time, disk IO usage and network usage, scaled by average access time. Actually charge yourself based on these costs and put the money into your optimization fund. Now look at how to save money.
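
One way to get those numbers is a profiler; a sketch using Xdebug 2.x settings (do this on a staging copy, since profiling is far too heavy for a live 512MB box):

```ini
; php.ini sketch: enable Xdebug's profiler to find hot spots.
zend_extension = xdebug.so
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/profiles
```

The resulting cachegrind files can be inspected with a tool such as KCachegrind or Webgrind.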

But seriously, more RAM.


Edit 1: Here is how I might break it out. This is a very rough, back-of-the-napkin design, but it illustrates how to make it scale. The pieces that may look new are the queue bits. They essentially abstract the network traffic away and make the whole system less tightly coupled, which means a momentary glitch between the data back-end and the web-heads is less disruptive and recovers more easily. The queues run between the application on the web-heads and a front-end on the database servers, and are managed by the queue manager. This looser coupling reduces the need to be paged in the middle of the night for a single momentary blip.

[Basic design diagram]

DTK

There is very little tuning you have to do for nginx. All the OS/nginx tuning you can do might get you 5-10% more requests per second. It seems to me you are more memory-bound than anything. Your best bet is to move the php-fpm processes off your web server and create app servers that nginx sends traffic to:

    internet -> nginx -> many app servers
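
A minimal sketch of that layout in nginx config (the IP addresses and port are illustrative; each app box would run its own php-fpm stack):

```nginx
# Sketch: nginx as the front door, proxying to separate app servers.
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
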
Mike
  • This looks like you're describing using nginx as a reverse proxy, but without enough info for someone unfamiliar with doing that to implement it. I doubt doing that where the 'many app servers' are on the same machine will have any benefit. – AD7six Oct 28 '14 at 13:16