9

I have spent a week or two researching and setting up my server to run Apache with the worker MPM and mod_fcgid, trying to tune it for as many concurrent connections as possible. It has been a nightmare to find good information on the worker MPM.

Server: a VPS with 1GB RAM (with Apache off it uses only about 150MB). I would like to cap Apache's memory usage at about 750MB so the server never runs out of RAM.

I have been running the server for about two years without any problems, but we recently started streaming MP3s, which requires more concurrent connections. The server has also seen a few minor DDoS attacks, so I trimmed the settings down a lot to keep it from running out of memory, and I added some firewall rules to rate limit.

The setup I have now looks like it is working well, but I am getting some segmentation fault errors:

[Sat Mar 23 03:19:50 2013] [notice] child pid 28351 exit signal Segmentation fault (11)
[Sat Mar 23 03:56:20 2013] [notice] child pid 29740 exit signal Segmentation fault (11)
*** glibc detected *** /usr/sbin/httpd.worker: malloc(): memory corruption: 0xb83abdd8 ***

And some out-of-memory errors:

Out of memory during array extend.

This is my current setup; I would really appreciate some advice.

Apache Settings:

Timeout 30
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2
#####################
# Start 2 child processes, each spawning 25 threads, so a pool of 50
# threads sits idle, ready to serve incoming requests. As traffic rises,
# Apache forks additional children (25 threads each) until MaxClients
# (400 = ServerLimit 16 x ThreadsPerChild 25) simultaneous requests are
# being served; beyond that, new connections are queued rather than
# accepted. When traffic drops again, Apache cleanly kills off children
# until the number of idle threads is back between MinSpareThreads (25)
# and MaxSpareThreads (50). After a child has served MaxRequestsPerChild
# (1000) requests, the parent recycles it to contain any memory leaks.
<IfModule worker.c>
ServerLimit      16
StartServers         2
MaxClients       400
MinSpareThreads   25
MaxSpareThreads  50 
ThreadsPerChild    25
MaxRequestsPerChild  1000
ThreadLimit          64 
ThreadStackSize      1048576
</IfModule>
#####################
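For what it's worth, here is the capacity arithmetic implied by the block above, written out as a small shell sketch (the variable names are mine, not Apache directives; the numbers are taken straight from the config):

```shell
# Worker MPM capacity math for the configuration above.
SERVER_LIMIT=16
THREADS_PER_CHILD=25
MAX_CLIENTS=400
THREAD_STACK_BYTES=1048576   # ThreadStackSize: 1 MiB per thread

# MaxClients can never exceed ServerLimit * ThreadsPerChild;
# Apache silently caps it at that product.
max_threads=$((SERVER_LIMIT * THREADS_PER_CHILD))
echo "hard thread ceiling: $max_threads"   # 16 * 25 = 400

# Worst-case thread stack footprint if every thread is busy (largely
# virtual memory, but worth knowing on a 1 GB VPS).
echo "stack ceiling: $((MAX_CLIENTS * THREAD_STACK_BYTES / 1024 / 1024)) MB"
```

So MaxClients 400 sits exactly at the ServerLimit x ThreadsPerChild ceiling, and the thread stacks alone can address up to 400 MB.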

And then some settings in fcgid.conf

FcgidMinProcessesPerClass 0 
FcgidMaxProcessesPerClass 8 
FcgidMaxProcesses  25
FcgidIdleTimeout 60 
FcgidProcessLifeTime 120 
FcgidIdleScanInterval 30
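One sanity check on these fcgid numbers: with the 64M PHP memory_limit mentioned at the end of the question, the worst case for FcgidMaxProcesses 25 is (a rough upper bound, assuming every worker hits the limit at once):

```shell
# Worst-case PHP-FCGI memory: every fcgid worker at the PHP memory_limit.
FCGID_MAX_PROCESSES=25
PHP_MEMORY_LIMIT_MB=64
worst_case_mb=$((FCGID_MAX_PROCESSES * PHP_MEMORY_LIMIT_MB))
echo "worst-case PHP memory: ${worst_case_mb} MB"   # 25 * 64 = 1600 MB
```

1600 MB is already well past the whole 1 GB VPS on its own, which is one plausible source of the out-of-memory errors.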

As requested, here is my /etc/my.cnf:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql

#skip-innodb

connect_timeout = 10
max_connections = 300
symbolic-links=0
innodb_file_per_table = 1
myisam_sort_buffer_size = 8M
read_rnd_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
sort_buffer_size = 512K
table_cache = 32
max_allowed_packet = 1M
key_buffer = 16k
query_cache_type = 1
query-cache-size = 32M
thread_cache_size = 16
net_buffer_length = 2K
thread_stack = 256K
wait_timeout = 300

slow_query_log

#log-slow-queries=/var/log/mysql/slow-queries.log
slow_query_log=/var/log/mysql/slow-queries.log
long_query_time = 1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
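To put the my.cnf above in perspective, a tuning-primer-style worst-case estimate is global buffers plus max_connections times the per-thread buffers. This is only a sketch (the real formula includes a few more buffers), using the values from the file:

```shell
# Rough MySQL worst-case memory for the my.cnf above, in KB.
KEY_BUFFER_KB=16                  # key_buffer = 16k (possibly a typo for 16M)
QUERY_CACHE_KB=$((32 * 1024))     # query_cache_size = 32M
MAX_CONNECTIONS=300
# sort_buffer + read_buffer + read_rnd_buffer + thread_stack
PER_THREAD_KB=$((512 + 256 + 512 + 256))

total_kb=$((KEY_BUFFER_KB + QUERY_CACHE_KB + MAX_CONNECTIONS * PER_THREAD_KB))
echo "rough worst case: $((total_kb / 1024)) MB"
```

Roughly 480 MB if all 300 connections are active at once, which would be nearly half the VPS before Apache and PHP are counted.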

And PHP memory_limit = 64M

  • Any ideas anyone? – user1287874 Mar 26 '13 at 01:28
  • so, of the 1 GB you have, if Apache takes 750 MB, how do you envision the other 250 MB are distributed? This is really important... I'm asking because 750 MB is a highly unrealistic and unhealthy expectation. Out of this 1 GB, if you want a well-performing system, ~200-250 MB is probably the ceiling value – Hrvoje Špoljar Oct 15 '16 at 23:16

2 Answers

0

You can try the apache2buddy.pl script to tune your Apache settings for your web application and system.

Another way to sidestep the problem is to create a single-node Docker Swarm cluster and containerize your app: Docker will kill the Apache container if it runs out of memory and then start it again...

0

These settings are all about balance: how high you can push them without risking running out of memory and crashing the server, or having your processes killed by the VPS parent, which may well be why you are getting segfaults.

Usually when I am optimizing a server, I run the MySQL tuning-primer.sh script to get an idea of the maximum amount of memory MySQL can use:

https://launchpad.net/mysql-tuning-primer

Then, for prefork, I would multiply MaxClients by the PHP memory_limit to get an idea of how much memory Apache+PHP can use at most. These are rough estimates, but once you've done this a lot you get a feel for it.
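As an illustration of that multiplication with the asker's own numbers (MaxClients 400, memory_limit 64M; hypothetical here, since they run worker rather than prefork):

```shell
# Prefork-style estimate: MaxClients * PHP memory_limit.
MAX_CLIENTS=400
PHP_MEMORY_LIMIT_MB=64
estimate_mb=$((MAX_CLIENTS * PHP_MEMORY_LIMIT_MB))
echo "estimated ceiling: ${estimate_mb} MB"   # 400 * 64 = 25600 MB
```

Even allowing for it being a deliberately pessimistic bound, 25 GB against a 1 GB VPS shows how far out of balance the limits are.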

I try to keep the total of those two right around the server's maximum memory. If your VPS does not have a swap partition, I would definitely keep it lower than max RAM, for a couple of reasons:

1) Other processes on the server will be using memory.

2) Some PHP scripts may use ini_set() to raise the memory_limit for themselves.

If you can provide your /etc/my.cnf and PHP memory_limit, I may be able to come up with some good settings for you.


edit: I just wanted to mention that I know you are using worker and not prefork. The same concepts apply, but worker has to account for threads and not just MaxClients, so prefork made for a simpler example. I would have to look at the requested information before I can give you good advice.

  • Hi Michael, I have since updated my settings (after getting some advice from people): ServerLimit down to 8, MaxClients down to 200, and FCGID down to 10. I will keep an eye on it and see how it goes. I will post the outputs in the original post soon – user1287874 Apr 02 '13 at 07:22
  • how sooN? april? – Eddie Nov 14 '13 at 21:35