
I have a server with a large number of virtual hosts, about 500, half of them with SSL.

All these hosts are served through mod_wsgi running Django applications.

I noticed that after a certain number of virtual hosts the whole server stops working and all the sites crash. I can work around this by lowering the number of threads for each virtual host with this line:

WSGIDaemonProcess my.domain python-home=/var/www/env python-path=/var/www/my_app threads=1
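
For context, each of these lines sits inside its own vhost block; a minimal sketch (domain and paths illustrative) looks like:

<VirtualHost *:443>
    ServerName my.domain
    WSGIDaemonProcess my.domain python-home=/var/www/env python-path=/var/www/my_app threads=1
    WSGIProcessGroup my.domain
    WSGIScriptAlias / /var/www/my_app/wsgi.py
</VirtualHost>

So there is one daemon process per vhost, and each daemon process runs its request threads plus a few internal mod_wsgi threads, which is why the total thread count is far higher than the number of vhosts even with threads=1.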

Apache crashes when it reaches about 1300 threads (as seen in htop). The Apache error log says the Django module cannot be found, but that is not the real error; everything is configured correctly, and it only happens when the number of vhosts is too high. So I think I'm hitting some kind of process or thread limit in Linux. I'm using Ubuntu 18.04 and Apache 2.4, and I have enough RAM and CPU: the server has 4 GB of RAM and is using just 2 GB, and average CPU usage is 10 to 20%.

I already checked my threads-max limit with:

cat /proc/sys/kernel/threads-max
30893
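
That value is far above the ~1300 threads where Apache crashes, so threads-max does not seem to be the limit being hit. Other limits that can also cap thread creation can be checked like this:

ulimit -u                          # max user processes; on Linux every thread counts against this
cat /proc/sys/kernel/pid_max       # highest PID/TID the kernel will hand out
cat /proc/sys/vm/max_map_count     # each thread stack needs its own memory mapping

but I don't know which of these, if any, is the real bottleneck.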

What can I do to increase the number of virtual hosts my Apache can handle without creating another server?

Gui
  • Try using nginx; it's faster and can handle big workloads – haidarvm Aug 19 '19 at 16:29
  • Nginx is just a webserver. For Django you need a WSGI server too, like gunicorn, and for hundreds of sites that doesn't scale very well; I would need 10 times more resources than Apache with mod_wsgi. – Gui Aug 19 '19 at 17:20
  • I would suggest, if you really have such a massive count, scaling via nginx or haproxy across more than one server; however, I think you will be limited by open files or similar, since the default is about 1024 – djdomi Aug 19 '19 at 20:19
  • Please take a look at these articles on fine-tuning memory: https://stackoverflow.com/questions/2293333/django-memory-usage-going-up-with-every-request and https://serverfault.com/questions/289894/how-can-i-track-down-a-memory-leak-with-wsgi-django-php-and-apache2 – mightyteja Aug 23 '19 at 11:23
  • "ave enough RAM and CPU." - define. Literally I would say that you have a ressource problem and give zero information relevant to the question. – TomTom Aug 23 '19 at 13:00
  • Can you share the crash message with the exact errors? – asktyagi Aug 23 '19 at 14:46
  • Just added more information to the topic – Gui Aug 23 '19 at 19:31
  • I don't have a memory or CPU problem. I'm using AWS, and upgrading to a better instance was the first thing I tried; that is not the problem. – Gui Aug 23 '19 at 19:52

2 Answers


1. You might have more files open than you think. A way to tell (approximately) is shown below (a check of the limits that actually apply follows after point 2):

lsof -u www-data | wc -l

2. Try increasing the stack size. Check the current value with

ulimit -s

and set a new value with

ulimit -s value
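
To see which limits actually apply to the running Apache processes (rather than to your shell), you can read them straight from /proc; a sketch, assuming the standard Ubuntu apache2 process name:

cat /proc/$(pgrep -o apache2)/limits

This prints "Max open files", "Max processes", "Max stack size" and the rest as Apache was actually started with them, which is what matters here.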


P.S. Try these settings for the ability to run ~100k threads:

ulimit -s 256                               # per-thread stack size in KB; smaller stacks allow more threads
ulimit -i 120000                            # max pending signals
echo 120000 > /proc/sys/kernel/threads-max
echo 600000 > /proc/sys/vm/max_map_count    # thread stacks consume memory mappings
echo 200000 > /proc/sys/kernel/pid_max      # thread IDs come out of the PID space

and in /etc/systemd/logind.conf set: UserTasksMax=100000
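
To make these survive a reboot, the kernel settings can go in /etc/sysctl.conf and the per-user limits in /etc/security/limits.conf; a sketch, with the user name and values assumed from the discussion above:

# /etc/sysctl.conf (apply with: sysctl -p)
kernel.threads-max = 120000
vm.max_map_count = 600000
kernel.pid_max = 200000

# /etc/security/limits.conf (stack is in KB)
www-data  soft  stack   256
www-data  hard  stack   256
www-data  soft  nofile  999999
www-data  hard  nofile  999999

Note that an Apache started by systemd does not go through PAM, so the limits.conf values may instead need to be set as LimitSTACK= and LimitNOFILE= in the apache2 service unit.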

cj ayho
  • Ok, I will try it. – Gui Aug 23 '19 at 19:31
  • My lsof command returns 18398, and ulimit -s is 8192 – Gui Aug 23 '19 at 19:34
  • When using ulimit -s 256 I get a segmentation fault when starting the Apache service: Segmentation fault (core dumped) – Gui Aug 23 '19 at 19:40
  • Looks like this is working. I changed the hosts from 1 thread to 10 each and it still works, but I made a few adjustments: I just set ulimit -s 120000 and increased the max number of open files to 999999. How can I make these properties persist after a reboot? – Gui Aug 23 '19 at 20:04
  • /etc/sysctl.conf & /etc/security/limits.conf – hargut Aug 23 '19 at 20:54

In your position, considering that number of virtual hosts, I would split the load across at least a couple of servers using a load balancer and a reverse proxy.
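
For illustration, a minimal nginx reverse-proxy sketch (hypothetical addresses, assuming each Apache/mod_wsgi backend carries the full set of vhosts and the balancer just spreads the requests):

upstream django_backends {
    server 10.0.0.11:8080;              # Apache + mod_wsgi backend 1
    server 10.0.0.12:8080;              # Apache + mod_wsgi backend 2
}

server {
    listen 80;
    server_name _;                      # catch-all; SSL termination left out for brevity
    location / {
        proxy_pass http://django_backends;
        proxy_set_header Host $host;    # pass the original vhost name through to Apache
    }
}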

Stefano Martins