
I currently have an Apache web server set up under which each virtual host is isolated using HTTPD-ITK and the AppArmor module. Each virtual host's workers are setuid/setgid by the server and are then placed in an AppArmor profile.
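For context, this is roughly what such an Apache virtual host looks like, assuming mpm-itk's AssignUserID directive and mod_apparmor's AAHatName directive (the user, group, and hat names below are hypothetical):

    <VirtualHost *:80>
        ServerName site-a.example.com
        DocumentRoot /var/www/site-a

        # mpm-itk: serve this vhost's requests under a dedicated user/group
        AssignUserID site-a site-a

        # mod_apparmor: confine this vhost to a named AppArmor hat
        AAHatName site-a
    </VirtualHost>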

I'm looking to move to Nginx, but I can't find any documentation on setting it up so that worker processes are per virtual host (and can thus be setuid/setgid) rather than shared between all virtual hosts. Is there any way to do this under Nginx?

June Rhodes

2 Answers


nginx is a completely different thing from Apache (apart from both being HTTP servers). Its model is that nginx itself has no "workers" in which a web application runs inside the server process.

nginx essentially does "frontend termination" for HTTP requests and then hands the work off to a backend server, which can be reached over HTTP again or via specific protocols like FastCGI, integrations like mod_passenger, etc.
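A minimal sketch of that hand-off, assuming a PHP-FPM backend listening on a Unix socket (the server name, root, and socket path here are assumptions):

    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;

        location ~ \.php$ {
            # Standard FastCGI variables shipped with nginx
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # Hand the request off to the PHP-FPM backend
            fastcgi_pass unix:/var/run/php-fpm.sock;
        }
    }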

Thus there is no need to do in nginx what you're doing with Apache. This is a feature: it allows the general architecture to be much more streamlined and, in the end, a lot faster and less demanding on resources like CPU power and memory.

Theuni

I ended up solving this by running an Nginx instance for each website, plus a master Nginx which reverse proxies into each of the websites. This is combined with PHP-FPM to provide PHP to those sites that require it.
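For illustration, the master instance's config for one site might look like this, assuming each per-site nginx listens on its own loopback port (the names and ports are hypothetical); each per-site instance then drops privileges via the user directive in its own nginx.conf:

    server {
        listen 80;
        server_name site-a.example.com;

        location / {
            # Forward to site-a's dedicated nginx instance
            proxy_pass http://127.0.0.1:8001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }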

I have put a framework that generates Nginx configs for such a setup in a GitHub repository for anyone to use: https://github.com/hach-que/Nginx-Secure.

June Rhodes
  • That sounds pretty detrimental to the architecture of nginx. Can you elaborate on what you're gaining by doing so? – Theuni Dec 09 '12 at 11:37
  • Each virtual host can't read or write to the files of the other virtual hosts. It's incredibly important to ensure this is the case when you have PHP code that has potential exploits. You don't want PHP in one virtual host compromising all of the virtual hosts (in the case of exploits which replace code, you can lose valuable infrastructure). – June Rhodes Dec 09 '12 at 19:50
  • I should point out that you don't take much of a performance hit with this setup; it's still a million times faster than Apache at serving content. – June Rhodes Dec 09 '12 at 19:51
  • Nevertheless, with FPM the nginx processes don't even run that code, or am I missing something? The interesting bit is that nginx should never run application code in its process space. – Theuni Dec 09 '12 at 21:27
  • You need to set up different php-fpm pools for each user. Running a bunch of different copies of nginx only loses; you gain absolutely nothing here. – Michael Hampton Dec 10 '12 at 01:50
  • An un-isolated Nginx would require read access to every website. If uWSGI or PHP could read files on other sites, then an exploit in one could provide access to the whole codebase. The key element is that if Nginx can read every site, then PHP can read every site too, given the Linux ACLs involved. – June Rhodes Dec 10 '12 at 02:46
  • Web sites aren't meant to be read, then? – Michael Hampton Dec 10 '12 at 15:54
  • The website source code isn't meant to be read by other websites, no. – June Rhodes Dec 10 '12 at 20:04
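For comparison, Michael Hampton's suggestion of per-user PHP-FPM pools would look roughly like this in a pool file (the pool name, user, and socket path are hypothetical):

    ; One pool per site, each running as that site's own user
    [site-a]
    user = site-a
    group = site-a
    listen = /var/run/php-fpm-site-a.sock
    ; nginx's worker user must be able to connect to the socket
    listen.owner = www-data
    listen.group = www-data
    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3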