
EDIT #2 July 23, 2015: Looking for a new answer that identifies an important security item missed in the below setup or can give reason to believe everything's covered.

EDIT #3 July 29, 2015: I'm especially looking for a possible misconfiguration like inadvertently permitting something that could be exploited to circumvent security restrictions or worse yet leaving something wide open.

This is a multi-site / shared hosting setup and we want to use a shared Apache instance (i.e. one that runs under a single user account) but with PHP / CGI running as each website's user to ensure no site can access another site's files, and we want to make sure nothing's being missed (e.g. if we didn't know about symlink attack prevention).

Here's what I have so far:

  • Make sure PHP scripts run as the website's Linux user account and group, and are either jailed (such as using CageFS) or at least properly restricted using Linux filesystem permissions.
  • Use suexec to ensure that CGI scripts can't be run as the Apache user.
  • If you need server-side include support (such as in shtml files), use Options IncludesNOEXEC to prevent CGI from being run when you don't expect it to (though this shouldn't be as much of a concern if using suexec).
  • Have symlink attack protection in place so a hacker can't trick Apache into serving up another website's files as plaintext and disclosing exploitable information like DB passwords.
  • Configure AllowOverride / AllowOverrideList to allow only directives that a hacker couldn't exploit (a minimal per-vhost sketch follows this list). I think this is less of a concern if the above items are done properly.
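To make the list above concrete, here is a minimal per-vhost sketch combining several of those items (suexec, owner-matched symlinks only, and a restricted AllowOverrideList). The user/group names, paths, and the particular directive selection are illustrative assumptions, not a vetted configuration:

<VirtualHost *:80>
    ServerName site1.example.test
    DocumentRoot /home/site1/public_html

    # CGI scripts (and suexec-aware handlers) run as the site's own account
    SuexecUserGroup site1 site1

    <Directory /home/site1/public_html>
        # Follow symlinks only when link and target have the same owner, and
        # allow SSI without exec - mitigates the symlink-to-other-site trick
        Options SymLinksIfOwnerMatch IncludesNOEXEC

        # Permit only .htaccess directives considered non-exploitable
        AllowOverride None
        AllowOverrideList Redirect RedirectMatch ErrorDocument DirectoryIndex Require
    </Directory>
</VirtualHost>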

I'd go with MPM ITK if it weren't so slow and didn't run as root, but we specifically want to use a shared Apache yet make sure it's done securely.

I found http://httpd.apache.org/docs/2.4/misc/security_tips.html, but it wasn't comprehensive on this topic.

If it's helpful to know, we're planning to use CloudLinux with CageFS and mod_lsapi.

Is there anything else to make sure to do or know about?

EDIT July 20, 2015: People have submitted some good alternate solutions which are valuable in general, but please note that this question is targeted only at the security of a shared Apache setup. Specifically, is there something not covered above which could let one site access another site's files or compromise other sites somehow?

Thanks!

sa289
  • wait so are you or are you not blocking commands like shell_exec – Michael Bailey Jul 11 '15 at 04:53
  • Or rather functions. Not commands. – Michael Bailey Jul 11 '15 at 05:11
  • Correct - we're not blocking those commands. Because CageFS isolates PHP to such a high degree, limiting such commands as part of a defense in depth approach doesn't seem worth it given that we do utilize them for legitimate purposes at times. If the server were a high value target to hackers (e.g. stored credit card data or something like that), then the benefits might outweigh the drawbacks, but in our case I don't think the restriction is warranted. That's something definitely worth considering for people who aren't using CageFS or some equivalent solution though. – sa289 Jul 11 '15 at 17:32
  • Sadly you seem to have missed the fact that cPanel (and other panel admin) questions are not topical on SF for exactly the reasons you mention in some comments, viz. the panel gets in the way of real system administration. http://meta.serverfault.com/q/8055 and http://meta.serverfault.com/q/8094 are relevant – user9517 Jul 19 '15 at 19:16
  • @Iain I've removed the side mention of cPanel from my question and revised the comment that mentioned it since my response is still similar. Using a shared Apache is a legitimate use case which is independent of control panels, and it has definite benefits over other setups from a simplicity standpoint, though those have their benefits too. I've read http://serverfault.com/help/on-topic and I agree it would be off topic if my question were about using a service provider's control panel. My goal is to make sure if doing a shared Apache, that I don't miss any important security steps. – sa289 Jul 19 '15 at 19:35
  • Sadly though you have discounted good answers because of CPanel - the rest is history. – user9517 Jul 19 '15 at 19:40
  • Here's a summary of the reasons I "discounted" those answers. Dedicated Apache per site or Docker containers - requires more dedicated public IPs or added complexity of reverse proxy. Selinux - requires configuring and running selinux in enforcing mode. VMs - requires extra system resources over a non-VM setup. I think they are all good solutions, just not without drawbacks that I'd rather not go with. – sa289 Jul 19 '15 at 19:46
  • May I direct you to our sister site http://security.stackexchange.com/questions/77/apache-server-hardening . The top answer links to a document that contains much valuable information on hardening Apache; not all of it is necessarily applicable to your specific scenario. I am leaving this as a comment as I don't have time to distill the information for your scenario. – artifex Jul 29 '15 at 16:34
  • @artifex Thanks. Someone else posted that as well. For anyone reading this, here is my response as to what I found most helpful in it http://serverfault.com/questions/704997/#comment876949_704997 – sa289 Jul 29 '15 at 16:36

8 Answers


I completely agree with the items you have so far.

I used to run such a multi-user setup a few years ago and I basically found the same trade-off: mod_php is fast (partly because everything runs inside the same process) and suexec is slow but secure (because every request forks a new process). I went with suexec, because user isolation was required.

Currently there is a third option you might consider: give every user their own php-fpm daemon. Whether this is feasible depends on the number of users, because every one of them has to get at least one php-fpm process running under their user account (the daemon then uses a prefork-like mechanism to scale for requests, so the number of processes and their memory usage may be limiting factors). You will also need some automated config generation, but that should be doable with a few shell scripts - a minimal pool sketch follows below.
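As a rough illustration (the pool name, socket path, user names and limits are placeholders that the config generation would stamp out once per account), one such pool might look like this:

; /etc/php-fpm.d/site1.conf - one pool per hosting account
[site1]
user = site1
group = site1
; the shared Apache user must be able to connect to the socket
listen = /var/run/php-fpm/site1.sock
listen.owner = apache
listen.group = apache
listen.mode = 0660
; spawn workers only when this site actually gets traffic
pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 30s
; belt-and-braces confinement in addition to filesystem permissions
php_admin_value[open_basedir] = /home/site1/:/tmp/

The shared Apache would then hand PHP requests to that socket, e.g. via mod_proxy_fcgi with SetHandler "proxy:unix:/var/run/php-fpm/site1.sock|fcgi://localhost".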

I have not used that method in large environments but IMHO that is a good solution to provide good PHP website performance while still isolating users on the process level.

mschuett
  • Correct me if I'm wrong, but I think the mod_lsapi + CageFS solution we're already planning to go with for PHP is at least as good if not better than PHP-FPM, isn't it? Thanks – sa289 Jul 13 '15 at 20:42
  • I have no experience with mod_lsapi and would have reservations trusting a closed source single vendor solution. But according to its advertisement page it should be just as good and just as fast, yes. -- One point I would look into is how it spawns new processes (upon new requests) and how it changes their effective user id to the user's. Regarding security that is the weakest point; the suexec documentation has a good explanation of things to look out for. – mschuett Jul 14 '15 at 08:10
  • I suppose there's reason to not trust either closed or open source hehe (Shellshock took 25 years to discover, Heartbleed 2 years, and who knows about TrueCrypt). Fortunately I think mod_lsapi is based around LiteSpeed's open source offering so there are at least a couple of vendors looking at some of it, plus whoever wants to look at the open source code it's based upon. I'm especially looking for any key security things I could be missing in the proposed setup (e.g. causing PHP to run as the site's user but forgetting about suEXEC for CGI scripts). Thanks – sa289 Jul 14 '15 at 19:34
  • We are using this approach (webserver with php-fpm) on pretty large-scale websites (where the webserver farm connects to the php-fpm farm via a load-balancer). The beauty of such a configuration is that virtual hosts are separated at the OS level and that boundary is not easily circumvented (just make sure that the home directory of the virtual host has permissions like 0710, owned by the vhost user with the group of the webserver process; then it's a matter of proper permissions - if a file is world readable it will be accessible to the webserver) – galaxy Jul 18 '15 at 12:34

Everything you have so far seems well thought out. The only problem I can see is that most exploits seek to gain root access in one way or another. So even if each site and its corresponding processes and scripts are jailed correctly and everything has its own user and permissions, a hacker with root couldn't care less - they will just sidestep everything you've set up.

My suggestion would be to use some sort of VM software (VMware, VirtualBox, QEMU, etc.) to give each site its own OS jail. This allows you, as a system admin, not to worry about a single compromised site. If a hacker gains root by exploiting PHP (or any other software) on a site's VM, just pause the VM and dissect it later, apply fixes, or roll back to an unbroken state. This also allows the site admins to apply specific software or security settings to their specific site environment (which might break another site).

The only limitation to this is your hardware, but with a decent server and the correct kernel extensions this is easy to deal with. I've successfully run this type of setup on a Linode, granted both the host and the guest were very, very sparse. If you're comfortable with the command line, which I assume you are, you shouldn't have any problems.

This type of setup reduces the number of attack vectors you have to monitor and allows you to focus on the Host Machines security and deal with everything else on a site by site basis.

T. Thomas
  • I agree they provide better security and have other benefits, but they also have drawbacks, some of which you mentioned. The premise of this question though is having a shared Apache. With CageFS, the odds of a root exploit should be reduced - not as effectively as a VM, but to a level I feel good about given the sites we're running. My main goal is to avoid any missteps in proper security, such that it'd have to be a perfect storm for someone to gain root access. For example, I could easily have seen not knowing about symlink attacks in the past and that having been a serious mistake. Thx – sa289 Jul 14 '15 at 19:13

I would suggest having each site run under its own Apache daemon, and chrooting Apache. PHP's system() and related shell functions will fail, since the Apache chroot environment will not have access to /bin/sh. This also means that PHP's mail() function won't work either, but if you're using an external mail provider to send out mail from your email application, then this shouldn't be a problem for you.
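A bare-bones sketch of that layout (the ports, paths and names are examples, and populating the chroot itself needs the usual care) is a front-end vhost on the shared address proxying to each site's private instance:

# Front-end (public-facing) Apache: one vhost per site, proxying to that
# site's own back-end instance on a loopback port
<VirtualHost *:80>
    ServerName site1.example.test
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>

# Fragment of the back-end instance's config (separate httpd.conf / service),
# running as the site's own user inside its chroot (mod_unixd's ChrootDir)
Listen 127.0.0.1:8081
User  site1
Group site1
ChrootDir /home/site1/chroot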

Alpha01
  • I'd like to do it this way - we've done it that way in the past (minus the chrooting), but unfortunately it prevents us from using a standard control panel setup and also takes more dedicated IP addresses, unless doing a more complicated reverse proxy setup with Apache listening on internal IP addresses as documented on the Apache site. – sa289 Jul 14 '15 at 17:49
  • Ah yes, that is a good point you mentioned there. It will definitely require having more than one dedicated IP, or reverting to a reverse proxy. – Alpha01 Jul 17 '15 at 17:17
  • If anyone reading this answer is interested in the documentation for the reverse proxy setup, check out http://wiki.apache.org/httpd/DifferentUserIDsUsingReverseProxy – sa289 Jul 17 '15 at 17:22

SELinux might be helpful with mod_selinux. A quick howto is featured here:

How can I use SELinux to confine PHP scripts?

As the instructions are a little dated, I checked whether this works on RHEL 7.1:

I've used Fedora 19's version and compiled with mock against RHEL 7.1 + EPEL.

YMMV if you use the basic epel config mock ships with:

[mockbuild@fedora mod_selinux]$ mock -r rhel-7-x86_64 --rebuild \
    mod_selinux-2.4.3-2.fc19.src.rpm

Upgrade your target system first to ensure that selinux-policy is current.

Install on target box (or put it on your local mirror first):

yum localinstall mod_selinux-2.4.3-2.el7.x86_64.rpm

Now, you must assign each virtual host in Apache a category. This is done by adding a selinuxDomainVal line to each vhost, as in the example below.

<VirtualHost *:80>
    DocumentRoot /var/www/vhosts/host1
    ServerName host1.virtual
    selinuxDomainVal *:s0:c0
</VirtualHost>

<VirtualHost *:80>
    DocumentRoot /var/www/vhosts/host2
    ServerName host2.virtual
    selinuxDomainVal *:s0:c1 
</VirtualHost>

Next, relabel each host's document root with the same category that was assigned to it in the httpd config.

chcon -R -l s0:c0 /var/www/vhosts/host1
chcon -R -l s0:c1 /var/www/vhosts/host2

If you want the labelling to be honoured when you do a system relabel, you'd better update the local policy too!

semanage fcontext -a -t httpd_sys_content_t -r s0-s0:c0 '/var/www/vhosts/host1(/.*)?' 
semanage fcontext -a -t httpd_sys_content_t -r s0-s0:c1 '/var/www/vhosts/host2(/.*)?'
fuero
  • I like the idea of this, but I'd have to turn on selinux on the server which may introduce other difficulties. +1 though since I think it could be a great solution for people who don't mind that. – sa289 Jul 19 '15 at 19:28

There are a lot of good technical answers provided already (please also have a look here: https://security.stackexchange.com/q/77/52572 and Tips for Securing a LAMP Server), but I would still like to mention an important point (from yet another perspective) about security: security is a process. I'm sure you have considered this already, but I still hope it could be useful (also for other readers) to rethink it from time to time.

E.g., in your question you concentrate mainly on the technical measures: "this question is targeted only regarding the security of a shared Apache setup. Specifically, are there any security steps that are important to take but are missing from the list above when running shared Apache and PHP."

Almost all answers here and on the other two questions I mentioned also seem to be purely technical (except the recommendation to stay updated). And from my point of view this could give some readers the misleading impression that if you configure your server according to best practice once, you stay secure forever. So please do not forget about the points that I find missing in the answers:

  1. First of all, do not forget that security is a process and, in particular, remember the "Plan-Do-Check-Act" cycle, as recommended by many standards, including ISO 27001 (http://www.isaca.org/Journal/archives/2011/Volume-4/Pages/Planning-for-and-Implementing-ISO27001.aspx). Basically, this means that you need to regularly revise your security measures, and update and test them.

  2. Regularly update your system. This will not help against targeted attacks using zero-day vulnerabilities, but it will help against almost all automated attacks.

  3. Monitor your system. I really miss this point in the other answers. From my point of view, it is extremely important to be notified as early as possible about any problem with your system.

    This is what the statistics say about it: "Average time from infiltration to discovery is 173.5 days" (http://www.triumfant.com/detection.html), "205 median number of days before detection" (https://www2.fireeye.com/rs/fireye/images/rpt-m-trends-2015.pdf). And I hope that these numbers are not what we all want to have.

    There are a lot of solutions (including free ones), not only for monitoring the state of the service (like Nagios), but also intrusion detection systems (OSSEC, Snort) and SIEM systems (OSSIM, Splunk). If that becomes too complicated, you could at least enable something like fail2ban (a minimal sketch follows this list) and/or forward your logs to a separate syslog server, and have e-mail notifications about important events.

    Again, the most important point here is not which monitoring system you choose, the most important is that you have some monitoring and revise it regularly according to your "Plan-Do-Check-Act" cycle.

  4. Be aware of vulnerabilities. Same as monitoring. Just subscribe to a vulnerability mailing list to be notified when a critical vulnerability is discovered in Apache or another service important to your setup. The goal is to be notified about the most important issues that appear before your next planned update.

  5. Have a plan for what to do in case of an incident (and regularly update and revise it according to your "Plan-Do-Check-Act" cycle). If you ask questions about secure configuration, it means that the security of your system is important to you. However, what should you do if your system gets hacked despite all the security measures? Again, I do not mean only technical measures here like "reinstall the OS": Where should you report the incident according to the applicable law? Are you allowed to shut down/disconnect your server immediately (how much does that cost your company)? Who should be contacted if the main responsible person is on vacation or ill?

  6. Have a backup, archive and/or replacement/replication server. Security also means availability of your service. Check your backup/archive/replication regularly, and also test restore procedures regularly.

  7. Penetration testing? (Again, see the "Plan-Do-Check-Act" cycle.) If that feels like too much, you could at least try some free online tools that scan your web services for malware and security issues.
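To illustrate the fail2ban suggestion from point 3, a minimal jail.local sketch might look like the following; the jail names are the ones shipped with recent fail2ban versions, but the log path, thresholds and mail address are assumptions you would adapt to your distribution:

# /etc/fail2ban/jail.local (minimal sketch)
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5
destemail = admin@example.test
# ban and send a notification mail with log excerpts
action = %(action_mwl)s

[sshd]
enabled = true

[apache-auth]
enabled = true
logpath = /var/log/httpd/*error_log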

Andrey Sapegin
  • Good addition for people to keep in mind. In case it's helpful to anyone, I spent a lot of time skimming through the first two links you posted and what they linked to to see if I could find anything important I missed. The resources linked from those that I thought were the most helpful were http://benchmarks.cisecurity.org/downloads/show-single/index.cfm?file=apache.300 and http://iase.disa.mil/stigs/app-security/web-servers/Pages/index.aspx, though there was a decent amount of overlap between the two. I didn't come across anything major but still worth reading if security is paramount. – sa289 Jul 21 '15 at 22:02

Your use case is ideal for docker containers.

Each container can represent a customer or client, with unique user IDs assigned to each Apache container group as added security. The key would be to drop root privileges on container start, before starting your apache stack. Each customer gets their own DB service with their own unique passwords, without the headache of standing up dozens of virtual machines, each requiring their own special-snowflake kernels and other overhead. After all, at the heart of docker is the chroot. Properly administered, I'd take that over a typical virtual cluster any day.
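As a rough sketch of that idea (the image name is a placeholder for your own Apache/PHP image built to listen on an unprivileged port, and the UID, paths and ports are made up), each customer could be started along these lines:

# One container per customer, running the web stack as an unprivileged UID
# and publishing its port only to the local reverse proxy / load balancer
docker run -d --name customer1-web \
    --user 1001:1001 \
    -v /srv/customer1/htdocs:/var/www/html:ro \
    -p 127.0.0.1:8081:8080 \
    example/php-apache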

Stephan
  • Would this mean there'd effectively be a dedicated Apache daemon per client? If so, I think the drawback would be similar to Alpha01's answer. – sa289 Jul 17 '15 at 22:18
  • Yep, it's very similar to Alpha01, though dockerizing the applications takes much of the host configuration headache away. That said, your control panel issue persists whether you use the chroot/container approach or the one-VM-per-client approach. – Stephan Jul 20 '15 at 17:13
  • Thanks. Even without a control panel though, I'd still rather avoid having to do a reverse proxy or else require more public IPs unless I'm misunderstanding how this setup would work. – sa289 Jul 20 '15 at 17:25
  • Most shops I've seen (large and small) take the reverse proxy approach. I use HAProxy personally, it's ideally suited to the kind of large scale isolation you're looking for. It's highly performant, and allows you to scale horizontally very efficiently, and doesn't require the kind of exotic complexity that appears to be evident in mschuett's solution. – Stephan Jul 20 '15 at 17:40

Lots of good suggestions here already. There's stuff that's been missed in the discussion so far though.

Pay attention to processes outside of those run as part of serving web pages, i.e. make sure that all your cron jobs that touch untrusted data run as the appropriate user and in the appropriate jail, whether those jobs are defined by the user or not.
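For example (a sketch; the account name and script path are made up), a system cron entry in /etc/cron.d lets you name the unprivileged user explicitly instead of letting the job default to root:

# /etc/cron.d/site1-maintenance - run as the site's own account,
# never as root or as the shared Apache user
15 3 * * *  site1  /usr/bin/php /home/site1/bin/cleanup.php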

In my experience things like log analysis, when provided by the hosting service, are run as root about as often as not, and the log analysis software is not given as much security auditing as we might like. Doing this well is a little tricky, and setup dependent. On the one hand, you don't want your root-owned (i.e. the parent process) apache process writing to any directory the user could compromise. That probably means not writing into the jail directly. On the other hand you need to make those files available to processes in the jail for analysis, and you'd like that to be as close to real-time as possible. If you can give your jails access to a read-only mount of a file system with the logs, that should be good.
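One way to do that read-only exposure (a sketch; the paths and jail layout are assumptions) is a read-only bind mount of the per-site log directory into the jail:

# Bind the per-site log directory into the jail, then remount it read-only
# (the two-step dance is needed because older kernels ignore "ro" on the
# initial bind mount)
mount --bind /var/log/httpd/site1 /home/site1/jail/var/log/httpd
mount -o remount,ro,bind /home/site1/jail/var/log/httpd

# Or persistently in /etc/fstab:
# /var/log/httpd/site1  /home/site1/jail/var/log/httpd  none  bind,ro  0 0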

PHP apps typically don't serve their own static files, and if you have a shared apache process then I'm guessing that your apache process is reading stuff straight out of the jails from the host environment? If so, then that opens up a variety of concerns.

.htaccess files are an obvious one, where you'd need to be very careful what you allow. Many if not most substantial php apps are very dependent on .htaccess file arrangements that you probably can't allow without subverting your planned scheme.

Less obvious is how Apache decides what is a static file anyway. E.g. what does it do with a *.php.gif or *.php.en file? If this mechanism or another fools the discrimination as to what is a static file, is it possible for Apache to run PHP at all from outside the jail? I'd set up a separate lightweight web server for static content, one which is not configured with any modules for executing dynamic content, and have a load balancer decide which requests to send to the static server and which to the dynamic one.
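A minimal sketch of that split, assuming HAProxy as the load balancer and placeholder backend addresses, could look like this:

frontend web
    bind *:80
    # Be conservative: anything with ".php" anywhere in the path is routed to
    # the dynamic (jailed) stack rather than being guessed at as a static file;
    # everything else goes to the module-free static server
    acl looks_php path_sub .php
    use_backend dynamic if looks_php
    default_backend static

backend dynamic
    server app1 10.0.0.10:8080

backend static
    server static1 10.0.0.20:8080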

Regarding Stephan's Docker suggestion: it is possible to have a single web server which sits outside the containers and talks to PHP daemons in each container for the dynamic content, while also having a second web server which sits in a Docker container, shares the volumes each container uses for its content, and is thus able to serve the static content - much the same as in the previous paragraph. I commend Docker amongst the various jail-type approaches, but with this or other jail-type approaches you will have a bunch of other issues to work through. How does file upload work? Do you put file transfer daemons in each container? Do you take a PaaS-style git-based approach? How do you make logs generated inside the container accessible, and roll them over? How do you manage and run cron jobs? Are you going to give the users any sort of shell access, and if so, is that another daemon within the container? Etc., etc.

mc0e
  • Thanks. To answer your question - it's not possible for PHP to run outside the jail even if a different file extension is used due to CageFS as far as I can tell. I tried both `SetHandler` and `AddType` to make a new extension be processed as PHP and it was jailed. I don't know if there's some way around this, but that's what I'm hoping someone will point out if I missed something. Yes, Apache is reading straight out of the jails. Good point of looking at the cron jobs - it seems like various things like that that run as root are a source of lots of reported vulnerabilities. – sa289 Jul 29 '15 at 17:53
  • RE: `.htaccess`, in the conf I used AllowOverrideList to permit these: `Add{Charset,DefaultCharset,Encoding,Handler,OutputFilter,OutputFilterByType,Type} Allow Auth{GroupFile,Name,Type,UserFile} Deny DirectoryIndex ErrorDocument Expires{Active,ByType,Default} FileETag ForceType Header IndexIgnore Order php_flag php_value Redirect RedirectMatch Remove{Handler,Type} RequestHeader Require Rewrite{.various.} Satisfy Set{Env,EnvIf,EnvIfNoCase,Handler} SSLRequireSSL`. My concern is AddType, AddHandler and SetHandler, but Drupal uses SetHandler for defense in depth in file upload dirs for example. – sa289 Jul 29 '15 at 18:00
  • If you're allowing people to tinker with handlers, then you need to go through all the defined actions and make sure they are safe, not just php. – mc0e Jul 29 '15 at 18:37
  • Good point! I confirmed `SetHandler server-info` or `SetHandler server-status` in an htaccess file is one way someone can make an attack easier or disclose information that ideally wouldn't be disclosed, such as all VirtualHosts on the server (which could be used for spear phishing, for example) or current traffic to other sites. I might just have to resort to removing some of those Handler/Type directives from my `AllowOverrideList`. Do you know of any way to list all the possible actions / handlers? I tried searching online but didn't find a good answer. – sa289 Jul 29 '15 at 19:44
  • Awarded you the bounty because our discussion led to the information disclosure vulnerability that I hadn't covered. Please let me know if you have a response about listing the actions / handlers. Thx – sa289 Jul 30 '15 at 16:39
  • FYI, I created a new question rather than trying to handle that question via the comments http://serverfault.com/questions/709742/apache-list-all-possible-handlers-actions – sa289 Jul 30 '15 at 17:53

The first thing I don't see is process management, so that one process cannot starve another process of CPU or RAM (or I/O for that matter, though your filesystem may be architected to prevent that). One major advantage of a "containers" approach to your PHP instances vs. trying to run them all on one "OS" image is that you can restrict resource utilization better that way. I know that's not your design, but that's something to consider.
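To illustrate (a sketch with made-up limits; the image name is a placeholder for your own Apache/PHP image), per-customer caps can be attached right at container start:

# Cap customer1's stack at roughly 512 MB of RAM and a reduced CPU weight
docker run -d --name customer1-web \
    --memory 512m \
    --cpu-shares 256 \
    example/php-apache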

Anyway, back to the use case of PHP running behind Apache, basically functioning as a proxy. suexec does not prevent something from running as the apache user - it provides the capability to run as another user. So one concern is going to be making sure that is all done properly - the doc page for it calls out the potential dangers: https://httpd.apache.org/docs/2.2/suexec.html. So, you know, grain of salt and all that.

From a security standpoint it can be helpful to have a restricted set of user binaries to work with (which cagefs supplies), particularly if they are compiled differently or against a different library (e.g. one that does not include capabilities that are unwanted) but the danger is that at that point you are no longer following a known distribution for updates, you are following a different distribution (cagefs) for your PHP installations (at least with respect to user space tools). Though since you're probably already following a specific distribution with cloudlinux that's an incremental risk, not necessarily interesting on its own.

I would leave AllowOverride in where you might have intended it. The core idea behind defense in depth is to not rely on one single layer to protect your whole stack. Always assume something can go wrong. Mitigate when that happens. Repeat until you've mitigated as well as you can even if you have only one fence in front of your sites.

Log management is going to be key. With multiple services running in isolated filesystems, integrating activities to correlate when there is a problem could be a minor pain if you haven't set that up from the beginning.

That's my brain dump. Hope there's something vaguely useful in there. :)

Mary