
I want to set up PHP-FPM with Apache in a shared hosting environment. The recommended way is to use mod_proxy_fcgi.

Each customer has his own FPM pool, running PHP processes under his own system user. That provides good isolation. Let's assume that the unix sockets to access their FPM pools are stored as /run/php-fpm/{user1,user2,...}.sock.
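As a sketch, one such pool could look like this (the file path and tuning values are illustrative, not part of my actual setup):

```ini
; /etc/php/7.0/fpm/pool.d/user1.conf -- illustrative path and values
[user1]
user = user1
group = user1
listen = /run/php-fpm/user1.sock
; the socket must be connectable by the Apache user
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
pm = ondemand
pm.max_children = 5
```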

Apache runs under a single system user for all customers, let's say www-data. Since its only tasks are to serve static files and pass connections through to FPM, that is mostly fine. However, all FPM unix sockets must be accessible to www-data.
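For reference, a per-customer vhost along these lines would hand PHP files to that customer's pool (names and paths are assumptions for the sake of the example):

```apache
# Illustrative per-customer vhost
<VirtualHost *:80>
    ServerName user1.example.com
    DocumentRoot /home/user1/public_html

    # Pass all .php requests to user1's FPM pool via mod_proxy_fcgi
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php-fpm/user1.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>
```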

.htaccess files are a good way for customers of shared hosting to configure things like redirects, basic auth, cache control headers, etc. In fact, the only reason for sticking to Apache in my setup is that I want to support these customer-provided configuration files. To secure Apache against malicious .htaccess files, AllowOverride and AllowOverrideList can be used. With these I can exclude directives that must not be used, e.g. Action and SetHandler, which would otherwise allow executing CGI scripts as www-data.
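A whitelist along these lines is what I have in mind (the exact directive list is an assumption and would depend on what customers need):

```apache
# Illustrative: allow only an explicit whitelist of directives in .htaccess
<Directory "/home/user1/public_html">
    AllowOverride None
    AllowOverrideList Redirect RedirectMatch AuthType AuthName AuthUserFile Require Header RewriteEngine RewriteCond RewriteRule
</Directory>
```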

One particularly useful directive is RewriteRule from the mod_rewrite module. It is used by many applications for beautifying URLs, e.g. having /foo instead of /index.php/foo as the URI. But there is also the [P] flag, which causes requests to be handled by mod_proxy. If RewriteRule is allowed in .htaccess files, then a malicious user (user2) can do something like

RewriteRule "/evil" "unix:/run/php-fpm/user1.sock|fcgi://localhost/home/user2/evil.php" [P]

This executes a malicious file /home/user2/evil.php (provided by user2) in user1's FPM pool, i.e. as the user1 system user. It therefore has access to all files in /home/user1; it could, for example, dump database secrets from WordPress configuration files. The only requirement is that user1 (the victim) can read the evil file, but that is no challenge, because user2 (the attacker) can just chmod his own home directory and the evil file with o+rx.

The question is: What are the possible ways to mitigate that attack?

  • Forbid RewriteRule in .htaccess (not really an option...)
  • Run a separate Apache process for each customer, using their own system user (but: resource consumption and need for another reverse proxy to share ports 80 + 443)
  • Patch mod_rewrite sources to remove the [P] flag (it will make security updates annoying)
  • Other ideas?

Edit: Patch for mod_rewrite (applies to Debian package version 2.4.25-3+deb9u7)

--- a/modules/mappers/mod_rewrite.c
+++ b/modules/mappers/mod_rewrite.c
@@ -4928,6 +4928,11 @@ static int hook_fixup(request_rec *r)
         if (l > 6 && strncmp(r->filename, "proxy:", 6) == 0) {
             /* it should go on as an internal proxy request */

+            ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
+                          "attempt to make proxy request from mod_rewrite "
+                          "in per directory context: %s", r->filename);
+            return HTTP_FORBIDDEN;
+
             /* make sure the QUERY_STRING and
              * PATH_INFO parts get incorporated
              * (r->path_info was already appended by the

Interestingly, they have a check whether mod_proxy is available in hook_uri2file (deals with RewriteRules in server context), but not in hook_fixup (deals with RewriteRules in directory context).

Instead of deleting the [P] flag, I decided to stop the request right before it would be passed to the proxy module. This way it should also catch situations where you somehow manage to end up with proxy:... as the rewritten filename. Furthermore, it still allows you to use [P] or proxy: in server context, i.e. directly in the <VirtualHost> block.

But as noted above, a custom patch is annoying maintenance work in each update. So please let me know if there are better solutions that work only by modifying configuration files. How do the big players handle that? Hopefully the answer isn't "not at all".

Sebastian
