
Short version:

Are you aware of any proxy or firewall device which will permit outbound SSL connections to hosts with approved* SSL certificates only?

Long version:

Consider the following scenario.

I have a server farm which is protected from the Internet by a firewall. Let's say HTTPS is allowed in from the Internet, but that the firewall blocks outbound access from these machines to the Internet. I don't want my server admins getting bored and surfing /. from the data center, and I don't want malware or malicious actors being able to connect outbound easily in the event of a breach.

I do want to permit some outbound access, though. For example, let's say I want my Windows servers to go out to windowsupdate.microsoft.com. If I can determine the IPs used by windowsupdate, fine, I open them up specifically in the firewall, no problem.

But what if those IPs aren't known, or aren't knowable? Specifically, let's say Microsoft is using Akamai or another CDN to serve their files. The IP address that you reach out to is going to be difficult to determine ahead of time and will change regularly. I can't whitelist by IP in that case.

One elegant solution - presuming that we're reaching out to an SSL-protected service - is to whitelist based on the SSL certificate rather than the IP address. So, if the server certificate is *.windowsupdate.microsoft.com and it's signed by a valid CA, then permit the connection. If it's not *.windowsupdate.microsoft.com, or if it's signed by SnakeOil, then disallow the connection.

(Approved certificate could mean any number of flexible things. Name and trusted CA; Name and specific CA; Specific CA-signed certs; etc. etc.)
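As a sketch of what such a predicate might look like (the function name and the matching patterns here are hypothetical, not an existing tool), a shell helper using OpenSSL can match both the subject name and the issuer:

```shell
#!/bin/bash
# Hypothetical "approved certificate" predicate: given a PEM certificate on
# stdin, print "ok" if its subject CN contains the allowed name AND its
# issuer matches the expected CA string; print "fail" otherwise.
approve_cert() {
    local allowed_cn="$1" allowed_issuer="$2" pem subject issuer
    pem=$(cat)
    subject=$(openssl x509 -noout -subject <<<"$pem")
    issuer=$(openssl x509 -noout -issuer  <<<"$pem")
    # Substring match keeps this version-agnostic across openssl's
    # "subject= /CN=..." vs "subject=CN = ..." output formats.
    case "$subject" in *"$allowed_cn"*)     ;; *) echo fail; return 1 ;; esac
    case "$issuer"  in *"$allowed_issuer"*) ;; *) echo fail; return 1 ;; esac
    echo ok
}
```

In practice you would feed it the live server certificate, e.g. `openssl s_client -connect host:443 </dev/null | openssl x509 | approve_cert '.windowsupdate.microsoft.com' 'Microsoft'`.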

The control point must not be at the internal server - if a malicious actor gains access to it, they must not be able to disable this protection. As with IP whitelisting, it makes sense to move the control to the perimeter, as a proxy or firewall device.

That seems to me to be the right way to do it. What are my options for doing it this way? Is this a paradigm anyone has ever implemented, free or commercial? Are there other paradigms I should be aware of that can clamp down on outgoing access in a flexible but powerful way that accommodates modern (Akamai, AWS, Dyn-style) dynamic services?

Any help appreciated!

gowenfawr

4 Answers


There are two things I can think of; neither fits the bill perfectly, and some assembly is required.

  1. squid with sslBump and the SSL Server Certificate Validator. This is basically an MITM SSL proxy configuration, in which you provide an external "helper" that augments normal certificate verification. Stumbling blocks include client certificate trust and managing your own CA and certificates. I've never used this one, though.

  2. Apache httpd as a forward-proxy, with mod_rewrite, mod_proxy and an external validation script. The approach here is to use a mod_rewrite program map to pre-validate the target's certificate before allowing access.
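Option 1 might look roughly like the following squid.conf fragment. This is an untested sketch: the directive names are from squid 3.5 and may differ in your version, and /usr/local/bin/cert_validator is a hypothetical helper path.

```
# Untested sketch for squid 3.5+ (directive names vary by version).
# Clients must trust mitm-ca.pem for bumping to work.
http_port 3128 ssl-bump generate-host-certificates=on \
    cert=/etc/squid/mitm-ca.pem
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
ssl_bump bump all
# External helper receives the server certificate details and answers
# OK/ERR; /usr/local/bin/cert_validator is a hypothetical path.
sslcrtvalidator_program cache=1024 ttl=3600 /usr/local/bin/cert_validator
```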

The Apache option is easy to test. In httpd.conf:

LoadModule proxy_module           modules/mod_proxy.so
LoadModule proxy_connect_module   modules/mod_proxy_connect.so
LoadModule rewrite_module         modules/mod_rewrite.so
[...]
Listen 10.0.0.16:3128

<VirtualHost 10.0.0.16:3128>
    ServerName   proxy.domain.com
    ErrorLog  ...
    CustomLog ...

    ProxyRequests on
    RewriteEngine on

    RewriteMap sslval prg:/usr/local/apache2/bin/sslval

    <Proxy>
        RewriteEngine on

        ## parse CONNECT
        RewriteCond %{THE_REQUEST} "^(CONNECT) ([^:]+)(:([0-9]+))? HTTP/"   [NC]
        RewriteRule .          -   [env=CHOST:%2,env=CPORT:%4]

        ## hand over to sslval
        RewriteCond  ${sslval:%{ENV:CHOST}:%{ENV:CPORT}}          ok
        RewriteRule  .                            -               [P,L]

        RewriteRule   .                           -               [F]

    </Proxy>
</VirtualHost>

This is the external "sslval" script; it uses gnutls-cli because its certificate caching (trust-on-first-use) support lets you specify exactly which certificate you expect:

#!/bin/bash
# mod_rewrite "prg:" map helper: reads "host:port" lines on stdin and
# answers "ok" or "fail", one line per query.  HOME must point at the
# directory holding .gnutls/known_hosts for the apache user.
export HOME=/usr/local/apache2
while read key; do
    # --tofu: succeed only if the server's certificate matches the one
    # already pinned in known_hosts (trust on first use).
    timeout 10 gnutls-cli --tofu \
        --port="${key##*:}"  "${key%%:*}"  >/dev/null 2>&1 <<<Q
    [ $? = 0 ] && echo ok || echo fail
done

Now all you need to do is add the expected certificates to ~apache/.gnutls/known_hosts by running gnutls-cli --tofu ... interactively and entering "y" to confirm each one.

This trust-on-first-use method might seem familiar: it is exactly how OpenSSH's known_hosts works. The approach is also not unlike OpenVPN's --tls-verify option.

A variation on the sslval script can be used to check against a select CA database instead, e.g. with OpenSSL's s_client and a directory of .pem CA files maintained with c_rehash:

timeout 10 openssl s_client \
         -connect "$key" -servername "${key%%:*}" \
         -verify 5 -CApath /usr/local/etc/CA <<< Q |
   gawk '/Verify return code: (19|0) /{ok=1}  # 0 = verified, 19 = self-signed in chain
      END{print ok ? "ok" : "fail"}'

Error checking, locking, configuration and access-control, caching, and all those nice things are an exercise for the reader... ;-)

With Apache (2.2.2x) I have found that proxying normal (non-CONNECT) requests is also affected by the above mod_rewrite technique; you will need a second VirtualHost for those.

mr.spuratic

You can filter by certificate using an SSL-terminating proxy, as Mr Spuratic explains well.

However, turning an SSL pass-through proxy into an SSL-terminating proxy is quite a big deal, and not something you would do purely for the motivation you outline.

Your best pragmatic option is to filter by host name.
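With squid, for example, filtering by host name can be a plain ACL on the CONNECT target (a sketch; the domain list is illustrative):

```
# Allow CONNECT only to approved hostnames; deny everything else.
acl update_hosts dstdomain .windowsupdate.microsoft.com .update.microsoft.com
acl CONNECT method CONNECT
http_access allow CONNECT update_hosts
http_access deny all
```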

paj28

Blue Coat ProxySG can do most, if not all, of this.

Chris
  • ProxySG's abilities seem to be around traditional MITM SSL interception and content filtering. The settings having to do with server certificate validity "mimic the overrides supported by most browsers"; they aren't doing any pinning or whitelist connection control. Sort of the flip side of the coin from what I'm looking for. – gowenfawr Dec 22 '13 at 19:18

If I'm understanding this correctly then you want to protect against the risk of an attack based on DNS poisoning and certificate forging.

The problem with the squid method as described by mr.spuratic is that you send a faked cert back to the client. A daemon process should reject the connection, and a browser will display an ugly warning message (and fail to connect to sites implementing HSTS) unless you also install your snake-oil CA cert on the clients - which is probably not a good idea from a security point of view.

If I were looking to implement this, then I'd probably go with a custom proxy where I can put the CONNECT request on hold while the server cert is polled from the resolved IP address (it's easy to script openssl to retrieve and manipulate certs). That seems to be what mr.spuratic goes on to describe using Apache - but I'm not familiar enough with mod_rewrite/gnutls to say whether his approach is valid.
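As a sketch of that polling idea (the helper names here are hypothetical), the server certificate can be fetched with openssl and compared against a pinned SHA-256 fingerprint:

```shell
#!/bin/bash
# Hypothetical helpers for fingerprint pinning.

# fetch_cert host:port -> PEM of the server's leaf certificate
# (network call; shown for context).
fetch_cert() {
    timeout 10 openssl s_client -connect "$1" -servername "${1%%:*}" \
        </dev/null 2>/dev/null | openssl x509
}

# pin_check EXPECTED-SHA256 (PEM on stdin) -> "ok" or "fail"
pin_check() {
    local got
    got=$(openssl x509 -noout -fingerprint -sha256 | tr -d ':' | cut -d= -f2)
    [ "$got" = "$1" ] && echo ok || echo fail
}
```

Usage would be along the lines of `fetch_cert windowsupdate.microsoft.com:443 | pin_check "$PINNED_FP"`, with the proxy releasing the held CONNECT only on "ok".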

Add to this the need to accommodate certificate expiry, and the possibility that there may be more than one valid certificate for a particular hostname (particularly with a large organisation like Microsoft), and this starts to look like a rather complex exercise.

My first thought would be that it may be more effective to focus on protecting the DNS data rather than applying more extensive validation to the certificate. An obvious solution is to use DNSSEC, but I see that although Microsoft provide DNSSEC client support, it does not appear to be implemented on their servers.

Maybe a more practical solution is to maintain local DNS data for the hosts in question, kept much longer than their published TTLs?
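With unbound, for instance, that could be a local-data override (a sketch; 203.0.113.10 is a documentation-range placeholder you would replace with a vetted lookup result, and the long TTL illustrates pinning past the upstream TTL):

```
# unbound.conf fragment: serve a locally vetted address for this host.
server:
    local-zone: "windowsupdate.microsoft.com." transparent
    local-data: "windowsupdate.microsoft.com. 86400 IN A 203.0.113.10"
```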

symcbean