123

When hosting a cluster of web application servers it’s common to have a reverse proxy (HAProxy, Nginx, F5, etc.) in between the cluster and the public internet to load balance traffic among app servers. In order to perform deep packet inspection, SSL must be terminated at the load balancer (or earlier), but traffic between the load balancer and the app servers would be unencrypted. Wouldn't early termination of SSL leave the app servers vulnerable to packet sniffing or ARP poisoning?

Should SSL be offloaded? If so, how can it be done without compromising the integrity of the data being served? My main concern is for a web application where message layer encryption isn't an option.

Matt Goforth
    And interestingly enough, only a few months after this question was posted back in 2013: [Meet 'Muscular': NSA accused of tapping links between Yahoo, Google datacenters](http://www.zdnet.com/article/meet-muscular-nsa-accused-of-tapping-links-between-yahoo-google-datacenters/), [Google, the NSA, and the need for locking down datacenter traffic](http://www.zdnet.com/article/google-the-nsa-and-the-need-for-locking-down-datacenter-traffic/), [Google Boosting Encryption Between Data Centers](http://www.datacenterknowledge.com/archives/2013/09/09/google-boosts-encryption-between-data-centers/), ... – user Jul 08 '16 at 14:05
  • Interesting. So is the recommendation now to use HTTPS everywhere? Even in VPCs? Do Amazon etc. recommend doing so in the AWS documentation? – rents Jan 03 '18 at 14:23

5 Answers

87

It seems to me the question is really "do you trust your own datacenter?" In other words, you're trying to draw a fine line between where the untrusted network ends and where trust begins.

In my opinion, SSL/TLS trust should terminate at the SSL offloading device, since the department that manages that device often also manages the networking and infrastructure. There is a certain amount of contractual trust there. There is little point in encrypting data again at a downstream server, since the same people who support the network usually have access to it as well (with possible exceptions for multi-tenant environments or unique business requirements that demand deeper segmentation).

A second reason SSL should terminate at the load balancer is that it offers a centralized place to mitigate SSL attacks such as CRIME or BEAST. If SSL is terminated on a variety of web servers running different operating systems, you're more likely to run into problems because of the added complexity. Keep it simple, and you'll have fewer problems in the long run.

That being said:

  1. Yes, terminate at the load balancer and SSL offload there. Keep it simple.
  2. The Citrix Netscaler load balancer (for example) can deny insecure access to a URL. That policy logic, combined with the features of TLS, should keep your data confidential and tamper-free (assuming I've understood your integrity requirement correctly); a rough nginx equivalent is sketched below.
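
For readers using nginx rather than a Netscaler, a minimal sketch of the same "terminate at the edge and deny insecure access" policy might look like this (hostnames, certificate paths, and backend addresses are placeholders):

```nginx
# Terminate TLS at the load balancer and refuse plain-HTTP access to the app.
upstream app_backend {
    server 10.0.0.11:8080;   # placeholder app servers on the internal network
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;   # insecure requests are redirected, never served
}

server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/ssl/app.example.com.crt;
    ssl_certificate_key /etc/ssl/app.example.com.key;

    location / {
        proxy_pass http://app_backend;       # decrypted traffic onward to the trusted back end
    }
}
```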

Edit:

It's possible (and common) to

  • Outsource the load balancer (Amazon, Microsoft, etc)
  • Use a 3rd party CDN (Akamai, Amazon, Microsoft, etc)
  • Or use a 3rd party proxy to prevent DoS attacks

... where traffic from that 3rd party would be sent to your servers over network links that you don't manage and therefore may not trust. In that case you should re-encrypt the data, or at the very least have all of it travel through a point-to-point VPN.

Microsoft does offer such a VPN product and allows for secure outsourcing of the perimeter.

makerofthings7
    What if I'm not using a load balancer within my own datacenter but instead a CDN? E.g. Cloudflare has a 'Flexible SSL' mode where it's SSL to the CDN, then non-SSL to the origin server. Maybe this is different enough of a scenario to warrant its own question? – Tyler Collier Jul 31 '14 at 22:58
    @TylerCollier thanks for your comments. I clarified. – makerofthings7 Jul 31 '14 at 23:46
    If you're on a secured colocation, then it's natural that you trust your own machine (which inside a physical cage) more than you trust the data center. – Lie Ryan Aug 01 '14 at 00:03
  • @LamonteCristo: In cases where multiple data centres are involved, say a request hits dc1 in America and must also reach dc2 in Japan before it can be fulfilled, it makes sense to re-encrypt the traffic between dc1 and dc2, correct? – Piyush Kansal Apr 17 '15 at 08:18
    @PiyushKansal Some companies have a network layer VPN in these instances, so you don't have to worry about this, but if this doesn't exist yes, I would re-encrypt. I don't know about your particular situation, but there may be things to consider like the SafeHarbor ( https://safeharbor.export.gov/list.aspx ) trust lists, and your company's legal responsibility in an international situation like yours – makerofthings7 Apr 17 '15 at 10:11
  • @LamonteCristo My question was primarily on technical side. However, I got your point. Bottomline is to use either VPN or encrypted traffic. Thanks. – Piyush Kansal Apr 17 '15 at 18:03
22

Yes, I would argue that TLS should be offloaded. I have done everything that I mention below specifically with the Citrix Netscaler, but I believe F5 should be able to do the same things.

First, you always need to make sure that you re-encrypt on the other side of the load balancer; the device decrypting TLS is still able to inspect what's going on from a security perspective. The integrity of the data should not be compromised by this approach.

Many people have told me that re-encrypting on the back end makes it just as computationally expensive, but that is not true. The expensive part of TLS is setting up and tearing down connections, which the TLS offloader handles. On the back end you keep persistent connections to the servers, and therefore the required resources are much lower.
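
As a rough sketch of that pattern (assuming nginx as the offloader; addresses and certificate paths are placeholders), the back-end hop can be both re-encrypted and kept persistent:

```nginx
# Re-encrypt toward the back end, but keep a pool of idle backend connections
# open so the costly TLS handshake is rarely repeated.
upstream app_backend {
    server 10.0.0.11:8443;
    server 10.0.0.12:8443;
    keepalive 32;                        # idle connections kept per worker process
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/app.example.com.crt;
    ssl_certificate_key /etc/ssl/app.example.com.key;

    location / {
        proxy_pass https://app_backend;  # TLS again on the back side
        proxy_http_version 1.1;          # HTTP/1.1 is required for backend keepalive
        proxy_set_header Connection "";  # don't forward "Connection: close"
    }
}
```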

Additionally, if you don't have TLS offloading then even a small DDoS attack via TLS would completely annihilate your servers. I am very familiar with this situation and TLS offloading is an incredible help from a computational perspective, and also allows you to block attacks further up the chain. For extremely large DDoS attacks, you could even split your mitigation strategy between your TLS offloader and your servers.
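
As a small illustration of pushing that mitigation onto the offloader, here is an nginx sketch with purely illustrative limits (the rate numbers are not a recommendation):

```nginx
# Per-client request rate limiting at the TLS terminator, so abusive traffic
# is rejected before it ever reaches the application servers.
# (limit_req_zone belongs in the http context, e.g. an included conf.d file.)
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/app.example.com.crt;
    ssl_certificate_key /etc/ssl/app.example.com.key;

    location / {
        limit_req zone=perip burst=20 nodelay;   # excess requests are shed with 503s
        proxy_pass https://app_backend;          # app_backend as defined in the earlier sketch
    }
}
```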

JZeolla
    +1 for reencrypt on the other side. As a .NET developer I would like to make sure that SSL/TLS is used for cookies, by configuring like ``. But this only works if the application server behind the load balancer itself gets connections via https. – Marcel Jul 08 '16 at 06:13
  • Note also that you have to be actually _authenticating_ the connections from the load balancer to the servers behind it or you are still subject to various attacks (e.g., MITM) on those connections anyway. I find it interesting that many "cloud" load balancers will do TLS to the backends, but don't bother to authenticate them. – cjs May 18 '17 at 07:54
8

To inspect the data that travels within an SSL connection, one of the following must be true:

  • The tunnel ends on the machine which does the inspection, e.g. your "load balancer".
  • The inspection system has a copy of the server's private key, and the SSL connection does not use ephemeral Diffie-Hellman (i.e. the server does not allow cipher suites with "DHE" in their name).

If you follow the first option, data will travel unencrypted between the inspection system (the load balancer) and the cluster nodes, unless you re-encrypt it in some other SSL tunnel: the main SSL connection runs between the client browser and the load balancer, and the load balancer maintains an SSL link (or some other encryption technology, e.g. a VPN with IPsec) between itself and each of the cluster nodes.

The second option is somewhat lighter, since the packet inspector only decrypts the data and does not have to re-encrypt it. However, this implies that every cluster node does the full SSL handshake with the client, i.e. each has a copy of the server's private key. Also, not supporting DHE means that you will not get the nifty feature of Perfect Forward Secrecy (this is not fatal, but PFS looks really good in security audits, so it is a fine thing to have).

Either way, the node which performs deep packet inspection must have some privilege access into the SSL tunnel, which makes it rather critical for security.

Tom Leek
8

I would advocate terminating SSL at the load balancer (be that on your network, or at a CDN provider or whatever). It means the LB can inspect the traffic and can do a better job of load balancing. It also means your load balancer is responsible for dealing with slow clients, broken SSL implementations and general Internet flakiness. It's likely your load balancer is better resourced to do this than your back end servers. It also means that the SSL certs that the world sees are all on the load balancer (which hopefully makes them easier to manage).

The alternative here is to simply load balance the TCP connections from clients to your back end servers. As the LB can't inspect what's going on this way, it can't spread the load evenly across the back end servers, and the back end servers have to deal with all the Internet flakiness. I'd only use this method if you don't trust your load balancer, CDN provider or whatever.
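
For completeness, the passthrough alternative looks roughly like this in nginx (assuming the stream module is available; addresses are placeholders):

```nginx
# TLS passthrough: the load balancer proxies raw TCP and never decrypts,
# so each app server must hold the certificate and terminate TLS itself.
# (The stream block sits at the top level of nginx.conf, alongside http {}.)
stream {
    upstream tls_backend {
        server 10.0.0.11:443;
        server 10.0.0.12:443;
    }

    server {
        listen 443;
        proxy_pass tls_backend;   # no ssl_certificate here: nothing is decrypted
    }
}
```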

Whether or not you re-encrypt from the load balancer to your back end servers is a matter of personal choice and circumstance. If you're dealing with credit cards or financial transactions then you're probably regulated by government(s) and so will have to re-encrypt. You probably should also re-encrypt if the traffic between load balancer and back end servers is travelling over untrusted networks. If you're just hosting your company's website then you might be able to avoid the additional overhead of the re-encryption, if you don't really care about the security aspects of it.

Re-encryption doesn't add as much load as you might think though. Usually, the load balancer will be able to maintain persistent connections back to the servers, so the SSL cost will be quite low for that 'hop' on the network.

The last thing to think about is the application on the back end servers. If all the traffic that arrives there is HTTP, then it can't make decisions based on the protocol the client was using. It can't then say "you're trying to access the logon page over HTTP, so I'll redirect you to the HTTPS version of the page", for example. You can have the load balancer add an HTTP header to say "this came from HTTPS", but that header would need special handling in the application. Depending on your situation, it may just be easier to re-encrypt and let the application work in its 'default' way rather than needing a site-specific modification.
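
If you do go the header route, the load balancer side is simple; the harder part is teaching the application to trust the header only when it comes from the load balancer. A minimal nginx sketch (header names follow the common X-Forwarded-* convention; the backend name is a placeholder):

```nginx
# Forward the original protocol and client address to the back end over plain HTTP.
location / {
    proxy_pass http://app_backend;
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-Proto $scheme;                    # "http" or "https"
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for; # original client IP
}
```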

In summary, I'd say: terminate at the load balancer and re-encrypt to your back end servers. If you do this and notice some problem, then you can make adjustments if you need to.

Ralph Bolton
2

You can choose to encrypt internal traffic with a lower-grade (for example, internally issued) certificate. It is also advisable to place your load balancer as close to your servers as possible, to reduce the opportunity for sniffing or man-in-the-middle attacks. SSL termination can be done at the load balancer to offload CPU-intensive work from the web servers, and it helps further if the load balancer you choose can perform functions such as inspecting for malformed protocol connections, detecting DDoS behaviour, and so on.
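
One way to use an internally issued certificate here is to have the load balancer verify the backend's certificate against your private CA, so a host on the server LAN cannot silently impersonate an app server. A minimal nginx sketch (the CA path and backend name are placeholders for whatever your internal PKI uses):

```nginx
# Re-encrypt to the back end and verify its certificate against an internal CA.
location / {
    proxy_pass https://app_backend;
    proxy_ssl_verify              on;                          # reject unverified backends
    proxy_ssl_trusted_certificate /etc/ssl/internal-ca.crt;    # your private CA bundle
    proxy_ssl_name                backend.internal.example;    # name expected in the cert
    proxy_ssl_server_name         on;                          # send SNI to the back end
}
```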

Davis