
I have a web farm where the web servers are responsible for negotiating the secure connections. Does anyone else with a web farm go out of their way to reduce TLS handshake overhead by ensuring that TLS resume handshakes are supported? And if so, why?

We are switching from sticky sessions to a more balanced load-balancing algorithm. We are concerned that we will lose the benefit of TLS session resumption: if every connection from a client can land on a different web server, a full TLS handshake will be required each time. I don't know the exact overhead, but with a 20 ms round trip it would appear that the full handshake will take roughly 3x as long to complete.
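As a back-of-envelope check on that "3x" intuition, here is a small sketch that estimates connection setup time under TLS 1.2-style assumptions (full handshake = 2 round trips, abbreviated/resumed handshake = 1, plus 1 round trip for TCP); the 20 ms figure is the round-trip time assumed in the question, and real numbers will also depend on crypto CPU cost:

```python
# Back-of-envelope estimate of connection setup latency.
# Assumptions: TLS 1.2 semantics, where a full handshake costs 2
# round trips and an abbreviated (resumed) handshake costs 1, on top
# of the TCP handshake's 1 round trip. Ignores crypto CPU time.

RTT_MS = 20  # assumed client <-> server round-trip time


def setup_latency_ms(resumed: bool, rtt_ms: float = RTT_MS) -> float:
    tcp_rtts = 1                       # SYN / SYN-ACK
    tls_rtts = 1 if resumed else 2     # abbreviated vs. full handshake
    return (tcp_rtts + tls_rtts) * rtt_ms


print(f"full handshake:    {setup_latency_ms(resumed=False)} ms")  # 60 ms
print(f"resumed handshake: {setup_latency_ms(resumed=True)} ms")   # 40 ms
```

By this rough model the TLS portion alone is 2x (two round trips instead of one); counting the TCP handshake, total setup goes from 40 ms to 60 ms.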

Dennis Williamson

3 Answers


I don't know how large the OP's web server farm is, but for most small to mid-sized installations I find it cleanest and simplest to handle all TLS/SSL on the load balancer. So you have:

Internet (HTTPS req) -> L7 HTTPS proxy LB -> plain HTTP on LAN -> webserver

o3 Magazine had a good write-up on how relatively easy this is with nginx, and what performance numbers you can expect. F5 posted a commentary on the benefits of using a commercial appliance for SSL acceleration instead of a DIY solution (IMHO somewhat biased).

Note that you'll need your web servers to inspect the X-Forwarded-For and X-Forwarded-Proto headers and handle the connection correctly.
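To make that concrete, here is a minimal, hypothetical sketch of how a backend behind a TLS-terminating proxy might honor those headers, written as WSGI middleware (the header names are the de-facto conventions; a real deployment should only trust them when the request genuinely arrived from the load balancer):

```python
# Hypothetical WSGI middleware sketch: reconstruct the original client
# address and scheme from the proxy's X-Forwarded-* headers. Only safe
# if requests can reach this server exclusively via the load balancer.

def proxy_headers_middleware(app):
    def wrapper(environ, start_response):
        forwarded_for = environ.get("HTTP_X_FORWARDED_FOR")
        if forwarded_for:
            # The first entry in the comma-separated list is the client.
            environ["REMOTE_ADDR"] = forwarded_for.split(",")[0].strip()
        forwarded_proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if forwarded_proto:
            # Lets the app generate https:// URLs and mark cookies Secure.
            environ["wsgi.url_scheme"] = forwarded_proto
        return app(environ, start_response)
    return wrapper
```

The same idea applies regardless of stack: Apache's mod_remoteip or nginx's realip module do the equivalent at the server level.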

Most installations should get by fine with a single HTTP & HTTPS load balancer, or a pair of load balancers in active/passive configuration for HA. In this setup handshake resume is a non-issue, as there is only one SSL/TLS endpoint (which typically will automatically support handshake resume).


I was under the impression that once the connection had been established, even with a load balancer in place, all future connections in that session would go to the same server?

Perhaps that's merely how sites I've used and services I've configured operate.

warren
  • Your impression is incorrect. Every request from a web browser can potentially be routed to any server in the farm. The only exception is that requests made in quick succession may reuse a connection via HTTP's keep-alive mechanism. For requests to be routed consistently back to the same server, a load balancer needs to perform some magic, such as injecting a cookie into the first response that tells it which server to route subsequent requests to. We call such a LB feature "sticky sessions". – Chris W. Rea Oct 26 '09 at 23:26
  • Then that's how systems I've worked with have been configured :) ..thanks for the added info – warren Oct 27 '09 at 02:51

Assuming you're using Apache, have a look at distcache. From the man page, "the distcache architecture provides a protocol and set of accompanying tools to allow applications, and indeed machines, to share session state between them by way of a network service."

"The primary use of distcache right now is SSL/TLS session caching. This allows SSL/TLS servers (eg. a secure Apache web server providing HTTPS support) to use a centralised session cache, i.e any server may resume SSL/TLS sessions negotiated by any other server on the network. The advantages to this approach include increased freedom of mechanisms for load-balancing."

Gavin Brown