
https://en.wikipedia.org/wiki/HTTP/2

Initially, it was planned that TLS 1.2 or greater would be mandatory as part of the protocol. However, for lack of consensus on mandatory TLS, the specification defines an optional unencrypted mode in addition to required support for an encrypted mode. Some implementations, such as Firefox, have stated that they will only support HTTP/2 when it is used over an encrypted connection.

The question: why wouldn't it be great if HTTP/2 only allowed communication via TLS?

thequestion
  • While not an exact duplicate, the reasons for resistance to mandatory encryption on HTTP/2 are the same reasons that sites still use HTTP instead of HTTPS. – AJ Henderson Nov 04 '14 at 20:43
  • Because (some amount of) people who would use HTTP/2 without TLS aren't going to use HTTP/2 with TLS instead. They're going to use HTTP/1 without TLS. – user253751 Nov 05 '14 at 00:50
  • SSL cert is _not_ free. If I remember correctly, even for the CA providing "free" cert, you need to pay to revoke it. – Question Overflow Nov 05 '14 at 01:55
  • This is a lot like asking "Why not armor all cars?" - Well, because not all cars carry something worth the cost of the armor. And besides, armor is pretty heavy and makes your car slower. – J.Todd Nov 05 '14 at 03:12

3 Answers

20

Mandatory SSL/TLS for everything has the following drawbacks:

  • All systems must pay the cost of the cryptography. That cost is mostly negligible on current systems, except for really small embedded devices, which would not cope well with it (but are perfectly happy with unprotected plain HTTP).

  • SSL makes some caching strategies more difficult; in particular, transparent proxies no longer work. Some people (in particular Internet Service Providers) are quite fond of transparent proxying, because it helps them save on their own bandwidth.

  • In a non-SSL world, many sysadmins run filters and antivirus software on their exit routers / firewalls, and have grown quite fond of that practice. Switching to a whole-SSL network would require them to review their procedures and update their skills; of course they will fight such changes to the death.

  • Using TLS implies using X.509 certificates, and sending some money to one of the commercial PKIs -- or configuring and maintaining one yourself, for private applications. Doing something is necessary (you don't get security out of nothing), but some people would prefer a model closer to SSH's, without dirtying their fingers on the dreadful X.509 thingies.
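The X.509 point above can be sketched with Python's standard `ssl` module (no particular HTTP/2 library assumed): a default client context hard-wires the PKI requirement, HTTP/2-over-TLS is selected via the ALPN extension, and opting out of the PKI -- roughly the SSH-like model some would prefer -- currently means giving up authentication entirely:

```python
import ssl

# A default client context embodies the X.509/PKI dependency:
# the peer must present a certificate chaining to a trusted CA,
# and that certificate must match the hostname being contacted.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# HTTP/2 over TLS is negotiated with the ALPN extension ("h2").
ctx.set_alpn_protocols(["h2", "http/1.1"])

# Opting out of the PKI (an SSH-like trust-on-first-use model would
# need something extra in its place) leaves encryption without
# authentication:
ctx.check_hostname = False           # must be disabled before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE
```

This is only a client-side illustration of the dependency, not a recommendation; `CERT_NONE` exposes the connection to active man-in-the-middle attacks.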

Therefore it is expected that there will be some resistance to mandatory TLS. On the other hand, this is completely neutral for professional spy agencies, since their targets of choice already use SSL or similar protection mechanisms, even if they are not mandatory.

Developers of Firefox and Chrome have already announced that they won't support any kind of HTTP/2 that does not run over TLS; this is a deliberate pledge to try to overcome the pockets of resistance described above.

Tom Leek
  • +1 for X.509. The current model is a real mess, and no good replacement in sight. – user10008 Nov 04 '14 at 20:49
  • I think that it's important to state categorically that the primary intended benefit isn't to thwart agencies like the NSA, but rather to protect against criminal threats. – Polynomial Nov 04 '14 at 21:10
  • The certificate concern could be handled in a different way. Instead of having encrypted and unencrypted modes, there could be modes with and without certificates. This could be designed such that both modes would be secure against passive attacks, and that you could not even tell the difference between the two modes by passively snooping the traffic. – kasperd Nov 04 '14 at 22:16
  • When you mention sysadmins running filters and antivirus software on their routers/firewalls, couldn't they just add a CA for all sites to the machines on their network, and have their routers/firewalls continue caching/scanning as before? (assuming that devices with scanning/caching features also have the option of replacing the CA cert with the proper, valid one when necessary ofc) – user2813274 Nov 05 '14 at 01:46
  • The only slight argument I would have is with regard to spy agencies not caring. Specifically, a dramatic increase in the volume of strongly encrypted traffic makes identifying threats more difficult. When the vast majority of traffic is clear text, it is simple to place secure traffic under additional scrutiny. However, when all traffic is secure, this model breaks completely and gathering meta-data from secure traffic becomes unmanageable. – Vality Nov 05 '14 at 02:08
  • It is very likely that in the end, people will simply not adopt HTTP/2 rather than grudgingly deploy TLS. – Siyuan Ren Nov 05 '14 at 03:29
  • @user2813274: there _are_ products that intercept SSL traffic by generating a fake certificate on the fly and, technically, performing a MitM attack (this of course requires installation of the corresponding CA certificate on the clients). However, this may incur substantial CPU cost on the firewall/filter side, and it breaks client certificates (although these are quite rare in practice). – Tom Leek Nov 05 '14 at 11:31
  • update: letsencrypt.org (free domain-validated certificates) significantly changes/mitigates the X.509 cert problem; if you don't have to pay for certs but instead just need some (ideally fully automated) scripts to renew them, that's definitely simpler. – bgp Jan 29 '16 at 18:00
3

A couple of things off the top of my head that aren't great about encrypted data:

  1. You can't cache data for use among several computers, because it looks different each time it comes through the network.
  2. Security gateways can't inspect the content coming into the network, so something malicious might slip through more easily.
  3. The whole trust infrastructure is a huge private industry. To get SSL/TLS working, you need a valid SSL certificate granted by a certificate authority that is trusted by all of the major browsers, and that generally costs money (although some free ones exist). This means if anyone wants to start a website, they not only have to purchase the domain, but an SSL certificate as well, and that increases the price for entry.
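Point 1 can be illustrated with a toy sketch (a deliberately insecure stand-in for TLS record encryption, using nothing beyond the Python standard library): because each message is encrypted under a fresh random nonce, two deliveries of the identical page produce completely different bytes on the wire, so a shared cache keyed on content can never get a hit:

```python
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher, for illustration only -- NOT secure.
    Keystream blocks are SHA-256(key || nonce || counter)."""
    nonce = os.urandom(16)  # fresh random nonce per message, like a TLS record IV
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(
            key + nonce + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + ciphertext

key = os.urandom(32)
body = b"<html>the same cacheable page</html>"
c1 = toy_encrypt(key, body)
c2 = toy_encrypt(key, body)
print(c1 != c2)  # True: two deliveries of one page look unrelated on the wire
```

Real TLS ciphersuites behave the same way for this purpose: identical plaintexts yield unrelated ciphertexts, which is exactly what defeats transparent intermediary caching.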
Greg
  • (2) is only semi-valid in a corporate setting or similar, where root certificates can be pushed out and used to successfully MITM any SSL/TLS session; few individuals do that sort of filtering. (3) doesn't apply, because even TLS with a self-signed certificate exchanged per session provides protection against blanket passive surveillance (I recall seeing some discussion about such TLS sessions showing up the same way as plaintext HTTP in browsers); you only need more than that when you need protection against *active* surveillance. (1) may be valid, depending on the approach taken. – user Nov 05 '14 at 08:57
  • Note that for the purposes of the above comment, criminal MITMing on public networks is just about exactly the same thing as active surveillance. – user Nov 05 '14 at 09:06
  • @MichaelKjorling a self-signed cert will show a scary security warning that the user has to bypass, and you can bet browser vendors don't like users being able to do that. (At least, the kinds of browser vendors that want everyone to use TLS don't want anyone to be able to bypass warnings) – user253751 Nov 05 '14 at 10:23
  • @immibis That is true for now, but it doesn't have to be true. HTTP/2 untrusted certs could be used in different ways, for example to provide some degree of transmission security without implying any form of security to the user. Compare how compression can be used transparently. – user Nov 05 '14 at 10:31
  • I wonder how hard it would be to define a standard for authenticated http: content, such that a browser receiving a long URL of a particular format would be regarded as containing what should be the hash value of an "AuthenticationData:" header, which would in turn define a signature that should authenticate the content to follow? That would allow an https: site to redirect requests for content that should be publicly cacheable to a suitably-formatted http: link and still have the content be authenticated. – supercat Feb 19 '15 at 01:02
1

Also, note that debugging software is harder if you don't at least have the option to turn off TLS. I've had several cases of a closed-source client from vendor one, speaking XML over HTTP or SOAP, failing to interoperate with a closed-source server from vendor two. Of course, each vendor claimed its product worked correctly with dozens of other products, and that the other was to blame.

Turning TLS off, firing up Wireshark, and finding an example of a protocol violation is often the only way to actually prove to a vendor that it's doing something wrong. And while Wireshark can decode TLS if provided with the server key, it's much easier to convince a server admin to downgrade to plain HTTP for an hour than to hand over the certificate's private key.
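When the client is under your control, there is a middle ground that avoids both the plain-HTTP downgrade and handing over server keys: the NSS key-log mechanism that browsers expose via the SSLKEYLOGFILE environment variable is also available directly in Python. A sketch, assuming Python 3.8+ built against OpenSSL 1.1.1 or later (the temp-file path here is just an example):

```python
import os
import ssl
import tempfile

# Session secrets for every handshake made through this context are
# appended to this file in the standard NSS key-log format, which
# Wireshark can use to decrypt a capture without the server's key.
keylog = os.path.join(tempfile.mkdtemp(), "tls-keys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog  # requires Python 3.8+ / OpenSSL 1.1.1+

# Then point Wireshark at the same file:
#   Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename
```

This only helps when you can modify or configure the client side; for two opaque closed-source endpoints, the downgrade-and-capture approach from the answer above remains the fallback.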

Guntram Blohm
  • Wireshark + Firefox/Chromium can intercept TLS traffic with SSLKEYLOGFILE. Some proxies (mitmproxy, sslsplit, hyperfox, Charles, Fiddler) streamline the process (use a separate browser profile to accept their certificate authority or proxy settings); sslsplit in particular can intercept arbitrary protocols. – Tobu Jan 22 '15 at 01:10
  • True for a browser connection. But in corporate environments, lots of applications use http(s) as well, where a) you can't or b) policies won't allow you to install the fake CA entries you need for MitM proxies, and if the TLS server endpoint is used for various different services besides yours, there's no way you'll get the server keys you'd need for Wireshark either. Browsers might still be the main application for HTTP(s) on the internet these days; in corporate networks, they're not (and even on the internet, iOS/Android apps are taking over). – Guntram Blohm Jan 22 '15 at 08:22