
I've been working on getting HTTP/2 support running on an Nginx server for some time now. At this point I'm stuck on selecting which ciphers to support. Hopefully you can help me understand this.

Before I started on HTTP/2, I had made it a hobby to get the best possible scores in SSLlabs while maintaining support for the majority of browsers. To that end, I only supported 256-bit ciphers and didn't list any 128-bit ciphers.

Since enabling HTTP/2, I have lost support for Firefox on Windows (and probably other browsers/platforms as well). Note that I'm fine with having lost support for Java, XP and Android 2.3 according to the SSLlabs browser simulations, as this is a private server.

According to SSLlabs, Firefox versions 45 and 46 on Windows fail to connect to the server. The message shown is: "Server negotiated HTTP/2 with blacklisted suite". According to the results, these versions of Firefox would have selected the cipher suite TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA. A quick search led me to this topic on ServerFault, which explained that the RFC specifies a blacklist of cipher suites.
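
To double-check what SSLlabs reports, I can reproduce the negotiation locally with the openssl CLI. This is a rough sketch: example.com stands in for my actual host, and it assumes OpenSSL 1.0.2+ for the -alpn flag. It forces the CBC suite Firefox picked and shows what gets negotiated:

    # force the OpenSSL name of TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA and request h2 via ALPN
    openssl s_client -connect example.com:443 -cipher ECDHE-RSA-AES256-SHA -alpn h2 </dev/null 2>/dev/null \
      | grep -E 'ALPN|Cipher'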

This is the cipher list I had configured:

ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:kEDH+AESGCM:CAMELLIA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK:!CAMELLIA+RSA:!AES128:@STRENGTH;
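
For reference, the suites this string actually expands to, and the order the server prefers them in, can be checked locally by feeding the same string to the openssl CLI (assuming nginx is built against the same OpenSSL):

    openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:kEDH+AESGCM:CAMELLIA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK:!CAMELLIA+RSA:!AES128:@STRENGTH'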

I'm led to believe that TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA is stronger than TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (which is what Firefox uses with my current configuration), as it gets a higher preference in Nginx when I add @STRENGTH to the ssl_ciphers directive. Still, the first one is on the blacklist and the second one isn't.
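
As far as I can tell, @STRENGTH sorts purely by symmetric key length and ignores the cipher mode, which would explain why the 256-bit CBC suite outranks the 128-bit GCM one. A quick way to see the ordering (assuming a reasonably recent OpenSSL):

    # the 256-bit CBC suite is listed first, the 128-bit GCM suite second
    openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:@STRENGTH'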

I'm aware that there are already some topics here about which ciphers should be chosen to get the best support. However, with this post I'm trying to better understand why some of the cipher suites listed above are blacklisted while several 128-bit ciphers aren't.

Evy Bongers

2 Answers


As the RFC 7540 appendix you linked to says:

  Note: This list was assembled from the set of registered TLS
  cipher suites at the time of writing.  This list includes those
  cipher suites that do not offer an ephemeral key exchange and
  those that are based on the TLS null, stream, or block cipher type
  (as defined in Section 6.2.3 of [TLS12]).  Additional cipher
  suites with these properties could be defined; these would not be
  explicitly prohibited.

Stream suites are blacklisted because the only stream cipher used in TLS is RC4, and attacks against RC4 have practically exploded in the last few years; see RFC 7465.

CBC suites are blacklisted because it is now recognized that TLS's (née SSL's) use of MAC-then-encrypt was a poor choice (even though the design predates the first clear recognition of this, by Bellare & Namprempre at Asiacrypt 2000), partly because MAC-then-encrypt combined with CBC (which requires padding) in an online protocol allows padding-oracle attacks like POODLE and Lucky13. POODLE severely breaks SSL3, but SSL3 was officially obsolete already and POODLE finally motivated people to eliminate it. Lucky13 requires precise timing and is not very practical now, but it shows an approach that might well improve. (SSL and TLS 1.0's use of a predictable chained IV for CBC also allowed BEAST, but that part is fixed in 1.1 and 1.2.)

Null ciphers are blacklisted because, well, their undesirability should be obvious.

This leaves only AEAD suites, and in practice only GCM, at least for now. Note that AEAD requires TLS 1.2, which is not yet universally implemented, so generic tests like SSLLabs still accept CBC modes to allow for servers and/or clients using 1.1 and 1.0 (but not SSL3, as above). Chrome used to describe anything other than TLS 1.2 with AEAD and ephemeral key exchange as 'obsolete cryptography', and security.SX has quite a few Qs about that, but on retest (51.0.2704.84) it seems to have stopped. HTTP/2, which comes along at a point where TLS 1.2 and AEAD 'should' already be in place, can reasonably be more demanding.
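
If you want your nginx config to line up with that, a minimal AEAD-only sketch would look something like the following; it assumes all your clients can do TLS 1.2, and the particular suite names and ordering are just one reasonable choice, not the only correct one:

    # TLS 1.2 only, AEAD (GCM) suites only, server picks the order
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;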

Unless we get practical quantum computers -- a very big 'unless' that people are now worried about -- the difference in strength between AES-128 and AES-256 is meaningless. It's the difference between 'unbreakable in our galaxy, but might be broken by someone controlling the entire universe for zillions of years' and 'unbreakable even in the entire universe'. No matter how perverse your collection of cat videos, you shouldn't care if it's decrypted after you're long dead, all your descendants are dead, all human beings are dead, and our planet and solar system no longer exist.

(Although given all the other media features added in HTTP/2, I'm disappointed they didn't add any capability to prohibit or at least seriously limit and degrade cat videos. Well, there's always X-.)

dave_thompson_085
  • Thank you both for your answers. For starters, I wasn't aware that there is a difference between the number of bits and the effective bit strength. Also, I haven't found any indications so far that CBC was considered a poor choice. This is probably due to the fact that I don't have any background in encryption and my mathematics is limited to high school level. It leaves me wondering, though, why CBC ciphers don't receive a penalty in SSLlabs. Is this because there are at this point no practical attacks on them? – Evy Bongers Jun 13 '16 at 19:07
  • @PaulBongers CBC wasn't a bad choice, especially in the 1990s when AEAD hadn't yet been invented, and even today is fine in some usages. But CBC **combined with MAC-then-encrypt in an online protocol** does allow some attacks. POODLE was very bad, but only affected SSL3 which should have been obsoleted anyway and now is. Lucky13 is, as you guessed, not very practical, and I'm not aware of anything worse in this category _yet_ but there are undoubtedly people working on it. So for now CBC is okay, especially for TLS1.1 where the preferred AEAD (GCM or possibly CCM) isn't available. Clarified. – dave_thompson_085 Jun 15 '16 at 08:55

Encryption algorithms are intended to obscure the original input from the output without knowledge of the key. Ideally, changing any bit in the key should result in several bits being altered in the output, and should do so in an apparently unpredictable pattern. The blacklisted algorithms have undergone careful analysis and have been found to lack this characteristic for a significant number of key bits. This is why you'll often see an "effective bit strength" quoted for a given algorithm, and it's usually smaller than the key's bit length (because many algorithms do invariably leak some bits). In other words, the more random the output appears to be, the more effective the algorithm is.

For example, a 256-bit key in a certain algorithm might only yield 56 bits of security, while a 128-bit key in another algorithm might yield 96 bits of security. When broken down this way, it's easy to see that the 128-bit key is superior to the 256-bit key, despite being half the size and therefore having a keyspace that is 2^128 times smaller in terms of possible values. This is one reason why "xor encryption" is considered weak: you can easily apply a function to the output and guess the key and input, unless certain conditions are met.
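
To put rough numbers on that, using the purely hypothetical 56-bit and 96-bit figures above:

    python3 -c 'print(2**96 // 2**56)'    # ~1.1e12: the 96-bit-effective cipher costs about 2^40 times more to brute-force
    python3 -c 'print(2**256 // 2**128)'  # 2^128: how much larger a 256-bit keyspace is than a 128-bit one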

The reason HTTP/2 blacklists certain algorithms is to start with a strong base of ciphers that will remain viable for the foreseeable future. You can't continually blacklist algorithms and expect all servers and clients to keep up with the list in real time. Therefore, by rejecting the ciphers that are already broken or in danger of being broken in the near future, the protocol has more time to deal with the exponential growth of computing power.

phyrfox