
Best practices in terms of security for the web are enforced by major actors like Google or Mozilla, but it feels a bit like something is messed up in the global design.

To mitigate man-in-the-middle and other types of attacks on SSL/TLS, a lot of new protocols and practices have been created, like:

  • OCSP stapling
  • Do not use wildcard certificates; use multi-domain certificates
  • Do not use this cipher suite, it's too old (still very hard to break, but too old)
  • Increase the DH group size
  • Put a CAA record in your DNS
  • and the list goes on (a small configuration sketch follows below)
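For concreteness, here is a minimal sketch, assuming a server built on Python's standard ssl module, of what enforcing a couple of these items looks like in practice; the certificate-related items (wildcard vs. multi-domain, CAA) live in the PKI and in DNS rather than in server code.

    import ssl

    # Minimal sketch of hardening a server-side TLS context.
    # Assumes Python 3.7+ with an OpenSSL build that supports TLS 1.2/1.3.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)

    # "Do not use this cipher suite, it's too old": drop legacy protocol
    # versions and restrict the offered suites to forward-secret AEAD ciphers.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

    # "Increase the DH group size": preferring ECDHE above sidesteps small
    # finite-field DH groups; if finite-field DH were still needed, a stronger
    # group could be loaded with ctx.load_dh_params("dhparam.pem")
    # ("dhparam.pem" being a placeholder filename).

    # Inspect what the context will actually offer after these changes.
    for suite in ctx.get_ciphers():
        print(suite["name"], suite["protocol"])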

Are these "enhancements" the result of SSL/TLS being fundamentally flawed?

Is there any plan to replace this protocol with something more reliable, or is it that most problems are implementation-based and all those best practices/RFCs are there to mitigate possible human error?

Kiwy
  • Most problems are implementation-based, though there are sometimes really bad problems that affect all TLS systems. There's a reason TLS 1.3 is being developed. – forest Feb 14 '18 at 14:29
  • Thanks for the edit, it's been a long time since I last wrote serious English. – Kiwy Feb 14 '18 at 14:44

2 Answers


Is there any plan to replace this protocol with something more reliable, or is it that most problems are implementation-based and all those best practices/RFCs are there to mitigate possible human error?

There isn't a simple answer here, so I would say "yes and no to all of the above". There are lots of factors at play.

Is TLS flawed?

Much like any other protocol (take HTTP or HTML, for example), SSL/TLS is an evolving standard. It evolves to include:

  • new features (ex. DNS CAA or OCSP which give higher security)
  • performance enhancements (ex. OCSP stapling, and a lot of the TLS 1.3 stuff to improve speed and bandwidth usage)
  • new use-cases that reflect the changing way in which the internet is built (ex. wildcard vs multi-domain vs multiple certs in a load-balanced cloud server context).

Unlike other protocols, TLS also needs to keep up with increases in the CPU power available to hackers, and cryptographic research (both research into new algorithms, and research into breaking existing algorithms). That's why you see new ciphers added, old ones dropped, and key sizes gradually increase.

I wouldn't say that TLS is flawed, but rather that, like any software, TLS is being improved over time to reflect the changing internet.

Implementation errors

You are right to point out that most of the egregious vulnerabilities in TLS are not due to a problem with the TLS specification, but due to programmers not following that spec properly. Two examples come to mind:

Mining your Ps and Qs

[Paper]

The issue here is with (mainly) small embedded devices (think home routers or internet-enabled webcams) that need to generate TLS server keys on first boot-up. Turns out these devices all roll out of the factory almost identical, which means the keys they generate on first boot are not very random.
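To see why this is so damaging, here is a minimal sketch with toy primes standing in for real RSA primes (the arithmetic is identical): if two devices happen to generate public moduli that share a prime factor, a plain GCD over the public keys recovers that factor and breaks both private keys.

    from math import gcd

    # Toy primes stand in for real RSA primes; the attack works identically.
    p_shared = 101        # prime both devices happened to pick (weak boot-time RNG)
    q1, q2 = 103, 107     # the second primes, unique to each device

    n1 = p_shared * q1    # public modulus of device 1
    n2 = p_shared * q2    # public modulus of device 2

    # An attacker needs only the two public moduli:
    shared = gcd(n1, n2)
    print("recovered prime:", shared)                  # 101
    print("device 1 factored:", shared, n1 // shared)  # 101 x 103
    print("device 2 factored:", shared, n2 // shared)  # 101 x 107

This is essentially what the paper did at internet scale, running a batch GCD over millions of collected public keys.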

This is certainly a TLS vulnerability, but is not the fault of the TLS spec.

ROBOT

[Project homepage]

Here, the TLS spec (RFC 5246 section 7.4.7.1) gives very clear instructions for how to avoid Bleichenbacher-style attacks:

As described by Klima [KPR03], these vulnerabilities can be avoided
by treating incorrectly formatted message blocks and/or mismatched
version numbers in a manner indistinguishable from correctly
formatted RSA blocks.  In other words:

  1. Generate a string R of 46 random bytes

  2. Decrypt the message to recover the plaintext M

  3. If the PKCS#1 padding is not correct, or the length of message
     M is not exactly 48 bytes:
        pre_master_secret = ClientHello.client_version || R
     else If ClientHello.client_version <= TLS 1.0, and version
     number check is explicitly disabled:
        pre_master_secret = M
     else:
        pre_master_secret = ClientHello.client_version || M[2..47]

That's 3 pretty straightforward steps. It turns out that the software developers behind a large number of the TLS implementations out there did not implement these steps properly, leading to vulnerabilities.
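For illustration, here is a minimal sketch of those three steps in Python; decrypt_rsa, encrypted_pms and client_version are placeholders for the server's raw RSA decryption routine, the ciphertext sent by the client, and the 2-byte version from ClientHello, and a real TLS stack would of course need constant-time plumbing around this.

    import os

    def recover_pre_master_secret(decrypt_rsa, encrypted_pms, client_version):
        """Sketch of RFC 5246 section 7.4.7.1: never reveal why decryption failed."""
        r = os.urandom(46)                     # step 1: R = 46 random bytes
        try:
            m = decrypt_rsa(encrypted_pms)     # step 2: recover the plaintext M
            padding_ok = True
        except ValueError:                     # invalid PKCS#1 v1.5 padding
            m, padding_ok = b"", False

        if not padding_ok or len(m) != 48:
            # step 3: do not report the error; substitute the random bytes so a
            # Bleichenbacher-style oracle cannot tell good padding from bad
            return client_version + r
        # (the legacy "version check explicitly disabled" branch is omitted here)
        return client_version + m[2:48]

Every code path returns a 48-byte value and none of them tells the peer whether the padding was valid; skipping or reordering these details is the kind of mistake ROBOT found in deployed implementations.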

Again, this is certainly a TLS vulnerability, but is not the fault of the TLS spec.

Mike Ounsworth

Are these "enhancements" the result of SSL/TLS being fundamentally flawed?

OCSP stapling

OCSP stapling is just OCSP with better performance and less privacy impact. This does not fix a problem in TLS per se.

Do not use wildcard certificates; use multi-domain certificates

I don't know of this "best practice". Whether wildcard or multi-domain certificates should be used depends on the use case. This is not a problem of TLS.

Do not use this cipher suite, it's too old (still very hard to break, but too old)

It is normal that computers get faster and cryptography advances, and that ciphers therefore have a limited time during which they provide security. This is also not a flaw in TLS. On the contrary, TLS is flexible in that it allows newer ciphers to be used when needed. Still, TLS 1.3 explicitly forbids some of the older ciphers.

Increase the DH group size

Same thing as using up-to-date ciphers.

Put a CAA record in your DNS

This only allows a CA to check whether it is allowed to issue certificates for the specific domain. It does not fix a problem in TLS, but improves the management of the PKI which is used in the context of TLS to validate certificates.
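As an aside, CAA records are public, so it is easy to look at what a domain publishes; a minimal sketch, assuming the third-party dnspython package is installed and using example.com as a placeholder domain:

    import dns.resolver  # third-party: pip install dnspython

    # Query the CAA records that tell CAs whether they may issue for this domain.
    answers = dns.resolver.resolve("example.com", "CAA")
    for rdata in answers:
        print(rdata.to_text())   # e.g. 0 issue "letsencrypt.org"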

and the list goes on

So far, no problems of TLS itself have been mentioned. But there actually were problems in older SSL/TLS standards, and that's why newer TLS versions exist. And the development of TLS has not stopped; new versions are in development.

This again does not mean that there is a fundamental flaw. If that were the case, we would abandon TLS instead of continuing to develop it.

Steffen Ullrich