14

I had this idea a few hours ago, but of course it already exists and there is even an RFC...
Why don't we publish the fingerprint of the SSL/TLS certificate via DNS? We need DNSSEC to make sure the answer is legit, and we need to make sure the nameservers are in a secure environment, but besides that I see no issues that SSL/TLS doesn't already have.
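
For illustration, here's a rough sketch (Python, standard library only) of how one could compute the kind of fingerprint such a DNS record would carry; the hostname is just a placeholder.

    import hashlib
    import ssl

    # Fetch the server's certificate (PEM) and convert it to DER bytes.
    # "example.com" is only a placeholder hostname.
    pem = ssl.get_server_certificate(("example.com", 443))
    der = ssl.PEM_cert_to_DER_cert(pem)

    # A SHA-256 digest of the full certificate is the sort of value
    # that would be published in, and later checked against, DNS.
    print(hashlib.sha256(der).hexdigest())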

This change would move the authority for securing all our encrypted traffic from certificate authorities (CAs) to DNS nameservers. That could be an issue because all DNS providers (that support DNSSEC) would then have to become very high-security environments, like CAs are now. But shouldn't all nameservers be that already? The majority of websites use plain http for sensitive traffic, and most people would not notice if there were no padlock when logging in to Google. I'd say nameservers have a huge responsibility already.

If we don't trust the domain's nameservers, we could include fingerprints in root servers instead, but I'm not sure how much extra load that would be.

Another issue is that DNSSEC is not that widely implemented yet. But the reason I'm asking is that the IETF is considering supporting encryption in http/2.0 without requiring certificate verification, "to make it easier to deploy".

In my opinion, security should either be done correctly or not at all. Don't give people a false sense of security. But if we're going to change protocols anyway (upgrade all webservers), we might as well implement DNSSEC and do this the right (authenticated) way.

The issues/changes in short:

  • we need to trust (root?) nameservers instead of a ton of CAs;
  • we need to add a DNS record type or extend the use of the SSHFP record; and
  • we need to support DNSSEC more widely.

Am I missing something? Why are we paying top dollar for SSL/TLS certificates when all we need to do is publish the certificate's fingerprint over some authenticated channel?

The point of unauthenticated encryption in http/2.0 is to encourage enabling it without having to modify any other config at all. I'm against this because it's needlessly slow (another round trip for the key exchange, which is especially bad overseas or with any packet loss) and hardly adds any security. It might even provide a false sense of security. On the plus side, it obfuscates traffic.

This proposal to use DNS makes it somewhat more work to implement encryption on webservers, but makes it (at least nearly) as secure as normal SSL/TLS while still keeping encryption widely available (I don't expect registrars will keep up the ridiculously high prices for an sshfp/httpfp/tlsfp-enabled nameserver the way CAs do with certificates). Unauthenticated encryption might still be a last resort when all other options (signed cert, stored cert, securely published fingerprint) are unavailable, though no padlock should be shown to the user because that would give a completely false sense of security, and I'm not sure it's even worth the slowdown.

If it's not possible to use the root servers for this, we could also use the domain's nameservers as "reasonably secure environments" (normal padlock) and use a CA-signed certificate for EV (green bar). Then tell people to look for the green bar when they're doing online banking. Nameservers already have a rather big responsibility, so it might be safe enough even if we can't expect them to be as secure as a CA. At least it'd be a much more flexible system; I'm restricted to my domain and one subdomain for any reasonably priced certificates (and the subdomain is already occupied by "www.").

My question again in short: Is there any reason why we don't use an SSHFP-like system for https, if DNSSEC were widely available?

Luc

3 Answers

17

There is an RFC for that. It is part of what DNSSEC is meant to do.

Now don't get too hopeful about "top dollar" or a reduction thereof. The need to "certify" public keys in some way with regard to server names is not magically removed by switching to DNSSEC. The "CA" role is just moved around, and the associated costs are still there. It can be predicted that if registrars must assume these costs, then their prices will rise. Sooner or later, someone will have to pay for it, and I have a strong suspicion that it will be the SSL server owner.

Philosophically, it is unclear whether merging the certification structure (the PKI) and the naming structure (the DNS) is a good idea. The merge would surely simplify the communication between DNS and PKI when dealing with the specific case of issuing certificates bound to host names; but it also means that the two responsibilities become intertwined. It takes a substantial dose of wishful thinking to believe that whoever is a good registrar would also be a good CA; the two kinds of activities are vaguely similar, but not identical.

Also, certificates are supposed to have a scope beyond HTTPS servers; they should be usable in contexts which have no relation whatsoever to the DNS and host names, for instance when signing software (applications, drivers...).

Economically, it would not change much. Among the commercial CAs, the most used is VeriSign, aka Symantec. Switching to DNSSEC means that the ultimate certification authority would no longer be VeriSign, but whoever manages most of the root DNS, i.e... VeriSign. Now THAT would be change, wouldn't it?

Politically, there would be trouble with "state CAs". Many of the dozens of CAs which are trusted by default by a Windows system are "vanity CAs" sponsored or even operated by various governments. The same governments have no intention of operating DNS-related infrastructure, which requires a lot more bandwidth and availability; and they would resent seeing their precioouuusss certificates become "second-class citizens".

For security, this is mostly a non-issue. Fake-certificate events are very rare; we hear about one such occurrence per year. The bulk of Internet-related fraud, phishing and other attacks does not rely on fooling existing CAs. In other words, the current CA system works. Don't fix that which is not broken. Yeah, the CA system looks brittle and prone to failures, but despite much publicity about the occasional breaches, the raw fact is that it survives.


Summary: the standards are ready, the RFCs are published, the software is written. Now all that is lacking is a reason to switch from a "commercial CA X.509" world to a "commercial registrar DNSSEC" world. It is hard to see what good it would actually do.

Thomas Pornin
  • Thanks for the elaborate answer! *"The current CA system works."* I agree, but then why are we implementing totally useless (unauthenticated) encryption in http/2.0? According to the IETF (and I agree with this part), because we need to make deployment easier. Admins don't feel like spending money on, or even ordering, a certificate when it's not really required. But since you named no real vulnerabilities in this scheme, I guess cert fingerprints in DNS could be part of http/2.0 to give the encryption some use beyond obscurity. Perhaps no padlock, but at least it'd be worth something. – Luc Aug 27 '13 at 21:01
  • Regarding http2, I'm sure you're aware it is still a work in progress. Reading the httpbis mailing list shows that the current discussion regarding unauthenticated encryption is just that - a discussion. And even unauthenticated encryption can still raise the bar (http://lists.w3.org/Archives/Public/ietf-http-wg/2013JulSep/0970.html) even though it shouldn't be relied upon for anything further than that. – JoltColaOfEvil Aug 27 '13 at 21:15
  • 4
    The current CA system works the same way that unencrypted http works. We might have an order of magnitude more attacks on http than on https, but it is still a fraction of the number of transactions actually done. So http "works" as well. Hell, most banks in many EU countries do not even use https on their front page. – user239558 Mar 17 '14 at 08:54
6

The CERT RR has been deprecated. The current proposal for putting public key material in DNS is called DANE. They have defined a TLSA record type, which is documented in RFC 6698. This lets a domain administrator assert a specific certificate, or a CA, for a particular service.
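
As a rough sketch of what looking one up might involve (assuming the third-party dnspython package, version 2 or later; the record name is a placeholder):

    import dns.resolver  # third-party "dnspython" package (>= 2.0)

    # TLSA records live at _<port>._<protocol>.<host>; this name is a placeholder.
    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")

    for rr in answers:
        # RFC 6698 fields: certificate usage, selector, matching type, and the data.
        print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())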

Richard Salts
  • 1
    The CERT RR is also [used by PGP](http://www.gushi.org/make-dns-cert/HOWTO.html) for publishing keys, although also not as common as the "PKA" method using TXT RRs. – user1686 Aug 28 '13 at 18:12
3

It does exist - it's called DANE - https://www.rfc-editor.org/rfc/rfc6698 (the CERT RFC doesn't seem to have had any traction; I don't know why).

To generate the records you can use this:

https://github.com/pieterlexis/swede

This Firefox plugin can validate them:

https://os3sec.org/

Unfortunately, it's not complete.

The Certificate Patrol team have an updated plugin:

https://labs.nic.cz/page/1207/dane-patrol/

but it does not seem to work either :(

There doesn't seem to be any support in other browsers; Chrome has support written but not published, see this blog post (if TLSA is important to you, maybe email Adam?):

https://www.imperialviolet.org/2012/10/20/dane-stapled-certificates.html

Note that TLSA doesn't just let you validate a cert; it also lets a site owner assert that the cert used on their site will only be signed by a particular CA, which goes some way towards putting the rogue/hacked CA problem to rest.
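
That assertion is made with the record's "certificate usage" field; here is a rough, purely illustrative Python summary of the four values RFC 6698 defines:

    # Rough summary of the "certificate usage" values defined in RFC 6698:
    TLSA_USAGE = {
        0: "CA constraint: the chain must include this CA (normal PKIX validation still applies)",
        1: "service certificate constraint: the end-entity cert must match (PKIX still applies)",
        2: "trust anchor assertion: trust this CA even if browsers don't ship it",
        3: "domain-issued certificate: trust exactly this end-entity cert, no CA needed",
    }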

Remember that any CA (or affiliate of a CA, or a subsidiary of a CA, etc.) can sign a cert for any domain. For the CA ecosystem to be secure, every CA has to be secure!

JasperWallace