
After yet another failure of the public key infrastructure, I was thinking about how broken the whole thing is. This business of undeniably associating an identity with a public key, and all the work we put into achieving it, is starting to feel like ice-skating uphill. Forgive me, I'm mostly just thinking aloud here.

I started thinking about the Tor Hidden Service Protocol and its method for solving this. The 'hidden service name' (which is typed into the address bar like any other URL) is actually derived from the public key - so you end up visiting sites like kpvz7ki2v5agwt35.onion - but you have no need for certificates or PKI; the public key and the domain alone are enough information to prove that they belong together (until you are able to generate collisions, but that's another story). Now clearly there is a problem with this - the whole point of the DNS is to provide a human-readable mapping to IP addresses.

Which leads me to my final, possibly flawed, suggestion; why do we not use an IP address that is generated from a public key? (The other way around sounds more convenient at first, but can't work for obvious cryptographic reasons).

We have a HUGE address space with IPv6. A 1024-bit RSA keypair is believed to have around 80 bits of entropy at most. So why not split off an 80-bit segment, and map public RSA keys to IP addresses in this segment?
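A minimal sketch of the mapping I have in mind (the fd00::/48 segment and SHA-256 are arbitrary illustrative choices on my part, not a concrete proposal):

```python
import hashlib
import ipaddress

def pubkey_to_ipv6(pubkey_der: bytes) -> ipaddress.IPv6Address:
    # Hash the encoded public key and embed the first 80 bits of the
    # digest in a dedicated IPv6 segment (fd00::/48 is an arbitrary
    # choice here, purely for illustration).
    digest = hashlib.sha256(pubkey_der).digest()
    suffix = int.from_bytes(digest[:10], "big")     # 80 bits
    segment = int(ipaddress.IPv6Address("fd00::"))
    return ipaddress.IPv6Address(segment | suffix)
```

Anyone holding the server's public key can recompute its address, so the key and the IP "prove that they belong together" in the same sense as an .onion name.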

Some downsides off the top of my head;

  • So an attacker can generate a key pair and know immediately which server that key pair would be used on, if such a server existed. Perhaps the 80-bit space could be expanded to use 4096-bit RSA keys, believed to have around 256 bits of entropy at most, making such a search infeasible (though we would unfortunately need an 'IPv7+' with 512-bit or so addresses for this to fit). This attack is also not as bad as it might at first sound, as it is untargeted. It could be mitigated by including a salt in the key->IP process, which the server sends to clients when they connect. The salt makes each server's key->IP mapping unique.
  • An attacker could potentially brute-force the key space using a known salt until they match a chosen IP. This is a targeted attack, so it is a bit scarier. However, using a slow (1-3 seconds) algorithm for the mapping from public key to IP could mitigate this. Use of the salt also means that such a brute-force would only apply to a single IP, and would have to be repeated per target IP.
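The salted, deliberately slow variant from the second bullet might look like this (PBKDF2 is just one plausible slow function; the segment prefix and iteration count are my own assumptions and would need tuning):

```python
import hashlib
import ipaddress

def salted_key_to_ipv6(pubkey_der: bytes, salt: bytes) -> ipaddress.IPv6Address:
    # A deliberately slow KDF makes each key->IP mapping expensive, so
    # brute-forcing keys toward a chosen address is costly; the per-server
    # salt means the search cannot be amortised across many target IPs.
    # The iteration count would be tuned toward the 1-3 second range.
    digest = hashlib.pbkdf2_hmac("sha256", pubkey_der, salt, 200_000)
    suffix = int.from_bytes(digest[:10], "big")     # the 80-bit segment
    segment = int(ipaddress.IPv6Address("fd00::"))
    return ipaddress.IPv6Address(segment | suffix)
```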

In order to try to stop the mods closing this, I'll do my best to turn it into a question; Is this idea completely flawed in some way? Has it been attempted in the past? Am I just rambling?

lynks
  • Neat idea, but the routing tables would be huge... – makerofthings7 Jan 04 '13 at 14:49
  • 1
    @makerofthings7 I don't quite follow - routing isn't affected by this? – lynks Jan 04 '13 at 14:50
  • 2
    In the IPv4 world networks are grouped into subnets that assist in routing traffic to the right continent, then to the right ISP using net masks. If the users in the 80 bit segment is located around the world, then each router will need to know how to route that traffic to the right place. One routing table entry per user, per router. If this idea took off, I'd invest in any company that deals with high speed RAM ;) – makerofthings7 Jan 04 '13 at 14:56
  • I'm not sure what problem you're solving. If TURKTRUST makes an error in issuing a key, then that error will result in an erroneous/deceptive IP. The solution exacerbates the problem because the site name is no longer human-readable. What assurance do I have that pvz7ki2v5agwt35.onion is a website operating under the authority of lynks? How can I tell if it is operated by Anonymous claiming to be lynks? – MCW Jan 04 '13 at 15:03
  • @MarkC.Wallace the core property I was trying to achieve is that I can communicate with a remote server, without pre-sharing a secret, and create a confidential channel without the need for a third party. – lynks Jan 04 '13 at 15:05
  • @makerofthings7 ahh, I see where you're coming from. Good points. – lynks Jan 04 '13 at 15:06
  • 1
    @lynks You still have that trusted third party, except you now call it DNS and not CA. – CodesInChaos Jan 04 '13 at 15:07
  • 1
    @CodesInChaos that's a perfect summation of (the first part of) ThomasPornins answer, thanks. – lynks Jan 04 '13 at 15:08
  • 2
    You can currently communicate with a remote server without pre-sharing a secret and create a confidential channel. (Diffie-Hellman Key exchange) You just can't be sure who that remote server is. – MCW Jan 04 '13 at 15:11
  • FYI there is already a protocol which maps public keys to IP(v6) addresses: [cjdns](https://en.wikipedia.org/wiki/Cjdns) – rugk Feb 20 '16 at 11:44

3 Answers


Mapping the public key to an IP address is easy (just hash it and keep the first 80 bits) and you have listed the ways to make this somehow robust (i.e. make the transform slow). It has the drawback that it does not solve the problem at all: it just moves it around.

The problem is about binding the cryptographically protected access (namely, the server public key) to the notion of identity that the human user understands. Human users grasp domain names. You will not make them validate hash-generated IPv6 addresses...

Of course there is a deployed system which maps names to technical data such as IP addresses; this is the DNS. You could extend it to map domain names to public key verification tokens (i.e. put the hash of the public key somewhere in the DNS), or even to public keys themselves. If you use the DNS to transfer security-sensitive name-key bindings, then the DNS becomes a valuable target, so you would have to add security to the DNS itself. At that point, you have DNSSEC, which is a current proposal for a replacement of the X.509 PKI for HTTPS Web sites. Whether DNSSEC would fare better than the existing CAs is unclear; that's switching actors, but the conceptual certification business would still lurk there.
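To make the "hash of the public key somewhere in the DNS" idea concrete: this is roughly what a DANE TLSA record carries. A sketch of computing the record data (field semantics per RFC 6698; the record name below is illustrative):

```python
import hashlib

def tlsa_association_data(spki_der: bytes) -> str:
    # Matching type 1 in a TLSA record means "SHA-256 of the selected
    # content"; with selector 1, that content is the server's
    # SubjectPublicKeyInfo (the public key, not the whole certificate).
    return hashlib.sha256(spki_der).hexdigest()

# Published in the zone roughly as (usage 3 = DANE-EE):
#   _443._tcp.example.com. IN TLSA 3 1 1 <hex digest>
```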

Humans want human-readable names, and public keys are unreadable. All the certificate-based solutions (be they X.509 or DNSSEC or whatever) try to bind a public key to an arbitrarily chosen name. Another distinct method would be to make the public key readable. Strangely enough, there are cryptographic protocols for that: ID-based cryptography. They use some rather tortuous mathematical tools (pairings on elliptic curves). They do not change the core concept (really, at some point, someone must make the link between a societal identity, like "Google", and the world of computers) but they change the dynamics. In an ID-based system for SSL, each server would have very short-lived private keys, and a central authority would issue to each server a new private key every day, matching its name. The net effect would be like an X.509 PKI where revocation inherently works well, so damage containment would be effective.

Yet another twist would be to replace the notion of identity. Since humans cannot read public keys, then, accept it: they will not read them. Instead, track down active attacks with specialized entities, who do know how to read keys. That's the whole point of the "notaries" in Convergence. The notaries keep track of what public key is used by which site, and they scream and kick whenever they see something fishy.


Anyway, the current system is not broken -- not in an economically relevant way. The breach you are linking to will join the Comodo and DigiNotar mishaps; that's a short list. Such problems occur far too rarely to even show up on the financial radar: if you add up the cost of all frauds which used a fake server certificate obtained from a "trusted CA", you will get an amount which is ludicrously small compared to the billions of dollars from more mundane credit card frauds. From the point of view of banks and merchants and people who do commerce on the Internet, the X.509 PKI works. There is no incentive for them to promote a replacement. If there were a fake Google certificate every day, used to actually steal money from people, then the situation would be different. Right now, we are at around one event per year.

Thomas Pornin

The first big flaw of your idea is that it doesn't really solve much. Once you want meaningful names like those currently in use, you need DNS or a similar system. So your point of failure is back, except that it's now DNS and not the CAs.

Putting the fingerprint into the IP offers little advantage over putting it into DNS alongside the IP, but has the downside that routing becomes more complicated and expensive.

There is a system that puts key fingerprints into DNSSEC, it's called DANE. Chrome recently added support for it.

There is a system called cjdns that puts fingerprints into IPs. It uses 120 bits of the IP plus 8 bits from a strengthening construction to get the equivalent of a 128-bit fingerprint. Note that due to multi-target attacks the actual security level is lower than the size of the fingerprint.
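As I understand cjdns's scheme, the address is the first 16 bytes of a double SHA-512 of the public key, and key generation simply retries until the digest falls in fc00::/8; a rough sketch:

```python
import hashlib
import ipaddress

def cjdns_style_address(pubkey: bytes):
    # Address = first 16 bytes of SHA-512(SHA-512(public key)).
    # Only keys whose digest starts with 0xfc (the fc00::/8 block) are
    # accepted; the rejected keys are the "8 bits from a strengthening
    # construction", since an attacker must also hit that prefix.
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
    if digest[0] != 0xFC:
        return None  # the key generator would retry with a fresh keypair
    return ipaddress.IPv6Address(digest[:16])
```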

You're also overestimating the level of PKI failure. Powerful adversaries occasionally break it, but a normal criminal usually won't. Deploying a system like Certificate Transparency will also lead to quick detection of such failures, so an attacker burns his valuable certificates much quicker.


All name resolution systems run into Zooko's triangle. If you want global, meaningful names, you'll need some kind of trusted central registry.
Namecoin comes the closest to breaking Zooko's triangle, but it has its share of issues too. The biggest one IMO is that since trademarks can't be enforced, the owner of google.bit probably won't be the well-known company.

You should also read Zooko's essay on this topic.

CodesInChaos
  • One valid counter-argument against DANE is that it consolidates the trusted roots and confines a breach to the given TLD – makerofthings7 Jan 04 '13 at 14:52
  • Domain validated certificates suffer from similar issues. Controlling DNS is already a severe attack. – CodesInChaos Jan 04 '13 at 14:54
  • +1 for all the links. Plenty to read over the weekend. – lynks Jan 04 '13 at 15:13
  • BTW DNSSEC/DANE support in Chrome has been removed "due to lack of use". (source: https://www.imperialviolet.org/2011/06/16/dnssecchrome.html) More information here: https://www.imperialviolet.org/2015/01/17/notdane.html – rugk Feb 20 '16 at 12:11

I think the bigger key is to improve the way PKI Trusted Root CAs are revoked. The exposure could be limited if there was a master revocation list for Root CAs, then only one compromised Root CA would really matter and that one could be guarded like Fort Knox. If it ever did get compromised, then the current means of patch to fix could be applied.

Another possibility is to tie the SSL credentials to the DNS record. A given domain would have to be able to resolve which SSL certificate is valid to assert its identity. This would then require both DNS to be spoofed and a Root CA to be compromised, both without detection triggering a revocation. For example, if a Root CA got compromised and I made a cert saying I was Google.com, simply checking google.com's DNS record could indicate that the cert does not match and is not genuine. Effectively, rather than exclusively trusting the third party, we would be requiring the third party and the first party to both agree over two different channels.
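A hypothetical client-side check along these lines (publishing a SHA-256 certificate fingerprint in DNS, and the fingerprint format itself, are illustrative assumptions, not an existing record type):

```python
import hashlib

def cert_agrees_with_dns(cert_der: bytes, fingerprint_from_dns: str) -> bool:
    # Accept the CA-signed certificate only when its SHA-256 fingerprint
    # matches the one the site published in DNS, so an attacker would
    # need to compromise both a Root CA and DNS at the same time.
    return hashlib.sha256(cert_der).hexdigest() == fingerprint_from_dns
```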

I realize it still doesn't help against situations where a MITM has both a compromised Root CA and the ability to alter DNS results, but it severely limits the potential useful attack scenarios of a compromised CA. (As opposed to currently being able to make most users think that www.gaggle.com is a new Google service that they should log in to with their Google Wallet.)

AJ Henderson