4

So, to be clear right off the bat: I think re-designing modern encryption to include governmental "backdoors" is, on balance, quite a bad idea, for any number of good reasons. Nor (FWIW) do I think laws mandating such backdoors will actually be enacted when all is said and done (in the U.S., anyway). But my question is not about the policy or political aspects of the encryption debate; it's about a more interesting technological aspect.

Setting aside whether you should do so: could you adapt current major encryption standards and implementations to allow one authorized third party (e.g., the U.S. government) to monitor encrypted communications between Party A and Party B and decrypt those conversations, without necessarily making it significantly easier for another, unauthorized third party to do the same?

In the past weeks I've read more statements than I can count that basically answer: "No. No chance." But it's usually unclear whether that answer means (a) such modifications cannot be made without fundamentally weakening common encryption, or (b) the technical feat might be doable, but the "authorized" government would inevitably lose control of whatever secrets it possessed, giving the bad guys free rein against personal and organizational information.

I still remember the 1990s debacle in which the NSA tried to sort-of start doing this with the Fortezza/"Clipper chip" initiative, via a key escrow system; it went exactly nowhere outside of government use. And to my understanding, key escrow wouldn't really scale to today's usage anyway. And it's certainly easy to create "backdoored" encryption ... if you're not worried about weakening that encryption against every attacker. But if those approaches aren't viable, are there any alternative technical approaches (looking at things from a 30,000-foot level of technical detail) that might be used to create encryption systems/implementations as robust as current ones against "unauthorized" third-party decryption? Or is that really, to our current knowledge, impossible?

schroeder
mostlyinformed
  • You may want to have a listen to the "Security Now" podcast, especially Episode 491 at https://www.grc.com/sn/sn-491.htm and Episode 506 at https://www.grc.com/sn/sn-506.htm. They both talk about backdoors for government use. – Marcel Dec 22 '15 at 07:21
  • The fact that people actually read this and responded to the specific question as written has greatly increased my faith in humanity (or at least stackexchange). – octern Feb 06 '16 at 17:54

2 Answers

8

Sure. Key escrow works, and it's well understood.

RSA offered a version of it with their "enterprise" version of PGP 6.0 about 20 years ago. PGP uses hybrid encryption, where the message is encrypted with a symmetric cipher, and the symmetric cipher's key is then encrypted with a public key of the recipient. Their key block supports encrypting the symmetric key with multiple public keys, allowing for multiple recipients who don't share a private key.

In that product, key escrow was implemented by introducing an extra public-key-encrypted copy of the symmetric key, with the corresponding private key held by the owner of the PGP system. It was ostensibly created for corporations that might need to decrypt a message if the legitimate employee was incapacitated or terminated. (An earlier version suffered from a terrible vulnerability: the key block was not protected by a MAC, so a malicious third party could silently add their own escrow key.)
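The multi-recipient key block described above can be sketched with textbook RSA. Everything here is illustrative: the tiny primes, the XOR "cipher", and the key names are stand-ins, not how PGP actually encodes packets — the point is only that one session key wrapped under several public keys lets any listed key holder, including an escrow agent, decrypt.

```python
# Toy sketch of a PGP-style multi-recipient key block with an escrow key.
# Textbook RSA with tiny primes -- purely illustrative, NOT secure; real PGP
# uses proper padding and a real symmetric cipher instead of XOR.
import random

def make_keypair(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)          # (public key, private key)

keys = {"recipient": make_keypair(1009, 1013),
        "escrow":    make_keypair(1019, 1021)}   # held by the system's owner

session_key = random.randrange(1, 256)           # stand-in for a symmetric key
message = [c ^ session_key for c in b"attack at dawn"]   # stand-in cipher

# The key block: the SAME session key, wrapped under each listed public key.
key_block = {name: pow(session_key, pub[0], pub[1])
             for name, (pub, _) in keys.items()}

# Any listed key holder -- the recipient OR the escrow agent -- recovers it.
for name, (_, (d, n)) in keys.items():
    recovered = pow(key_block[name], d, n)
    assert bytes(c ^ recovered for c in message) == b"attack at dawn"
```

Note that adding the escrow recipient costs only one extra wrapped copy of the session key; the message body is encrypted once — which is exactly why the missing MAC in the old key block was so dangerous.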

Presuming you could impose key escrow upon a standards body, it would be possible to create a secure protocol that could be decrypted both by the legitimate site and by the holder of the private key. Use of key escrow could be enforced by backbone-located network security appliances, which could deny communications unless they saw a digital signature on the key exchange packet indicating that escrow keys were in place. Products like Snort already function like this today; they would just need the signature information of the legitimate protocol, and could then send an RST packet to any SSL or TLS key exchange that doesn't comply.
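The enforcement logic such an appliance would run can be sketched in a few lines. This is entirely hypothetical — the "escrow tag" field and the HMAC stand-in for a real PKI signature are my inventions, since no such protocol extension exists:

```python
# Hypothetical middlebox check: pass key exchanges carrying a valid escrow
# attestation, reset everything else. An HMAC under a shared demo key stands
# in for the digital signature the real scheme would need; the field names
# ("escrow_tag", "key_share") are made up for illustration.
import hashlib
import hmac

AUTHORITY_KEY = b"escrow-authority-demo-key"

def escrow_tag(key_share: bytes) -> bytes:
    """Tag an endpoint would attach to prove escrow keys are in place."""
    return hmac.new(AUTHORITY_KEY, key_share, hashlib.sha256).digest()

def middlebox_verdict(handshake: dict) -> str:
    """Snort-style rule: allow attested key exchanges, RST the rest."""
    tag = handshake.get("escrow_tag", b"")
    if hmac.compare_digest(tag, escrow_tag(handshake["key_share"])):
        return "PASS"
    return "SEND_RST"

compliant = {"key_share": b"\x01\x02", "escrow_tag": escrow_tag(b"\x01\x02")}
plain_tls = {"key_share": b"\x01\x02"}
assert middlebox_verdict(compliant) == "PASS"
assert middlebox_verdict(plain_tls) == "SEND_RST"
```

As the answer goes on to note, a check like this only sees what the handshake claims — it cannot tell whether the attested key material is genuine without the escrow private key itself.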

The devil would be in the details, though. Nothing would stop double encipherment, because the network security appliances couldn't be entrusted with the keys needed to decrypt the packets and deeply inspect them. Nothing can prevent pre-arranged out-of-band signalling, like "a picture of a teakettle on the left side of the table means 'attack at dawn'." Bespoke key exchanges that are not detectable as TLS would come and go at the speed of light. Darknet VPNs would pop up, routing messages around the federal firewalls. Steganographic communications mechanisms would embed illicit messages in streams of cat videos and ICMP ping requests. Key management and distribution would be a nightmare. And imagine what would happen to the security of the nation if Edward Snowden Jr. decided to publish the NSA's master key, or Robert Hanssen Jr. turned over the FBI's copy to the Russians.

Such measures might make it harder for ordinary people to use non-default encryption, and might expose their iMessages to the NSA on a regular basis, but they would barely slow down criminal or terror organizations. And they'd be challenged in court immediately: forcing users to add key escrow would be tantamount to the government compelling speech, which is constitutionally prohibited in the USA.

John Deters
  • I think that in your answer, you fail to make necessary allowance for scaling. Yes, PGP had an option for that, but doing so on ALL the encrypted channels even in a single country is a different matter entirely. In fact, I'm pretty sure that providing a secure key escrow system is impossible in the same way as moving the entire population of France to South America: it's not that the trip is impossible, it's that the infrastructure that would make it possible doesn't exist. – Stephane Dec 22 '15 at 13:05
  • @Stephane, the encryption operation is local to the device doing the encryption, and doesn't require on-line communication with a service. The only thing that needs to be scaled up that doesn't exist today is the anti-SSL network detection devices. Note that these don't have to be perfect in order to change people's behavior - by shutting down TLS-1.2 and forcing TLS-KeyEscrow on most channels, people will need to 'upgrade' just to do ordinary business online. Once upgraded, it's a challenge for non-tech people to downgrade again, so they probably won't. – John Deters Dec 22 '15 at 17:48
  • Very, very interesting answers & comments. One question about this kind of key escrow system: in end-to-end encryption the symmetric key would be encrypted with the for-government-only public key and signed by the end clients, right? How do you prevent a client from just taking some random data, signing it, and misrepresenting it as the encrypted symmetric key? Wouldn't a government agency have to check that by actually using its private key for every session it even wanted to preserve its ability to monitor (which would presumably expose that key to theft to a greater degree)? – mostlyinformed Dec 22 '15 at 21:33
  • As I mentioned, there are many ways around this. Encryption and steganography exist today; no one can put those genies back in the bottle. The best the NSA could hope for would be to change standards. Ordinary people (and stupid criminals) have shown they will blindly use whatever tools they're given; even if iMessage suddenly becomes interceptable, they'll keep using it. But sophisticated criminals and terrorists will continue to use stego and encryption, as they do today. – John Deters Dec 22 '15 at 23:50
2

Not only is it possible, but we know it has been done before. Specifically, the Dual_EC_DRBG algorithm that's been discussed ad nauseam in the past couple of years is an example of a strong cryptographic algorithm confirmed to contain an NSA backdoor, one which allows the NSA (and only the NSA) to "completely break any instantiation of Dual_EC_DRBG".

To quote from the Wikipedia entry:

One of the weaknesses publicly identified was the potential of the algorithm to harbour a backdoor advantageous to the algorithm's designers—the United States government's National Security Agency (NSA)—and no-one else. In 2013, The New York Times reported that documents in their possession but never released to the public "appear to confirm" that the backdoor was real, and had been deliberately inserted by the NSA as part of the NSA's Bullrun decryption program.

To get a better description of the nature of this backdoor, I'll point you at a blog posting by Bruce Schneier way back in 2007:

In an informal presentation (.pdf) at the CRYPTO 2007 conference in August, Dan Shumow and Niels Ferguson showed that the algorithm contains a weakness that can only be described as a backdoor.

This is how it works: There are a bunch of constants -- fixed numbers -- in the standard used to define the algorithm's elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.

What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.

The researchers don't know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.
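The skeleton-key mechanism Schneier describes can be modeled concretely. The sketch below uses secp256k1 arithmetic in place of the standard's NIST P-256, and the secret `d` is made up — it only demonstrates the structure: the published constants satisfy P = d·Q, and whoever knows `d` can lift one output word back to a curve point and recover the generator's next internal state.

```python
# Toy model of the Dual_EC_DRBG backdoor. Curve arithmetic is secp256k1
# (illustrative; the real standard uses NIST P-256, and nobody outside the
# designers knows whether a secret d was actually retained for its points).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
A, B = 0, 7                                   # curve: y^2 = x^3 + 7 (mod p)

def add(P1, P2):                              # affine point addition
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                           # point at infinity
    if P1 == P2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):                                # double-and-add scalar multiply
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

Q = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
d = 0xC0FFEE                                  # the designer's secret
P = mul(d, Q)                                 # published constant: P = d*Q

def dual_ec_step(s):
    """One step: s' = x(s*P); output = x(s'*Q). (The real DRBG truncates.)"""
    s = mul(s, P)[0]
    return s, mul(s, Q)[0]

state = 123456789
state, out1 = dual_ec_step(state)             # attacker observes out1...
_,     out2 = dual_ec_step(state)             # ...and wants to predict out2

# The skeleton key: lift out1 back to the point R = s*Q, then
# d*R = d*s*Q = s*P, whose x-coordinate IS the next internal state.
y = pow((out1**3 + A*out1 + B) % p, (p + 1) // 4, p)   # sqrt ok: p % 4 == 3
recovered_state = mul(d, (out1, y))[0]        # either lift of out1 works
assert mul(recovered_state, Q)[0] == out2     # future output predicted
```

The truncation in the real standard only drops 16 bits per output, which is why Schneier's "32 bytes" figure holds: the attacker brute-forces the missing bits of one output, lifts each candidate, and checks against the next output.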

HopelessN00b