25

I just read the article written by Bruce Schneier, the crypto guru. In the article, he says:

Prefer symmetric cryptography over public-key cryptography.

But, he doesn't shed any light as to why.

Now, I thought that public-key systems avoid the key-distribution problem (and some man-in-the-middle issues) that affect symmetric-key cryptography, right?

Can anybody please explain Bruce Schneier's view, or am I missing something?

Rahul
  • 353
  • 3
  • 6
  • 6
    In this [post](https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html) he says something interesting about the question: > One last blue-sky possibility: a quantum computer. ... The defense is easy, if annoying: stick with symmetric cryptography based on shared secrets, and use 256-bit keys. –  Sep 06 '13 at 12:08
  • Bruce Schneier is not a crypto guru. He knows more about crypto algorithms than 99.999% of the programmers in the world and he developed several algorithms, but just these four words `a quantum computer could` make him look like a complete jerk. And unlike public/private-key encryption, he hasn't said how a symmetric key should be distributed. Maybe he's just in it for the money? – ott-- Sep 07 '13 at 20:52
  • 2
    @ott-- If you look at what he's written, that part is *extremely* speculative, almost in the XKCD What-If? sense. He also goes on to state what the implications would be *if* it was true and then how to deal with the situation that arises from those implications. Using strictly symmetrical cryptography with pre-shared secret keys is practical in some situations and impractical in others, but that's a different matter entirely. – user Sep 07 '13 at 21:24
  • @ott-- What's wrong with quantum computers? – Evgeniy Chekan Oct 09 '13 at 10:13

5 Answers

31

This preference for symmetric cryptography over asymmetric cryptography is based on the idea that asymmetric cryptography uses parametrized mathematical objects, and there is a suspicion that such parameters could be specially chosen to make the system weak. For instance, when using Diffie-Hellman, DSA or ElGamal, you have to work modulo a big prime p. A randomly chosen prime p will be fine, but it is possible to select a special p which will "look fine" but will allow easy (or easier) breakage of the algorithms for whoever knows how that special p value was generated. Such primes p which make the crypto weak are very rare, so you won't hit one out of bad luck (as I say, a randomly chosen prime will be fine).
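
To make it concrete where that shared prime p enters the picture, here is a toy Diffie-Hellman exchange in Python; a minimal sketch with a deliberately undersized prime, not production parameters:

```python
# Toy Diffie-Hellman over a fixed prime p, purely to show where the shared modulus
# enters the picture. The 127-bit Mersenne prime below is far too small for real use;
# deployed systems use vetted parameters of 2048 bits or more.
import secrets

p = 2**127 - 1   # a known (Mersenne) prime, standing in for a vetted 2048-bit modulus
g = 3            # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice publishes g^a mod p
B = pow(g, b, p)   # Bob publishes g^b mod p

# Both sides compute the same shared secret. Anyone who can take discrete logs mod p
# (e.g. because p was maliciously crafted to make that easy) can recover it too.
assert pow(B, a, p) == pow(A, b, p)
print("shared secret established:", hex(pow(B, a, p))[:20], "...")
```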

This means that good cryptographic parameters are "nothing-up-my-sleeve" numbers. If you look at FIPS 186-4 (the standard for DSA), you will see a description of a generation system for DSA parameters (namely the big modulus p, the generator g and the group order q) which is demonstrably non-malicious. This works simply by showing the deterministic PRNG which presided over the generation of these values; by revealing the seed, you can show that you produced the parameters faithfully, and thus did not indulge in "special prime crafting".
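
The idea behind such verifiable generation can be sketched in a few lines of Python. This is not the actual FIPS 186-4 procedure, just an illustration of deriving a prime deterministically from a published seed so that anyone can reproduce it:

```python
# Sketch of "verifiably pseudorandom" parameter generation -- NOT the real FIPS 186-4
# algorithm, just the underlying idea: derive candidates deterministically from a
# published seed, so anyone can re-run the derivation and confirm that the final prime
# was not hand-crafted.
import hashlib
import random

def candidate(seed: bytes, counter: int) -> int:
    """Deterministically expand (seed, counter) into an odd 256-bit candidate."""
    digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big") | (1 << 255) | 1

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# The seed is published alongside the parameters; anyone can redo this loop and check
# that the prime really is the first acceptable candidate derived from it.
seed = b"public nothing-up-my-sleeve seed"
counter = 0
while not is_probable_prime(candidate(seed, counter)):
    counter += 1
print(f"verifiable 256-bit prime found at counter={counter}: {hex(candidate(seed, counter))}")
```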

A jibe can be made against the standard NIST elliptic curves (see FIPS 186-4 again) because these curves again have some parameters, and NIST forgot to use a deterministic PRNG as described above (actually, NIST is not at fault here; these curves were inherited from SEC, so the mistake was probably Certicom's). The Brainpool curves attempt to fix that mistake. (Note that among the 15 NIST curves, at least the 5 "Koblitz curves" cannot have been deliberately weakened, since there is no random parameter in them at all.)

Ole' Bruce turns all of the above into a generic anathema against asymmetric cryptography because he wanted a punchy line, not a more correct but lengthy explanation of group-parameter validation as described above. The lesson to remember is that the complete design of any cryptographic system shall be as transparent as possible, and this includes the generation of "seemingly random" parameters.

Tom Leek
  • 168,808
  • 28
  • 337
  • 475
  • 1
    Not only the parameters: the system itself might be weak depending on algorithmic advances. Symmetric crypto is in a sense a lot more arcane, because there is a lot less math involved. It's more like stirring a witch's cauldron using explosive bullets fired by AK-47's, in the hope that its contents are so thoroughly mixed that you can only figure out one part of the contents (the message) if you know the other (the key). As Bruce Schneier states, he has more confidence in symmetric crypto, because we honestly have no idea how to crack these arcane mixing techniques. – orlp Sep 06 '13 at 15:28
  • 2
    Do you know this is what Schneier meant or is it speculation? Because while it might apply to some asymmetric systems it doesn't apply to e.g. RSA where there are no such parameters. And for DLDH it's very easy to generate a new set of parameters and this is recommended for some situations. Even for ECC you can generate your own curves, and the algorithms for generating secure curves are described in the annexes to e.g. P1363. One could argue those algorithms could have been crafted to produce insecure curves, but at least you avoid having to trust kilo-bits of numbers of unknown origin. – Morty Sep 07 '13 at 19:53
4

According to what he says, companies subtly change their products in undetectable ways in order to add back doors. Regarding public-key systems, he mentions adding a common exponent to a public-key exchange protocol.

... making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on.

Perhaps he thinks public-key cryptography is more open to manipulation than symmetric cryptography.

2

Schneier's blog has a couple of clarifying statements:

It is more likely that the NSA has some fundamental mathematical advance in breaking public-key algorithms than symmetric algorithms.

and

I personally am concerned about any constant whose origins I don't personally trust.

The justification of the former statement of course is very complex and relies to an extent on intuition/experience concerning the state of the art in mathematics. The latter is a more straightforward statement that certain systems have been more prone to NSA influence than others, and echoes the (probably controversial) statement from the article, "Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can".

Also bear in mind that the article is intended to present rules of thumb for a Guardian-reading audience. Of course there exist symmetric algorithms that the NSA is likely to have broken (or that are known to be breakable by the NSA), and asymmetric algorithms with no untrustworthy constants. He's not saying prefer a Caesar shift to DSA. But in Schneier's view the standard symmetric algorithms are more trustworthy than the standard asymmetric algorithms.

I believe you're correct that the increased difficulty of key distribution might mean that for many purposes a preference for symmetric crypto will not translate into actually being able to use it. He's probably mostly thinking about how you should communicate with friends/colleagues/co-conspirators, since those are the circumstances where the average user has a free choice of algorithms. No doubt his advice to those designing full crypto-systems would be more complex.

Finally note that all of this advice is specifically about NSA surveillance. If you do not regard the NSA (or its allies) as your adversary then it is not necessarily relevant, although much of it is good advice anyway. A typical hacker has a more limited set of available attacks.

Steve Jessop
  • 2,008
  • 10
  • 14
1

I can only guess, but I think it may derive from a sense that asymmetric encryption is just more mathematically shaky than symmetric.

For instance, secure symmetric cryptography is known to be theoretically possible: consider the one-time pad, which can even sometimes be used in practice. Standard symmetric ciphers are in a way a scheme for emulating this with shorter keys (especially if you consider e.g. CTR mode).
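
As a toy illustration of that emulation (the hash-based keystream below merely stands in for a real construction such as AES-CTR; it is not a vetted cipher):

```python
# Toy illustration: a one-time pad, and a short-key "stream cipher" that emulates it
# by expanding the key into a keystream. The hash-based keystream below stands in for
# a real construction such as AES in CTR mode; it is illustrative, not a vetted cipher.
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"

# One-time pad: the key is as long as the message and used once; provably secure.
pad = secrets.token_bytes(len(message))
otp_ciphertext = xor_bytes(message, pad)
assert xor_bytes(otp_ciphertext, pad) == message

# Stream-cipher idea: expand a short key + nonce into a pseudorandom keystream,
# then XOR exactly as with the pad.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

key = secrets.token_bytes(32)     # short, reusable (with fresh nonces) secret
nonce = secrets.token_bytes(16)
ciphertext = xor_bytes(message, keystream(key, nonce, len(message)))
assert xor_bytes(ciphertext, keystream(key, nonce, len(message))) == message
```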

Then consider the case of any asymmetric system. Here you need a mathematical function F(secret-to-protect, public key) that produces an encrypted result which cannot be used to infer secret-to-protect even when the public key is known to the attacker. I.e. you don't have any secret argument the function can use to protect the secret (as is the case in the symmetric scheme) - the only thing that is secret is what you want to protect! Still, for a fixed public key the function must be invertible, since otherwise the intended recipient can't decrypt; so all the information must be there. Indeed, it must be computationally easy to reverse the operation knowing some extra information related to the public key, namely the private key. It is much less clear that such functions exist - it is not known that they do. RSA and ECC are essentially proposals for such a scheme - but only proposals, since they are not proven secure. Establishing the security of any asymmetric scheme would by implication establish P != NP, and hence seems to be a very difficult problem.
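
A textbook RSA example with tiny numbers makes this trapdoor structure concrete: the forward function uses only public information, yet it can be inverted efficiently by whoever holds the private exponent.

```python
# Toy RSA with tiny primes, purely to illustrate the trapdoor idea: anyone can compute
# the forward function from the public key, but inverting it efficiently requires the
# private exponent d (or, equivalently, the factorization of n).
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

secret_to_protect = 42
ciphertext = pow(secret_to_protect, e, n)   # F(secret, public key): no secret input at all
recovered = pow(ciphertext, d, n)           # easy to invert *with* the trapdoor d
assert recovered == secret_to_protect

# Without d, an attacker is left trying to factor n (easy here, believed hard for
# 2048-bit moduli) -- but "believed hard" is exactly the unproven assumption at issue.
```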

So given that no provably secure asymmetric scheme is known (and indeed an asymmetric scheme must satisfy much more demanding mathematical requirements than a symmetric one), while provably secure examples of symmetric schemes do exist, it is reasonable, all else being equal, to be more suspicious of asymmetric schemes.

From a practical point of view, it seems that the complexity of factorization and of the discrete log problem (as employed in asymmetric ciphers) is reduced year by year (even though the best known algorithms are still super-polynomial, i.e. sub-exponential rather than polynomial, in time), whereas AES and (almost) DES are holding their ground. The biggest problem with DES was the small key size, not that the algorithm was cracked. 3DES is still secure, but quite slow compared to AES.
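
For a rough sense of how that erosion is usually quantified, the heuristic GNFS running time L_N[1/3, (64/9)^(1/3)] can be turned into a crude "bits of work" estimate. The snippet below ignores the o(1) term, so the absolute numbers are only indicative; the point is the sub-exponential trend:

```python
# Rough back-of-the-envelope estimate of GNFS factoring cost for an n-bit RSA modulus,
# using the heuristic complexity exp((c + o(1)) * (ln N)^(1/3) * (ln ln N)^(2/3))
# with c = (64/9)^(1/3) and the o(1) term ignored -- so the absolute numbers are only
# indicative, but the sub-exponential growth is visible.
import math

def gnfs_work_bits(modulus_bits: int) -> float:
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)   # convert the exp() exponent into a power of two

for bits in (512, 768, 1024, 2048, 3072):
    print(f"RSA-{bits}: ~2^{gnfs_work_bits(bits):.0f} operations (crude estimate)")
```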

Morty
  • 185
  • 6
  • As I recall, DES was expected to be secure (for some definition thereof) until some time in the 1990s, and even then was never approved for use on confidential data (at least in the US), though I can't find a reference for the former. If that's so, it held up remarkably well -- [the EFF DES Cracker was built in 1998](https://en.wikipedia.org/wiki/EFF_DES_cracker) and needed a few days on average to find the key for a given plaintext/ciphertext combination. – user Sep 07 '13 at 12:47
  • @Michael: Right. By contrast, RSA was originally thought to be much more secure than it really is, even taking into account the growth in computer power. A large part of the erosion that has taken place is due to improvement in factorization techniques (GNFS) and the advent of distributed computing. See for instance Rivest's estimates from 1990 (http://www.offshore.com.ai/security/rivest-factoring.txt). Here it is estimated that a 665-bit key would still be secure in 2015. – Morty Sep 07 '13 at 19:14
  • whereas in fact 768-bit was broken years ago by one person using a relatively small number of machines. As mentioned, DES and 3DES have largely held their ground in terms of computational power required and indeed 3DES still offers 2^112 security which is still highly secure. Not bad for an algorithm from 1975. In fact, DES (in the form of 3DES) might be considered the most secure of all ciphers, since it has stood the test of almost 40 years of intense cryptanalysis. – Morty Sep 07 '13 at 19:20
  • That's the basis for the oft-quoted statement "attacks always get better, they never get worse". [RSA-200 (663 bits) was factored in 2005 using the GNFS](https://en.wikipedia.org/wiki/RSA_numbers#RSA-200) and Wikipedia claims it took the equivalent of approximately 75 CPU-years at the time to do so. And it's the exact reason why you should not stray too close to the lower bound of what is considered "secure" at the time for the needed duration. The document you cite also specifically states *using the best known algorithms*, best known then according to it being the NFS. – user Sep 07 '13 at 19:25
  • Also, DES was never considered to be a weak algorithm. Its standard short key length was an issue, and is definitely an issue today, but other than that the algorithm is largely sound, and 3DES addresses the key length issue quite nicely even though, as you point out, [you don't reap the entire benefit from the 56-to-168 bits key length increase even with three-key EDE 3DES](https://en.wikipedia.org/wiki/Triple_DES#Security). – user Sep 07 '13 at 19:31
  • @Michael: I agree, and of course it's not a criticism of Rivest. Still it's interesting how reality beat even the 'high'-column estimates from the article. It's indisputable that cryptanalysis has been more successful against RSA, DHDL and arguably ECC than has been the case for DES/AES. And yes, DES/3DES was never considered insecure. I agree that you don't get the full benefit of the key length increase and as I mentioned it's slow compared to the same _estimated_ security of AES. But in terms of time-proven security 3DES beats AES and might be worth more attention in some circumstances. – Morty Sep 07 '13 at 19:42
-2

Public-key cryptography uses two keys and involves mathematically heavy operations, so a computer becomes slow when encrypting or decrypting large amounts of data with it.

  • 1
    The second part (about performance) is just plain **wrong** in practice. Yes, asymmetric encryption is slow and complex compared to comparable-security symmetric encryption. No, that does not cause an appreciable slowdown in a practical implementation, *because the asymmetric encryption is generally used only to encrypt the session key* and the session key in turn is used with some symmetric algorithm to encrypt or decrypt the actual payload data. Similar for digital signatures: you generate a hash of the data to be signed and then sign the hash; you don't sign the to-be-signed data directly. – user Sep 07 '13 at 12:42
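
The comment above describes the standard hybrid pattern. As a minimal sketch of it, using the third-party pyca/cryptography package (RSA-OAEP to wrap a random session key, AES-GCM for the bulk data; these particular choices are one common configuration, not the only correct one):

```python
# Minimal hybrid-encryption sketch using the third-party pyca/cryptography package:
# the slow asymmetric operation touches only a 32-byte session key, while the bulk
# payload is handled by fast symmetric AES-GCM. Parameter choices here are just one
# common configuration, not the only correct one.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (in practice the public key is distributed, the private key kept).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

payload = b"some large payload ..." * 1000

# Sender: a fresh symmetric session key encrypts the payload; RSA encrypts only that key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
bulk_ciphertext = AESGCM(session_key).encrypt(nonce, payload, None)
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Recipient: unwrap the session key with the private key, then decrypt the bulk data.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(recovered_key).decrypt(nonce, bulk_ciphertext, None) == payload
```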