Strong cryptography
Strong cryptography or cryptographically strong are general terms applied to cryptographic systems or components that are considered highly resistant to cryptanalysis.
Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and review, preferably in a public forum. Good algorithms and protocols are required, and good system design and implementation are needed as well. For instance, the operating system on which the cryptographic software runs should be secured as carefully as possible. Users may handle passwords insecurely, place too much trust in 'service' personnel, or simply misuse the software. (See social engineering.) "Strong" is thus an imprecise term and may not apply in particular situations.
Background
The use of computers changed the process of cryptanalysis, famously with Bletchley Park's Colossus. But just as the development of digital computers and electronics helped in cryptanalysis, it also made possible much more complex ciphers. It is typically the case that use of a quality cipher is very efficient, while breaking it requires an effort many orders of magnitude larger - making cryptanalysis so inefficient and impractical as to be effectively impossible.
Since the publication of the Data Encryption Standard, the Diffie-Hellman key exchange, and the RSA algorithm in the 1970s, cryptography has had deep connections with abstract mathematics and has become a widely used tool in communications, computer networks, and computer security generally.
Cryptographically strong algorithms
This term "cryptographically strong" is often used to describe an encryption algorithm, and implies, in comparison to some other algorithm (which is thus cryptographically weak), greater resistance to attack. But it can also be used to describe hashing and unique identifier and filename creation algorithms. See for example the description of the Microsoft .NET runtime library function Path.GetRandomFileName.[1] In this usage, the term means "difficult to guess".
An encryption algorithm is intended to be unbreakable (in which case it is as strong as it can ever be), but might be breakable (in which case it is as weak as it can ever be), so there is not, in principle, a continuum of strength as the idiom would seem to imply: Algorithm A is stronger than Algorithm B which is stronger than Algorithm C, and so on. The situation is made more complex, and less subsumable into a single strength metric, by the fact that there are many types of cryptanalytic attack and that any given algorithm is likely to force the attacker to do more work to break it when using one attack than another.
There is only one known unbreakable cryptographic system, the one-time pad, which is not generally possible to use because of the difficulties involved in exchanging one-time pads without their being compromised. So any encryption algorithm can be compared to the perfect algorithm, the one-time pad.
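As an illustration of the principle, a one-time pad combines the message with an equally long, truly random, never-reused key. The sketch below (Python, for illustration only) shows the XOR construction; its security rests entirely on the pad being random, secret, as long as the message, and used only once.

```python
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # random key, same length as the message

ciphertext = bytes(m ^ k for m, k in zip(message, pad))    # encrypt: XOR with the pad
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt: XOR with the pad again

assert recovered == message
```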
The usual sense in which the term is (loosely) used is in reference to a particular attack, brute-force key search, especially in explanations for newcomers to the field. Indeed, with this attack (always assuming keys to have been randomly chosen), there is a continuum of resistance depending on the length of the key used. But even so there are two major problems: many algorithms allow use of different length keys at different times, and any algorithm can forgo use of the full key length possible. Thus, Blowfish and RC5 are block cipher algorithms whose design specifically allowed for several key lengths, and which cannot therefore be said to have any particular strength with respect to brute-force key search. Furthermore, US export regulations restricted key length for exportable cryptographic products, and in several cases in the 1980s and 1990s (e.g., famously in the case of Lotus Notes' export approval) only partial keys were used, decreasing 'strength' against brute-force attack for those (export) versions. More or less the same thing happened outside the US as well, as for example in the case of more than one of the cryptographic algorithms in the GSM cellular telephone standard.
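To see why key length dominates resistance to brute-force search, it helps to compare the sizes of the key spaces involved. The sketch below makes the orders of magnitude concrete; the attack rate of one trillion keys per second is an arbitrary assumption used only for illustration.

```python
# Rough brute-force cost comparison (assumed rate: 10**12 keys tried per second).
SECONDS_PER_YEAR = 365 * 24 * 3600
RATE = 10**12  # hypothetical attacker speed, keys per second

for bits in (40, 56, 128, 256):
    keys = 2**bits
    years = keys / RATE / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: {keys:.3e} keys, ~{years:.3e} years at the assumed rate")
```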
The term is commonly used to convey that some algorithm is suitable for some task in cryptography or information security, but also resists cryptanalysis and has no, or fewer, known security weaknesses. Tasks are varied, and might include (a brief illustration follows the list):
- generating randomness
- encrypting data
- providing a method to ensure data integrity
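The following is a compact sketch of the three tasks above in Python. Random generation and integrity protection use only the standard library (secrets, hmac, hashlib); the encryption step assumes the widely used third-party `cryptography` package and its AES-GCM construction, chosen here purely as an example.

```python
import hashlib
import hmac
import secrets

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

# 1. Generating randomness: draw key material from a CSPRNG.
mac_key = secrets.token_bytes(32)

# 2. Encrypting data: AES-256-GCM, as provided by the `cryptography` package.
enc_key = AESGCM.generate_key(bit_length=256)
nonce = secrets.token_bytes(12)  # a GCM nonce must never repeat for the same key
ciphertext = AESGCM(enc_key).encrypt(nonce, b"attack at dawn", None)
plaintext = AESGCM(enc_key).decrypt(nonce, ciphertext, None)

# 3. Ensuring data integrity: an HMAC-SHA-256 tag over the message.
tag = hmac.new(mac_key, b"attack at dawn", hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(mac_key, plaintext, hashlib.sha256).digest())
```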
"Cryptographically strong" would seem to mean that the described method has some kind of maturity, perhaps even having been approved for use against different kinds of systematic attack in theory and/or practice, and indeed that the method may resist those attacks long enough to protect the information carried (and whatever stands behind the information) for a useful length of time. But due to the complexity and subtlety of the field, neither is almost ever the case. Since such assurances are not actually available in real practice, sleight of hand in language which implies that they are will generally be misleading.
There will always be uncertainty as advances (e.g., in cryptanalytic theory or merely affordable computer capacity) may reduce the effort needed to successfully use some attack method against an algorithm.
In addition, actual use of cryptographic algorithms requires their encapsulation in a cryptosystem, and doing so often introduces vulnerabilities which are not due to faults in an algorithm. For example, essentially all algorithms require random choice of keys, and any cryptosystem which does not provide such keys will be subject to attack regardless of any attack resistant qualities of the encryption algorithm(s) used.
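The point about random key choice can be illustrated by contrasting a general-purpose pseudorandom generator with a cryptographically secure one. The sketch below is an illustration in Python, not a recipe.

```python
import random
import secrets

# Weak: the Mersenne Twister behind `random` is not a CSPRNG; its internal state
# can be reconstructed from enough observed outputs, so keys derived from it are
# attackable regardless of how strong the cipher that later uses them is.
weak_key = random.getrandbits(128).to_bytes(16, "big")

# Better: `secrets` (or os.urandom) draws from the OS cryptographic random source.
strong_key = secrets.token_bytes(16)
```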
Legal issues
Since use of strong cryptography makes the job of intelligence agencies more difficult, many countries have enacted law or regulation restricting or simply banning the non-official use of strong cryptography. For instance, the United States has defined cryptographic products as munitions since World War II and has prohibited export of cryptography beyond a certain 'strength' (measured in part by key size), and Russia banned its use by private individuals in 1995.[2] It is not clear if the Russian ban is still in effect. France had quite strict regulations in this field, but has relaxed them in recent years.
Examples
- PGP is generally considered an example of strong cryptography, with versions running under most popular operating systems and on various hardware platforms. The open source standard for PGP operations is OpenPGP, and GnuPG is an implementation of that standard from the FSF. However, the IDEA signature key in classical PGP is only 64 bits long and is therefore no longer immune to collision attacks; OpenPGP therefore uses SHA-2 for integrity and AES for encryption.
- The AES algorithm is considered strong after being selected in a lengthy selection process that was open and involved numerous tests.
- Elliptic curve cryptography is another family of systems, based on the algebraic structure of elliptic curves.
Examples that are not considered cryptographically strong include:
- The DES, whose 56-bit keys allow attacks via exhaustive search.
- Triple DES (3DES / EDE3-DES): see DES above; its 64-bit block size also exposes it to the now well-known "Sweet32" birthday attack.
- Wired Equivalent Privacy which is subject to a number of attacks due to flaws in its design.
- SSL v2 and v3. TLS 1.0 and TLS 1.1 are also now deprecated (see RFC 7525) because of design flaws that cannot be remedied and because they do not provide elliptic-curve (EC) handshakes, modern cryptography, or CCM/GCM cipher modes. PCI DSS 3.2 likewise disallows TLS 1.0 and 1.1 for commercial business/banking implementations on web front ends; only TLS 1.2 and TLS 1.3 are allowed and recommended, and modern ciphers, handshakes and cipher modes must be used exclusively.
- The MD5 and SHA-1 hash functions.
- The RC4 stream cipher.
- The Clipper Chip, a failed initiative of the U.S. government that included key escrow provisions, allowing the government to gain access to the keys.
- The 40-bit Content Scramble System used to encrypt most DVD-Video discs.
- Almost all classical ciphers.
- Most rotor machine ciphers, such as the Enigma machine.
- Flawed RSA implementations, whether those that generate weak, biased keys or those vulnerable to padding-oracle attacks such as the "ROBOT" vulnerability (the return of the Bleichenbacher oracle).
- RSA keys weaker than 2048 bits
- DH keys weaker than 2048 bits
- ECDHE keys shorter than 192 bits; in addition, not all of the older named curves still in use for this have been vetted as "safe".
- DHE/EDHE when the server uses or re-uses known default prime values, which makes the key exchange weak.
- SHA-1 (and anything weaker) used for integrity, as it is no longer immune to collision attacks.
- CBC block-cipher mode for TLS ciphertext, which is considered weak (CCM/GCM modes are now recommended).
The latest version of the TLS protocol (version 1.3), used to secure Internet transactions, is generally considered strong. Several vulnerabilities exist in previous versions, including demonstrated attacks such as POODLE. Worse, some cipher suites in those older versions were deliberately weakened to use a 40-bit effective key to allow export under pre-1996 U.S. regulations.
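In practice, applications enforce such protocol-version floors explicitly. The sketch below shows one way this can look with Python's standard ssl module; example.org is a placeholder host used only for illustration.

```python
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0 and TLS 1.1

# example.org is a placeholder; any TLS server could be used here.
with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```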
Notes
- Path.GetRandomFileName Method (System.IO), Microsoft
- Farber, Dave (1995-04-06). "A ban on cryptography in Russia (fwd) [Next .. djf]". Retrieved 2011-02-14.