2

I have 2 questions that are somewhat related:

  1. When you try to guess the key in a program (in a brute-force way), how does the program know that it got it right and has deciphered the message? (I'm not talking about the obvious files with headers, but just a long string of text.)
  2. What is the problem with cascade/multiple encryption? If the only way to answer question 1 is to use something like a language processor, then aren't all multiple encryptions unbreakable by brute-force methods?
schroeder
shoham

3 Answers

2

For multiple encryption, there was a previous question: Is multiple encryption a good idea?

As to how you know whether you decrypted correctly -- usually you look at the output and decide if it makes sense. E.g., does it have the file format of a gif/jpeg/zip file? If it was plain text, is it all ASCII or properly encoded UTF-8? Ignoring non-ASCII code points, only 96 of the 256 possible values of a byte are printable. So if you decrypted a 1000-character message with the wrong key, the chance that every byte is printable ASCII by random chance is only (96/256)^1000, or about 1 in 10^426. Even allowing UTF-8, the bytes have to occur in certain combinations; e.g., if the first byte of a sequence is of the form 1110xxxx (where each x can be either 1 or 0), the next two bytes must both be of the form 10xxxxxx to be valid UTF-8. These sorts of things rarely happen by chance.
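To make that concrete, here is a minimal Python sketch of this kind of plausibility test (the function name and the exact byte ranges are illustrative choices, not part of any standard):

```python
def looks_like_text(candidate: bytes) -> bool:
    # Printable ASCII plus tab/newline/carriage return: roughly 96 of
    # the 256 possible byte values.
    printable = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
    if all(b in printable for b in candidate):
        return True
    # Fall back to checking for well-formed UTF-8; random bytes almost
    # never satisfy the 1110xxxx / 10xxxxxx continuation pattern.
    try:
        candidate.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```

A wrong key turns the output into effectively random bytes, so for any plaintext of realistic length this check rejects almost every bad guess.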

There's also often padding (which must have a certain form in the plaintext), checksums, or Message Authentication Codes that help distinguish the garbage produced by a wrong key from actual data.
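A MAC makes the test explicit rather than statistical. A sketch using Python's standard library (the choice of HMAC-SHA256 here is just for illustration):

```python
import hashlib
import hmac

def tag_is_valid(mac_key: bytes, candidate_plaintext: bytes, tag: bytes) -> bool:
    # Recompute the MAC over the candidate plaintext; decrypting with
    # the wrong key produces garbage whose tag will not match.
    expected = hmac.new(mac_key, candidate_plaintext, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the mismatch occurs.
    return hmac.compare_digest(expected, tag)
```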


EDIT: If you encrypted a file twice with two different (symmetric) keys k1 and k2, producing c = E(k1, E'(k2, m)), there isn't a way to brute-force guess k1 by just examining the output of D(k1', c) for patterns, because that output will appear random for every guess k1'. You would have to brute-force k1 and k2 simultaneously to recover any properties of m.
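To illustrate the structure of that search, here is a toy sketch with deliberately tiny 8-bit keys and XOR standing in for a real cipher (purely to show why the keys must be guessed as a pair; real keys are 128 bits or more):

```python
from itertools import product

def E(k: int, data: bytes) -> bytes:
    # Toy stand-in for a cipher: XOR every byte with an 8-bit key.
    return bytes(b ^ k for b in data)

D = E  # XOR is its own inverse

def brute_force_double(c: bytes, plausible):
    # D(k1', c) alone looks random for every guess k1', so the generic
    # attack must sweep all (k1, k2) pairs and test the innermost output.
    for k1, k2 in product(range(256), repeat=2):
        candidate = D(k2, D(k1, c))
        if plausible(candidate):
            return k1, k2, candidate
    return None

c = E(17, E(42, b"attack at dawn"))
# Note: XOR layers collapse, so any pair with k1 ^ k2 == 17 ^ 42 works
# here; a real cipher would not compose into a single equivalent key.
print(brute_force_double(c, lambda m: m == b"attack at dawn"))
```

With two 8-bit keys that is 2^16 pairs; with two 128-bit keys it is 2^256.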

However, it's extremely infeasible to guess a high-entropy key by brute force. The amount of work to brute-force a 128-bit AES key is about 2^128; e.g., a billion (10^9) computers each testing a trillion (10^12) keys per second for 1000 years would have only about a 1 in 10 million chance of finding the right key. So the gain from multiple encryption in this scenario doesn't seem to be much.
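The arithmetic behind that estimate, as a quick sanity check (using a Julian year of 31,557,600 seconds):

```python
keys_tested = 10**9 * 10**12 * 1000 * 31_557_600  # computers * keys/s * years * s/year
print(keys_tested / 2**128)  # ~9.3e-08, i.e. roughly 1 in 10 million
```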

On the other hand, if you have asymmetric encryption with a known public key, you can quickly check whether a guessed private key is correct: encrypt a test message with the known public key and see whether decrypting it with the guessed private key recovers the original message.
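A toy demonstration of that check with textbook RSA and deliberately tiny numbers (insecure, purely to show the test; n = 61 × 53 and e = 17 are classic example values):

```python
n, e = 3233, 17  # known public key: n = 61 * 53, e = 17

def private_exponent_is_correct(d_guess: int) -> bool:
    m = 42                          # any test message smaller than n
    c = pow(m, e, n)                # encrypt with the known public key
    return pow(c, d_guess, n) == m  # only a correct d recovers m

print(private_exponent_is_correct(2753))  # True: 2753 = 17^-1 mod 3120
print(private_exponent_is_correct(999))   # False
```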

dr jimbob
  • My question is: if I encrypt some text twice, or many times, how do you know when you've decrypted the encrypted text? Another question: in the question you sent me to, people say that if one of the algorithms I encrypt with is weak, it damages the whole encryption and I would be better off using single encryption. Why is that? – shoham Apr 25 '14 at 09:11
  • @shoham - Imagine you were sending encrypted messages to a remote server that auto-decrypted the messages (to act out commands or what not) with a pre-shared AES-128 key and didn’t use TLS. Your encrypted data would be safe from [heartbleed](http://security.stackexchange.com/a/55350/2568) attacks. But if you turned on TLS using OpenSSL v1.0.1a-f all of a sudden your now multiply encrypted data is vulnerable. Your HTTP server is leaking memory potentially with your secret keys or fully decrypted messages. Hence TLS being broken could break TLS+AES, while AES alone would have been secure. – dr jimbob Apr 26 '14 at 04:36
  • 1
    @shoham - Simply `E(k1, E'(k2, m))` mathematically won't be weaker than single encryption (where E and E' are different encryption functions with different keys). But there is a chance that a complicated multiple encryption scheme opens you up to more side-channel attacks from bad implementations (think heartbleed, or a padding oracle, or some new attack that you add). Thomas Pornin, who recommended "don't do it" in the 2012 post, [more recently answered more favorably about using layers of encryption if you "believe in layers, then add layers"](http://security.stackexchange.com/a/56395/2568). – dr jimbob Apr 26 '14 at 04:49
  • @shoham - I'm not trying to say he's inconsistent; I just brought up that post because it ends with the point that it's relatively obvious that layers of encryption don't weaken security (except in the way he mentioned in 2012 and I mentioned in the Heartbleed example: you potentially open up more side-channel attacks). Granted, the side-channel attacks would still be there if you used a single flawed encryption algorithm and chose the wrong one. – dr jimbob Apr 26 '14 at 04:53
  • Cool. But you forgot to answer my question: How do you know that you've decrypted a message when it's multiply encrypted? You can't check if it's readable because it won't be. You must have both keys, don't you? – shoham Apr 26 '14 at 17:53
  • @shoham - Did you see my edit? Sorry my comments were getting long. – dr jimbob Apr 26 '14 at 21:06
  • "can you encrypt a message with the known public-key and then decrypt it with your guessed private key to recover the original message?" - Isn't this the point of asymmetric encryption? What do you mean? – shoham Apr 27 '14 at 03:53
  • @shoham - The point was that if you guess the private key for an encrypted message, it's trivial to check whether it's right or not. You just take the public key, encrypt some random message with it, and try to decrypt that with your guess for the private key. Is it the original message? Yes? Then you guessed right, and you can use the private key to decrypt the first encrypted message. No? You guessed wrong. Granted, in reality you brute-force RSA public keys by trying to factor the modulus (similarly, for ElGamal you try solving the discrete log). – dr jimbob Apr 27 '14 at 04:28
1

We are actually using multiple layered protocol techniques for our sentient AI-AI Inferix Project communications. Most people say you should not layer, as it introduces weaknesses in the form of more opportunities for the attacker.

But we think that setting up dynamic or hidden protocols within protocols can produce (at least) doubt as to whether a decryption has really reached the "base level" of encryption and revealed "the true message". For example, on a simple level, a message decrypted one level down might reveal a plausible message, and Eve might think she has eavesdropped on the real thing, but the real message might be down a few more levels.

Anyway ... good luck with peeling away the layers to look for a real but smaller hidden message inside a much bigger message or image, hidden by prearranged protocols that can be influenced by different data streams (e.g., by number broadcasts, number files, etc.).

kbo me :)

1

Multiple encryption doesn't increase bit strength any more than using a longer key would. You seem to think an attacker could only try one key at a time, but they wouldn't: they would try combinations of keys to see if the output of multiple rounds produces the plaintext. You might be thinking that would be "hard", but it isn't any harder than an equivalent single key.

For example, symmetric encryption with a 256-bit key will take on the order of 2^256 attempts to find the proper key by brute force. Doing two rounds with 128-bit keys will take 2^128 × 2^128 = 2^256 attempts as well. Now you might say, well, I would use two rounds with 256-bit keys. OK, but that requires 512 bits worth of keys, and it isn't any stronger than a single round with a 512-bit key.
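As a quick check of that exponent arithmetic:

```python
# Two independent 128-bit keys span the same search space as one 256-bit key:
assert 2**128 * 2**128 == 2**256
# Likewise, two 256-bit keys are equivalent to a single 512-bit key:
assert 2**256 * 2**256 == 2**512
```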

The reality is that anything beyond 128 bits is beyond brute force. There isn't sufficient energy in our star to count to 2^256 before it burns out, much less to brute-force a key. Strong cryptography is strong. You don't need to invent clever ways to obfuscate it.

You could say: Hello NSA, here is the encrypted file. It uses AES-256 in CBC block mode and here is the IV. Try and break it. Strong encryption is strong. Obligatory reference to the xkcd wrench scenario aside, they aren't breaking it. They could bankrupt the country and build a planet-sized computer and they still wouldn't break it. 2^256 is a lot bigger than you think. So if 2^128 or 2^256 is so large as to be beyond brute force, why would you need multiples of that? It protects against a non-existent threat.

The weak link is how you generated that key. Was it derived from a password, maybe one as weak as "P@ssw0rd!"? That is a lot easier to attack than a random 256-bit key. The "weakness" in modern cryptography isn't the primitives; they are well understood. It is all the implementation details. Did you use CTR block mode and reuse the IV? You're weak. Is the key derived from a weak password? You're weak. Did you do a simple unsalted hash of a password instead of using a KDF? You're weak. Does your PRNG have a flaw or backdoor? You're weak. Nobody brute-forces keys with 128 bits of entropy. Nobody. So your solution is one in search of a problem. If your implementation is flawed, it very likely is flawed for multiple keys as well, so you are still weak. Simple is good; there is less to get wrong.
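For the password case, the difference described above looks roughly like this in Python's standard library (the iteration count and salt size are illustrative; a memory-hard KDF such as scrypt or Argon2 would be stronger still):

```python
import hashlib
import os

password = b"P@ssw0rd!"

# Weak: a single unsalted hash -- trivially attacked with precomputed tables.
weak_key = hashlib.sha256(password).digest()

# Better: a salted, heavily iterated KDF (PBKDF2-HMAC-SHA256 here).
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
```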

forest
Gerald Davis
  • If two 128-bit keys (`K1`, `K2`) are used to encrypt a message like `c=E(K1, E(K2, m))` then to brute force `c` to recover `m`, you would need to check all the possible combinations of the two keys with a space of (2^128) * (2^128) = 2^(256) possibilities. This is equivalent to brute-forcing a 256-bit key. (Note you said (2^128)^128 which is equal to 2^(128\*128) = 2^16384 -- this mistake was originally brought up in another answer by a user named Joe.) – dr jimbob Apr 28 '16 at 21:36