
I vaguely remember some story about some cipher (I think it was 3DES) having an "issue" where, if the key happened to be especially bad (like 0x00000...), the resulting encryption becomes trivially attackable. The pathological keys are known, but there are only a very small number of them, so the probability of picking one at random is very low.

One's instinct might be to check the key, and if it's one of these few pathological cases, generate a new one.
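That instinct can be sketched in a few lines of Python. The weak-key list below is the four classic single-DES weak keys (with their odd-parity bits) as commonly published; treat both the list and the helper name as illustrative, not authoritative:

```python
import secrets

# The four single-DES weak keys (odd-parity form), as commonly listed;
# double-check against a reference before relying on this set.
DES_WEAK_KEYS = {
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("FEFEFEFEFEFEFEFE"),
    bytes.fromhex("E0E0E0E0F1F1F1F1"),
    bytes.fromhex("1F1F1F1F0E0E0E0E"),
}

def generate_des_key() -> bytes:
    """The 'retry on unlucky key' instinct: draw random 8-byte keys
    until one is not on the weak-key list."""
    while True:
        key = secrets.token_bytes(8)
        if key not in DES_WEAK_KEYS:
            return key
```

The question is whether a loop like this actually helps, or just adds a code path that almost never runs and can itself go wrong.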

However, the chances of randomly generating one of these pathological keys are so low that it's more likely that, in trying to avoid the problem, you introduce a new one, making things worse than just leaving these pathological cases to chance.
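For a sense of scale, the odds of hitting one of the four weak keys can be computed directly (DES has 2^56 effective keys):

```python
# Probability of randomly drawing one of the 4 DES weak keys
# from the 2**56 possible effective keys.
p = 4 / 2**56
print(p)  # roughly 5.6e-17
```

At that probability you'd expect to generate quintillions of keys before ever seeing a weak one.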

And I think when I read the story, it went on to explain precisely how an implementation of this "retry on unlucky key" behavior could be exploited.

Can someone please fill in my memory a bit?

Phil Frost
    With regards to DES, you're probably thinking of the "[weak keys](https://en.wikipedia.org/wiki/Data_Encryption_Standard#Minor_cryptanalytic_properties)". They're not actually cryptographically weak, they just have the interesting property that you can decrypt the data by re-running the encryption operation, where with every other key, encryption and decryption are distinct operations. – Mark May 22 '21 at 03:21
  • I vaguely remember coming across something on this subject too. But I thought it had to do with elliptic curve cryptography, where there are rare points on some curves that produce incorrect results; and most implementations either don't account for these points, or they get it wrong. – mti2935 May 22 '21 at 12:26

0 Answers