I'm reading s2n's docs (https://github.com/awslabs/s2n) where it is claimed that:

https://github.com/awslabs/s2n#compartmentalized-random-number-generation

The security of TLS and its associated encryption algorithms depends upon secure random number generation. s2n provides every thread with two separate random number generators. One for "public" randomly generated data that may appear in the clear, and one for "private" data that should remain secret. This approach lessens the risk of potential predictability weaknesses in random number generation algorithms from leaking information across contexts.
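The idea can be sketched in a few lines. The following is an illustrative Python sketch only, not s2n's actual C implementation: the `HashDRBG` class and all names are invented for the example, and the hash-counter construction is a stand-in for whatever DRBG the real library uses. The point is the shape of the design: each thread holds two independently seeded generators, one for values that may go on the wire and one for secrets.

```python
import hashlib
import os
import threading


class HashDRBG:
    """Toy hash-counter DRBG (illustration only, not s2n's design)."""

    def __init__(self):
        self._seed = os.urandom(32)  # independent seed per instance
        self._counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self._counter += 1
            out += hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")
            ).digest()
        return out[:n]


class PerThreadRNGs(threading.local):
    """Each thread that touches this object gets its own pair of DRBGs:
    one for 'public' values (nonces sent in the clear) and one for
    'private' values (keys). Compromise of the public stream reveals
    nothing about the private one."""

    def __init__(self):
        self.public = HashDRBG()
        self.private = HashDRBG()


rngs = PerThreadRNGs()
nonce = rngs.public.random_bytes(16)   # may appear on the wire
key = rngs.private.random_bytes(32)    # must stay secret
```

Because the two generators share no state, an attacker who fully reconstructs the public stream learns nothing about the private one.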

How does this help against common-mode failures, or against malicious actors subverting algorithms? Does having two RNGs introduce any extra attack surface (especially given that the actual crypto is outsourced by the s2n library to external libraries like OpenSSL)?

Deer Hunter

2 Answers

Let's look at how such a thing might play out:

A classic example is Dual EC DRBG, a back-doored RNG that enables a general attack against TLS. The other half of this attack, beyond the vulnerable RNG itself, is a mechanism for revealing to the attacker the state of the RNG at the time of use, allowing the attacker to predict the remaining "random" content produced by the victim's RNG.

Enter a non-standard feature implemented in TLS called Extended Random, through which a TLS server dumps a bunch of its RNG's output into the TLS negotiation, where it is revealed to anyone eavesdropping on the conversation. It's been calculated that this feature improves the Dual EC attack by a factor of about 64,000.

So a system which uses a different RNG for the Extended Random component than for key derivation would be significantly (by that same factor of roughly 64,000) better protected against this attack.

But it is not completely immune: the RNG is still vulnerable in itself, even if you keep your "high-value" randomness separate. A second RNG instance is useful, but not foolproof.

The goal of such a system is to protect against the unknown, so measuring its usefulness is difficult at best. And other protections may work better.

For example, a uniformly random value XORed with any independent value is still uniformly random. So if your application runs multiple independent RNGs and XORs their output, the quality of the randomness you get is at least that of the highest-quality input RNG. If an application XORs a whole list of RNGs of varying quality, then as long as any one of them is safe (and independent of the others), the combined output will be safe as well.
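The XOR-combining idea above fits in a few lines. This is a minimal sketch: the two standard-library sources (`os.urandom` and `secrets.token_bytes`) are used purely as stand-ins for two independent RNGs, since in a real deployment you would combine genuinely independent generators.

```python
import os
import secrets


def combined_random(n: int) -> bytes:
    """XOR the output of two byte sources of length n.

    If the sources are independent, the result is at least as
    unpredictable as the more unpredictable of the two: an attacker
    who fully controls or predicts one source still faces the other.
    """
    a = os.urandom(n)           # stand-in for RNG #1
    b = secrets.token_bytes(n)  # stand-in for RNG #2
    return bytes(x ^ y for x, y in zip(a, b))


nonce = combined_random(32)
```

The independence caveat matters: XORing two streams derived from the same seed (or, as here, ultimately the same OS entropy pool) gives no extra protection, so the sketch illustrates the construction rather than a real two-source deployment.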

So how good would this technique be in protecting against unknown threats? We don't know. But what we can say is that it would protect against known threats such as the Dual EC backdoor mentioned above.

tylerl

There are some classes of PRNG weaknesses which require the attacker to obtain multiple outputs in order to predict the next number in the sequence. Using two PRNGs compartmentalises this risk: even if the "public" stream can be predicted, the "secret" numbers may still be unpredictable.
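To make that class of weakness concrete, here is a toy example (nothing to do with s2n's actual RNG; the constants are the classic Numerical Recipes LCG parameters) of a linear congruential generator whose entire future output is determined by any one observed output:

```python
# Toy linear congruential generator (LCG). Once an attacker observes a
# single full-width output, every later output is exactly predictable.
M = 2 ** 32
A = 1664525       # Numerical Recipes multiplier
C = 1013904223    # Numerical Recipes increment


def lcg(state: int) -> int:
    return (A * state + C) % M


# Victim generates "random" values and leaks one (e.g. as a public nonce):
state = 123456789
leaked = lcg(state)

# Attacker simply runs the recurrence forward from the leaked value
# and predicts the victim's next output exactly:
predicted = lcg(leaked)
victim_next = lcg(lcg(state))
assert predicted == victim_next
```

If the leaked values came from a "public" generator while keys were drawn from a separate "secret" generator, this attack would compromise only the public stream.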

Honestly, I wouldn't consider it that significant, but it could harden the secret keys in certain scenarios. It provides no benefit against an entire class of flaws where the PRNG is unpredictable but repeatable, since that repeatability can lead to key compromise through information leaks.

Gerald Davis