4

The purpose of my question is to get a better idea of the reasonable amount of time that generating an RSA public/private keypair should take.

To be less vague, let me make the question concrete: I would use these commands to generate the keypair on a laptop running a recent 3.2.x Linux kernel.

$ openssl genrsa -out testkey.private 2048
$ openssl rsa -in testkey.private -pubout -out testkey.public

I have timed the above commands, and on average this operation takes about 0.5 seconds, which seems awfully quick to me (too quick, indeed).

Now, I am aware (and also tried to express this in the question's title) that it is not merely the time, but rather the amount of entropy and its rate of replenishment, that influences the time needed to create the keypair. At least I assume that entropy is used to generate the keys (otherwise the keys would be predictable and consequently useless, right?).

On the same system, using gpg --gen-key from GnuPG ("GNU Privacy Guard"), which can also be used to create a 2048-bit RSA keypair, takes much, much longer and even requires me to move the mouse around, etc.

So I am wondering how this adds up. In essence: how much entropy, and consequently how much time, should RSA keypair generation take on a modern system?

I am convinced there is no standard time, but surely it can be estimated in terms such as "less than 30 minutes" or "more than 10 seconds".

Given that the system keeps stores of "entropy", let's assume they are empty; the rule-of-thumb time I am looking for should therefore cover creating one RSA 2048 keypair starting from scratch, "in entropy storage terms".

Why would GnuPG's gpg take about 2 minutes to create an RSA 2048 keypair while openssl is done in half a second?

Benoit Esnard
humanityANDpeace

3 Answers

7

For security, all that you need is that the process runs with 128 or so bits of entropy as an initial seed. Entropy here means: that which the attacker does not know. In particular, there is no problem if the same source entropy is extended with a CSPRNG into the production of many key pairs.

The core model is the following: the operating system maintains an internal "entropy pool" which contains at least 128 bits of entropy. A cryptographically strong PRNG extends that internal pool into an arbitrarily long sequence of pseudorandom bits, which are, by definition, unpredictable by the attacker, because 2^128 is way too high a number for the attacker to brute force the pool, and "cryptographically strong" really means "ye cannot do better than brute force". On a Linux system, this is called /dev/urandom.
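As a toy illustration of that model (not the kernel's actual algorithm, which is more involved), a short seed can be stretched into an arbitrarily long pseudorandom stream with a hash in counter mode:

```python
import hashlib
import os

def toy_prg(seed: bytes, n: int) -> bytes:
    """Expand a short seed into n pseudorandom bytes using SHA-256
    in counter mode. Illustrative only -- not Linux's real design."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

seed = os.urandom(16)          # 128 bits of entropy as the pool...
stream = toy_prg(seed, 4096)   # ...expanded into as many bytes as needed
```

The point is that the 128-bit seed is the only thing that must be unpredictable; everything derived from it inherits that unpredictability as long as the expansion function is cryptographically strong.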

(Note: part of being "cryptographically strong" also includes "forward security": obtaining the pool contents after the key generation should not allow "rewinding" the PRNG to its state prior to the generation. Good PRNGs are forward secure.)

There is a common myth about entropy being "used up" as if it were some sort of fuel. This is not true, except in a very theoretical sense which applies only if you use the randomness exclusively for a One-Time Pad, and your attacker is God. For all other cases, including generating keys for "normal algorithms" like RSA, and not incurring divine wrath, entropy is not consumed when obtaining bytes from /dev/urandom. The /dev/random behaviour on Linux is, in fact, badly flawed (FreeBSD's /dev/random is much better). This Web page explains quite well the differences between /dev/random and /dev/urandom, and why /dev/urandom should be preferred. Read it whole.

The remaining question is then: why would GnuPG use /dev/random and not /dev/urandom for key generation? Well, there are historical reasons (GnuPG aimed at supporting systems with very poor OS-provided PRNGs) and general inertia (a common unwillingness to challenge "established wisdom", even when such "wisdom" is mythical); but the biggest reason is that it is your own fault. GnuPG does cryptography like medicine: not only must it protect the emails, but the human user must also be convinced that some powerful cryptography just occurred. Pills work better when they taste foul; similarly, users want key generation to look like an expensive endeavour. Your own question is a testament to that feeling: though you don't actually know precisely what happens within a CSPRNG (that's a matter for cryptographers), you still believe that "0.5 seconds is too quick". GnuPG complies with your whims by making a big show of the key generation process. This is all security theater, and it happens because you want it that way.

Tom Leek
  • `/dev/random` isn't flawed, it's just badly named. It really is an entropy collector, rather than a random generator. There isn't much of an overlap in the use cases between `/dev/random` and `/dev/urandom`, as they have totally different uses and purposes. If you understand what they really are, picking which one to use is fairly straightforward. OpenBSD's aliasing them into two names is in fact problematic; it would have been better to provide just one device rather than two equal ones. – Lie Ryan Feb 28 '18 at 14:22
  • @LieRyan OpenBSD aliasing them into two names was done for compatibility, as providing one device would break programs that expect one or the other. It's easy enough just to create another device file. – forest Sep 22 '18 at 11:10
2

By measuring the reads that gpg and openssl make from the files /dev/random and /dev/urandom, which seem to be the entropy/randomness sources used, the answer appears to be that

for a 2048-bit RSA keypair we need/use

  • 300 bytes "of /dev/random randomness" in the case of gpg
  • 32 bytes "of /dev/urandom randomness" in the case of openssl

The time difference (gpg: ca. 30 secs vs. openssl: ca. 0.5 secs) is therefore explained by the fact that /dev/random blocks when drained of randomness.

humanityANDpeace
  • 300 bytes (2400 bits!) is incredibly overkill in all situations, no matter how big the private key is. All you would ever need is 256 bits, and even that is more than enough for the average private key (which is about as secure as a 112 bit symmetric key, at least for RSA 2048). Any further needed random data (since it is true that you need more than 256 bits worth of _data_) can be generated with a CSPRNG fed those initial 256 bits. – forest Sep 22 '18 at 11:25
1

To add to what humanityANDpeace said, /dev/urandom is a CSPRNG that draws from the kernel's internal entropy pool (the same pool used for /dev/random, which is also a CSPRNG, but one that blocks when its entropy estimate reaches 0) and extends it with a CSPRNG algorithm. This means reading from /dev/random is very slow unless you have a hardware RNG feeding it.

GnuPG uses the raw data read from /dev/random for key generation. It uses /dev/random because it wants to ensure very high-quality entropy for the keys, and /dev/urandom is not a good source of entropy on some systems (or at least was not; I'm not sure whether this is still true).

OpenSSL, on the other hand, uses 32 bytes from /dev/urandom to seed its own internal CSPRNG, then generates the data needed for key generation from there. This explains the faster operations.
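That flow can be sketched as follows: one 32-byte read from the OS serves as the seed, and all further key-generation randomness comes from an application-level DRBG. The class below is a hypothetical stand-in for OpenSSL's real DRBG, using SHA-256 in counter mode purely for illustration:

```python
import hashlib
import os

class ToyDRBG:
    """Stand-in for an application-internal DRBG (e.g. OpenSSL's):
    seeded once from the OS, then queried for all later random bytes.
    Illustrative only -- not OpenSSL's actual construction."""

    def __init__(self, seed: bytes):
        self._seed = seed
        self._counter = 0

    def read(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            out += hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")
            ).digest()
            self._counter += 1
        return bytes(out[:n])

drbg = ToyDRBG(os.urandom(32))   # one 32-byte read from the OS pool
candidate = drbg.read(256)       # e.g. 2048 bits for a prime candidate
```

Because the seed read never blocks and everything after it is pure computation, the whole key generation finishes as fast as the CPU can do the arithmetic.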

Unless a weakness is found in the random data generated by /dev/urandom that does not also affect /dev/random, keys generated in either fashion should be secure. /dev/urandom is believed to be cryptographically secure, and is used for many other operations, so a weakness in it would be a big deal.

David
  • It's not true that `/dev/urandom` was ever a poor source of randomness. It was _always_ a good source. – forest Sep 23 '18 at 01:09