Entropy is required in the following sense: if a PRNG has only n bits of entropy, then it has (conceptually) only 2^n possible internal states, and thus could be broken through brute-force enumeration of these 2^n states, provided that n is low enough for such an attack to be feasible.
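To see how cheap such an attack can be, here is a toy illustration in C: a deliberately weak, made-up PRNG with only a 24-bit internal state, whose state is recovered by enumerating all 2^24 candidates. (The PRNG here is invented for the example; it is not anything the kernel actually uses.)

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Toy PRNG with a 24-bit internal state -- for illustration only. */
static uint32_t toy_next(uint32_t *state) {
    *state = (*state * 1103515245u + 12345u) & 0xFFFFFFu; /* 24-bit LCG */
    return *state >> 16;                                  /* emit 8 bits */
}

int main(void) {
    /* "Secret" state: only 24 bits of entropy. */
    uint32_t secret = 0x5A5A5Au;
    uint32_t s = secret, observed[8];
    for (int i = 0; i < 8; i++) observed[i] = toy_next(&s);

    /* Brute force: enumerate all 2^24 candidate states and check
     * which one reproduces the observed output stream. */
    for (uint32_t guess = 0; guess < (1u << 24); guess++) {
        uint32_t t = guess;
        int ok = 1;
        for (int i = 0; i < 8 && ok; i++)
            ok = (toy_next(&t) == observed[i]);
        if (ok) {
            printf("recovered state: 0x%06" PRIX32
                   " (actual: 0x%06" PRIX32 ")\n", guess, secret);
            break;
        }
    }
    return 0;
}
```

This runs in a fraction of a second on any modern machine; at 128 bits of entropy, the same enumeration becomes utterly infeasible.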
Then things become complex, because the "entropy level" reported by the kernel is NOT the entropy. Not the one I talk about above.
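On Linux, that reported level can be read from /proc/sys/kernel/random/entropy_avail, e.g. with `cat` or a trivial C program like the one below. The point of the rest of this answer is that the number you get back is an accounting artefact, not the entropy that actually matters:

```c
#include <stdio.h>

int main(void) {
    /* Linux-specific: the kernel's *estimate* of available entropy. */
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
    if (!f) { perror("fopen"); return 1; }
    int avail;
    if (fscanf(f, "%d", &avail) == 1)
        printf("kernel-reported entropy estimate: %d bits\n", avail);
    fclose(f);
    return 0;
}
```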
From a cryptographic point of view, the algorithm that computes the output of both /dev/random and /dev/urandom is a (supposedly) cryptographically secure PRNG. What matters for practical security is the accumulated entropy of the internal state. Barring cryptographic weaknesses in that PRNG (none is known right now), that entropy can only increase or remain constant over time. Indeed, "entropy" can also be called "that which the attacker does not know", and if the PRNG is indeed cryptographically secure, then, by definition, observing gigabytes of output yields only a negligible amount of information about the internal state. That's what cryptographically secure means.
Therefore, if /dev/urandom had 200 bits of entropy at some point since last boot, then it still has 200 bits of entropy, or even more.
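To make the "output reveals nothing about the state" idea concrete, here is a minimal sketch of a hash-based PRNG, using OpenSSL's SHA-256 for convenience. This is not the Linux kernel's actual construction, merely an illustration of the design principle: each output block is a one-way function of the internal state, and the state is ratcheted forward through the same one-way function.

```c
/* Sketch of a hash-based PRNG whose output is a one-way function of
 * the internal state. NOT the kernel's actual construction.
 * Build with: cc prng.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

static unsigned char state[SHA256_DIGEST_LENGTH]; /* internal state */

/* Produce one 32-byte output block, then ratchet the state forward. */
static void prng_next(unsigned char out[SHA256_DIGEST_LENGTH]) {
    unsigned char buf[SHA256_DIGEST_LENGTH + 1];
    memcpy(buf, state, SHA256_DIGEST_LENGTH);
    buf[SHA256_DIGEST_LENGTH] = 0x00;   /* domain separation: output */
    SHA256(buf, sizeof buf, out);
    buf[SHA256_DIGEST_LENGTH] = 0x01;   /* domain separation: next state */
    SHA256(buf, sizeof buf, state);
}

int main(void) {
    /* Seed the state once (here: from /dev/urandom itself). */
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(state, 1, sizeof state, f) != sizeof state) return 1;
    fclose(f);

    unsigned char out[SHA256_DIGEST_LENGTH];
    for (int i = 0; i < 2; i++) {
        prng_next(out);
        for (size_t j = 0; j < sizeof out; j++) printf("%02x", out[j]);
        printf("\n");
    }
    return 0;
}
```

Recovering the state from the output would require inverting SHA-256; as long as that is infeasible, emitting gigabytes of output does not "use up" anything.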
From the point of view of whoever wrote that code (and the dreaded corresponding man page), entropy is "depleted" upon use. This is the stance of someone who assumes, for the sake of the argument, that the PRNG is not cryptographically secure, and is in fact somehow equivalent to simply outputting the internal state as is. From that point of view, if /dev/random started with n bits of entropy and outputs k bits, then it now has n-k bits of entropy.
However, this point of view is not ultimately tenable: while it assumes that the PRNG is utterly broken and a no-operation, it simultaneously assumes that the PRNG is still cryptographically secure enough to turn the "hardware entropy" (the sources of data elements that are assumed to be somewhat random) into a nice uniform sequence of bits. In short, the notion of entropy depletion works only under the extreme assumption that the PRNG is utterly weak, but under that same assumption the estimate of how much entropy is really there is completely off.
In essence, that point of view is self-contradictory. Unfortunately, /dev/random implements a blocking strategy that relies on this flawed entropy estimate, which is quite inconvenient.
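That blocking strategy can be observed directly. On Linux kernels of the era discussed here, a non-blocking read of /dev/random fails with EAGAIN whenever the kernel's entropy estimate happens to be depleted, while /dev/urandom always delivers. A small probe (assuming a Linux system):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void probe(const char *path) {
    unsigned char buf[64];
    int fd = open(path, O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror(path); return; }
    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0 && errno == EAGAIN)
        printf("%s: would block (entropy estimate depleted)\n", path);
    else
        printf("%s: returned %zd bytes\n", path, n);
    close(fd);
}

int main(void) {
    probe("/dev/random");   /* may report EAGAIN */
    probe("/dev/urandom");  /* always delivers */
    return 0;
}
```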
/dev/urandom never blocks, regardless of how much "hardware entropy" has been gathered since last boot. However, in "normal" Linux installations, a random seed is inserted early in the boot process; that seed was saved upon the previous boot, and is renewed immediately after insertion. That seed mostly extends the entropy of /dev/urandom across reboots. So the assertion becomes: if /dev/urandom had 200 bits of entropy at any point since the OS was first installed, then it still has 200 bits of entropy.
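Schematically, that boot-time dance looks like the sketch below. The seed file path here is hypothetical (real distributions use something like /var/lib/systemd/random-seed); writing to /dev/urandom mixes the bytes into the pool without crediting the kernel's entropy estimate.

```c
/* Sketch of what a distribution's boot script does with the saved
 * random seed. SEED_PATH is a hypothetical placeholder. */
#include <stdio.h>

#define SEED_PATH "/var/lib/random-seed"   /* hypothetical path */
#define SEED_LEN  512

int main(void) {
    unsigned char seed[SEED_LEN];
    FILE *in, *dev, *out;

    /* 1. Feed the seed saved at the previous boot into the pool. */
    if ((in = fopen(SEED_PATH, "rb")) != NULL) {
        size_t n = fread(seed, 1, sizeof seed, in);
        fclose(in);
        if ((dev = fopen("/dev/urandom", "wb")) != NULL) {
            fwrite(seed, 1, n, dev);   /* mixes into the pool */
            fclose(dev);
        }
    }

    /* 2. Immediately renew the seed file for the next boot. */
    dev = fopen("/dev/urandom", "rb");
    out = fopen(SEED_PATH, "wb");
    if (dev && out && fread(seed, 1, sizeof seed, dev) == sizeof seed)
        fwrite(seed, 1, sizeof seed, out);
    if (dev) fclose(dev);
    if (out) fclose(out);
    return 0;
}
```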
That never-blocking behaviour can still be somewhat troublesome in some specific cases, e.g. diskless boot: the booting machine may need some randomness before having access to its files (e.g. to establish an IPsec context needed to reach the server that contains those files). A better implementation of /dev/urandom would block until a sufficient amount of hardware entropy has been gathered (e.g. 128 bits), but would then produce bits "forever", without implementing any sort of entropy depletion. This is precisely what FreeBSD's /dev/urandom does. And this is good.
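For completeness: Linux eventually grew an interface with exactly these semantics, the getrandom(2) system call (kernel 3.17+, glibc 2.25+). With flags set to 0, it blocks until the pool has been seeded once, and never blocks afterwards:

```c
#include <stdio.h>
#include <sys/random.h>

int main(void) {
    unsigned char key[16];
    /* flags = 0: read from the urandom pool, but wait (once, at
     * early boot) until it has been properly seeded. */
    ssize_t n = getrandom(key, sizeof key, 0);
    if (n != (ssize_t)sizeof key) { perror("getrandom"); return 1; }
    for (size_t i = 0; i < sizeof key; i++) printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```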
Summary: don't worry. If the PRNG used in the kernel is cryptographically secure, as it seems to be, then the "entropy_avail" count is meaningless. If the PRNG used in the kernel is not cryptographically secure, then the "entropy_avail" count is still flawed, and you are in deep trouble anyway.
Note that VM snapshots break the entropy, since the VM after a restore will always resume from the state that was saved in the snapshot, and will diverge only through accumulation of fresh hardware events (which can be tricky in a VM, since the VM hardware is not true hardware). The kernel's "entropy_avail" counter, and the blocking behaviour of /dev/random, do nothing at all to mitigate that. VM snapshot/restore is a much more plausible security vulnerability for the system PRNG than the academic, purely theoretical scenario that "entropy_avail" tries to capture (and actually fails to).
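A toy illustration of the snapshot problem (again with a trivial made-up PRNG standing in for the system one): two "VMs" restored from the same saved state produce exactly the same stream until fresh events diverge them.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy PRNG standing in for the system PRNG -- illustration only. */
static uint64_t next(uint64_t *state) {
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return *state >> 33;
}

int main(void) {
    uint64_t snapshot = 0x0123456789ABCDEFULL; /* state saved in snapshot */
    uint64_t vm_a = snapshot, vm_b = snapshot; /* two restored clones */

    /* Both clones emit identical "random" values. */
    for (int i = 0; i < 3; i++)
        printf("vm_a: %10llu   vm_b: %10llu\n",
               (unsigned long long)next(&vm_a),
               (unsigned long long)next(&vm_b));
    return 0;
}
```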