/dev/urandom is a good choice, but the getrandom() system call, used with the default flags, would be ideal.
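For illustration, here is a minimal sketch of reading key material through getrandom() with the default flags. It assumes Linux with glibc 2.25 or later (which exposes the wrapper in <sys/random.h>); the 32-byte buffer size is just an example, not anything mandated.

```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/random.h>   /* getrandom(), glibc >= 2.25 */

int main(void)
{
    unsigned char key[32];   /* 256 bits of key material (example size) */
    size_t filled = 0;

    /* getrandom() may return fewer bytes than requested or be interrupted
       by a signal, so loop until the buffer is full. Flags = 0 draws from
       the same pool as /dev/urandom, but blocks until that pool has been
       initialized at boot. */
    while (filled < sizeof key) {
        ssize_t n = getrandom(key + filled, sizeof key - filled, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;    /* interrupted while blocking: retry */
            perror("getrandom");
            return EXIT_FAILURE;
        }
        filled += (size_t)n;
    }

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    putchar('\n');
    return EXIT_SUCCESS;
}
```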
As for references: the article is not, strictly speaking, academic, but it is a reasonably easy read and cites a number of experts in support of its explanations. I think this passage, which the article quotes from Daniel Bernstein, is well worth reproducing:
Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
- we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
- we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
For a cryptographer this doesn't even pass the laugh test.
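To make the first point concrete, here is a sketch of the kind of deterministic expansion Bernstein is alluding to: stretching a single 256-bit seed into an arbitrarily long stream of unpredictable bytes with a stream cipher. It assumes libsodium is installed, uses a hard-coded seed purely for illustration (in practice the seed would come once from getrandom()), and is not the exact construction the Linux kernel uses.

```c
#include <stdio.h>
#include <sodium.h>   /* libsodium: crypto_stream_chacha20() */

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    /* A single 256-bit seed. Hard-coded here only to keep the sketch
       self-contained; a real program would obtain it from the OS once. */
    unsigned char seed[crypto_stream_chacha20_KEYBYTES]   = { 0x42 };
    unsigned char nonce[crypto_stream_chacha20_NONCEBYTES] = { 0 };

    /* Expand the seed into pseudorandom output (a ChaCha20 keystream).
       Repeating this with an incrementing nonce or counter yields an
       effectively endless stream of unpredictable bytes. */
    unsigned char out[64];
    crypto_stream_chacha20(out, sizeof out, nonce, seed);

    for (size_t i = 0; i < sizeof out; i++)
        printf("%02x", out[i]);
    putchar('\n');
    return 0;
}
```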
The article you've linked describes more of a theoretical concern than a practical one. It does not mean that the Linux RNG design is bad, strictly speaking, but that it is suboptimal in a few regards which only matter in a very narrow scenario: an attacker has managed to see the RNG's internal state at some point, but (a) can no longer observe subsequent states and (b) can influence the entropy being fed into it. That is very specific; it sits between normal operation (where the adversary has never seen the RNG state) and the worst case (where the attacker has completely compromised the RNG and can read the state repeatedly).