
Say I have a large amount of truly unpredictable random data in the file "random.bin". The random data was created outside of my system and has been securely transferred onto it.

Questions:

  • How can I feed this file into the OS entropy pool so that a subsequent read of /dev/random by my fancy software will not block?
  • Would the command 'cat random.bin > /dev/random' (or similar) do the trick?

Notes:

  1. The question is not about which of /dev/random or /dev/urandom I should use. Long story short, I cannot use /dev/urandom for legacy reasons.
  2. The answer must not rely on fancy tools like 'havege'.
  3. A perfect answer would be Unix-flavour independent (so working on AIX, Solaris, RHEL, BSD, ...).
    • A shell command is best.
    • If a shell command is not possible, which system call should I use?
  4. It is similar to, but different from, the question Feeding /dev/random entropy pool?; the goal here is different.
Algiz
  • I remember when I tried to seed the RNG myself, I had to use a custom C program, since just writing to urandom or random normally didn't add it to the kernel's pool (something to do with using `ioctl` and `RNDADDENTROPY`). If you can do that without any "fancy" tools, then you should be able to do it with just shell commands; otherwise you'll need to use existing tools or write your own. This was for Linux; other systems probably behave differently. – user Sep 11 '20 at 15:24
  • I would be glad to move the question. Should I do it myself or let the magic happen? – Algiz Sep 15 '20 at 15:50

1 Answer


This looks a lot like an XY problem: you want to solve problem X by doing Y, but don't know how to do Y, so you are asking about Y here.

You are trying to use truly secure random numbers, you don't trust urandom, and you want to increase the entropy pool so you can use random. Right?

Don't use /dev/random... That's why /dev/urandom exists. It's seeded by /dev/random and uses a very strong algorithm to generate random numbers in a non-blocking way. The u in urandom usually means unlimited, so it will never run out of random numbers, unless you are on a diskless station (or a router, or a live-CD distro) seconds after booting, before /dev/random has had time to build up some entropy.

Some people will argue a lot about random/urandom, that urandom is not secure enough, that only random has true random numbers, and so on. Don't listen. Use urandom, a cryptographically secure pseudorandom number generator, and be happy. Using random can be a liability and create an incident: it blocks. And that can lead to a DoS not only on your application, but on every other application whose developers thought that using /dev/random was the way to go.

So, if you are loading a large file full of random data, why rely on random or urandom at all? Just read the file. You could even use urandom to choose the position to read from, store the number of records already read from the file, and block after the random file has been read enough times. It would be terrible for security (the random file is very predictable if someone gets hold of it), performance would be worse than reading urandom, and you would have to keep an eye on the read count to supply another file before the randomness runs out (or it blocks, just like random).

Thomas Pornin has already written about this as well, and this page debunks a lot of myths about randomness too.

ThoriumBR
  • I downvoted this answer because it does not answer the question. As stated before, this is not about the (benefits of the) use of /dev/urandom. It is not an 'XY' problem. – Algiz Sep 14 '20 at 06:42
  • It looks a lot like it. The first question asks specifically about adding entropy so that when his software calls `/dev/random` it does not block. – ThoriumBR Sep 14 '20 at 12:05
  • It may look like it, but it is not. I explicitly asked not to go into the random/urandom debate; see note #1. You did not answer the question about how to add entropy. – Algiz Sep 15 '20 at 15:49