So I'm using a board running from a ramfs with a barebones Linux kernel and a userland that is essentially just POSIX-compliant (BusyBox). For some disk drive testing, I am trying to generate a large random file, on the order of a gigabyte.
Currently what I'm doing is as follows:
dd if=/dev/urandom of=./basefile bs=1M count=10
i=0; while [ "$i" -lt 100 ]; do cat ./basefile >> ./testFile; i=$((i+1)); done
Thus I have a practical solution that meets my needs.
However, on a more academic note: is there an efficient way to generate completely (pseudo-)random files using POSIX utilities only? OpenSSL is not installed. For the sake of comparison, the commands above run in 23.5s, while the command below runs in 3m3.179s:
dd if=/dev/urandom of=./testFile bs=1M count=1000
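One variation on the basefile trick (a sketch, not something from the original post; file names are illustrative, and the sizes here are scaled down to 1 KiB doubled 10 times = 1 MiB for brevity) is to read a small seed from /dev/urandom and then double the file against itself, so reaching N copies takes log2(N) cat passes instead of N appends. Like the basefile approach, this produces repeated rather than fully random data; a truly random gigabyte is still bound by /dev/urandom throughput.

```shell
# Seed a small random chunk (scaled-down demo sizes; use bs=1M and
# 10 doublings for a 1 GiB file).
dd if=/dev/urandom of=./testFile bs=1k count=1 2>/dev/null

# Double the file against itself: 1 KiB * 2^10 = 1 MiB in 10 passes.
i=0
while [ "$i" -lt 10 ]; do
    cat ./testFile ./testFile > ./testFile.tmp
    mv ./testFile.tmp ./testFile
    i=$((i+1))
done
```

Everything here (dd, cat, mv, the while loop, and `$((...))` arithmetic) is POSIX, so it should run under BusyBox ash as well.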
You know urandom needs its entropy pool filled to be fast right? Don't just leave it idling away. – micke – 2012-11-07T21:52:07.663
@micke I thought that that was random (and urandom just looped over whatever random data it had or something like that). – Ross Aiken – 2012-11-07T21:54:04.973
@micke urandom doesn't care about entropy; you're confusing it with /dev/random. – Renan – 2012-11-07T21:59:01.083