21

Is there some faster way than /dev/[u]random? Sometimes, I need to do things like

cat /dev/urandom > /dev/sdb

The random devices are "too" secure and, unfortunately, too slow for that. I know there are wipe and similar tools for secure deletion, but I suppose Linux also has some on-board means for this.

HopelessN00b
  • 53,385
  • 32
  • 133
  • 208
cgp
  • 1,022
  • 3
  • 12
  • 15
  • Equivalent on StackOverflow: http://stackoverflow.com/questions/841356/is-there-an-alternative-to-dev-urandom – David Z May 08 '09 at 19:50
  • possible duplicate of [Fastest, surest way to erase a hard drive?](http://serverfault.com/questions/56280/fastest-surest-way-to-erase-a-hard-drive) – Kyle Brandt May 24 '10 at 11:48
  • 1
    Isn't dd a better way to do this.. Possibly a contender for the UUoC award? – Tom O'Connor Jul 08 '11 at 21:26

15 Answers

24

Unfortunately, Linux has a bad implementation of urandom. If your CPU supports AES-NI (hardware acceleration), you could use AES-256-CTR with a random key and get several hundred megabytes of pseudo-randomness per second. I am looking forward to urandom switching to a modern approach as well.

openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin

This puppy does 1.0 GB/s on my box (compared to 14 MB/s of /dev/urandom). It uses urandom only to create a random password and then does very fast encryption of /dev/zero using that key. This should be a cryptographically secure PRNG but I won't make guarantees.

Tronic
  • 341
  • 3
  • 3
  • Thank you for this awesome answer, I was able to go up from 9.5 MB/s with /dev/urandom to over 120 MB/s with openssl. – GDR Oct 03 '12 at 15:55
  • Except for the first statement that _Linux has bad implementation of urandom_, I approve this answer. Good enough to wipe (or fill?) a hard disk before encryption. – Vikrant Chaudhary Jan 30 '13 at 16:15
  • 5
    Pass through `pv` for a nice progress indicator. `openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | pv -pterb > /dev/sdb`. – Vikrant Chaudhary Jan 31 '13 at 21:02
  • @VikrantChaudhary urandom produces high quality pseudo random numbers, sure, but that is no excuse for being slow. AES counter mode is much faster and it is difficult to argue how it would be any less secure than /dev/urandom. – Perseids Feb 16 '14 at 01:32
  • 1
    Just to add to the `pv` recommendation, you can pipe to `pv -pterb -s $(blockdev --getsize64 /dev/sdb) > /dev/sdb` to have `pv` show you the progress towards finishing the write. – asciiphil Oct 24 '14 at 20:04
12

If you're looking to do a "secure" erase of a hard drive (or file), you ought to look at the shred utility.

As the previous posters point out, the /dev/*random devices are meant to be used as a source of small chunks of random data.
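For illustration, a minimal shred invocation might look like the following sketch: one random pass plus a final pass of zeros, which is usually plenty. It is demonstrated on a scratch file here; for a disk you would target the device node instead.

```shell
# Demonstrated on a scratch file; for a real disk you would target the
# device node instead, e.g.:  shred -v -n 1 -z /dev/sdb
TARGET=$(mktemp)
head -c 65536 /dev/zero > "$TARGET"
shred -n 1 -z "$TARGET"   # one random pass, then a final pass of zeros
```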

MikeyB
  • 38,725
  • 10
  • 102
  • 186
  • 1
    According to the man page, 'shred' uses /dev/urandom. So while a good answer for wiping the disk, it won't offer a speedup over any other technique reading from /dev/urandom. (Another tip if using 'shred': most people probably will be happier with 1-2 passes, rather than the giant default count, so that a wipe doesn't take days.) – gojomo Jun 01 '10 at 05:33
  • 4
    Actually shred is **much** faster than /dev/urandom. My guess is it provides its own pseudorandom data using /dev/urandom or /dev/random as a seed. – thomasrutter Feb 15 '11 at 09:41
7

In a quick test under Ubuntu 8.04 on a Thinkpad T60p with T2500 CPU, 1GB of random data from openssl rand was 3-4X faster than /dev/urandom. That is,

time cat /dev/urandom | head -c 1000000000 > /dev/null

...was around 4 minutes while...

time openssl rand 1000000000 | head -c 1000000000 > /dev/null

...was just over 1 minute.

Unsure if there's a difference in random-quality, but either is probably fine for HD-wiping.

gojomo
  • 171
  • 1
  • 3
5

I see a lot of answers saying that using random data isn't important. That's pretty much true if all you are trying to do is wipe the drive, but not so much if you are wiping it in preparation for disk encryption.

If you fill a device with non-random data then place an encrypted partition on it you might run into a problem. The portion of the drive which is storing encrypted data will stand out from the rest of the drive, because the encrypted data will look random and the rest won't. This can be used to determine information about the crypto disk that could be used in cracking it. The link below explains the theory behind how some of the more common attacks work and how to defend against them (on Linux, anyway).

Linux hard disk encryption settings

user104021
  • 51
  • 1
  • 1
  • 1
    Very right. With relatively modern disks (> 20 GB) any single pass overwrite is wipe enough. Even the NSA and the likes would be hard-pressed to get any significant amount of data from the drive. And it's very costly. Think $100.000 per megabyte. The remark about encryption is very true. You want the unused portions of the disk look "as random" as the used portions. – Tonny Dec 13 '11 at 21:33
  • Doesn't your device encryption software randomize the whole disk? – Nathan Garabedian May 24 '12 at 18:06
5

If you need to securely wipe a HD, there is one very powerful tool: DBAN

Arg
  • 71
  • 1
  • 3
5

If you want to erase a huge block device, then I've found it more robust to use dd and the device mapper instead of output redirection of random data. The following will map /dev/sdb to /dev/mapper/deviceToBeErased, encrypting and decrypting transparently in between. To fill up the device on the encrypted end, zeros are copied to the plaintext side of the mapper (/dev/mapper/deviceToBeErased).

cryptsetup --cipher aes-xts-plain64 --key-file /dev/random --keyfile-size 32 create deviceToBeErased /dev/sdb
dd if=/dev/zero of=/dev/mapper/deviceToBeErased bs=1M
# tear down the mapping when done
cryptsetup remove deviceToBeErased

The encrypted data on /dev/sdb is guaranteed to be indistinguishable from random data if there is no serious weakness in AES. The key used is grabbed from /dev/random (don't worry - it uses only 32 bytes).

Perseids
  • 213
  • 1
  • 3
  • 10
4

Check out frandom:

http://billauer.co.il/frandom.html

According to my tests, it is the fastest.

MA1
  • 149
  • 1
  • 2
  • 1
    frandom should no longer be considered cryptographically secure, given that is uses RC4. See http://blog.cryptographyengineering.com/2013/03/attack-of-week-rc4-is-kind-of-broken-in.html for an example of a borderline practical (!) attack on TLS when using RC4. – Perseids Feb 16 '14 at 01:23
2

If you want to wipe a hard drive quickly, write nonrandom data to it. This is no less secure than using random data. Either way, when hooked up to a computer the original data can't be read. Overwriting Hard Drive Data: The Great Wiping Controversy shows that the original data can't be read using a microscope either.
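A minimal sketch of such a nonrandom (zero-fill) pass, demonstrated here on a scratch file (on a real wipe the output would be the device node, e.g. /dev/sdb):

```shell
# Sketch of a single zero-fill pass. For an actual disk you would write
# to the device node instead, e.g.:  dd if=/dev/zero of=/dev/sdb bs=1M
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=4 status=none
```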

sciurus
  • 12,493
  • 2
  • 30
  • 49
2

Format with LUKS, and dd over the encrypted volume. Then use /dev/urandom to wipe the LUKS header.

If you have hardware AES support this is a very fast solution.

Briefly:

cryptsetup luksFormat /dev/sdX
cryptsetup luksOpen /dev/sdX cryptodev
dd if=/dev/zero bs=1M of=/dev/mapper/cryptodev
cryptsetup luksClose cryptodev
# wipe the luks header.  Yes, it uses /dev/urandom but only for 2MB of data:
dd if=/dev/urandom bs=1M count=2 of=/dev/sdX

done!

See my blog: Quickly fill a disk with random bits (without /dev/urandom)

  • Why do you bother with LUKS if all you want to do is overwrite the device? Plain dm-crypt ("plain mode" of cryptsetup) is much easier to use for that. – Perseids Feb 16 '14 at 01:04
2

If you want to erase a hard drive, dd does not delete the contents of reallocated sectors, and it is very slow if the hard drive is dying. Instead, you can use the drive's built-in erase function, which has been standardized for a long time.

In this example, I am erasing a 500 GB mechanical hard drive in only 102 minutes, even though it is full of reallocated sectors:

root@ubuntu:~# hdparm --security-set-pass Eins /dev/sdaj
security_password="Eins"

/dev/sdaj:
 Issuing SECURITY_SET_PASS command, password="Eins", user=user, mode=high
root@ubuntu:~# time hdparm --security-erase-enhanced Eins /dev/sdaj
security_password="Eins"

/dev/sdaj:
 Issuing SECURITY_ERASE command, password="Eins", user=user

real    102m22.395s
user    0m0.001s
sys     0m0.010s

root@ubuntu:~# smartctl --all /dev/sdaj | grep Reallocated
  5 Reallocated_Sector_Ct   0x0033   036   036   036    Pre-fail Always   FAILING_NOW 1327 

You can see more details at ata.wiki.kernel.org; however, their example doesn't use --security-erase-enhanced, which is necessary to delete the aforementioned reallocated sectors.

2

The faster your tool, the less secure the result will be. Generating good randomness takes time.

Anyway, you could use something like dd if=/dev/zero of=/dev/sdb, but obviously that isn't going to be random; it will just erase much faster.

Another option might be to use /sbin/badblocks -c 10240 -s -w -t random -v /dev/sdb. It is faster than urandom, but the badblocks PRNG is less random.

Zoredache
  • 128,755
  • 40
  • 271
  • 413
  • 1
    and honestly - this is *PLENTY* of security for the drive – warren May 09 '09 at 00:01
  • Multiple overwrites, as shred does, takes time and provides better security than one overwrite of "perfectly" random data. – Dennis Williamson May 09 '09 at 13:32
  • "The faster your tool the less secure the result will be. Generating good randomness takes time." - That's not true. An AES counter mode (pseudo) random number generator is far better analyzed and orders of magnitude faster than /dev/urandom. (See Tronic's answer.) – Perseids Feb 16 '14 at 01:15
2

/dev/random uses a lot of system entropy, and so produces only a slow data stream.

/dev/urandom is less secure, and faster, but it's still geared towards smaller chunks of data - it's not meant to provide a continuous stream of high speed random numbers.

You should make a PRNG of your own design and seed it with something from /dev/random or /dev/urandom. If you need it a bit more random, re-seed it periodically, say every few MB (or whatever the period of your PRNG is). Getting 4 bytes (a 32-bit value) from urandom or random is fast enough that you can re-seed every 1k of data and still get very random results while going very, very quickly.

-Adam

Adam Davis
  • 5,366
  • 3
  • 36
  • 52
  • 7
    It is very rare that someone can write their own random number generator that is better than ones that are already readily available. More often than not, the result is a predictable pattern and a false sense of security. I would recommend using shred on a drive via its /dev entry or very thorough physical destruction. – Dennis Williamson May 09 '09 at 13:29
  • I agree. I would use shred, which by default uses urandom (which I frankly don't find slow). As a note, it is possible to use /dev/random with shred (by specifying --random-source=/dev/random) if you are very patient. – Matthew Flaschen May 09 '09 at 19:10
1

The 'shred' utility is easy and fast. If the SMART attributes of the drive indicate zero re-allocated sectors, 'shred' is likely secure enough.

However, if the drive has re-allocated sectors, the data on damaged sectors will not be overwritten. If the damaged locations contained sensitive data before they were re-allocated, 'shred' may not be good enough. The 'bad' sectors may be read by resetting the drive's allocation map, and (repeatedly) reading them.

The ability to reset the bad sector allocation map varies depending on manufacturer and drive model.
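As a sketch, one way to check that attribute is to filter the smartctl attribute table. The sample line below is a hypothetical stand-in for real output; on an actual system you would pipe `smartctl -A /dev/sdb` into the awk filter instead.

```shell
# Extract the raw Reallocated_Sector_Ct value from smartctl-style output.
# "sample" is a hypothetical stand-in for:  smartctl -A /dev/sdb
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0'
echo "$sample" | awk '$2 == "Reallocated_Sector_Ct" { print $NF }'   # prints 0
```

A nonzero raw value means some sectors have already been remapped and may escape an overwrite.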

drok
  • 11
  • 1
1

In practice, there's probably no need to seed the whole disk from one continuously random stream.

You could create a modest sized chunk of random data and then just repeat that over and over across the disk.

Just make sure that the size of that chunk is not a multiple of the disk's normal block size, to ensure that you don't end up overwriting correlated blocks of data with the exact same bit of random data. A chunk size that's a prime number in the ~1MB range should do nicely.

For additional security, just do it a few times more, using a different chunk size each time.
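A minimal sketch of that approach, demonstrated into a scratch file (on a real wipe the output would be the device node, e.g. /dev/sdb). 65521 is prime, so repeats of the chunk never line up with power-of-two block boundaries:

```shell
# Build one prime-sized random chunk, then repeat it across the target.
# On a real disk:  while cat "$CHUNK"; do :; done > /dev/sdb
# (the loop ends when the device is full and the write fails)
CHUNK=$(mktemp); TARGET=$(mktemp)
head -c 65521 /dev/urandom > "$CHUNK"               # 65521 is prime
for i in 1 2; do cat "$CHUNK"; done | head -c 100000 > "$TARGET"
```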

Alnitak
  • 20,901
  • 3
  • 48
  • 81
0

If all you want to do is overwrite the disk, then it doesn't matter what you use, because anything at all will defeat anything short of a forensics lab, and I wouldn't trust anything short of slagging the drive to stop that level of resources.

Just use a non-random source, like all zeros or ones, or a repeating pattern like the following (I think this will work):

(head -c 4096 /dev/urandom; cat /dev/sdb) > /dev/sdb
BCS
  • 1,065
  • 2
  • 15
  • 24
  • That may be the case, but sometimes you can't convince management that the security imparted by a random write isn't really any greater than using all zeros given the level of technology required to recover data is the same for both scenarios. In this case it's often better to meet the requirement by building your own fast random number generator. – Adam Davis May 08 '09 at 20:18
  • http://serverfault.com/questions/5024/wipedrive-utility/5069#5069 – BCS May 08 '09 at 20:46