How long to zero a drive with dd?

14

5

How long will it take to zero-fill 1 TB (using dd if=/dev/zero)?

I'm actually doing two 500 GB drives simultaneously, if it matters.
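
For reference, a minimal sketch of the kind of parallel run described above, assuming the two drives show up as /dev/sdX and /dev/sdY (hypothetical device names; double-check with lsblk before wiping anything):

# run as root; both writes proceed in the background
dd if=/dev/zero of=/dev/sdX bs=1M &   # first drive
dd if=/dev/zero of=/dev/sdY bs=1M &   # second drive
wait                                  # block until both dd processes finish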

Miles Marley

Posted 2011-03-25T02:00:28.983

Reputation: 141

Question was closed 2013-02-07T10:05:47.803

3So, tell us, how long did it take to dd the drives with zeros? – Rolnik – 2011-03-28T19:38:28.470

1My WD 1 TB (5400 rpm) SATA drive takes about 240 minutes, but it's old and has reallocated sectors. Incidentally, that's not far off the time SMART reports to expect an extended self-test to take (255 min) – barrymac – 2012-03-01T14:57:12.303

2Adding another data point: just ran dd if=/dev/zero of=/dev/sdX bs=8M on two brand new Seagate ST4000DM000 4 TB drives over SATA-300 ports simultaneously (I think it was more or less perfectly parallelizable — CPU usage was constant at ~20% for the first dd process before the second was started, and then both took ~20% each). The first disk finished in 8h50min (530 min), and the second in 8h30min (510 min). It amounts to a write speed of ~130 MB/s per drive, which is not that strange considering the monotone input. The hardware was from 2009 (CPU: C2D E8400; chipset: Intel P43/ICH10). – Daniel Andersson – 2013-03-20T08:42:52.350

Answers

9

It depends on many factors, including but not limited to:

  • Disk speed (RPM)
  • Disk built-in cache
  • Number of platters and whether it can write to multiple platters simultaneously
  • Disk interface (SATA/SCSI, etc)
  • Interface controller performance
  • Configuration of the drives (eg. separate channels or same channel)

Additionally, although zeroing a drive is a simple task for the CPU and RAM, there may still be an effect from:

  • CPU performance
  • Available RAM
  • Speed of RAM
  • Other tasks being done at the same time
  • Power management settings

Assuming a fairly recent computer with middle-grade drives, booted from a minimal Linux disk loaded entirely into RAM and running just the zeroing operation (no GUI, network, etc.), it could be anywhere from 2 to 12 hours. If I had to throw out a single number, I'd say closer to 3.5 hours, but again, there isn't enough information for a good estimate short of actually doing it.

If you have more than 1 GB of free space, you could try mounting the drive and running dd if=/dev/zero of=/tmp/tempzero bs=1M count=1024 (or writing to some other file on that drive) to create a roughly 1 GiB test file. If you know the optimal block size for the fastest writes to your drive, use that for the bs value (bytes by default; K and M suffixes are accepted) and set count to whatever gives the file size you want. You can then use the reported speed to get a better estimate without losing data; it just creates a large file full of zeros.
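
As a concrete sketch of that estimation approach, assuming the drive is mounted at /mnt/testdrive (a hypothetical mount point, just for illustration):

# write a 1 GiB test file to the mounted drive; conv=fsync flushes it to disk so the timing is honest
dd if=/dev/zero of=/mnt/testdrive/tempzero bs=1M count=1024 conv=fsync
# dd's final line reports the average rate; 1 TB at R MB/s takes roughly 1,000,000 / R seconds
rm /mnt/testdrive/tempzero    # remove the test file afterwards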

TuxRug

Posted 2011-03-25T02:00:28.983

Reputation: 1 616

3In my experience on hard drives of the last decade or so, bs=1M is a vast improvement over bs=512 and is good enough to use as a default without worrying too much about finding the optimal. – crazyscot – 2011-04-10T21:52:57.313

@crazyscot yes, massive difference there with bs=1M to override the terrible default of bs=512. Also, ddrescue (check how to set its block size) gives a percentage/progress indicator. – barlop – 2013-02-06T22:13:26.027
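
If you'd rather stay with plain dd than switch tools, it can also report progress; a sketch (the device name /dev/sdX is a placeholder, and status=progress needs a reasonably recent GNU coreutils):

# newer GNU dd prints a running byte count and rate by itself
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# older versions print their statistics when sent SIGUSR1 from another terminal
kill -USR1 $(pgrep '^dd$')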

2

I did a dd with random data on a 750 GB drive. I think it took about 20 hours. The thing that really sucked about it is that I had to do it four times, for a four-disk RAID array. I think the bottleneck is the write speed of your drives. You're being smart to do the drives in parallel.

Rolnik

Posted 2011-03-25T02:00:28.983

Reputation: 1 457

Is there any risk of 1 TB of zeros being compressed somewhere in the pipeline down to storage and skewing the results? I don't mean actually compressed on the disk, but in transit, as an optimisation. – pufferfish – 2014-09-10T10:37:58.957

2A large part of your performance issue was likely your use of random numbers. /dev/urandom or any other source is going to try very hard to generate truly random numbers, thereby dropping your throughput. Something like /dev/zero won't have that problem. – Sam Bisbee – 2011-11-11T19:45:44.640
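
A quick way to see that difference is to time each source against /dev/null, so no disk is involved and the numbers reflect only how fast the data can be generated (a sketch; rates vary a lot by kernel and CPU):

dd if=/dev/zero of=/dev/null bs=1M count=1024      # zeros: essentially free to produce
dd if=/dev/urandom of=/dev/null bs=1M count=1024   # random data: limited by the kernel's RNG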

2

With a partition of just over 100 GB on an Acer Aspire 5750G, external SATA HDD over USB 2, 5400 rpm:

xxxx@acer-ubuntu:~$ sudo dd if=/dev/zero of=/dev/sdb2 bs=8M
[sudo] password for xxxx: 
dd: writing `/dev/sdb2': No space left on device
12500+0 records in
12499+0 records out
104856551424 bytes (105 GB) copied, 2846.87 s, 36.8 MB/s

and

xxxx@acer-ubuntu:~$ sudo dd if=/dev/zero of=/dev/sdb1 bs=8M
[sudo] password for xxxx: 
dd: writing `/dev/sdb1': No space left on device
6579+0 records in
6578+0 records out
55183409152 bytes (55 GB) copied, 1497.23 s, 36.9 MB/s

hsmit

Posted 2011-03-25T02:00:28.983

Reputation: 395

1

I'm guessing, but it would depend on the drive controller, the controller on the motherboard, and whatever else is soaking up CPU/IO.

My guess: on the order of an hour or a few hours; days seems long. Depending on how your machine is set up, running both at the same time may actually slow things down if you create contention for the drive controller. Even though you're pumping out zeros, nothing in your drive knows that, and it still has to write every byte.

Rich Homolka

Posted 2011-03-25T02:00:28.983

Reputation: 27 121

1

If you're just erasing the drives, a great tool to use for parallel throughput is DBAN in simple erase mode. It's available as an ISO and basically does the dd if=/dev/zero command for you on the drives you select.

wajeemba

Posted 2011-03-25T02:00:28.983

Reputation: 11

0

It should take 2-5 hours. Your bottleneck is the disk, not the RAM, the CPU, the cables, or the controller configuration. Unless you have a very old computer, like an original Pentium, your CPU and memory are far faster than the hard disk can write, as is the SATA link. The drive's cache doesn't come into play either, because you're writing the entire drive (unless you have 1 TB of cache).
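
That range matches a back-of-the-envelope calculation; the sustained write rates below are assumptions, so substitute whatever your own test write reports:

awk 'BEGIN {
  for (mbps = 60; mbps <= 140; mbps += 40) {
    printf "at %3d MB/s, 1 TB takes ~%.1f hours\n", mbps, 1e12 / (mbps * 1e6) / 3600
  }
}'
# at  60 MB/s, 1 TB takes ~4.6 hours
# at 100 MB/s, 1 TB takes ~2.8 hours
# at 140 MB/s, 1 TB takes ~2.0 hours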

Jordan

Posted 2011-03-25T02:00:28.983

Reputation: 1