How to wipe free disk space in Linux?

149

85

When a file is deleted, its contents may still be left in the filesystem, unless explicitly overwritten with something else. The wipe command can securely erase files, but does not seem to allow erasing free disk space not used by any files.

What should I use to achieve this?

Alex B

Posted 2009-08-06T23:48:03.757

Reputation: 1 797

The only safe solution may be to save your files elsewhere, wipe the whole partition, recreate the filesystem, and then restore your files. I've run photorec and was shocked by how much stuff could be retrieved even after 'wiping' free space. A compromise solution is to move the left boundary of your partition by 6% of its size after having wiped the apparently free space.

– user39559 – 2010-09-07T12:12:26.933

Answers

113

Warning: Modern disk/SSD hardware and modern filesystems may squirrel away data in places where you cannot delete it, so this process may still leave data on the disk. The only safe ways of wiping data are the ATA Secure Erase command (if implemented correctly) or physical destruction. Also see How can I reliably erase all information on a hard drive?

You can use a suite of tools called secure-delete.

sudo apt-get install secure-delete

This has four tools:

srm - securely delete an existing file
smem - securely delete traces of a file from RAM
sfill - wipe all the space marked as empty on your hard drive (see the example after this list)
sswap - wipe all the data from your swap space.
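
For the original question (wiping free space), sfill is the tool of interest. A minimal, hedged sketch of its use, with /home standing in for whatever mount point you want to scrub (the -llz variant is the single zero-pass shortcut mentioned in a comment further down):

sudo sfill -v /home        # full multi-pass wipe of the free space (slow)
sudo sfill -llz -v /home   # single pass of zeros only (much faster)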

From the man page of srm

srm is designed to delete data on mediums in a secure manner which can not be recovered by thiefs, law enforcement or other threats. The wipe algorithm is based on the paper "Secure Deletion of Data from Magnetic and Solid-State Memory" presented at the 6th Usenix Security Symposium by Peter Gutmann, one of the leading civilian cryptographers.

The secure data deletion process of srm goes like this:

  • 1 pass with 0xff
  • 5 random passes. /dev/urandom is used for a secure RNG if available.
  • 27 passes with special values defined by Peter Gutmann.
  • 5 random passes. /dev/urandom is used for a secure RNG if available.
  • Rename the file to a random value
  • Truncate the file

As an additional measure of security, the file is opened in O_SYNC mode and after each pass an fsync() call is done. srm writes 32k blocks for the purpose of speed, filling buffers of disk caches to force them to flush and overwriting old data which belonged to the file.
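
The same write-then-flush discipline can be imitated from the shell. This is not srm itself, just a rough analogue assuming GNU dd: oflag=sync opens the output with O_SYNC and conv=fsync forces a final fsync() before dd exits.

dd if=/dev/zero of=flush-demo.bin bs=32k count=1024 oflag=sync conv=fsync
rm flush-demo.bin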

fnord_ix

Posted 2009-08-06T23:48:03.757

Reputation: 2 534

1Filling the disk with zeros has the added benefit of recovering lost space from a VM virtual disk when that disk is stored on a ZFS volume. I currently have one that is 8.9GB apparent size but consumes 968MB according to ZFS after using this method. – haventchecked – 2016-06-15T15:10:35.467

1I found this post 7 years and 10 months since you posted it. Gotta love this database. TNX. – SDsolar – 2017-06-13T04:00:43.013

5It's hard to locate the current "official" homepage of secure-delete. A perhaps older version claims there are no bug reports, but at the same time there is no open bugtracking system where I could report a bug that I have found. The secure-delete homepage also points out that it may not wipe all the unused blocks of data, depending on the filesystem that you use, which is true. – user39559 – 2010-09-07T12:10:44.533

12With modern hard disks (bigger than around 20 GB), it is totally useless to do several passes and wait for ages. So installing specialized tools has also become useless (which may explain why secure-delete has no more home page). Just do this from the appropriate partition: cat /dev/zero >nosuchfile; rm nosuchfile. – mivk – 2011-11-04T11:47:36.453

1@mivk: Why is it useless to do more than one pass? And why use /dev/zero instead of /dev/random? Is that due to speed concerns? – naught101 – 2013-01-05T12:14:14.290

5Using /dev/zero is much faster. If you write free space from /dev/random, the kernel has to generate all that random data on the fly. It's an entertaining way to watch your load average jump up to the maximum... – dafydd – 2013-05-26T02:38:03.463

3

The question of whether multiple wipes are necessary is answered here: Why is writing zeros (or random data) over a hard drive multiple times better than just doing it once?

– sleske – 2014-01-28T10:25:03.627

74

The quickest way, if you only need a single pass and just want to replace everything with zeros, is:

cat /dev/zero > zero.file
sync
rm zero.file

(run from a directory on the filesystem you want to wipe)
(the sync command is a paranoia measure that ensures all data is written to disk - an intelligent cache manager might work out that it can cancel writes for any pending blocks when the file is unlinked)
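
As noted in a comment further down, cat will finish with a "No space left on device" error once the filesystem fills; that is expected and harmless. A hedged variant that simply hides the message:

cat /dev/zero > zero.file 2>/dev/null
sync
rm zero.file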

There will be a time during this operation when there will be no free space at all on the filesystem, which can be tens of seconds if the resulting file is large and fragmented so takes a while to delete. To reduce the time when freespace is completely zero:

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
cat /dev/zero > zero.file
sync
rm zero.small.file
rm zero.file

This should be enough to stop someone reading the old file contents without an expensive forensic operation. For a slightly more secure, but slower, variant replace /dev/zero with /dev/urandom. For more paranoia run multiple passes with /dev/urandom, though if you need that much effort the shred utility from the coreutils package is the way to go:

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
shred -z zero.small.file
cat /dev/zero > zero.file
sync
rm zero.small.file
shred -z zero.file
sync
rm zero.file

Note that in the above the small file is shredded before the larger one is created, so it can be removed as soon as the larger one is complete, instead of waiting for it to be shredded while the filesystem sits with zero free space. The shred process will take a long time over a large file and, unless you are trying to hide something from the NSA, isn't really necessary IMO.

All of the above should work on any filesystem.

File Size Limits:

As DanMoulding points out in a comment below, this may have problems with file size limits on some filesystems.

For FAT32 it would definitely be a concern due to the 2GiB file limit: most volumes are larger than this these days (8TiB is the volume size limit IIRC). You can work around this by piping the large cat /dev/zero output through split to generate multiple smaller files and adjusting the shred and delete stages accordingly.
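
A sketch of that split-based workaround, assuming GNU split; the zero.part.* names are illustrative and each chunk stays safely under the FAT32 file-size limit:

cat /dev/zero | split -b 1000m - zero.part.
sync
rm zero.part.*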

With ext2/3/4 it is less of a concern: with the default/common 4K block the file size limit is 2TiB so you'd have to have a huge volume for this to be an issue (the maximum volume size under these conditions is 16TiB).

With the (still experimental) btrfs both the maximum file and volume sizes are a massive 16EiB.

Under NTFS the maximum file size is, in some cases, even larger than the maximum volume size.

Starting points for more info:
http://en.wikipedia.org/wiki/Ext3#Size_limits
http://en.wikipedia.org/wiki/Btrfs
http://en.wikipedia.org/wiki/Ntfs#Scalability

Virtual Devices

As mentioned in the comments recently, there are extra considerations for virtual devices:

  • For sparsely allocated virtual disks other methods such as those used by zerofree will be faster (though unlike cat and dd this is not a standard tool that you can rely on being available in pretty much any unix-a-like OS).

  • Be aware that zeroing a block on a sparse virtual device may not wipe the block on the underlying physical device; in fact I would go as far as to say that it is unlikely to - the virtual disk manager will just mark the block as no longer used so it can be allocated to something else later.

  • Even for fixed-size virtual devices, you may have no control over where the device lives physically, so it could be moved within its current storage or onto a new set of physical disks at any time; the most you can wipe is the current location, not any previous locations the blocks may have occupied in the past.

  • For the above problems on virtual devices: unless you control the host(s) and can securely wipe their unallocated space after wiping the disks in the VM or after the virtual device has been moved around, there is nothing you can do about this after the fact. The only recourse is to use full disk encryption from the start so that nothing unencrypted is ever written to the physical media in the first place. There may still be call for a free-space wipe within the VM, of course. Note also that FDE can make sparse virtual devices much less useful, as the virtualisation layer can't really see which blocks are unused. If the OS's filesystem layer sends trim commands to the virtual device (as if it were an SSD), and the virtual controller interprets these, then that may solve this (see the fstrim sketch after this list), but I don't know of any circumstances where this actually happens, and a wider discussion of that is a matter for elsewhere (we are already getting close to being off topic for the original question, so if this has piqued your interest some experimentation and/or follow-up questions may be in order).
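
If you do want to experiment with the trim route, the usual tool inside a guest is fstrim from util-linux; whether the discards ever reach (or help on) the host is entirely hypervisor-dependent, so treat this as a sketch rather than a guarantee:

sudo fstrim -v /        # ask the filesystem to discard its unused blocks
sudo fstrim -v /home    # repeat for each mounted filesystem of interest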

David Spillett

Posted 2009-08-06T23:48:03.757

Reputation: 22 424

I would insert sync before the rm commands in your examples. Without sync a lot of data might be deleted directly from the disk cache in RAM before being actually written to the hard drive. – pabouk – 2016-04-29T08:56:13.397

@pabouk: good suggestion, I've updated the examples accordingly. – David Spillett – 2016-04-29T11:04:58.890

zerofree documentation specifically says it is faster than this method, so this is not "the quickest way". – endolith – 2016-10-04T16:45:13.843

2@endolith: from the description in the man page I'd expect zerofree's variant is only quicker for sparsely allocated virtual disks, in fact it might be slower on real or fixed-size-virtual ones if it is doing a read-before-write to confirm that the block has no content. The ballooning for a virtual disk shouldn't happen either as most sparse disk drivers take all-zeros as "don't allocate this block". Also, cat and dd are available on pretty much any unix-a-like OS as they are considered standard tools where zerofree probably isn't unless it has been explicitly added. – David Spillett – 2016-10-05T09:46:23.840

1@endolith: having said the above, zerofree would certainly work of course. The "entire file system temporarily full" thing mentioned in the man page (almost but not quite mitigated by the small.file jiggery pokery in my examples) is a genuine concern if you are doing this on a currently active system, and zerofree would indeed be faster in the specific instance it is optimised for: sparsely allocated virtual block devices. Though you can't rely on any wipe on a virtual device for security purposes: the only true answer in that case is to full-device-encrypt from the start. – David Spillett – 2016-10-05T09:52:51.513

4The simple zeroing can apparently also be done with the secure-delete tools: using sfill -llz reduces the whole procedure to one pass which only writes '0's. – foraidt – 2010-10-03T14:38:35.063

I’m all in favor of paranoia and general, portable solutions.  But it seems to me that (1) if you have fairly little free space, you gain fairly little by the two-stage (small and then everything else) approach, and (2) if you have a lot of free space, it’s *probably* safe to remove the small file before the sync.  The sync command should block until all the data are written, so, if you do that first, you still have to wait for a while before you start getting free blocks available again. … (Cont’d) – Scott – 2018-11-17T21:55:04.387

(Cont’d) …  But, if the second cat (creating the large zero.file) takes a long time (and writes a lot of blocks), then there’s a decent chance that all the blocks that dd wrote to zero.small.file will actually have been written to the disk by the time the cat finishes.  No guarantees, of course; if you’re paranoid, be paranoid. … (Cont’d) – Scott – 2018-11-17T21:55:06.720

(Cont’d) … (3) I question how much the two-stage approach really benefits you. If the free space is fragmented, then writing the file will take a long time; but deleting the fragmented file that results shouldn’t take a long time. The kernel doesn’t need to reread the data blocks; all it needs to do is read the *indirect* blocks, and it can put the direct blocks onto the free list in memory. – Scott – 2018-11-17T21:55:08.760

P.S. For those of us who live in the US, the NSA shouldn’t really be a concern — it’s the big bad bogeyman of spy thrillers. Americans need to worry about the FBI, which probably has access to technology as good as what the NSA has. Or, in your case, Scotland Yard and/or MI5. – Scott – 2018-11-17T21:56:15.303

This takes a while. Is it really the quickest way? I guess writing GB of data will always take a while... – endolith – 2011-06-15T02:51:00.757

2@endolith: if you want to blank the free space on an active filesystem then you can't get around the need to write that much data via the filesystem overhead. The secure-delete tools suggested by fnord_ix may be faster, because they are optimised for this type of task. – David Spillett – 2011-06-15T12:04:16.247

Why 'dd' and not 'pv'? – pbies – 2013-12-05T13:53:30.947

@pmbiesiada: where dd is being used above it is only going to be running for a very short time, so the progress indicator of pv is of little use. Also pv is not as universally available as dd and cat. You could replace the instances of cat with pv, though it will only tell you what it has done, not what it has left to do or an ETA. shred has an option to display %done as it progresses too, if you want to monitor the process more closely than just setting it off and waiting for it to finish. – David Spillett – 2013-12-05T17:47:07.887

Is there any concern about running up against a maximum file size limit? Could zero.file hit such a limit before all free space on the drive has been used up? – Dan Moulding – 2014-03-17T17:25:44.307

@DanMoulding: that would depend on the filesystem you are using. For FAT32 that would definitely be a concern (2GiB file limit, 8TiB volume size limit), with ext2/3/4 less so: with the default/common 4K block the file size limit is 2TiB so you'd have to have a huge volume for this to be an issue (the maximum volume size under these conditions is 16TiB). Under NTFS the max file length is larger than max volume length in some cases even. See http://en.wikipedia.org/wiki/Ext3#Size_limits and http://en.wikipedia.org/wiki/Ntfs#Scalability amongst other references.

– David Spillett – 2014-03-18T12:04:08.787

47

WARNING

I was shocked by how many files photorec could retrieve from my disk, even after wiping.

Whether there is more security in filling the "free space" only 1 time with 0x00 or 38 times with different cabalistic patterns is more of an academic discussion. The author of the seminal 1996 paper on shredding himself wrote an epilogue saying that this is obsolete and unnecessary for modern hardware. There is no documented case of data being physically overwritten with zeroes and recovered afterwards.

The truly fragile link in this procedure is the filesystem. Some filesystems reserve space for special use, and it is not made available as "free space". But your data may be there. That includes photos, personal plain-text emails, whatever. I have just googled reserved+space+ext4 and learned that 5% of my home partition was reserved. I guess this is where photorec found so much of my stuff. Conclusion: the shredding method is not the most important thing; even the multi-pass method can still leave data in place.

You can try # tune2fs -m 0 /dev/sdn0 before mounting it. (If this will be the root partition after rebooting, make sure to run -m 5 or -m 1 after unmounting it).

But still, one way or another, there may be some space left.

The only truly safe way is to wipe the whole partition, create a filesystem again, and then restore your files from a backup.


Fast way (recommended)

Run from a directory on the filesystem you want to wipe:

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
dd if=/dev/zero of=zero.file bs=1024
sync ; sleep 60 ; sync
rm zero.small.file
rm zero.file

Notes: the purpose of the small file is to reduce the time when free space is completely zero; the purpose of sync is to make sure the data is actually written.

This should be good enough for most people.

Slow way (paranoid)

There is no documented case of data being recovered after the above cleaning. It would be expensive and resource demanding, if possible at all.

Yet, if you have a reason to think that secret agencies would spend a lot of resources to recover your files, this should be enough:

dd if=/dev/urandom of=random.small.file bs=1024 count=102400
dd if=/dev/urandom of=random.file bs=1024
sync ; sleep 60 ; sync
rm random.small.file
rm random.file

It takes much longer.

Warning. If you have chosen the paranoid way, after this you would still want to do the fast wipe, and that's not paranoia. The presence of purely random data is easy and cheap to detect, and raises the suspicion that it is actually encrypted data. You may die under torture for not revealing the decryption key.

Very slow way (crazy paranoid)

Even the author of the seminal 1996 paper on shredding wrote an epilogue saying that this is obsolete and unnecessary for modern hardware.

But if you have a lot of free time and you don't mind wearing out your disk with a lot of overwriting, here it goes:

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
sync ; sleep 60 ; sync
shred -z zero.small.file
dd if=/dev/zero of=zero.file bs=1024
sync ; sleep 60 ; sync
rm zero.small.file
shred -z zero.file
sync ; sleep 60 ; sync
rm zero.file

Note: this is essentially equivalent to using the secure-delete tool.


Before the edit, this post was a rewrite of David Spillett's. The "cat" command produces an error message, but I can't write comments on other people's posts.

user39559

Posted 2009-08-06T23:48:03.757

Reputation: 1 783

/: write failed, filesystem is full on FreeBSD – Alex G – 2015-12-17T04:15:27.270

2Root is always able to use the reserved space. So if you do your zero-fill as root, you will be able to fill up the 5% reserved space as well; the tunefs is unnecessary. It is still conceivable that there could be data in other parts of the filesystem. – Nate Eldredge – 2016-07-29T20:32:26.090

You can comment under other people's posts once you have 50 reputation.

– Gnoupi – 2010-08-18T09:44:03.763

1@NateEldredge Do you have any source that would indicate that dd run as root gives access to more of the filesystem than dd without root? I want to believe this is true, but can see no reason to at the moment. – Hashim – 2019-01-24T01:42:56.070

1The cat command is expected to give a "no space left" error in my examples, at the end of its run. You can hide this by redirecting stderr to /dev/null if it is a problem. I usually use pv rather than cat or dd for this sort of thing, in order to get the useful progress indication. – David Spillett – 2011-06-15T12:09:37.900

4...raises the suspicion that it is actually encrypted data. You may die under torture for not revealing the decryption key. Heh, that's exactly what I was thinking. I guess that means I am paranoid... – Navin – 2013-12-14T23:37:57.930

27

There is zerofree utility at least in Ubuntu:

http://manpages.ubuntu.com/manpages/natty/man8/zerofree.8.html

   zerofree — zero free blocks from ext2/3 file-systems

   zerofree  finds  the  unallocated, non-zeroed blocks in an ext2 or ext3
   filesystem (e.g. /dev/hda1) and fills them with zeroes. This is  useful
   if  the  device  on  which this file-system resides is a disk image. In
   this case, depending on the type of disk image, a secondary utility may
   be  able  to  reduce the size of the disk image after zerofree has been
   run.

   The usual way to achieve  the  same  result  (zeroing  the  unallocated
   blocks)  is to run dd (1) to create a file full of zeroes that takes up
   the entire free space on the drive, and then delete this file. This has
   many disadvantages, which zerofree alleviates:

      ·  it is slow;

      ·  it makes the disk image (temporarily) grow to its maximal extent;

      ·  it  (temporarily)  uses  all  free  space  on  the disk, so other
         concurrent write actions may fail.

   filesystem has to be unmounted or mounted  read-only  for  zerofree  to
   work.  It  will exit with an error message if the filesystem is mounted
   writable. To remount the  root  file-system  readonly,  you  can  first
   switch to single user runlevel (telinit 1) then use mount -o remount,ro
   filesystem.

Also check this link about zerofree: Keeping filesystem images sparse - it is from its author - Ron Yorston (9th August 2012)
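
A minimal usage sketch, with /dev/sdXN standing in for the ext2/3/4 partition to be zeroed (it must not be mounted read-write at the time):

sudo umount /dev/sdXN          # or: sudo mount -o remount,ro /mountpoint
sudo zerofree -v /dev/sdXN     # -v prints progress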

osgx

Posted 2009-08-06T23:48:03.757

Reputation: 5 419

3It is important that the filesystem is unmounted or mounted read-only for zerofree to work. – AntonioK – 2016-08-25T12:24:23.533

1It would be nice to include some information on how to do this on the root file system. My feeling is that this won't work, because you'd have to unmount the file system while simultaneously running the tool from said file system. – Ant6n – 2017-03-16T20:30:43.823

This also comes with CentOS – davidgo – 2018-11-28T04:49:05.547

3

Here's how to do it with a GUI.

  1. Install BleachBit
  2. Run as root by clicking Applications - System Tools - BleachBit as Administrator.
  3. In the preferences, tell it which paths you want. Generally it guesses them well. You want to include one writeable path for each partition. Generally that is /home/username and /tmp, unless they are the same partition, in which case just pick one.
  4. Check the box System - Wipe Free Disk Space.
  5. Click Delete.

The advantage of BleachBit over dd (which is otherwise very nice) is that when the disk is finally full, BleachBit creates small files to wipe the inodes (which contain metadata like filenames, etc.).

Andrew Z

Posted 2009-08-06T23:48:03.757

Reputation:

Inspect BleachBit's open-source Python code for wiping free space from a drive for yourself. – shadowbq – 2013-01-14T15:18:26.067

3

Wipe a drive at top speed.

Typical instructions for encrypting a drive nowadays will tell you to first WIPE the drive.

The command below will fill your drive with AES ciphertext.

Use a live CD if you need to wipe your main boot drive.

Open a terminal and elevate your privileges:

sudo bash

Let us list all drives on the system to be safe:

cat /proc/partitions

NOTE: Replace /dev/sd{x} with the device you wish to wipe.

WARNING: This is not for amateurs! You could make your system unbootable!!!

sudo openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > /dev/sd{x}

I am stunned at how fast this is.
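
Not part of the recipe above, but a cheap spot-check afterwards is to dump the first bytes of the device and confirm nothing recognisable (partition table, filesystem signature) remains; this assumes xxd is installed:

sudo head -c 512 /dev/sd{x} | xxd | head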

Roger Lawhorn

Posted 2009-08-06T23:48:03.757

Reputation: 131

2

I use dd to allocate one or more big files to fill up the free space, then use a secure deletion utility.

To allocate files with dd try:

dd if=/dev/zero of=delete_me bs=1024 count=102400

This will generate a file named delete_me that is 100 MB in size. (Here bs is the "block size" set to 1k, and count is the number of blocks to allocate.)

Then use your favorite secure deletion utility (I've been using shred) on the files so created.
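
For the shred step, a hedged example on the filler file created above; -n sets the number of overwrite passes and -u removes the file when done:

shred -v -n 1 -u delete_me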

But NOTE THIS: buffering means even if you do the whole disk, you may not get absolutely everything!


This link recommends scrub for free space wiping. Haven't tried it.

dmckee --- ex-moderator kitten

Posted 2009-08-06T23:48:03.757

Reputation: 7 311

Oh, if memory serves me, I tried scrub once and it corrupted the whole file-system. Fortunately I had the good sense of first experimenting on a testing file-system, NOT on my real data. – landroni – 2014-08-30T16:33:48.837

2

You can wipe your free space using the secure-delete package.

That package provides the sfill tool, which is designed to delete data lying in the available (free) disk space on media in a secure manner, so that it cannot be recovered by thieves, law enforcement or other threats.

To install the secure-delete package on Linux (Ubuntu), run the following command:

$ sudo apt-get install secure-delete

Then, to wipe the free space, try the following command:

sfill -f -v -ll /YOUR_MOUNTPOINT/OR_DIRECTORY

Where /YOUR_MOUNTPOINT/OR_DIRECTORY is your mount point (df -h, mount) or directory to wipe the free space.

Read the manual at http://manpages.ubuntu.com/manpages/hardy/man1/sfill.1.html

kenorb

Posted 2009-08-06T23:48:03.757

Reputation: 16 795

1

You probably already have the GNU coreutils package installed on your system. It provides the command shred.

dkaylor

Posted 2009-08-06T23:48:03.757

Reputation: 35

5Shred won't clean up unused disk space without making it into files first... – dmckee --- ex-moderator kitten – 2009-08-07T15:22:24.877

1

An easier option is to use scrub:

scrub -X dump

This will create a dump folder in the current location and create files until the disk is full. You can choose a pattern with the -p option (nnsa|dod|bsi|old|fastold|gutmann).

It's not easy to get scrub installed (see the Ubuntu Forums on this), but once the installation is done, you have a really SIMPLE and efficient tool in your hands.

FMaz008

Posted 2009-08-06T23:48:03.757

Reputation: 203

If memory serves me, I tried scrub once and it corrupted the whole file-system. Fortunately I had the good sense of first experimenting on a testing file-system, NOT on my real data. – landroni – 2014-08-30T16:35:27.877

Don't know what you did or what happened, but scrub basically creates new files until it fills the filesystem. It does not touch existing files, nor does it delete any of them (at least not with the command I gave)... – FMaz008 – 2014-08-31T19:33:34.000

1Indeed. Tried scrub -X dump_dir and it seems to have worked nicely. BTW, installing on Ubuntu 14.04 is very straightforward: apt-get install scrub. – landroni – 2014-09-12T21:02:12.107

1

Use dd and just zero out the free space. It is a myth that data needs to be overwritten multiple times (just ask Peter Gutmann), and random data, as opposed to 1's then 0's, implies unnatural activity. The end result is a clean drive with far less time spent writing. Besides, secure deletion programs can't guarantee they even overwrite the real file on modern (journaled) filesystems. Do yourself a favor and get photorec, scan your drive to see the mess, wipe it with 1's and optionally with zeroes to make it look untouched. If photorec still finds stuff, remember it is scanning everything available, so do this carefully again as the root user.

Remember, the CIA/FBI/NSA doesn't have a fancy machine that can read the actual state of your magnetic media bits. That was all just a paper written a long time ago, a "what-if". You only need to wipe one time.
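
In that spirit, a single-pass zero fill of the free space on a mounted filesystem might look like this (status=progress needs a reasonably recent GNU dd; drop it otherwise):

dd if=/dev/zero of=wipe.fill bs=1M status=progress
sync
rm wipe.fill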

fred

Posted 2009-08-06T23:48:03.757

Reputation: 11

1There are few interesting things you've said, but do you actually have any sources to back this information? It's hard to believe that all that overwriting is useless. Also, please improve your post, it's hard to read with punctuation like that. – gronostaj – 2013-05-25T21:14:21.740

@gronostaj: The "it is a myth data needs to be over written multiple times" claim for modern drives at least has been proven by multiple studies. All those 30+ passes recommended by Gutmann are no longer required, as acknowledged by the author himself. – Karan – 2013-05-25T23:47:02.913

1

Here is the "sdelete.sh" script that I use. See comments for details.

# Install the secure-delete package (sfill command).

# To see progress type in new terminal:
# watch -n 1 df -hm

# Assuming that there is one partition (/dev/sda1). sfill writes to /.
# The second pass writes in current directory and synchronizes data.
# If you have a swap partition then disable it by editing /etc/fstab
# and use "sswap" or similar to wipe it out.

# Some filesystems such as ext4 reserve 5% of disk space
# for special use, for example for the /home directory.
# In such case sfill won't wipe out that free space. You
# can remove that reserved space with the tune2fs command.
# See http://superuser.com/a/150757
# and https://www.google.com/search?q=reserved+space+ext4+sfill

sudo tune2fs -m 0 /dev/sda1
sudo tune2fs -l /dev/sda1 | grep 'Reserved block count'

sudo sfill -vfllz /

# sfill with the -f (fast) option won't synchronize the data to
# make sure that all was actually written. Without the fast option
# it is way too slow, so doing another pass in some other way with
# synchronization. Unfortunately this does not seem to be perfect,
# as I've watched free space by running the "watch -n 1 df -hm"
# command and I could see that there was still some available space
# left (tested on a SSD drive).

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
dd if=/dev/zero of=zero.file bs=1024
sync ; sleep 60 ; sync
rm zero.small.file
rm zero.file

sudo tune2fs -m 5 /dev/sda1
sudo tune2fs -l /dev/sda1 | grep 'Reserved block count'

Czarek Tomczak

Posted 2009-08-06T23:48:03.757

Reputation: 131

1

I found a simple solution that works on Linux and on macOS. Change to the root folder of your disk and launch this command:

for i in $(seq 1 //DISKSPACE//); do dd if=/dev/zero of=emptyfile${i} bs=1024 count=1048576; done; rm emptyfile*;

where //DISKSPACE// is the size in GB of your hard disk.
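
If you would rather not look the size up by hand, something like this (GNU df assumed) prints the free gigabytes of the current filesystem, which can stand in for //DISKSPACE//:

df -BG --output=avail . | tail -1 | tr -dc '0-9'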

Enrico

Posted 2009-08-06T23:48:03.757

Reputation: 11

0

This is not an answer! Just a comment for those wishing to use pv...so don't bother voting.

On Linux Mint 17.3 you can use pv (pipe view) to get progress of the writing. For example:

# Install pv (pipe view)
sudo apt-get install pv

# Write a huge file of approximately the size of /dev/sda, using urandom data:
pv --timer --average-rate --progress --numeric --eta --interval 5 --size "$(blockdev --getsize64 /dev/sda )" /dev/urandom >rand.file

The advantage here is that you get a progress bar, ETA and continuously updated data rate. The disadvantage is that this is written on one line and when the disk is full (returning an error) it disappears. This happens because the full size is approximate since the OS will likely use the disk while this very long operation is taking place, especially on the OS volume.

On a very old HD, I get a data rate about 13 MB/s using /dev/urandom, and about 70 MB/s, when using /dev/zero. This would probably improve further when using a raw dd or cat, and not pv.

not2qubit

Posted 2009-08-06T23:48:03.757

Reputation: 1 234

0

I sometimes use this bash one-liner:

while :; do cat /dev/zero > zero.$RANDOM; done

When it starts saying that the disk is full, just press Ctrl+C and remove the created zero.* files.

It works on any system, whatever the file size limits.
Ignore any cat: write error: File too large errors.

Nicolas Raoul

Posted 2009-08-06T23:48:03.757

Reputation: 7 766

-13

Once the file is gone from the filesystem's records, the data left on the hard disk is a meaningless sequence of 1's and 0's. If you are looking to replace that meaningless sequence with another meaningless sequence, I can recommend some commercial products for safely erasing drives, like Acronis.

Ilya Biryukov

Posted 2009-08-06T23:48:03.757

Reputation: 87

23Contiguous chunks of former file contents still remain on disk, and are far from meaningless if raw disk data is examined directly. – Alex B – 2009-08-07T00:04:14.687