0

If I "shred" (this is a Linux utility/command) an SSD, can the data still be recovered?

N73k
    Possibly: https://superuser.com/a/856491 – Paul Nov 14 '18 at 01:19
  • 1
    Don't do that. You won't destroy all the data and you will needlessly reduce the lifetime of the SSD. Use secure erase. – Michael Hampton Nov 14 '18 at 20:56
  • On drives where the firmware implements it properly: use [`hdparm --security-erase`](https://linux.die.net/man/8/hdparm) which should erase all data (also data in otherwise inaccessible wear level blocks) with the ATA security feature "Secure Erase". - As far as I know multipass overwrite of an SSD won't destroy 100% of all data (but will still destroy *most* data ...) : See https://serverfault.com/a/637226/37681 – HBruijn Dec 04 '18 at 11:41

2 Answers

5

The answer is "maybe", but it's best to assume "yes".

When you overwrite an existing file on a solid-state drive, the wear-leveling logic in the drive generally doesn't overwrite the exact physical location where the old data was. This is because any given location on a solid-state drive can only be written a limited number of times before the cells holding it wear out. So, instead of overwriting the old location, the wear-leveling logic looks for a spot on the drive with less wear on it and maps that into the location you are overwriting. The old, overwritten data is itself remapped somewhere that isn't being used.
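The remapping can be sketched with a toy model (purely illustrative: the block names and values here are invented, and a real flash translation layer lives inside the drive's firmware, not in the host):

```shell
# Toy sketch (not a real flash translation layer): two "physical"
# blocks and one "logical" block, modeled with shell variables.
phys_p0="SECRET"   # physical block p0 holds the file's original data
map_L1="p0"        # logical block L1 currently maps to p0

# "Overwrite" L1: wear leveling writes to the fresher block p1 and
# remaps the logical block, leaving p0 untouched.
phys_p1="RANDOM"
map_L1="p1"

eval "current=\$phys_$map_L1"
echo "L1 now reads: $current"      # RANDOM
echo "p0 still holds: $phys_p0"    # SECRET, recoverable from raw flash
```

The host only ever sees the logical view, so from the operating system's perspective the overwrite succeeded, while the old cell still holds its contents.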

This is a bit of an over-simplification. The remapping doesn't always happen, and different SSD devices handle it differently, depending in part on whether the operating system is aware that it's dealing with a solid-state drive. But you can never safely assume that overwriting any single file will destroy that file's data on any SSD, and this includes MicroSD cards, USB sticks, and other portable solid-state devices.

The best way to erase the most old "deleted" data is to fill all of the drive's unused space with random data. If you fill the entire amount of unused space on the drive, you have the best chance of overwriting everything that was previously deleted. This has two caveats, though. The first is that you will cause a lot of wear on your drive: you are reducing by one the number of times every cell on the drive can be written.

The second caveat is that you won't necessarily remove ALL old data this way. Many solid-state drives, including most of the better ones, have some extra space that isn't exposed to the user. This extra space is reserved to be remapped into places where cells are almost worn out. When it's mapped in, the old cell is mapped out permanently, but it still holds whatever was in it before. Getting at that data is difficult, but often not impossible: many manufacturers have proprietary methods of seeing what is remapped where, and through those it can be possible to reach the old data. The odds of a cell being taken out of service right after it held sensitive data may be low, and it depends on how much sensitive data is on your drive, but the chance is not zero.

There are two general methods for overwriting old deleted data on the drive. You can do it non-destructively (to the filesystem and current files on the drive) by simply creating a file that grows to fill up all empty space on the filesystem. You can do it destructively by filling that drive's block device from stem to stern with random data.

Non-Destructive: In Linux you can do the non-destructive method like this:

$ dd if=/dev/urandom of=random.bin bs=1M

You may need to do this as root in order to also overwrite reserved portions of the filesystem. This has the effect of exhausting all free space on the filesystem, so if you do it on the root filesystem there may be ramifications when things like syslog can't write logs any more. This generally isn't catastrophic, though, and if you delete the random.bin file promptly afterward and then reboot, you are usually good to go. A more cautious approach is to first boot from a live CD/DVD/USB, then mount your SSD filesystems and do the above in each of them. If you have more than one filesystem on the SSD, you need to do this on all of them before you delete random.bin from any of them.

Destructive: This is easier and has the benefit of being the most complete method available short of physically destroying the drive. Boot off a live CD/DVD/USB, then, once you have determined which device node corresponds to your SSD, run:

$ dd if=/dev/urandom of=/dev/sda bs=1M

Where /dev/sda is replaced with the device of your SSD. You will have to recreate all your partitions after doing this. When you do, make sure they are properly aligned to the device's internal page and erase-block size; most modern partitioning tools handle this by default, but it is worth checking.
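As a sketch of what alignment involves (the numbers here are illustrative): partitions are typically aligned to 1 MiB boundaries, i.e. multiples of 2048 512-byte sectors, which covers common SSD page and erase-block sizes. Rounding a candidate start sector up to the next such boundary is simple arithmetic:

```shell
# Round a candidate start sector up to the next 1 MiB boundary.
# Values are illustrative; 34 is the first usable sector on many GPT disks.
desired_start=34
align=2048   # 1 MiB expressed in 512-byte sectors
aligned_start=$(( (desired_start + align - 1) / align * align ))
echo "$aligned_start"   # 2048
```

Tools like parted with `-a optimal`, or recent versions of fdisk, perform this rounding for you.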

Encrypting a drive in place and then destroying the key has been proposed as an alternative to erasing the drive. This is at best no better than the destructive method above, and at worst can allow recovering almost the entire contents of your drive. Among the possible issues: some encryption products don't overwrite free space with random data by default (or have a way of turning that off), and they don't necessarily overwrite slack space in between existing partitions or at the end of the drive.

In short, there is no 100% reliable way to ensure sensitive data is overwritten on any SSD. The only way to ensure no sensitive data can be extracted from a solid-state drive is to physically destroy it. And by that, I mean making sure the actual silicon wafer in the middle of each memory chip is fractured. There are drive shredders that are designed to do just that. Burning at high temperature is also effective.

It must be noted that almost all of this pain can be circumvented (or at least mitigated) by prevention. Using a whole-disk-encryption system from the very beginning, so that no unencrypted data is ever written to the drive, is by far the best way of keeping your data secure. Even so, at the end of the drive's life I would still recommend either physical destruction or the destructive method above.

Kurt Fitzner
  • 1
    What about encryption? – Appleoddity Nov 14 '18 at 05:03
  • 1
    Encrypt and lose the key should be an effective scramble. The trick is to destroy any key, possibly using secure erase commands if supported by the hardware. And perhaps more difficult, convincing compliance and legal that this is sufficient. – John Mahowald Nov 14 '18 at 13:23
  • Encryption and key destruction seems the acceptable way to make cloud data inaccessible. – mdpc Nov 14 '18 at 20:17
  • Encryption is no different from filling the drive with random data. There still can potentially be un-overwritten data in sectors that have been taken out of service. Also, some encryption products don't (or give the option not to) overwrite space that is unused when the drive is encrypted. – Kurt Fitzner Nov 15 '18 at 16:02
  • 1
    Many enterprise class SSDs have up to 50% extra flash to improve wear leveling and performance of the drive. You really cannot overwrite just the logical device space and assume that everything is gone. As I see it, unless you *trust* the secure erase command (which will remove all device data visible through standard interfaces) to also erase *all data from extra flash chips*, you cannot fully erase sensitive data from an SSD. Remember that SSD manufacturers have already made mistakes: https://www.us-cert.gov/ncas/current-activity/2018/11/06/Self-Encrypting-Solid-State-Drive-Vulnerabilities – Mikko Rantalainen Nov 16 '18 at 06:58
  • @Kurt - I just want to check... in your first example, is this correct "of=random.bin" ? So, nothing about /dev/sda there? – N73k Nov 16 '18 at 18:10
  • @N73k - Correct. The first example you aren't writing directly to the drive. You are just making a normal file on the filesystem. That file is intended to grow to be as big as all available space on the filesystem, which will cause all unused blocks to get overwritten. – Kurt Fitzner Nov 21 '18 at 00:08
1

All newer SSDs support low-level "trim" functionality. Blocks specified in the low-level TRIM command are marked for garbage collection by the SSD, making the deleted content inaccessible and not recoverable.

Windows (probably 7 and above) will automatically issue TRIM for deleted file space on an installed SSD. I believe this can be disabled using a registry key.

In Linux, unlike Windows, you must mount the filesystem with the special discard option to exercise the "trim" functionality on SSD drives. Alternatively, TRIM can be run periodically via the fstrim command in conjunction with cron.
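For example, a root crontab entry along these lines would run fstrim weekly (the mount points, paths, and schedule here are illustrative, and many current distributions instead ship an equivalent systemd fstrim.timer unit):

```shell
# Illustrative root crontab entry: trim / and /home every Sunday at 03:00
0 3 * * 0  /sbin/fstrim -v / ; /sbin/fstrim -v /home
```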

If the question is how to make deleted space not recoverable, then using "trim" functionality on SSDs seems to be a start.

mdpc
  • Negative! A TRIM is an administrative command and shouldn't be considered a security command. While DRAT (Deterministic Read after Trim) drives will generally return a zero for blocks that are TRIMmed, these blocks are not normally actually erased with the TRIM command. Both [chip-off and non-chip off](https://articles.forensicfocus.com/2014/09/23/recovering-evidence-from-ssd-drives-in-2014-understanding-trim-garbage-collection-and-exclusions/) solutions exist for reading data in a TRIMmed block after TRIM. – Kurt Fitzner Nov 16 '18 at 20:24
  • According to Wikipedia it is extremely difficult to restore the data. In fact, wiki provides this reference: "Too TRIM? When SSD Data Recovery is Impossible". TechGage. TechGage. 2010-03-05. Retrieved 2018-08-21. (https://techgage.com/article/too_trim_when_ssd_data_recovery_is_impossible/) – mdpc Nov 17 '18 at 00:47
  • @kurt Fitzner - I'd say it depends (as everything) how good the trim algorithm is implemented by the specific drive manufacturer. If implemented properly it should be ZEROing the blocks during the "garbage collect". – mdpc Nov 17 '18 at 00:56
  • Some drives physically erase the flash block on trim. Most just mark the block as zeroed in its table and then for normal reads from that block just programmatically provide zeroes (soft zeroing). I've seen many explanations for soft zeroing. Some are honest and admit explicitly its to improve data recoverability, as there are proprietary ways to get at the old data in those blocks. I know of at least one case where the manufacturer said the device hard erased but it was actually soft zeroing. Best to just not trust TRIM for data safety. – Kurt Fitzner Nov 18 '18 at 02:16