
I need to decommission two SSD disks from one of my Linux hosted servers.

In order to safely delete the data stored on the disks, I was planning to use hdparm --security-erase.

I read this document and it suggested not having any disks connected to the host, other than the ones intended for deletion.

And this article points out that if there are kernel or firmware bugs, this procedure might render the drive unusable or crash the computer it's running on.

This server is currently in production, with a software RAID configuration for production disks. There is no RAID controller for the disks I need to remove.

Question:

Is this a reasonably safe operation to perform in a production environment, or would I be better served by removing the disks and performing the procedure on another host?

Edit: just adding a link to a nicely documented procedure.
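For reference, the procedure boils down to something like the following (a sketch, with /dev/sdX standing in for the actual device and "Eins" as a throwaway password that the erase itself clears):

    # 1. Verify the drive supports the security feature set and is "not frozen";
    #    a frozen drive rejects the erase until a power cycle or suspend/resume
    hdparm -I /dev/sdX

    # 2. Set a temporary user password (the erase command requires one)
    hdparm --user-master u --security-set-pass Eins /dev/sdX

    # 3. Issue the erase; on completion the password is cleared along with the data
    hdparm --user-master u --security-erase Eins /dev/sdX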

Matías

3 Answers


ATA Secure Erase is part of the ATA ANSI specification and, when implemented correctly, wipes the entire contents of a drive at the hardware level instead of through software tools. Software tools overwrite data on hard drives and SSDs, often through multiple passes; the problem with SSDs is that such software overwriting tools cannot access all the storage areas on an SSD, leaving behind blocks of data in the service regions of the drive (examples: bad blocks, reserved wear-leveling blocks, etc.).

When an ATA Secure Erase (SE) command is issued against an SSD's built-in controller that properly supports it, the SSD controller resets all its storage cells as empty (releasing stored electrons) - thus restoring the SSD to factory default settings and write performance. When properly implemented, SE will process all storage regions, including the protected service regions of the media.

Liberally copied from http://www.kingston.com/us/community/articledetail?ArticleId=10 [via archive.org], emphasis mine.
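As a practical aside, you can check whether a given drive advertises the security feature set at all, and whether it is currently "frozen" (in which case the erase commands are rejected until a power cycle or suspend/resume), with hdparm. A sketch, with /dev/sdX standing in for your device:

    # The "Security:" section of the identify output shows support and state
    hdparm -I /dev/sdX

    # Abridged example of what to look for:
    #   Security:
    #           supported
    #           not     enabled
    #           not     frozen
    #           supported: enhanced erase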

The problem is that, according to some, both support for and proper implementation of ATA Secure Erase by the manufacturers are "lacking".

This research paper from 2011 shows that on half of the SSDs tested, ATA Secure Erase failed to effectively destroy the data on the drive.

In that same research paper, testing showed, perhaps surprisingly to some, that traditional multi-pass overwrites of the SSD were actually mostly successful, although some data (possibly from those reserved areas of an SSD that are outside the disk's reported size) could still be recovered.
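If you do run a secure erase, a crude after-the-fact sanity check is to read back a few stretches of the user-addressable space and confirm that nothing recognisable comes back. To be clear, this is only a spot check of the normally addressable blocks; it proves nothing about the service regions that paper is concerned with. A rough sketch, assuming /dev/sdX and adjusting the skip offset to the drive's size:

    # Dump the first 16 MiB and a region ~100 GiB in; an erased drive should
    # show only zeroes (or 0xFF on some models), which hexdump collapses to
    # a single line followed by '*'
    dd if=/dev/sdX bs=1M count=16 2>/dev/null | hexdump -C | head
    dd if=/dev/sdX bs=1M count=16 skip=102400 2>/dev/null | hexdump -C | head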

So the short answer is: using software to sanitize a whole SSD may or may not be 100% effective.
It may still be sufficient for your requirements though.

Second, doing this on a server that is running production: my impression is that most manuals advise booting from a rescue disk to wipe disks for the simple reason that using software to wipe your boot/OS disk will fail miserably, and most laptops and PCs have only a single disk.
The universal risks of executing potentially (or rather, intentionally) destructive commands on production systems apply as well, of course.

Encrypting your drives will make (partial) recovery of data from disposed disks (SSDs or the spinning kind) much less likely, as long as the whole drive was encrypted and you didn't have an unencrypted (swap) partition on it, of course.
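A sketch of that approach on Linux, assuming LUKS (dm-crypt) and /dev/sdX: if the drive only ever held a LUKS container, disposal largely reduces to destroying the key material, with the usual SSD caveat that wear levelling may keep stale copies of the header around:

    # At deployment time: encrypt the whole block device, leaving no
    # plaintext partitions (including swap) on the drive
    cryptsetup luksFormat /dev/sdX

    # At disposal time: wipe every key slot in the LUKS header; without a
    # header backup, the remaining ciphertext is then useless
    cryptsetup luksErase /dev/sdX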

Otherwise, there's always the shredder.

HBruijn

Fundamentally - because of the way SSDs work - it's impossible to 'securely wipe' one. This is especially true for enterprise drives - most of them are bigger than they appear in the first place, because there's 'spare' capacity in them for wear-levelling purposes.

That same wear levelling means 'overwrite' style erasure doesn't do what you think it does either.

At a pretty fundamental level, it depends on what risk you're concerned about:

  • if you just want to 'clean up' and redeploy hardware within your estate: format and be done with it (see the sketch below).
  • if you're worried about a malicious, well resourced opponent acquiring sensitive material: Don't bother with wiping, destroy physically*.

(*) where by 'destroy physically' I mean shred, incinerate and audit. Resist the temptation to DIY - it's not as much fun on SSDs anyway.
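For the first case, on Linux the quick 'format and be done with it' on an SSD can be a full-device discard; a sketch, assuming /dev/sdX, and noting that whether discarded blocks actually read back empty is up to the drive:

    # Tell the SSD controller that every block is unused; fast and fine for
    # internal redeployment, but NOT a defence against forensic recovery
    blkdiscard /dev/sdX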

Sobrique
  • -1, there is no reason to expect that the disk vendor's ATA Secure Erase implementation does not actually erase *all* blocks. – nobody Oct 15 '14 at 15:49
  • +1 from me because yes, there is. See, e.g., http://cseweb.ucsd.edu/~m3wei/assets/pdf/FMS-2010-Secure-Erase.pdf : "*Disk-based secure erase commands are unreliable*" (out of nine controller-SSD combinations tested, one refused to do the erase, two didn't do the erase properly, and one didn't do it at all but reported that it had). That report is a few years old, but it means we need positive reasons to trust modern secure erase, rather than just assuming it works now. – MadHatter Oct 15 '14 at 16:21
  • I'm paranoid. I've seen too many occasions when 'unrecoverable' isn't as unrecoverable as I would assume. However I'd also make the point - most of the time it simply doesn't matter. If you vaguely trust where it's going, and the content isn't amazingly sensitive, it doesn't make much difference. And if it is amazingly sensitive, then why are you letting it leave the building in the first place? – Sobrique Oct 15 '14 at 16:30
  • I should add that it is *not* impossible. But you would need to trust the manufacturer's implementation as you can't do it reliably using the write sector commands alone. – the-wabbit Oct 16 '14 at 06:34

I would certainly not recommend launching Secure Erase operations on a system that has any drives you care about still connected. All it takes is one tiny typo to destroy a still-in-use drive's data beyond any hope of recovery.

If you're going to use Secure Erase, definitely do it in a system that doesn't have any drives you care about attached.
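If you do run it with other drives attached anyway, at the very least double-check which device node you are about to point the command at; a sketch, matching the serial number printed on the drive's label against what the kernel sees:

    # Persistent names embed the model and serial number, so a sdb-vs-sdc
    # mix-up is caught before anything destructive is issued
    ls -l /dev/disk/by-id/

    # Cross-check the serial number reported by the device itself
    smartctl -i /dev/sdX | grep -i serial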

nobody