18

We host VPSes for customers. Each customer VPS is given an LVM LV on a standard spinning hard disk. If a customer leaves, we zero out this LV to ensure that their data does not leak over to another customer.

We are thinking of going with SSDs for our hosting business. Given that SSDs use "wear levelling" technology, does that make zeroing pointless? Does this make the SSD idea infeasible, given that we can't allow customer data to leak over to another customer?

Bart De Vos
jtnire

10 Answers

23

Assuming that what you are seeking to prevent is the next customer reading the disk to see the old customer's data, then writing all zeros will still work. Writing zeros to sector 'n' means that when sector 'n' is read, it will return all zeros. The underlying data may in fact still be on the flash chips, but since you can't reach it with a normal read, it's not a problem for your situation.

It IS a problem if someone can physically get hold of the disk and take it apart (because then they could directly read the flash chips), but if the only access they have is the SATA bus, then a write of all zeros to the whole disk will do just fine.
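That zero-pass can be sketched with plain `dd`. Here a temporary file stands in for the customer's LV so the example is safe to run; in production the target would be the LV device node (e.g. a hypothetical path like `/dev/vg0/<customer_lv>`):

```shell
# Stand-in for the customer's LV: a temp file, so the sketch is safe to run.
LV=$(mktemp)
dd if=/dev/urandom of="$LV" bs=1M count=4 2>/dev/null               # simulate old customer data
dd if=/dev/zero    of="$LV" bs=1M count=4 conv=notrunc 2>/dev/null  # the zeroing pass

# Verify: the block interface now returns nothing but zeros.
REF=$(mktemp)
dd if=/dev/zero of="$REF" bs=1M count=4 2>/dev/null
cmp -s "$LV" "$REF" && echo "reads back all zeros"
rm -f "$LV" "$REF"
```

Using a large block size (`bs=1M`) keeps the number of write operations down, which matters once the target is an SSD.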

Michael Kohne
    Exactly the answer I was looking for, and when I thought about it, I came to the same conclusion :) – jtnire Jun 21 '11 at 13:09
    I would imagine (but don't know for certain, and certainly it would depend on the controller chipset being used in the SSD) that writing a sector of zeroes to a SSD would not even hit the actual flash chips. The controller should notice that it's all zeroes and simply mark that sector as "zeroed out" (or just point it at a sector that contains all zeroes). Writing a sector of zeroes is a reasonably common thing to do, and special-casing it would be a cheap and easy way to reduce wear on the flash, so I would be shocked if at least Intel and Sandforce didn't do it. – kindall Jun 21 '11 at 23:43
20

Don't zero-fill an SSD, ever. As a minimum, this will wear out some of the SSD's write lifespan for little or no benefit. In an extreme worst-case scenario, you might put the SSD's controller into a (temporarily) reduced performance state.

From this source:

Repeatedly overwriting the entire disk with multiple repetitions can successfully destroy data, but because of the Firmware Translation Layer (FTL), this is considerably more complicated and time-consuming than on traditional hard disk drives. Based on their results, it is an unattractive option.

Your best option, secure erase via full disk encryption:

A few modern SSDs can use full-disk encryption -- examples are Intel's new 320 drives and some Sandforce 2200-series drives. These drives can be securely erased in a simple and fast way, without any drive wear. The drive uses AES encryption for all data written, so a secure erase simply means deleting the old AES key and replacing it with a new one. This effectively makes all the 'old' data on the drive unrecoverable.

However, Intel's secure erase isn't easy to automate. AFAIK it has to be done from Intel's Windows GUI app, it can only be run on an empty non-boot drive, and so forth. See page 21 and onwards in Intel's docs.

Your other option, ATA secure erase:

Another option is to issue an ATA Secure Erase command, e.g. via hdparm on Linux. This will be much easier to automate via scripting.

Provided that the drive implements ATA Secure Erase in a 'good' way, one should expect it to at least delete the "flash translation layer" (FTL). The FTL table holds the mapping between the logical sectors (which the operating system 'sees'), and the physical pages of NVRAM on the drive itself. With this mapping table destroyed it should be very hard -- but probably not impossible -- to recover data from the drive.

However, I'm not aware of any studies showing that ATA Secure Erase is consistently and well implemented across all manufacturers' drives, so I'm hesitant to say it will always work -- you should read the manufacturer's technical documentation.
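For reference, the usual `hdparm` sequence from the Linux ATA wiki looks roughly like the following. It is shown behind a dry-run wrapper because the real commands are destructive; `/dev/sdX` and the temporary password `Eins` are placeholders:

```shell
DEV=/dev/sdX                       # placeholder; set to the SSD to be erased
run() { echo "would run: $*"; }    # dry-run wrapper; change the body to "$@" to actually execute

run hdparm -I "$DEV"                                        # check the drive reports "not frozen"
run hdparm --user-master u --security-set-pass Eins "$DEV"  # set a temporary ATA security password
run hdparm --user-master u --security-erase Eins "$DEV"     # issue ATA Secure Erase
```

The set-pass step is required by the ATA spec before the erase command will be accepted; the password is cleared again once the erase completes.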

For a single partition:

As I read the comments to other answers, it seems the OP only wants to securely erase single partitions. One good way to do that would be to only create encrypted volumes, e.g. using LUKS or TrueCrypt. That way you can securely erase the volume by throwing away the encryption key, similar to what the on-drive full-disk encryption scheme does.
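A per-customer encrypted volume could be provisioned and later destroyed roughly like this, assuming LUKS via `cryptsetup` (dry-run wrapper; the LV path and the mapping name `customer42` are hypothetical, and `luksErase` requires cryptsetup 1.6+):

```shell
LV=/dev/vg0/customer42             # hypothetical per-customer LV
run() { echo "would run: $*"; }    # dry-run wrapper; change the body to "$@" to actually execute

# Provision: wrap the LV in LUKS and hand the mapped device to the VPS.
run cryptsetup luksFormat "$LV"
run cryptsetup open "$LV" customer42

# Customer leaves: close the mapping and destroy the keyslots.
# Without the key, the ciphertext on the LV is indistinguishable from noise.
run cryptsetup close customer42
run cryptsetup luksErase "$LV"
```

This erases only the keyslots, so it is nearly instant and causes no flash wear, in contrast to a full overwrite.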

Conclusion:

If you really, really want to know, then read the paper linked to from Sophos' blog, and read the drive manufacturers tech notes regarding secure erase. However, if you want 'good' secure erase, then an SSD with full disk encryption and a secure wiping & replacement of the encryption keys is probably your best choice. As an alternative, use operating system level encryption, and throw away the key when you want data securely erased.

    Caution is good, but I'm not sure that quoting a 2 year old notice on a controller bug that only surfaced in heavily artificial workloads in a specific SSD that has long since been fixed should be given much weight. – Daniel Lawson Jun 22 '11 at 19:30
    @Daniel Lawson: Fair point. :-) I re-worded that section and changed it to a temporary performance degradation -- and changed the link to a review of Crucial's M4/C400 drive (a currently shipping drive) which exhibits major slowdowns after heavy write operations. –  Jun 22 '11 at 20:42
  • This comes up as a very high search result on this topic. I'd like to say that one aspect of all modern SSD controllers being ignored here is that they're far more intelligent than they're being given credit for. Most remotely modern SSDs do on-the-fly data compression and block-level deduplication internally. Writing zeroes will not literally write zeroes to every available NAND cell; it'll map all LBAs to a single NAND page full of zeroes, which is only one page write and should implicitly internally TRIM all other LBAs. It varies per controller but most behave this way. – Jody Bruchon Jun 16 '20 at 19:13
6

Wear leveling has nothing whatsoever to do with zeroing out data.

You zero out data to stop other people or applications from reading it. SSDs wear-level their data to remain usable for longer, because of the 'damage' that writing does to flash. Drives usually do this housekeeping when they're not busy; in server situations quiet times aren't always available, so this work often doesn't get done.

Do you charge your customers for their IO operations? If not, what's to stop them from effectively killing their part of an SSD in hours or days just by writing constantly? SSDs are quite a bit easier to kill than most people would think, especially in write-heavy environments.

Chris S
Chopper3
  • I was addressing the 'does that make zeroing pointless' bit Chris, they're two different things – Chopper3 Jun 21 '11 at 12:32
    `if (time < 9am) chriss_try_again()` – Chris S Jun 21 '11 at 12:36
  • haha - don't worry dude :) – Chopper3 Jun 21 '11 at 12:37
  • While true about the data, less true about the killing. "Enterprise Flash Drives" have a rather better longevity as well as size than consumer SSDs, but just like enterprise HDDs you pay the premium for performance. According to seagate, their "Pulsar" ESD has a 5 year lifespan, which amounts to about 6 petabytes of writes. http://www.anandtech.com/show/2739/2 - On Pulsar Drive http://virtualgeek.typepad.com/virtual_geek/2009/02/solid-state-disks-enterprisevmware-ready-or-not.html - On EFDs – Daniel B. Jun 21 '11 at 12:38
    I know Daniel, I buy lot of enterprise SSDs which I put into 99+% read environment but when we tested them we still found it surprisingly easy to 'kill' them, for instance we put two HP ones in as a mirrored pair in as the log disk for a reasonably busy Oracle 10 box and things started to go wrong within 4 weeks. Now this was ~10 months ago so wasn't HP's rebadged version of that Pulsar drive. 6PB equates to ~38MB/s over 5 years or ~180MB/s over a single year - so you couldn't use a Pulsar to capture a single channel of HD video without it breaking in under a year. – Chopper3 Jun 21 '11 at 12:51
  • Well color me corrected :) That is ... distressing. – Daniel B. Jun 21 '11 at 13:15
  • @Daniel - we're getting there though, another 12-24 months and they'll be fully ready I reckon, plus the capacity will continue up and the price go down. For my video streamers they're perfect but not yet for heavy writes - not in my opinion anyway :) – Chopper3 Jun 21 '11 at 13:20
  • @Chopper3: Random aside, have you seen [Intel's whitepaper on manually overprovisioning SSDs for longevity and performance](http://cache-www.intel.com/cd/00/00/45/95/459555_459555.pdf)? Also, it hasn't been mentioned here, but every SSD vendor I've seen quotes their longevity data in terms of 4k 100% random writes - a worst case scenario, so it makes sense, but it also means your single channel of HD video scenario may perform better than you think. – Daniel Lawson Jun 22 '11 at 19:25
3

It's worth reading articles such as this one. If someone has physical access to the disk, retrieving information is easier. Have you considered encrypting the data on the SSD? Then all you need to do is securely forget the private key, which should be an easier problem. I can see SSDs being a big win for VPSes because of the much better random-access performance.

James
3

Although one answer is already accepted, I think the command `blkdiscard /dev/sdX` is still worth mentioning here.

According to the Arch Wiki page on SSDs, the `blkdiscard` command will discard all blocks, and all data will be lost. It is recommended before you sell your SSD.

I am not familiar with how TRIM works so I don't know whether there is a guarantee that the data will be erased. But I think it's better than doing nothing.

BTW, I'm afraid this command only works on a whole device, not on a single partition.
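As a sketch (shown behind a dry-run wrapper, since `blkdiscard` is destructive and `/dev/sdX` is a placeholder):

```shell
DEV=/dev/sdX                       # placeholder; the device to be discarded
run() { echo "would run: $*"; }    # dry-run wrapper; change the body to "$@" to actually execute

run hdparm -I "$DEV"               # look for "Deterministic read ZEROs after TRIM" in the output
run blkdiscard "$DEV"              # discard every block on the device
```

The `hdparm -I` check matters because only drives that report deterministic zeros after TRIM guarantee that discarded blocks will read back as zeros.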

Hope this helps. :)

mkvoya
  • `blkdiscard` seems to not be safe in all cases, because `TRIM` itself is only a request and not all SSD controllers honour it. You can read more about it [here](https://askubuntu.com/questions/42266/what-is-the-recommended-way-to-empty-a-ssd/351791#comment1679745_351791) and [here](https://askubuntu.com/questions/42266/what-is-the-recommended-way-to-empty-a-ssd/351791#comment1679746_351791). But as explained there, the Linux kernel maintains a whitelist of which devices are known to honour TRIM. – nh2 May 06 '18 at 17:59
  • So if `hdparm -I /dev/theSSD` contains `Deterministic read ZEROs after TRIM`, `blkdiscard` should be fast and guaranteed zeroes being read afterwards. Otherwise a _Secure Erase_ seems like the better solution. However, given that the question is about customer security, _Secure Erase_ might be the better solution because it seems designed for such use cases. – nh2 May 06 '18 at 18:15
2

You definitely do not want to use traditional methods of erasing SSDs, such as using dd to zero out the data, or other methods that write random data to the disk. Those methods are better suited to platter-based disks. They are effective at erasing the SSD, but they also unnecessarily use up a lot of the SSD's limited write cycles, decreasing its expected life. That would get expensive quickly. It can also degrade the SSD's performance over time.

SSDs have a different method for secure erase. I will say that it seems to be very cumbersome to do, because you usually need a certain type of SATA controller that can do IDE emulation, and the procedure can be complicated. Some manufacturers provide tools to secure-erase their own SSDs, but you can also do it with hdparm on Linux: https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase. You'll notice in those instructions that you have to make sure the drive is not "frozen" before you can proceed. This is one of the more difficult steps, because it requires a motherboard & SATA controller that will let you "unfreeze" the drive while the system is booted up, which usually involves unplugging it from its SATA cable, then plugging it back in.

Anyway, my recommendation is to do your research & pick an SSD that comes with a secure erase utility that can be used on a system convenient to you.

churnd
  • Using dd to zero out the disk, as long as you use a suitably large block size and not the default of 512 bytes, won't use up a lot of the write/erase cycles at all. It'll use approximately 1. I say approximately because I concede that if your filesystem alignment is wrong, you may end up writing to the same flash block twice in some cases. Using `dd bs=1M` will result in minimal wear. – Daniel Lawson Jun 22 '11 at 19:18
1

The very best way to clear out data from a virtual machine image is to use the TRIM feature. Many newer operating systems support this. Almost all current SSDs support it too.

And, what makes this option even better is that many SANs also support this feature under its SCSI name of UNMAP. It is a great command for SANs which implement sparse provisioning, which is a great feature for virtual machines, especially when combined with block de-duplication.

When the TRIM command is given to a SSD, the firmware will immediately mark those blocks free for reuse. Some SSDs will always return zeros for TRIM'd blocks. Other drives will return implementation-defined (ie, random) data.

On operating systems that support TRIM, a simple delete of the file will mark the blocks for TRIM. The actual TRIM operation may happen right away or it might be batched up to perform later. Sometimes there are tools that will force-TRIM a file or scan a partition for all unused blocks.

TRIM support on Linux is still spotty, so if you are using Linux you will want to investigate your options. On Windows it seems pretty solid.
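On Linux, `fstrim` (from util-linux) is the usual tool for force-TRIMming the unused blocks of a mounted filesystem; a minimal sketch (dry-run wrapper, since the real command needs root and a TRIM-capable device):

```shell
run() { echo "would run: $*"; }    # dry-run wrapper; change the body to "$@" to actually execute

run fstrim -v /                    # TRIM all unused blocks on the filesystem mounted at /
run fstrim --all                   # or: TRIM every mounted filesystem that supports discard
```

Many distributions ship an `fstrim.timer` systemd unit that batches this up weekly rather than discarding on every delete.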

Zan Lynx
  • Sorry but this makes no sense at all. TRIM has, as far as I'm aware, nothing to do with thin provisioning, it simply tells an SSD to not bother wear-levelling sectors that it doesn't know have been deleted by the filesystem, it also very specifically doesn't zero out blocks. – Chopper3 Jun 21 '11 at 16:56
  • @Chopper3: You cannot see that ATA TRIM and SCSI UNMAP are the same command? UNMAP is definitely used in thin provisioning. TRIM does zero out blocks on at least some SSD drives. After TRIM the data is *gone*: there is no supported method to retrieve it, the drive might as well return zeros and some do. – Zan Lynx Jun 21 '11 at 23:39
  • "the drive might as well return zeros" and "the data is unrecoverable" are two very different things. In any case the user doesn't actually want to erase the SSD at all, he just doesn't want the next customer to be able to get the old customer's data, which isn't the same thing either. – Chris S Jun 22 '11 at 00:19
0

You need a "SSD Secure Erase Utility". If you use something like dd the wear leveling may kick in and you'll end up with reserve sectors that still contain old client data. A secure erase utility will erase all sectors on the device (not just those presented as a disk to the OS).

Some of these utilities are specific to a particular manufacturer, ask the manufacturer of your drive(s) for their recommendation, they'll know better than us.

Chris S
Sorry, this isn't what I'm looking for, as I'm not looking to erase the whole disk. I'm not concerned about client data still being available on the physical SSD - I just don't want another customer to be able to access it via data leakage onto their LV. – jtnire Jun 21 '11 at 13:09
0

I don't think that writing 0s will help you prevent another customer from reading the disk.

In SSDs, when you write something, the process is very different from that of a normal hard disk.

Imagine the following situation: an "empty" memory cell in an SSD is filled entirely with 1s. When you write something to it, the drive writes the 0s and leaves the 1s unchanged.

Later, when you want to save something different, the previous content and the new one are compared. If the previous content can become the new one by writing some more 0s, fine. If that's not possible, another memory cell is used.

"clear":  11111111
1st save: 11011011
new data: 00110011

There is no way to make 11011011 become 00110011 (notice that it would be necessary to turn a 0 back into a 1, which is not possible in an SSD without an erase cycle). So another memory cell will be used.

When you TRIM a drive, you are resetting all the unused memory cells to 1s, so they are clear to be used again, and the saved data is preserved.

To do what you want: first, erase (delete) the files. The memory cells for those files will be marked as free. Then do a TRIM: all those memory cells will become 1s, without any trace of the data.

woliveirajr
0

Easy to answer: use Linux to reformat the partition as ext4. That tells the SSD that all blocks are ready for erase, like doing a TRIM on every sector of the partition. The side effect is a small number of writes (the ext4 structures).

Forget `dd` with random data; that will greatly reduce the SSD's life. Some SSDs are intelligent: if they see a whole sector filled with zeros, they do not write it, they just mark it for erase.

Since you cannot know the internals of the firmware, your best option is to reformat the partition as ext4 (not a full reformat, just a fast one that writes only the structures); on a modern Linux kernel this will TRIM the whole partition prior to formatting it.

For all those talking about Secure Erase: that works on the whole SSD at once, whereas what is asked here is erasing just one partition of the SSD, WITHOUT losing the rest of the information stored on it (partition level, not SSD level).

Conclusion: reformat as ext4; then, if you need another filesystem, reformat again with that one.
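That reformat step could look like this (dry-run wrapper, since `mkfs.ext4` is destructive and the partition path is a placeholder; note that modern `mke2fs` discards the device by default, and `-E discard` just makes that explicit):

```shell
PART=/dev/sdX1                     # placeholder; the customer's partition or LV
run() { echo "would run: $*"; }    # dry-run wrapper; change the body to "$@" to actually execute

run mkfs.ext4 -E discard "$PART"   # quick format; TRIMs the whole partition before writing structures
```

This only touches the one partition, leaving the rest of the SSD's data intact.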

Dave M
Laura