Does full-disk encryption on SSD drive reduce its lifetime?

55

9

I would assume that a full-disk encryption deployment would introduce additional writes each time the computer is booted up and shut down. Given that solid state disks are considered to have a lower average capacity for writes before failure, can a full-disk encryption solution lower the expected lifetime of the disk on which it is deployed?

If my assumptions are incorrect, then I suppose this is a moot point. Thanks in advance.

Bill VB

Posted 2012-07-14T22:52:28.000

Reputation: 679

1

@JarrodRoberson you mean this question? http://superuser.com/questions/39719/what-is-the-lifespan-of-an-ssd-drive Either way, the related questions all have inferior answers, so I wouldn't close them as dupes of those

– Ivo Flipse – 2012-07-15T20:10:54.840

@IvoFlipse the crux of the question I flagged as a duplicate states *"...Will this effectively put the drive into a fully used state and how will this effect the wear leveling and performance of the drive?..."* that is exactly this same question. – None – 2012-07-15T20:16:42.460

Answers

49

Think of encryption as an adapter. The data is simply encoded before being written or decoded before being read. The only difference is that a key is passed at some point (usually when the drive/driver is initialized) to be used for the encryption/decryption.

Here is a (rough) graphic I threw together to show the basic pattern:

Schematic demonstrating full-drive encryption

As you can see, there is no need to perform extra reads or writes because the encryption module encrypts the data before the data is written to the platters and decrypts it before it is sent to the process that performed the read.

The actual location of the encryption module can vary; it can be a software driver or it can be a hardware module in the system (e.g., controller, BIOS, TPM module), or even in the drive itself. In any case, the module is “in the middle of the wire” between the software that performs the file operations and the actual data on the drive’s platters.
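
A minimal sketch of that pattern (toy code; the XOR "cipher" and the class name are invented stand-ins for a real block cipher and driver, not anything a real product uses):

```python
# A toy block device that encrypts on write and decrypts on read.
# The XOR "cipher" is NOT secure; it stands in for a real block cipher
# (e.g. AES-XTS). The point is only that each logical 4 KiB write maps
# to exactly one physical 4 KiB write -- the encryption layer adds no I/O.

SECTOR = 4096

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Keyed XOR; symmetric, so the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncryptedDisk:
    def __init__(self, key: bytes, sectors: int):
        self.key = key
        self.storage = [bytes(SECTOR)] * sectors  # the "physical" medium
        self.physical_writes = 0

    def write(self, lba: int, plaintext: bytes) -> None:
        assert len(plaintext) == SECTOR
        self.storage[lba] = toy_cipher(self.key, plaintext)  # encrypt in line
        self.physical_writes += 1                            # one write per write

    def read(self, lba: int) -> bytes:
        return toy_cipher(self.key, self.storage[lba])       # decrypt in line

disk = EncryptedDisk(key=b"not-a-real-key", sectors=16)
disk.write(0, b"hello".ljust(SECTOR, b"\0"))
assert disk.read(0).rstrip(b"\0") == b"hello"
assert disk.physical_writes == 1   # the layer added no extra writes
```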

Synetech

Posted 2012-07-14T22:52:28.000

Reputation: 63 242

This answer is true only if the cleartext data being written is uncompressible (e.g. jpeg, mpeg, zip files). If the data is compressible, most SSD controllers take advantage of that to improve lifetime, and encryption defeats compressibility unless directly integrated into the SSD controller. – Wheezil – 2015-11-22T18:29:30.423

So everything happens before writing; thanks for explaining. – datatoo – 2012-07-15T00:04:38.473

24

This answer is logically incorrect! It depends on how many blocks the OS encrypts at a time. Suppose it encrypts 4 KiB at a time; then simply modifying a byte will cause writes to eight 512-byte blocks on the SSD, while without encryption the OS (if it optimizes well) only needs to write a single 512-byte block. So encryption adds up to 8x the disk writes.

In practice, the OS may choose an appropriate block size for encryption, but the answer doesn't address this problem or make any assertion about it. So in practice this answer may be correct, but logically it is wrong, or at least incomplete. – icando – 2012-07-15T02:58:26.163
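
The 8x figure above is just arithmetic under the commenter's assumptions (a 4 KiB encryption unit sitting on 512-byte sectors); a quick sketch, with both sizes assumed rather than taken from any real OS, and note that the replies below dispute whether a bare 512-byte write is what really happens:

```python
# Back-of-the-envelope arithmetic for the scenario described in the comment
# above (hypothetical sizes, purely illustrative).

sector_size = 512          # bytes the drive accepts per write (assumed)
crypto_block = 4096        # bytes re-encrypted per change (assumed)

bytes_changed = 1
sectors_without_encryption = -(-bytes_changed // sector_size)   # ceil -> 1
sectors_with_encryption = crypto_block // sector_size           # -> 8

print(sectors_without_encryption, sectors_with_encryption)
# 1 8  -> the claimed 8x amplification, assuming the OS could otherwise
#         really issue a single 512-byte write (which the replies dispute).
```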

21

@icando, it’s a generic simplification. Besides, what you are talking about is a stream-cipher. The most common/popular full-disk-encryption program, TrueCrypt, uses block ciphers. If you can point out a full-disk-encryption system that is poorly designed and/or uses a stream-cipher that incurs that sort of impact, then please do so and I will happily expound the answer.

– Synetech – 2012-07-15T03:05:54.840

@Synetech, actually I was talking about block ciphers. That's why I was asking what the block size is in common OSes. – icando – 2012-07-15T03:10:43.480

@icando, you mean clusters or sectors? Until recently, all consumer drives used 512 B sectors, but they have since moved to 4 KB sectors with Advanced Format. Clusters, on the other hand, can be anything from 512 B to 64 KB depending on what was selected during format.

– Synetech – 2012-07-15T03:18:58.083

1

If the encryption module is in the drive itself, then you can be sure that it takes the nature of SSDs into account (otherwise the mfg is dumb and you want a refund). If the module is in the BIOS, then it can easily be updated to include a better algorithm if needed. – Synetech – 2012-07-15T03:32:51.140

1

As for software, that is even easier to update. TrueCrypt, for example, has been updated and analyzed in terms of SSD wear [1] [2], and the main problem isn’t the wear, but that it may be vulnerable to attack.

– Synetech – 2012-07-15T03:33:55.227

9

The concern with full-disk encryption is that DISCARD/TRIM is usually disabled for security reasons. All SSD drives have a logical 4 KB block size; the actual underlying implementation below this layer is kept secret by most manufacturers. Even with the newer drives that show 8 KB page sizes, they are still 4 KB under the covers, with firmware doing translations. None of this is a concern: the firmware does the correct thing with concatenation of writes, so the assertion that encryption adds anything, much less 8x the writes, is ignorance of encryption, filesystem, and firmware write strategies. – None – 2012-07-15T06:27:51.313

1

@Synetech SSDs don't actually have clusters or sectors as traditionally defined in relation to HDs; those are implementation details of spinning media and filesystems. Clusters in particular are filesystem-specific. Older SSDs have 512 B sectors, AF has 4 KB (mapped to 512 B for compatibility), and newer drives claim 8 KB but in most cases are still firmware tricks over 4 KB. Unlike spinning disks, an SSD can write just 512 B to a 4 KB sector without having to touch any unrelated sectors, because the 4 KB sector is a logical sector made up of a collection of 512 B physical sectors. Format size is irrelevant. – None – 2012-07-15T06:40:52.203

@JarrodRoberson and everyone: dm-crypt allows forwarding TRIM requests via the allow_discards option (https://code.google.com/p/cryptsetup/wiki/DMCrypt). I'm not a security specialist and I cannot predict how exactly this will affect security.

– whitequark – 2012-07-15T12:06:12.213
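
For anyone who wants to check whether discards actually pass through to a mapped device, here is a small sketch (Linux only; it assumes the standard sysfs block-device layout, and the device name in the example is hypothetical):

```python
# discard_max_bytes is 0 when TRIM/discard is not supported on the device,
# so a non-zero value on the dm-* device suggests discards are passed through.

from pathlib import Path

def discards_enabled(device: str) -> bool:
    """device is a block device name such as 'dm-0' or 'sda'."""
    sysfs = Path("/sys/block") / device / "queue" / "discard_max_bytes"
    return int(sysfs.read_text()) > 0

# Example (device name is hypothetical):
# print(discards_enabled("dm-0"))
```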

@whitequark IIRC it 'just' leaks information about what % of the disk is actually used. Traditionally, full-disk encryption pre-filled the disk with (pseudo)random noise so an attacker cannot tell whether he is dealing with a disk that is 5%, 10% or 95% full. With TRIM passing through, he can (assuming access to the firmware). – Maciej Piechotka – 2012-07-16T05:53:37.517

*"@Synetech SSD don't actually have clusters or sectors as traditionally defined in relationship to HD"* What is this, geometry class? Of course they don’t have them in the way that drives with platters do, but they still have pages, blocks, whatever you want to call them. The drive may not internally use sectors or clusters, but the operating system/drivers abstract the specific implementation away and maintain the simple format. Flash drives don’t have sectors or clusters either, but I can still format them in DOS using a specific cluster size and view their “sectors” in a disk/hex editor. – Synetech – 2012-07-21T19:35:37.967

@hit-and-run-downvoter, care to bother explaining why you down-voted? You’re the only one who had a problem with this answer, so it seems reasonable to expect you to explain why you felt the need to go against so many other people who liked it. – Synetech – 2014-01-29T14:18:19.203

I am slightly confused. I always thought the max writes were per block, and that if the disk is unencrypted, the disk smartly cycles through all free space in deciding where to write, spreading the writes evenly. With FDE, the whole disk looks to be in use, hence can't be cycled, hence there are a lot more reads and writes for the sectors that change frequently, which results in faster wear. Is that not correct? – Cookie – 2014-03-14T08:10:52.690

*"With FDE, the whole disk looks to be in use, hence can't be cycled"* @Cookie, I don’t understand what you mean. How is an [entire] encrypted disk “in use”? – Synetech – 2014-03-14T16:57:21.133

Maybe I am incorrect in this, but I thought that with FDE, if I have a 100 GB file system, the whole 100 GB is filled with data, even if only 20 GB is really used - the SSD always thinks it's full. It can't distinguish empty space from used space - that happens higher up. Maybe I am wrong? – Cookie – 2014-03-14T17:00:54.880

@Cookie, the container (virtual volume) may take up the whole space, but at any given time, only as much data as necessary is actually being written to (or for that matter, read from) the drive. It’s not like the entire drive gets read or written every time you try to read or write a file; that would be insanity. *"the SSD always thinks it's full"* That shouldn’t happen; the drive/user needs to know how much space is available. It could be a faulty or poor encryption or hardware implementation. What encryption program are you using? – Synetech – 2014-03-14T20:27:33.643

dm-crypt. But I thought it was more a theoretical issue - I used to keep a few GB outside the encrypted partition free so they could be used by the drive for cycling. The user of course sees the free space - I was just not sure the drive does, because the drive gets the data already encrypted. – Cookie – 2014-03-16T16:39:03.900

@Cookie, the physical drive (hardware/firmware) sees the whole thing, including the invisible blocks that are not available to you and meant for remapping bad sectors. The encryption program/driver will/should only see/use the amount that you specifically allocate to it; it can’t/shouldn’t mess around with other partitions. – Synetech – 2014-03-16T16:43:27.153

24

Short answer:
If the disk controller does not use compression, then Synetech's answer is correct and encryption will not change anything. If the controller uses compression then encryption will probably reduce the lifespan of the disk (compared to an identical disk where encryption is not used).

Long answer:
Some SSD controllers use compression to minimize the amount of data written to the actual flash chips and to improve read performance (SandForce controllers are a prime example; there may be others). This works best if the data written to the disk is easily compressible. Text files, executables, uncompressed images (BMP, for example) and the like can usually be compressed quite a lot, while files that are already compressed or encrypted are almost impossible to compress, since the data looks almost completely random to the compression algorithm in the controller.
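A quick way to see this effect, with zlib standing in for whatever proprietary scheme the controller uses:

```python
# Ciphertext is statistically indistinguishable from random bytes, which a
# compressor cannot shrink, while plain text compresses heavily.

import os, zlib

text = b"the quick brown fox jumps over the lazy dog\n" * 1000
random_like = os.urandom(len(text))   # a stand-in for encrypted data

print(len(zlib.compress(text)) / len(text))          # well under 1.0
print(len(zlib.compress(random_like)) / len(text))   # about 1.0 (slightly above)
```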

Tom's Hardware ran a nice test of precisely this on an Intel SSD 520, which can be found at
http://www.tomshardware.com/reviews/ssd-520-sandforce-review-benchmark,3124-11.html

What they basically do is measure the write amplification (the ratio of the amount of data written to flash to the amount of data sent to the drive) when writing completely compressible data and completely random data. For completely random data, the write amplification is 2.9*, which means that for every GB of data sent to the disk, 2.9 GB are written to flash. The article notes that this seems to be roughly the same number measured on drives that do not use compression. For completely compressible data, the ratio is 0.17, which is quite a bit lower.
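Plugging those two ratios into some rough workload numbers (the endurance and write-rate figures below are arbitrary placeholders, not from the article) shows the scale of the difference:

```python
# Rough lifetime arithmetic using the write-amplification figures quoted
# above: 2.9 for incompressible/encrypted data, 0.17 for fully compressible
# data. The endurance and workload numbers are assumptions, not specs.

nand_endurance_tb = 40        # total flash writes the NAND can take (assumed)
host_writes_per_day_gb = 20   # host workload (assumed)

for label, write_amplification in [("encrypted", 2.9), ("compressible", 0.17)]:
    flash_writes_per_day = host_writes_per_day_gb * write_amplification
    days = nand_endurance_tb * 1024 / flash_writes_per_day
    print(f"{label}: ~{days / 365:.1f} years")

# The ratio between the two results (2.9 / 0.17, roughly 17x) is what matters,
# not the absolute numbers.
```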

Normal usage will probably end up somewhere in between, unless the data is encrypted. The lifetime predictions in the article are somewhat academic, but they show that encryption could definitely affect the lifetime of an SSD with a SandForce controller. The only way to get around this would be for the controller itself to do the encryption after compression has occurred.

*The article does not specify why 2.9 is considered a normal value, and I have not really researched it. A logical explanation could be that most SSDs use MLC NAND, which is somewhat error-prone (bit flips can occur in other parts of an erase block while writing, if I recall correctly). To correct for this, data is probably written to several places so that recovery or correction is always possible.

Leo

Posted 2012-07-14T22:52:28.000

Reputation: 487

SSD controllers improve wear levelling by shifting the compressed data around within a block and leaving other bits alone. The "write amplification" of which Leo speaks is a phenomenon related to the fact that SSDs must write not only the data blocks being updated, but perhaps several adjacent blocks and some metadata: https://en.wikipedia.org/wiki/Write_amplification "Without compression, write amplification cannot drop below one. Using compression, SandForce has claimed to achieve a typical write amplification of 0.5, with best-case values as low as 0.14 in the SF-2281 controller."

– Wheezil – 2015-11-22T18:26:06.047

Wanted to add that this most likely doesn't apply if the drive is self-encrypting (assuming the manufacturer did the sane thing of compressing before encrypting) or if someone compresses before encrypting. – kingW3 – 2020-02-16T13:13:03.790

Encrypted data isn't bigger, it is just encrypted; encryption doesn't cause the data to grow in size. Who uses automatically compressing file systems in 2012? – None – 2012-07-15T08:29:06.857

7

@JarrodRoberson: SandForce SSD controllers compress data to minimise writes. There may well be other examples too. – John Bartholomew – 2012-07-15T12:30:19.360

@JohnBartholomew I said filesystems, which come before disk controllers. And to your unrelated point, the SandForce compression scheme supposedly detects "uncompressible" or "precompressed" data and doesn't compress it in its attempts to minimize writes; this is a secret, so we will never know for sure. Either way, it doesn't take up more space, just more time in that specific case. – None – 2012-07-15T16:18:04.380

1

@JarrodRoberson: The point is that if the controller tries to compress everything then performance (in time and in space) will be worse if all the data you send to the disk is encrypted. It will be worse in time because the controller will waste time detecting that the data is uncompressible, and it will be worse in space compared to giving the disk unencrypted (and therefore, in some cases, compressible) data. – John Bartholomew – 2012-07-15T16:22:29.227

@JohnBartholomew there is no "worse in space" consideration in the SandForce straw man; their documentation states that they can't use the "left over space" in any way, so it isn't wasted: it would have been used anyway and can't be "reused" or "reclaimed". They do this secret compression only to save writes, per their own documentation. There is only a time component that is variable, and they say it is negligible because of their proprietary/secret sauce! – None – 2012-07-15T17:08:03.600

@JarrodRoberson If the unencrypted data isn't compressible, does that mean that more writes will be required? – Mark Allen – 2012-07-15T19:50:29.637

4 MB unencrypted will be 4 MB encrypted; with compression it may get a minuscule amount larger depending on the compression format's metadata. All modern compression techniques can identify uncompressible data very quickly by checking magic-number bytes or headers, or with only a sample of a few KB in most cases. But again, the question isn't about compression; it is about a false assumption about how SSDs and filesystems work and how they work together. So 4 MB of data would take the same number of writes, it is 4 MB of bits; there is no penalty for uncompressed data, just no benefit of fewer writes. – None – 2012-07-15T20:06:54.657

4

@JarrodRoberson: Not getting the benefit of fewer writes sounds exactly like what the OP was asking about and is a direct consequence of encryption being used. – Leo – 2012-07-16T13:46:17.487

6

Full disk encryption does not increase the amount of data written to a disk, aside from any metadata that the encryption layer needs to store along with the filesystem (which is negligible). If you encrypt 4096 bytes, 4096 bytes are written.
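
As a quick sanity check of that claim, here is a minimal sketch using a sector-style cipher mode; CTR stands in here for the XTS mode that full-disk encryption typically uses, and it assumes the third-party cryptography package is installed:

```python
# A length-preserving cipher mode produces exactly as many ciphertext bytes
# as plaintext bytes, so there is nothing extra to write to the drive.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
plaintext = os.urandom(4096)                      # one 4 KiB "sector"

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

assert len(ciphertext) == len(plaintext) == 4096  # nothing extra to write
```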

Michael Hampton

Posted 2012-07-14T22:52:28.000

Reputation: 11 744

1

The answer depends on what you mean by "full disk encryption".

If you simply mean that all files and filesystem metadata are encrypted on the disk, then no, it should have no impact on SSD lifespan.

However, if you mean a more traditional "the entire contents of the disk, including unused space, are encrypted", then yes, it will reduce the lifespan, perhaps significantly.

SSD devices use "wear levelling" to spread writes across the device so as to avoid wearing out a few sections prematurely. They can do this because modern filesystem drivers specifically tell the SSD when the data in a particular sector is no longer being used (has been "discard"ed), so the SSD can set that sector back to empty and use whichever sector has the least wear for the next write.

With a traditional, full-disk encryption scheme, none of the sectors are unused. The ones that do not contain your data are still encrypted. That way an attacker doesn't know what part of your disk has your data, and what part is just random noise, thereby making decryption much more difficult.

To use such a system on an SSD, you have two options:

  1. Allow the filesystem to continue performing discards, at which point the sectors that don't have your data will be empty and an attacker will be able to focus his efforts on just your data.
  2. Forbid the filesystem from performing discards, in which case your encryption is still strong, but now the drive can't do significant wear levelling, and so the most-used sections of your disk will wear out, potentially well ahead of the rest (see the sketch below).
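
To make option 2 concrete, here is a toy sketch; the block counts and the round-robin policy are invented for illustration, and real controllers also relocate static data, so this overstates the effect:

```python
# A toy model of option 2, under the simplifying assumption that the
# controller only rotates incoming writes among blocks it believes are free.

def max_wear(total_blocks: int, free_blocks: int, hot_writes: int) -> int:
    """Rewrite the same logical block `hot_writes` times; return the worst
    per-physical-block write count when writes rotate over `free_blocks`."""
    wear = [0] * total_blocks
    pool = list(range(free_blocks))          # blocks the controller may reuse
    for i in range(hot_writes):
        victim = pool[i % len(pool)]         # round-robin over the free pool
        wear[victim] += 1
    return max(wear)

total = 1000
print(max_wear(total, free_blocks=900, hot_writes=100_000))  # discards work: 112
print(max_wear(total, free_blocks=50,  hot_writes=100_000))  # no discards:  2000
```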

Perkins

Posted 2012-07-14T22:52:28.000

Reputation: 111