Over-provisioning an SSD - does it still hold?

Multiple (but not very recent) sources suggest that ~7% of an SSD's space should be left unallocated in order to reduce drive wear. Is this still valid today, or has the situation changed?

marmistrz

Posted 2015-07-24T08:14:46.023

Reputation: 455

It does matter because of the TRIM-enable problem, as I said. I suspect SU is a good fit for this question, but it does now need editing to mention Linux! – sourcejedi – 2015-07-24T18:10:35.303

The free space allows for better performance. Drive wear & tear is overstated and possibly a myth now; a good-quality SSD can last over 10 years of 24/7 writes. This might be a good article to review: http://www.howtogeek.com/165472/6-things-you-shouldnt-do-with-solid-state-drives/ - writing to an empty block is fairly quick, but writing to a partially-filled block involves reading the partially-filled block, modifying its contents, and then writing it back. Repeat this many, many times for each file you write to the drive, as the file will likely span many blocks.

– Sun – 2015-07-30T16:42:05.693

Answers

Windows will generally use TRIM. This means that as long as you have X% free space on the filesystem, the drive will see X% as unallocated.[*] Over-provisioning is not required.

Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.

Performance loss on a full drive can be significant, more so than on some other drives. It is associated with high write amplification, and hence increased wear. Source: Anandtech reviews.

So over-provisioning is necessary if and only if:

  • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.
  • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).

It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.
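As a quick check, the Linux kernel exposes whether a block device accepts discard (TRIM) commands via sysfs. A minimal sketch, assuming the standard `/sys/block/<dev>/queue/discard_max_bytes` attribute (the device name `sda` is a placeholder; the kernel reports 0 there when discard is unsupported):

```python
from pathlib import Path

def discard_supported(discard_max_bytes_text: str) -> bool:
    """Interpret the contents of /sys/block/<dev>/queue/discard_max_bytes:
    the kernel reports 0 when the device cannot accept discard (TRIM)."""
    return int(discard_max_bytes_text.strip()) > 0

def device_supports_trim(device: str = "sda") -> bool:
    """Read the sysfs attribute for a block device; False if it is absent."""
    path = Path("/sys/block") / device / "queue" / "discard_max_bytes"
    try:
        return discard_supported(path.read_text())
    except (FileNotFoundError, ValueError):
        return False
```

If the device reports discard support, a periodic `fstrim` run (or the `discard` mount option) will keep the drive informed of which space is free.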

Fortunately, several of the most popular brands make their own controllers, and Sandforce controllers are not as popular as they used to be. The Sandforce issues make me skeptical of that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce, but I don't have a reference for the exact controller models affected.


[*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together: the same free space helps both of them :). The drive can take advantage of the unallocated space to improve performance, as well as to avoid high wear, as you say.

sourcejedi

Posted 2015-07-24T08:14:46.023

Reputation: 2 292

Is it possible to detect a Sandforce controller once having an installed SSD? – marmistrz – 2015-07-24T13:54:05.067

There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce. – sourcejedi – 2015-07-24T17:37:02.677

This assumes that you actually have free space on the filesystem, right? :D – endolith – 2016-09-15T01:27:00.847

Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place. – sourcejedi – 2016-09-15T07:30:16.513

Modern SSD controllers are smart enough that overprovisioning is not typically necessary for everyday use. However, there are still situations, primarily in datacenter environments, where overprovisioning is recommended. To understand why overprovisioning can be useful, it is necessary to understand how SSDs work.

SSDs must cope with the limitations of flash memory when writing data

SSDs use a type of memory called NAND flash memory. Unlike hard drives, NAND cells containing data cannot be directly overwritten; the drive must erase the existing data before it can write new data. Furthermore, while SSDs write data in pages that are typically 4 KB to 16 KB in size, they can only erase data in large groups of pages called blocks, typically several hundred KB to several MB in modern SSDs.
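To make the write/erase mismatch concrete, here is a back-of-the-envelope sketch (the 16 KB page and 4 MB block sizes are assumed example values within the ranges above, not figures from any specific drive):

```python
PAGE_SIZE = 16 * 1024            # one write unit: a 16 KB page (assumed)
BLOCK_SIZE = 4 * 1024 * 1024     # one erase unit: a 4 MB block (assumed)

# Every erase wipes this many pages at once.
pages_per_block = BLOCK_SIZE // PAGE_SIZE
print(pages_per_block)  # 256
```

So overwriting a single 16 KB page in place would mean relocating up to 255 pages of still-valid data before the block could be erased.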

NAND also has a limited amount of write endurance. To avoid rewriting data unnecessarily in order to erase blocks, and to ensure that no block receives a disproportionate number of writes, the drive tries to spread out writes, especially small random writes, to different blocks. If the writes replace old data, it marks the old pages as invalid. Once all the pages in a block are marked invalid, the drive is free to erase it without having to rewrite valid data.
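The bookkeeping described above can be illustrated with a toy model (the 4-page block and the two page states are deliberate simplifications for illustration):

```python
VALID, INVALID = "valid", "invalid"

def erasable_without_rewrite(block: list) -> bool:
    """A block can be erased 'for free' only once no page in it is still valid."""
    return VALID not in block

# Two pages still hold live data: erasing now would force the drive
# to copy them into another block first (extra NAND writes).
block = [VALID, VALID, INVALID, INVALID]
print(erasable_without_rewrite(block))  # False

# Once the live data has been superseded elsewhere and marked invalid,
# the whole block is free to erase with no rewriting at all.
block = [INVALID, INVALID, INVALID, INVALID]
print(erasable_without_rewrite(block))  # True
```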

SSDs need free space to function optimally, but not every workload is conducive to maintaining free space

If the drive has little or no free space remaining, it will not be able to spread out writes. Instead, the drive will need to erase blocks right away as writes are sent to the drive, rewriting any valid data within those blocks into other blocks. This results in more data being written to the NAND than is sent to the drive, a phenomenon known as write amplification. Write amplification is especially pronounced with random write-intensive workloads, such as online transaction processing (OLTP), and needs to be kept to a minimum because it results in reduced performance and endurance.
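Write amplification is simply the ratio of bytes physically written to the NAND to bytes the host asked to write. A hypothetical worked example (the 1 MB host write and 3 MB of relocated data are made-up numbers chosen for illustration):

```python
def write_amplification(host_bytes: int, nand_bytes: int) -> float:
    """Ratio of physical NAND writes to logical host writes; 1.0 is ideal."""
    return nand_bytes / host_bytes

# The host writes 1 MB, but the drive must also relocate 3 MB of valid
# data out of the blocks it erases, so the NAND absorbs 4 MB in total.
MB = 1024 * 1024
wa = write_amplification(host_bytes=1 * MB, nand_bytes=4 * MB)
print(wa)  # 4.0
```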

To reduce write amplification, most modern systems support a command called TRIM, which tells the drive which blocks no longer contain valid data so they can be erased. This is necessary because the drive would otherwise need to assume that data logically deleted by the operating system is still valid, which hinders the drive's ability to maintain adequate free space.

However, TRIM is sometimes not possible, such as when the drive is in an external enclosure (most enclosures do not support TRIM) or when the drive is used with an older operating system. Furthermore, under highly-intensive random-write workloads, writes will be spread over large regions of the underlying NAND, which means that forced rewriting of data and attendant write amplification can occur even if the drive is not nearly full.

Modern SSDs experience significantly less write amplification than older drives but some workloads can still benefit from overprovisioning

The earliest SSDs had much less mature firmware that would tend to rewrite data much more often than necessary. Early Indilinx and JMicron controllers (the JMF602 was infamous for stuttering and abysmal random write performance) suffered from extremely high write amplification under intensive random-write workloads, sometimes exceeding 100x. (Imagine writing over 100 MB of data to the NAND when you're just trying to write 1 MB!). Newer controllers, with the benefit of higher processing power, improved flash management algorithms, and TRIM support, are much better able to handle these situations, although heavy random-write workloads can still cause write amplification in excess of 10x in modern SSDs.

Overprovisioning provides the drive with a larger region of free space to handle random writes and avoid forced rewriting of data. All SSDs are overprovisioned to at least some minimal degree; some use only the difference between GB and GiB to provide about 7% of spare space for the drive to work with, while others have more overprovisioning to optimize performance for the needs of specific applications. For example, an enterprise SSD for write-heavy OLTP or database workloads may have 512 GiB of physical NAND yet have an advertised capacity of 400 GB, rather than the 480 to 512 GB typical of consumer SSDs with similar amounts of NAND.
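The GB-vs-GiB trick and the enterprise figures in the paragraph above work out as follows, with spare space expressed as a fraction of the user-visible capacity:

```python
def overprovisioning_pct(physical_bytes: int, advertised_bytes: int) -> float:
    """Spare space as a percentage of the advertised (user-visible) capacity."""
    return (physical_bytes - advertised_bytes) / advertised_bytes * 100

GiB = 2**30      # binary gigabyte, the unit NAND is actually built in
GB = 10**9       # decimal gigabyte, the unit on the drive's label

# Consumer drive: 512 GiB of NAND sold as 512 GB -> ~7% spare space.
print(round(overprovisioning_pct(512 * GiB, 512 * GB), 1))  # 7.4

# Enterprise drive: the same 512 GiB of NAND sold as only 400 GB.
print(round(overprovisioning_pct(512 * GiB, 400 * GB), 1))  # 37.4
```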

If your workload is particularly demanding, or if you're using the drive in an environment where TRIM is not supported, you can manually overprovision space by partitioning the drive so that some space is unused. For example, you can partition a 512 GB SSD to 400 GB and leave the remaining space unallocated, and the drive will use the unallocated space as spare space. Do note, however, that this unallocated space must be trimmed if it has been written to before; otherwise, it will have no benefit as the drive will see that space as occupied. (Partitioning utilities should be smart enough to do this, but I'm not 100% sure; see "Does Windows trim unpartitioned (unformatted) space on an SSD?")

If you're just a normal consumer, overprovisioning is generally not necessary

In typical consumer environments, where TRIM is supported, the SSD is less than 70-80% full, and the drive is not continuously slammed with random writes, write amplification is typically not an issue and overprovisioning is generally unnecessary.

Ultimately, most consumers will not write nearly enough data to disk to wear out the NAND within the intended service life of most SSDs, even with high write amplification, so it's not something to lose sleep over.

bwDraco

Posted 2015-07-24T08:14:46.023

Reputation: 41 701

The amount of additional space varies considerably between SSD models, but in general, this is still true.

Tomasz Klim

Posted 2015-07-24T08:14:46.023

Reputation: 782

Do you have any reference for this? – Léo Lam – 2015-07-28T03:04:42.087

Do you mean reference for a specific drive? Many drives (not only SSD) have public technical references, but sorry, I don't have time to search. However if you're interested in general reference, check this: http://www.samsung.com/global/business/semiconductor/minisite/SSD/downloads/document/03_NAND_Basics.pdf

– Tomasz Klim – 2015-07-28T05:41:56.490

1Yes, I was wondering whether this is still true for all drives, or only for specific models or brands, as the other answer suggests that over-provisioning is no longer necessary on recent drives. – Léo Lam – 2015-07-28T05:44:24.807

The other answer is not exactly right. Indeed, drives with Sandforce controllers had plenty of additional space. That's perfectly true. But any other SSD controller also uses additional space, just not as much. And this probably won't change. – Tomasz Klim – 2015-07-28T05:50:57.167

My answer is that you should be able to provide that additional space by a) enabling TRIM b) not abusing your filesystem by filling above 90% or something; this will also benefit the filesystem. It does rely on you having the discipline not to use the filesystem at near-100% full for an extended period. I guess your point is that requirement may be unreasonable. The unfortunate exception is that if you abuse a Sandforce drive and then clean it up by deleting files, the automatic TRIM will not restore the performance (and therefore write amplification must be staying high causing extra wear). – sourcejedi – 2015-08-02T08:47:45.333

1From the doc for Samsung SSD: "there is always the option to manually set aside additional space for even further-improved performance (e.g. under demanding workloads)", i.e. they suggest you don't need to start considering this unless you have a "demanding workload". – sourcejedi – 2015-08-02T08:48:42.147