Disadvantages of partitioning an SSD?

84

45

A wise guy who goes by the name of NickN maintains a lengthy forum post on his views about building a powerful computer (directed towards playing Microsoft's Flight Simulator X, a very demanding piece of software).

He sums up points about SSD drives somewhere, and he concludes the list as follows:

DO NOT PARTITION SSD

He doesn't elaborate on this unfortunately, but I wonder why he says this. What are the drawbacks of partitioning an SSD? (Partitioning in this context meaning >= 2 partitions)

MarioDS

Posted 2014-09-08T15:45:31.257

Reputation: 1 362

13Well, you'd have to partition it in order to use it. Presumably, he means not to create more than one partition, though why he'd recommend that is beyond me. – ChrisInEdmonton – 2014-09-08T15:54:09.517

1@ChrisInEdmonton yes that's what he means. – MarioDS – 2014-09-08T15:56:16.417

4I can't fathom why this would matter. Even if you're using logical partitions instead of physical partitions, once the OS tells the filesystem driver what section of the drive to use, partitions don't matter. The only thing that might matter is partition alignment, but that has nothing to do with the number of partitions. – Darth Android – 2014-09-08T15:59:19.287

2I can only guess that he means: "Do not partition a very small drive", regardless of whether it is an HDD or an SSD. – Hennes – 2014-09-08T16:02:14.470

1@Hennes good thinking, although his post dates from July 2013. SSDs with 120 GB or 240 GB capacity were already pretty affordable back then. – MarioDS – 2014-09-08T16:05:10.520

6

The only other explanation I can think of is the (wrong) assumption that an SSD controller needs free space on a volume to work with. It needs free space on the disk to be efficient, not per se free space in a mounted volume. Tuning that is just a matter of deciding on over-provisioning and setting the host protected area.

– Hennes – 2014-09-08T16:12:49.493

Answers

126

SSDs do not, I repeat, do NOT work at the filesystem level!

There is no 1:1 correlation between how the filesystem sees things and how the SSD sees things.

Feel free to partition the SSD any way you want (assuming each partition is correctly aligned; a modern OS will handle all this for you); it will NOT hurt anything, it will NOT adversely affect the access times or anything else, and don't worry about doing a ton of writes to the SSD either. Modern SSDs are built so that you can write 50 GB of data a day and the drive will still last 10 years.
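As a quick back-of-the-envelope check on that rating (a sketch; the 50 GB/day and 10-year figures are simply the ones quoted above, not a vendor specification):

    # Total bytes written implied by the rating quoted above.
    gb_per_day = 50
    years = 10
    total_tb_written = gb_per_day * 365 * years / 1000
    print(total_tb_written)  # 182.5 -- roughly 180 TB over the drive's life

That is in the same ballpark as the endurance (TBW) figures vendors publish for consumer drives.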

Responding to Robin Hood's answer,

Wear leveling won't have as much free space to play with, because write operations will be spread across a smaller space, so you "could", but not necessarily will, wear out that part of the drive faster than you would if the whole drive were a single partition, unless you will be performing equivalent wear on the additional partitions (e.g., a dual boot).

That is totally wrong.  You cannot wear out one region of the flash just because your reads and writes are confined to a single partition. This is NOT even remotely how SSDs work.

An SSD works at a much lower access level than what the filesystem sees; an SSD works with blocks and pages.

In this case, what actually happens is, even if you are writing a ton of data in a specific partition, the filesystem is constrained by the partition, BUT, the SSD is not. The more writes the SSD gets, the more blocks/pages the SSD will be swapping out in order to do wear leveling. It couldn't care less how the filesystem sees things!  That means, at one time, the data might reside in a specific page on the SSD, but, another time, it can and will be different. The SSD will keep track of where the data gets shuffled off to, and the filesystem will have no clue where on the SSD the data actually are.

To make this even easier: say you write a file on partition 1. The OS tells the filesystem about the storage needs, and the filesystem allocates the "sectors", and then tells the SSD it needs X amount of space. The filesystem sees the file at a Logical Block Address (LBA) of 123 (for example). The SSD makes a note that LBA 123 is using block/page #500 (for example). So, every time the OS needs this specific file, the SSD will have a pointer to the exact page it is using. Now, if we keep writing to the SSD, wear leveling kicks in, and says block/page #500, we can better optimize you at block/page #2300. Now, when the OS requests that same file, and the filesystem asks for LBA 123 again, THIS time, the SSD will return block/page #2300, and NOT #500.
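To make that remapping concrete, here is a minimal sketch of such a mapping table in Python (a toy model for illustration only, not any real drive's firmware; the page numbers just mirror the example above):

    # Toy Flash Translation Layer: the OS/filesystem only ever sees LBAs;
    # the drive is free to move the physical page behind any LBA at will.
    class ToyFTL:
        def __init__(self):
            self.mapping = {}          # LBA -> physical block/page
            self.next_free_page = 500  # pretend wear leveling picks these

        def write(self, lba, data):
            # Each write can land on a different physical page; the old
            # page is simply marked stale for later garbage collection.
            self.mapping[lba] = self.next_free_page
            self.next_free_page += 1
            # ... program `data` into that page ...

        def read(self, lba):
            return self.mapping[lba]   # wherever the data lives *now*

    ftl = ToyFTL()
    ftl.write(123, b"file data")
    print(ftl.read(123))          # 500
    ftl.write(123, b"file data")  # rewritten after wear leveling
    print(ftl.read(123))          # 501 -- same LBA, new physical page

Notice that partitions never appear anywhere in this model; by the time a request reaches the drive, it is just an LBA.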

Like hard drives, nand-flash S.S.D.s are sequential access, so any data you write/read from the additional partitions will be farther away than it "might" have been if it were written in a single partition, because people usually leave free space in their partitions. This will increase access times for the data that is stored on the additional partitions.

No, this is again wrong!  Robin Hood is thinking in terms of the filesystem, instead of thinking about how an SSD actually works. Again, there is no way for the filesystem to know how the SSD stores the data. There is no "farther away" here; that exists only in the eyes of the filesystem, NOT in the actual way an SSD stores information. It is possible for the SSD to have the data spread out across different NAND chips, and the user will not notice any increase in access times. Heck, due to the parallel nature of the NAND, it could even end up being faster than before, but we are talking nanoseconds here; blink and you missed it.

Less total space increases the likelihood of writing fragmented files, and while the performance impact is small, keep in mind that it's generally considered a bad idea to defragment a nand-flash S.S.D. because it will wear down the drive. Of course, depending on what filesystem you are using, some result in extremely low amounts of fragmentation, because they are designed to write files as a whole whenever possible rather than dump them all over the place to create faster write speeds.

Nope, sorry; again this is wrong. The filesystem's view of files and the SSD's view of those same files are not even remotely close. The filesystem might see the file as fragmented in the worst case possible, BUT, the SSD view of the same data is almost always optimized.

Thus, a defragmentation program would look at those LBAs and say, this file must really be fragmented!  But, since it has no clue as to the internals of the SSD, it is 100% wrong. THAT is the reason a defrag program will not work on SSDs, and yes, a defrag program also causes unnecessary writes, as was mentioned.

The article series Coding for SSDs is a good overview of what is going on if you want to be more technical about how SSDs work.

For some more "light" reading on how FTL (Flash Translation Layer) actually works, I also suggest you read Critical Role of Firmware and Flash Translation Layers in Solid State Drive Design (PDF) from the Flash Memory Summit site.

They also have lots of other papers available, such as:

Another paper on how this works: Flash Memory Overview (PDF).  See the section "Writing Data" (pages 26-27).

If video is more your thing, see An efficient page-level FTL to optimize address translation in flash memory and related slides.

Time Twin

Posted 2014-09-08T15:45:31.257

Reputation: 1 626

Hello, can you please add some links to sources that back up your information? It may very well be that the other answer is factually incorrect, but I've no way of knowing that you are correct either. – MarioDS – 2016-05-30T13:07:12.103

4From Windows Internals 6th ed., part 2, ch. 9 (Storage Management) and 12 (File Systems), you can learn how I/O requests to files go through the file system driver, then the volume driver, and finally the disk driver (also used for SSDs). The FSD translates blocks-within-a-file to blocks-within-a-partition; the volume driver translates the latter to blocks-within-a-disk, i.e. LBAs. So by the time the requests reach the disk driver all file- and partition-related context is GONE. The disk can't be aware of files or partitions because that info just isn't in the requests that come to it. – Jamie Hanrahan – 2016-05-30T15:21:03.070
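To illustrate that translation chain with made-up numbers (a sketch only; the offset is hypothetical): the volume driver's contribution amounts to adding the partition's starting LBA, after which no partition context survives in the request.

    # File-relative block -> partition-relative block -> absolute disk LBA.
    PARTITION_START_LBA = 2048  # hypothetical partition start

    def to_disk_lba(block_within_partition):
        # This is essentially all the volume driver does; the disk driver
        # below it receives only the absolute LBA, nothing else.
        return PARTITION_START_LBA + block_within_partition

    print(to_disk_lba(123))  # 2171 -- the SSD just sees LBA 2171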

5RobinHood is also mistaken in the claim "Like hard drives nand-flash S.S.D's are sequential access". These are random-access devices. If they were sequential access, then you couldn't tell them "read or write block n"; the only block you could access would be the one immediately following, or maybe the one immediately preceding, the one you just accessed. It is true that internally, NAND-flash SSDs can only write data in large "pages" at a time, but that doesn't make them sequential access. Tapes are sequential access. Look it up. – Jamie Hanrahan – 2016-05-30T15:26:27.757

I added another pdf in addition to the first link I had in my answer. – Time Twin – 2016-05-30T21:01:24.850

1@TimeTwin Man, the more I re-read your answer, the dumber I feel for blindly trusting Robin Hood's answer, which indeed contains statements that would make SSD design look very stupid, had they been true. This is a reminder of why we need to remain critical about information, even when it is found on trustworthy sites and has many upvotes.

You've made a rather spectacular entry on this site, enjoy the rep boost and please continue to spread your (verified) knowledge. – MarioDS – 2016-06-09T08:16:22.343

This answer gets lots of upvotes, but it is wrong. Partitions need to be aligned on the SSD block boundary to function efficiently. Modern OSes today generally protect against such errors, but do not listen to "Feel free to partition the SSD any way you want". – harrymc – 2016-10-29T05:54:40.173

@harrymc well, of course it has to be aligned, and the OS will do that for you. If you want to partition the SSD in some other way, you already know all the risks of doing it that way anyway. – Time Twin – 2016-12-04T07:02:11.637

See http://superuser.com/a/162195/28322 on fragmentation and trimming. The SSD is affected by the filesystem after all.

– Basilevs – 2016-12-04T11:55:44.253

@Basilevs, no, that isn't correct. The FTL layer is below the filesystem; it doesn't care what filesystem there is, all that matters is the FTL. Now, when an SSD is low on space, then yes, it gets slower, since it has less room for its housekeeping. Fragmentation at the filesystem level has nothing to do with how the files are stored via the FTL. – Time Twin – 2016-12-05T20:11:09.850

The slides from the last presentation can't be downloaded unfortunately. – Yaroslav Nikitenko – 2017-11-24T15:45:22.487

15

Very long answers here, when the answer is simple enough and follows directly from general knowledge of SSDs. One needs nothing more than to read the Wikipedia article on the solid-state drive to understand the answer, which is:

The advice "DO NOT PARTITION SSD" is nonsense.

In the (now distant) past, operating systems did not support SSDs very well, and in particular, when partitioning, they did not take care to align the partitions according to the size of the erase block.

This lack of alignment, when an OS logical disk sector was split between physical SSD blocks, could require the SSD to flash two physical sectors when the OS only intended to update one, thus slowing disk access and increasing wear.
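To make that failure mode concrete (a sketch with made-up sizes; real physical write units vary by drive):

    # A misaligned logical write straddles two physical units, so one
    # OS-level write turns into two physical program operations.
    UNIT = 4096  # hypothetical physical write unit, in bytes

    def units_touched(offset_bytes, length=4096):
        first = offset_bytes // UNIT
        last = (offset_bytes + length - 1) // UNIT
        return last - first + 1

    print(units_touched(0))    # 1 -- aligned write touches one unit
    print(units_touched(512))  # 2 -- misaligned write touches two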

Currently, SSDs are becoming much larger, and operating systems know all about erase blocks and alignment, so the problem no longer exists. Maybe this advice was once meant to avoid partition-alignment errors, but today these errors are all but impossible.

In fact, the argument for partitioning SSDs is today exactly the same as for classical disks:
To better organize and separate the data.

For example, installing the operating system on a separate and smaller partition is handy for taking a backup image of it as a precaution when making large updates to the OS.

harrymc

Posted 2014-09-08T15:45:31.257

Reputation: 306 093

4

There are no drawbacks to partitioning an SSD, and you can actually extend its life by leaving some unpartitioned space.

Wear leveling is applied across all the blocks of the device (ref. the HP white paper linked below):

In static wear leveling, all blocks across all available flash in the device participate in the wear-leveling operations. This ensures all blocks receive the same amount of wear. Static wear leveling is most often used in desktop and notebook SSDs.

From that, we can conclude that partitions don't matter for wear leveling. This makes sense because, from the drive and controller's point of view, partitions don't really exist. There are just blocks and data. Even the partition table is written to the same blocks (the 1st block of the drive for MBR). It's the OS which then reads the table and decides which blocks to write data to and which not. The OS sees blocks using LBA, which gives a unique number to each block. However, the controller then maps each logical block to an actual physical block, taking the wear-leveling scheme into consideration.

The same white paper gives a good suggestion for extending the life of the device:

Next, overprovision your drive. You can increase the lifetime by only partitioning a portion of the device’s total capacity. For example, if you have a 256 GB drive— only partition it to 240 GB. This will greatly extend the life of the drive. A 20% overprovisioning level (partitioning only 200 GB) would extend the life further. A good rule of thumb is every time you double the drive’s overprovisioning you add 1x to the drive’s endurance.

This also hints that even unpartitioned space is used for wear leveling, thus further proving the point above.

Source: Technical white paper - SSD Endurance (http://h20195.www2.hp.com/v2/getpdf.aspx/4AA5-7601ENW.pdf)
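Read literally, that rule of thumb says endurance gains one extra lifetime per doubling of the over-provisioned fraction. A small sketch of that reading (my interpretation of the white paper's wording; the 6.25% baseline is a hypothetical reference point, not a figure from the paper):

    import math

    def endurance_multiplier(op_fraction, baseline=0.0625):
        # +1x endurance per doubling of over-provisioning vs. the baseline
        return 1 + math.log2(op_fraction / baseline)

    print(endurance_multiplier(0.0625))  # 1.0 -- baseline
    print(endurance_multiplier(0.125))   # 2.0 -- doubled OP, +1x endurance
    print(endurance_multiplier(0.25))    # 3.0 -- doubled again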

JollyMort

Posted 2014-09-08T15:45:31.257

Reputation: 359

1

Disk sectors have been 512 bytes for a long time, and mechanical disks have the property that the dominant factor in how long it takes to read/write a sector is the seek delay. So the main optimization with mechanical hard drives was to read/write blocks sequentially in order to minimize seeks.

Flash works vastly differently from mechanical hard drives. At the raw flash level, you do not have blocks, but pages and "eraseblocks" (to borrow Linux MTD terminology). You can write flash a page at a time, and you can erase flash an eraseblock at a time.

A typical page size for flash is 2KBytes, and a typical size for eraseblocks is 128KBytes.

But SATA SSDs present an interface to the OS that works with 512-byte sector sizes.

If there were a 1:1 mapping between pages and sectors, you can see how you would run into trouble if a partition started on an odd page, or on a page in the middle of an eraseblock. Given that OSes prefer to fetch data from drives in 4 KByte chunks, since this aligns with x86 paging hardware, you can see how such a 4 KByte block could straddle an eraseblock boundary, meaning that updating it would require erasing and then rewriting 2 eraseblocks instead of 1, leading to lower performance.
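Using the example sizes above, a quick alignment check might look like this (a sketch; the geometry constants are the hypothetical ones from this answer):

    # Does a partition's starting offset line up with the flash geometry?
    SECTOR = 512
    PAGE = 2 * 1024          # 2 KByte pages
    ERASEBLOCK = 128 * 1024  # 128 KByte eraseblocks

    def aligned(start_sector):
        offset = start_sector * SECTOR
        return offset % PAGE == 0 and offset % ERASEBLOCK == 0

    print(aligned(63))    # False -- the old DOS-era default start sector
    print(aligned(2048))  # True  -- 1 MiB boundary, the modern OS default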

However, SSD firmware does not maintain a 1:1 mapping; it performs Logical Block Address (LBA) to Physical Block Address (PBA) translation. That means you never know where, say, sector 5000 or any other given sector is really being written in the flash. The firmware does a lot of things behind the scenes, by design, to always try to write to pre-erased eraseblocks. You can't know for sure exactly what it is doing without a disassembly of the firmware, but unless the firmware is complete junk, it probably steps around this.

You may have heard about Advanced Format ("512e") hard drives. These are mechanical hard drives that internally use a sector size of 4 KBytes but still present a 512-byte sector interface to the operating system. This is needed because the gaps between sectors need to get smaller on the platter to fit more data.

That means internally it always reads and writes 4K sectors but hides it from the OS. In this case, if you do not write to sectors that fall on a 4KByte boundary, you will incur a speed penalty because each such read/write will result in two internal 4KByte sectors being read and rewritten. But this does not apply to SSDs.

Anyway, this is the only situation I can think of that would explain why it is suggested not to partition SSDs. But, as shown, it doesn't apply.

LawrenceC

Posted 2014-09-08T15:45:31.257

Reputation: 63 487

-1

What these answers ignore are Windows' SSD optimizations. I do not know if this means that partitioning becomes better, but for a partitioned C: drive used as the Windows drive you can:

  1. turn off indexing
  2. skip keeping track of the time of last access
  3. skip storing the old 8-character DOS names
  4. bypass the Windows Recycle Bin

Ruud van den Berg

Posted 2014-09-08T15:45:31.257

Reputation: 7

Turning off indexing not only slows down searches but also means that you're unable to search inside files. It's not a good suggestion. – Richard – 2016-06-05T22:19:25.333

-2

I decided some background information might be helpful in making this answer clear, but as you can see I went a bit OCD, so you might want to skip to the end and then come back if needed. While I do know a bit, I'm not an expert on S.S.D.s, so if anyone sees a mistake, EDIT it. :)

Background Information:

What Is An S.S.D.?:

An S.S.D., or solid-state drive, is a storage device with no moving parts. The term S.S.D. is often intended to refer specifically to nand-flash-based solid-state drives meant to act as a hard drive alternative, but in actuality they are just one form of S.S.D., and not even the most popular one. The most popular type of S.S.D. is nand-flash-based removable media like USB sticks (flash drives) and memory cards, though these are rarely referred to as S.S.D.s. S.S.D.s can also be RAM-based, but most RAM drives are software-generated as opposed to physical hardware.

Why Do Nand-flash S.S.D.s Intended To Act As A Hard Drive Alternative Exist?:

In order to run an operating system and its software, a fast storage medium is required. This is where RAM comes into play, but historically RAM was expensive and CPUs couldn't address massive quantities of it. When you run an operating system or a program, the currently required portions of data are copied to your RAM, because your storage device isn't fast enough. A bottleneck is created, because you have to wait for the data to be copied from the slow storage device to RAM. While not all nand-flash S.S.D.s achieve better performance than a traditional hard drive, the ones that do help reduce the bottleneck by giving faster access times, read speeds, and write speeds.

What Is Nand-flash?:

Flash storage is a storage medium that uses electricity rather than magnetism to store data. Nand-flash is flash storage that uses NAND gates. Unlike nor-flash, which is random access, nand-flash is accessed sequentially.

How Do Nand-flash S.S.D.s store data?:

Nand-flash storage is composed of blocks; those blocks are split into cells, and the cells contain pages. Unlike a hard drive, which uses magnetism to store data, flash media use electricity; because of this, data cannot be overwritten. Data must be erased in order to reuse the space. The device cannot erase individual pages; erasure must occur at the block level. Since data cannot be written to a block that is already used (even if not all the pages in it are), the entire block must be erased first, and then the now-blank block can have data written to its pages. The problem is that you would lose any data already in those pages, including data you don't want to discard! To prevent this, any existing data to be retained must be copied somewhere else before performing the block erasure. This copying procedure is not performed by the computer's operating system; it is performed at the device level by a feature known as garbage collection.

On hard drives a magnetic platter is used to store data. Much like vinyl records, the platter has tracks, and these tracks are divided into sections called sectors. A sector can hold a certain amount of data (typically 512 bytes, but some newer ones are 4 KB). When you apply a filesystem, sectors are grouped into clusters (based on a size you specify, called an allocation size or cluster size), and then files are written across clusters. It is also possible to divide a sector to make clusters smaller than your sector size. The space left unused in a cluster after a file is written across a cluster (or several) is not usable; the next file starts in a new cluster. To avoid lots of unusable space, people typically use smaller cluster sizes, but this can decrease performance when writing large files.

Nand-flash S.S.D.s do not have a magnetic platter; they use electricity passing through memory blocks. A block is made of cells containing pages. Pages have a fixed capacity (usually 4 KB), and thus the number of pages determines the capacity of a block (usually 512 KB). On an S.S.D. a page equates to a sector on a hard drive, because both represent the smallest division of storage.
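For the example sizes given here, the geometry works out simply (using the answer's own numbers):

    page_kb = 4      # smallest unit you can write
    block_kb = 512   # smallest unit you can erase
    print(block_kb // page_kb)  # 128 pages per block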

What Is Wear Leveling?:

Nand-flash storage blocks can be written to and erased a limited number of times (referred to as their lifecycle). To prevent the drive from suffering capacity reduction (dead blocks), it makes sense to wear down the blocks as evenly as possible. This limited lifecycle is also the main reason why many people suggest not having a page file or swap partition in your operating system if you are using a nand-flash-based S.S.D. (though the fast data transfer speeds from the device to RAM are also a major factor in that suggestion).

What Is Over Provisioning?:

Over-provisioning defines the difference between how much free space there actually is and how much there appears to be. Nand-flash-based storage devices claim to be smaller than they are so that there are guaranteed to be empty blocks for garbage collection to use. There is a second kind of over-provisioning, called dynamic over-provisioning, which simply refers to known free space within the visible free space. There are two types of dynamic over-provisioning: operating-system level and drive-controller level. At the operating-system level, Trim can be used to free blocks that can then be written to immediately. At the controller level, unallocated drive space (not partitioned, no filesystem) may be used. Having more free blocks helps keep the drive running at its best performance, because it can write immediately. It also increases the likelihood of having blocks that are sequentially located, which reduces access times, because nand-flash S.S.D.s use sequential access to read and write data.

What Is Write Amplification?:

Because nand-flash media require a block to be erased before it can be written, any data within the block that isn't being erased must be copied to a new block by garbage collection. These additional writes are called write amplification.
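As a worked example (the numbers are illustrative only): updating one 4 KB page inside a block whose other pages still hold live data forces all of that live data to be rewritten as well.

    # Write amplification = bytes physically written / bytes the host wrote.
    PAGE_KB = 4
    valid_pages_to_relocate = 60  # hypothetical live pages in the victim block

    host_write_kb = PAGE_KB                                   # 4 KB requested
    flash_write_kb = (valid_pages_to_relocate + 1) * PAGE_KB  # 244 KB programmed
    print(flash_write_kb / host_write_kb)                     # 61.0x amplification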

What Is Trim?:

Operating systems are built with traditional hard drives in mind. Remember, a traditional hard drive can directly overwrite data. When you delete a file, the operating system marks it as deleted (okay to overwrite), but the data is still there until a write operation occurs there. On nand-flash-based S.S.D.s this is a problem, because the data must first be erased. Erasure occurs at the block level, so there may be additional data in the block that isn't being deleted. Garbage collection copies any data that isn't up for deletion to empty blocks, and then the blocks in question can be erased. This all takes time and causes unnecessary writes (write amplification)! To get around this, a feature called Trim was created. Trim gives the operating system the power to tell the S.S.D. to erase blocks with pages containing data the operating system has marked as deleted, during periods when you aren't requesting a write operation there. Garbage collection does its thing, and as a result blocks are freed up so that writes can hopefully occur to blocks that don't need to be erased first, which makes the process faster and helps reduce write amplification to a minimum. This is not done on a file basis; Trim uses logical block addressing. The L.B.A. specifies which sectors (pages) to erase, and the erasure occurs at the block level.
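A toy sketch of why Trim reduces that copying (a made-up model, not real firmware behavior):

    # Without Trim the drive must assume every written page is still live
    # and relocate it before erasing; Trim lets it skip deleted pages.
    def pages_to_copy(block_pages, trimmed):
        return [p for p in block_pages if p not in trimmed]

    block = ["fileA", "fileB", "fileC", "fileD"]
    print(len(pages_to_copy(block, trimmed=set())))               # 4 -- no Trim
    print(len(pages_to_copy(block, trimmed={"fileB", "fileC"})))  # 2 -- with Trim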

The Answer To Your Question "Disadvantages of partitioning an SSD?":

Ram Based S.S.D.s:

There is absolutely no disadvantage because they are random access!

Nand-flash Based S.S.D.s:

The only disadvantages that come to my mind would be:

  1. Wear leveling won't have as much free space to play with, because write operations will be spread across a smaller space, so you "could", but not necessarily will, wear out that part of the drive faster than you would if the whole drive were a single partition, unless you will be performing equivalent wear on the additional partitions (e.g., a dual boot).

  2. Like hard drives, nand-flash S.S.D.s are sequential access, so any data you write/read from the additional partitions will be farther away than it "might" have been if it were written in a single partition, because people usually leave free space in their partitions. This will increase access times for the data that is stored on the additional partitions.

  3. Less total space increases the likelihood of writing fragmented files, and while the performance impact is small, keep in mind that it's generally considered a bad idea to defragment a nand-flash S.S.D. because it will wear down the drive. Of course, depending on what filesystem you are using, some result in extremely low amounts of fragmentation, because they are designed to write files as a whole whenever possible rather than dump them all over the place to create faster write speeds.

I'd say it's okay to have multiple partitions, but wear leveling could be a concern if you have some partitions getting lots of write activity and others getting very little. If you don't partition space you don't plan to use, and instead leave it for dynamic over-provisioning, you may receive a performance boost, because it will be easier to free blocks and write sequential data. However, there is no guarantee that the over-provisioning space will be needed, which brings us back to point #1 about wear leveling.

Some other people in this thread have brought up how partitioning will affect Trim's contributions to dynamic over-provisioning. To my understanding, TRIM is used to point out sectors (pages) that have data flagged for deletion, so that garbage collection can then erase those blocks. This free space acts as dynamic over-provisioning within THAT partition only, because those sectors are part of clusters being used by that partition's filesystem; other partitions have their own filesystems. However, I may be totally wrong on this, as the whole idea of over-provisioning is a bit unclear to me, since data will be written to places that don't even have filesystems or appear in the drive's capacity. This makes me wonder if perhaps over-provisioning space is used on a temporary basis before a final optimized write operation to blocks within a filesystem? Of course, Trim's contributions to dynamic over-provisioning within the filesystem would not be temporary, as they could be written to directly since they're already in usable space. That's my theory, at least. Maybe my understanding of filesystems is wrong? I've been unable to find any resources that go into detail about this.

Robin Hood

Posted 2014-09-08T15:45:31.257

Reputation: 3 192

17

"1. Wear leveling won't have as much free space to play with, because write operations will be spread across a smaller space (...)". This seems not to be true as wear levelling is performed at lower level by SSD controller (at least with SSD and operating system that support Trim. http://superuser.com/a/901521/517270

– misko321 – 2015-12-01T20:00:22.933

4NAND-based memories allow random access to blocks. What they do not allow is random access to bits inside a block. So partitions can be accessed randomly, because they are multiples of the block size (at least they should be, if the user didn't mess with the memory somehow, e.g. by using partitioning apps without knowing what is happening) – Miguel Angelo – 2016-04-28T02:45:30.100

5points 1 and 2 seem to be totally false – underscore_d – 2016-06-08T14:36:33.233

-14

No, this makes sense.

The speed of an SSD is directly tied to the amount of usable free space on the in-use partition. If you partition the drive into small sections, the efficiency of the SSD suffers because of the lack of free space.

So there are no drawbacks to partitioning an SSD, but there are drawbacks to not having free space on the drive.

Refer to this SuperUser post.

Mark Lopez

Posted 2014-09-08T15:45:31.257

Reputation: 925

1Creating logical partitions doesn't necessarily fill them up, does it? I don't see how you lose free space automatically in doing this. – MarioDS – 2014-09-08T16:17:08.627

1The OS knows which blocks can be used and which blocks are free; the drive cannot. By partitioning, the OS has fewer free blocks that it knows about, blocks that can be used. This reduces performance. TRIM is executed at the partition level by the OS. – Mark Lopez – 2014-09-08T16:39:10.750

10But it can. That’s what TRIM is for, after all. TRIM is executed on the sector level and the SSD doesn’t care about partitions. It only cares about sectors (aka flash cells). As such, partitions have only a negligible impact (space used by filesystem overhead) on performance. – Daniel B – 2014-09-08T17:26:11.780

1Actually HDDs are the ones you shouldn't create partitions on – Suici Doga – 2016-06-09T01:39:37.667