163

SSDs have been around for several years now, but the issue of reliability still comes up.

I guess this is a follow-up to this question, posted 4 years ago and last updated in 2011. It's now 2013; has much changed? I'm looking for some real evidence rather than just a gut feeling. Maybe you're using them in your DC. What's been your experience?

Reliability of SSD drives


UPDATE:

It's now 2016. I think the answer is probably yes (a pity they still cost more per GB though).

This report gives some evidence:

Flash Reliability in Production: The Expected and the Unexpected

And some interesting data on (consumer) mechanical drives:

Backblaze: Hard Drive Data and Stats

hookenz
  • Why do you say that the reliability issue still comes up? – ewwhite May 14 '13 at 05:53
  • My wife's laptop SSD stops working every few months and requires a strange "power on but don't try booting for twenty minutes" fix. Then it's fine again. New technology, new ways of failing. – Jaydee May 14 '13 at 09:07
  • You don't want a reliable drive anyway. If it fails at 2PM every day you'll be able to rely on it to set your watch. What you want is a resilient drive. – Alan B May 14 '13 at 11:05
  • check this out http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm – David Fregoli May 14 '13 at 12:41
  • Just a single data point, I'm afraid: I got a MacBook Retina Pro in September last year, and had terminal SSD failure within 60 days. Replacement unit has been fine, but I'm very wary of it now simply because user-replacement/upgrades of these is really not an option. – Roddy May 14 '13 at 09:46
  • I've had an OCZ 240GB SSD since summer last year. I've not had any problems with it in around 8 months of use in my home desktop. – tombull89 May 14 '13 at 15:21
  • SSDs have been around for *a lot* longer than “several years now”. More like 40, actually, and even if you mean Flash-based SSDs, we're talking nigh on 20 years. – al45tair May 14 '13 at 15:26
  • @Jaydee: If this continues to be a problem, try upgrading the firmware. I had this issue myself and upgrading the firmware killed the problem. – Michael B May 15 '13 at 07:43
  • @Kyle thanks, I'll try that when she is away for a few days. – Jaydee May 15 '13 at 08:17
  • I've had two Intel SSD breakdowns in my HP notebook. The last breakdown was one year ago. The first breakdown was after I had unplugged the power and ripped out the battery (had to catch a train and couldn't wait for Windows to shut down). The 2nd might be due to the computer becoming very warm. I would not say that they are as reliable as regular drives; you have to "treat them with respect". If you need an "emergency shut down" on a notebook, press the power button and hold it. Do backups. When an SSD dies it is just luck if you can get any files out of it. But I am no "hardware man", just a software guy... – mortb May 15 '13 at 08:49
  • @alastair: Where do you get "40 years"? Wikipedia claims the first [flash-based SSD](http://en.wikipedia.org/wiki/solid-state_drive) was in 1989 (27 years ago). Or are you referring to earlier non-flash solid state memory, such as [magnetic-core memory](http://en.wikipedia.org/wiki/magnetic-core_memory)? Wikipedia claims the first core memory was installed on Whirlwind in 1953 (63 years ago). – David Cary Sep 29 '16 at 01:26
  • @DavidCary I was referring to the SCSI DRAM/SRAM-based SSDs that you used to be able to get (probably still can… haven’t looked), which have a lot in common with modern SSDs (they look like disks and are read/written in disk block sized units). They were mostly used to accelerate database lookups. I wasn't referring to core memory, which was used more like RAM. – al45tair Sep 29 '16 at 07:39
  • @matt regarding your update "resounding YES. If not more reliable" - I'm not denying it but we kind of like proof or reference here as you know - are there any? genuinely interested. – Chopper3 Sep 29 '16 at 09:47
  • Yes perhaps that was a bit premature. It's my own personal observation with a number of drives in a datacentre. Instead I'll put a more meaningful link there. – hookenz Sep 29 '16 at 21:31

5 Answers

176

This is going to be a function of your workload and the class of drive you purchase...

In my server deployments, I have not had a properly-spec'd SSD fail. That's across many different types of drives, applications and workloads.

Remember, not all SSDs are the same!!

So what does "properly-spec'd" mean?

If your question is about SSD use in enterprise and server applications, quite a bit has changed over the past few years since the original question. Here are a few things to consider:

  • Identify your use-case: There are consumer drives, enterprise drives and even ruggedized industrial application SSDs. Don't buy a cheap disk meant for desktop use and run a write-intensive database on it.

  • Many form-factors are available: Today's SSDs can be found in PCIe cards, SATA and SAS 1.8", 2.5", 3.5" and other variants.

  • Use RAID for your servers: You wouldn't depend on a single mechanical drive in a server situation. Why would you do the same for an SSD?

  • Drive composition: There are DRAM-based SSDs, as well as the MLC, eMLC and SLC flash types. The latter have finite lifetimes, but they're well-defined by the manufacturer, e.g. you'll see daily write limits like 5TB/day for 3 years (see the back-of-the-envelope endurance sketch after this list).

  • Drive application matters: Some drives are for general use, while there are others that are read-optimized or write-optimized. DRAM-based drives like the sTec ZeusRAM and DDRDrive won't wear out. These are ideal for high-write environments and to front slower disks. MLC drives tend to be larger and optimized for reads. SLC drives have a better lifetime than the MLC drives, but enterprise MLC really appears to be good enough for most scenarios.

  • TRIM doesn't seem to matter: Hardware RAID controllers still don't seem to fully support it. And most of the time I use SSDs, it's going to be on a hardware RAID setup. It isn't something I've worried about in my installations. Maybe I should?

  • Endurance: Over-provisioning is common in server-class SSDs. Sometimes this can be done at the firmware level, or just by partitioning the drive the right way. Wear-leveling algorithms are better across the board as well. Some drives even report lifetime and endurance statistics; for example, some of my HP-branded Sandisk enterprise SSDs show 98% life remaining after two years of use (see the SMART sketch after this list for one way to read those counters).

  • Prices have fallen considerably: SSDs hit the right price:performance ratio for many applications. When performance is really needed, it's rare to default to mechanical drives now.

  • Reputations have been solidified: e.g. Intel is safe but not high-performance. OCZ is unreliable. Sandforce-based drives are good. sTec/STEC is extremely-solid and is the OEM for a lot of high-end array drives. Sandisk/Pliant is similar. OWC has great SSD solutions with a superb warranty for low-impact servers and for workstation/laptop deployment.

  • Power-loss protection is important: Look at drives with supercapacitors/supercaps to handle outstanding writes during power events. Some drives boost performance with onboard caches or leverage them to reduce wear. Supercaps ensure that those writes are flushed to stable storage.

  • Hybrid solutions: Hardware RAID controller vendors offer the ability to augment standard disk arrays with SSDs to accelerate reads/writes or serve as intelligent cache. LSI has CacheCade and its Nytro hardware/software offerings. Software and OS-level solutions also exist to do things like provide local cache on application, database or hypervisor systems. Advanced filesystems like ZFS make very intelligent use of read and write-optimized SSDs; ZFS can be configured to use separate devices for secondary caching and for the intent log, and SSDs are often used in that capacity even for HDD pools (see the zpool sketch after this list).

  • Top-tier flash has arrived: PCIe flash solutions like FusionIO have matured to the point where organizations are comfortable deploying critical applications that rely on the increased performance. Appliance and SAN solutions like RamSan and Violin Memory are still out there as well, with more entrants coming into that space.
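
To put the endurance ratings mentioned in the list above in perspective, here is a back-of-the-envelope sketch. The 5TB/day-for-3-years figure is the example quoted above; the 400GB capacity is an assumed, hypothetical drive size rather than any particular product:

```python
# Rough endurance arithmetic for a flash SSD quoted with a daily write limit.
# Assumed numbers for illustration only: a 400 GB drive rated for 5 TB of
# writes per day over a 3-year warranty.

capacity_gb = 400            # assumed drive capacity (hypothetical)
daily_write_limit_tb = 5     # example rating from the list above
warranty_years = 3

total_writes_tb = daily_write_limit_tb * 365 * warranty_years     # ~TBW rating
drive_writes_per_day = daily_write_limit_tb * 1000 / capacity_gb  # DWPD (decimal units)

print(f"Rated endurance: ~{total_writes_tb} TB written over {warranty_years} years")
print(f"That's roughly {drive_writes_per_day:.1f} full drive writes per day (DWPD)")
```

A consumer drive of similar size is often rated for well under one full drive write per day, which is the gap the "identify your use-case" point is about.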
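
On the point about drives reporting lifetime and endurance statistics, one way to read those counters on a Linux host is via smartmontools. A minimal sketch, assuming smartctl is installed; the device path is a placeholder and the exact attribute names vary by vendor and firmware:

```python
import subprocess

# Print the wear/endurance-related SMART attributes for one drive.
# Attribute names differ between vendors (e.g. Wear_Leveling_Count,
# Media_Wearout_Indicator, Percent_Lifetime_Remain), so filter loosely.

DEVICE = "/dev/sda"  # placeholder -- point this at the actual SSD

smart_output = subprocess.run(
    ["smartctl", "-A", DEVICE],
    capture_output=True, text=True, check=True,
).stdout

for line in smart_output.splitlines():
    if any(key in line for key in ("Wear", "Wearout", "Lifetime", "Percentage Used")):
        print(line)
```

Drives sitting behind a hardware RAID controller may need to be queried through the controller's own management tools instead.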
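
For the ZFS configuration mentioned above, read-optimized and write-optimized SSDs attach to an existing pool of spinning disks as cache (L2ARC) and log (SLOG) devices. This sketch just wraps the standard zpool commands; the pool name tank and the device paths are placeholders:

```python
import subprocess

# Attach SSDs to an existing ZFS pool of HDDs.
# "tank" and the device paths below are placeholders.

POOL = "tank"
READ_CACHE_SSD = "/dev/disk/by-id/ssd-for-l2arc"  # read-optimized SSD (placeholder)
LOG_SSD = "/dev/disk/by-id/ssd-for-slog"          # write-optimized, supercap-backed SSD (placeholder)

# Read-optimized SSD as secondary read cache (L2ARC)
subprocess.run(["zpool", "add", POOL, "cache", READ_CACHE_SSD], check=True)

# Write-optimized SSD as a separate intent log device (SLOG)
subprocess.run(["zpool", "add", POOL, "log", LOG_SSD], check=True)
```

The log device is the one that benefits most from the supercap-backed, write-optimized drives discussed above, since synchronous writes land there first.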


ewwhite
  • TRIM is really important in drives with very little over-provisioning, which is mainly the case in consumer drives where the $/GB is all-important. Most enterprise drives have enough over-provisioning that TRIM doesn't make any difference. – Mr Alpha May 14 '13 at 08:32
  • "Don't buy a cheap disk meant for desktop use". Is there any reason for this discrimination against desktop users? I think SSDs are common now thanks in part to the demand for high performance gaming rigs. And about the price, I think you can find bargains if you spend some time on the search, and these are still very good quality. – Mister Smith May 14 '13 at 12:51
  • @MisterSmith See what I wrote above. Different SSDs have different characteristics. Use the right tool for the job. If I were to take a [Corsair](http://www.corsair.com/en/ssd.html) or other consumer-level drive and use it as an SSD for an active write-heavy database system or as the [**ZIL** log device](http://nex7.blogspot.com/2013/04/zfs-intent-log.html) for a ZFS storage array, I'd burn through it in a month or two. – ewwhite May 14 '13 at 13:00
  • @ewwhite A PC gamer usually buys 1 drive from time to time, and usually does not care about the price as much as an engineer having to buy 1k drives for a server room. If you are building a top-end gaming rig you usually buy the best drive available, but for a datacenter you'd look more the $/GB ratio. I mean I don't think SSDs are good or bad depending on the computer type. – Mister Smith May 14 '13 at 13:10
  • @MisterSmith a hard drive failing in a datacenter is much more catastrophic than a hard drive failing on a gamers PC (well, in some respects). – rickyduck May 14 '13 at 13:30
  • Excellent post. One pet peeve of mine: RAID controllers are not always the right choice with SSDs. RAID controllers were designed for striping data and adding error correction codes across multiple magnetic disks. SSD controllers already *natively stripe data and add error correction codes* across multiple banks of NVRAM. Also, adding a RAID controller introduces one extra SPOF, the RAID controller itself. Using a separate RAID controller is *often* the right choice, but *sometimes* using a *better SSD* (higher grade SATA/SAS or even PCI-E cards like Fusion-IO) is a *better* choice. –  May 14 '13 at 14:11
  • @JesperMortensen I deal with a lot of SSDs in ZFS storage setups. There I use SAS HBA cards. There's an entire set of best practices for using SSDs and RAID controllers. HP Smart Array, for instance, has limits on how many SSDs can sit on the same controller before you encounter diminishing performance returns. LSI has their [**Fast Path**](http://www.lsi.com/channel/products/storagesw/Pages/MegaRAIDFastPathSoftware.aspx) solution for MegaRAID which basically disables the normal RAID controller optimizations meant for spinning disks, thus improving performance for SSD arrays. – ewwhite May 14 '13 at 14:23
  • @rickyduck, Actually in a data center, a single drive failure is protected by RAID, and means spending a few $$$ to replace it with no downtime; whereas in a gaming rig, a failure of the single drive is total data loss, and OS re-install. –  May 14 '13 at 19:10
  • @MisterSmith even high end desktop SSDs are cheaper per GB than ones targeted at typical data center uses. In addition to more powerful controllers in some instances (or just low volume firmware tuned for throughput instead of burst performance), they get the best flash skimmed off the top of the production run because many server workloads are orders of magnitude more IO intensive than desktop drives and would quickly kill a consumer drive. – Dan Is Fiddling By Firelight May 14 '13 at 21:10
  • @ewwhite, Does "partitioning the drive the right way" mean partitioning such that half the drive is unused? The consumer drives I provision for "half use" have lasted longer. It seems to give them more endurance, maybe not. – rjt May 14 '13 at 21:25
  • @rjt Yes, either by overprovisioning with a firmware tool or even something as simple as partitioning. It definitely helps drive endurance. – ewwhite May 14 '13 at 21:27
  • @ewwhite would you add anything in 2018? – LueTm Apr 25 '18 at 06:06
59

Every laptop at my work has had either an SSD or a hybrid drive since 2009. My SSD experience in summary:

  • What I'll call "1st Generation" drives, sold around 2009 mostly:
    • In the first year about 1/4 died, almost all from Syndrome of Sudden Death (SSD - It's funny, laugh). This was very noticeable to end users, and annoying, but the drastic speed difference made this constant failure pattern tolerable.
    • After 3 years all of the drives have died (Sudden Death or wear-out), except two that are still kicking (they're actually L2ARC drives in a server now).
  • The "2nd Gen" drives, sold around 2010-11, are distinct from the previous generation as their Syndrome of Sudden Death rates dropped dramatically. However, the wear-out "problem" continued.
    • After the first year most drives still worked. There were a couple of Sudden Deaths. A couple failed from wear-out.
    • After 2-3 years a few more than half are still working. The first year rate of failure has essentially continued.
  • The "3rd Gen" drives, sold 2012+ are all still working.
    • After the first year all still work (knock on wood).
    • The oldest drive I've got is from Mar 2012, so no 2-3 year data yet.

[Graph: SSD Failure (Cumulative)]


May 2014 Update:
A few of the "2nd Gen" drives have since failed, but about a third of the original drives are still working. All the "3rd Gen" drives from the above graphic are still working (knock on wood). I've heard similar stories from others, but they still carry the same warning about death on swift wings. The vigilant will keep their data backed up well.

Stuart Brock
Chris S
  • My experience echoes this. That said, we still provide USB hard drives for employees to use as time machine backups (in addition to our standard offsite backup regimen), to allow for quick, granular restores in the event of catastrophic failure or loss. – EEAA May 14 '13 at 04:37
  • My OCZ Vertex 3 (in use for about a year) works, but two or three times part of my data has been corrupted – KindDragon May 14 '13 at 11:12
  • Give the 3rd generation another year or two. ;) – Andy May 14 '13 at 12:27
  • Also worth noting that SSDs are far more likely to suffer from firmware bugs than HDDs; the firmware is both more complicated and less mature, which is not a good combination. – al45tair May 14 '13 at 15:29
  • @Andy is right. Starting with 2009, you say the half-life was about a year. For the 2010 to 2011 range we're barely at 3 years old for the middle of that range right now, where you indicate a half-life of 3 years. That could only be based on current observations. Whether the half-life for 2012 & 2013 drives has improved beyond 3 years can't really be known until at least 2016. (We could try to extrapolate from early failures, but those would likely just be isolated manufacturing defects, not caused by long-term regular use.) – Andrew Vit May 14 '13 at 17:15
  • @AndrewVit We can *guess* based on non-natural endurance testing. If drive A can sustain maximum speed writes for five times longer than drive B, and you can repeat this with the majority of drives A and B, then you can extrapolate from that that drive A is significantly more durable. I wouldn't say five times more durable as that non-natural test ignores a lot of natural degradation, but it's the cell write limit that left early generation drives broken, not other factors (in general). – Phoshi May 15 '13 at 10:42
  • Er, no... Sudden death (controller failure?) accounted for about 3/4 of my 1st Gen failures, and usually occurred within the first year. I do still have 2 of those 1st Gens around; they're at 30% and 60% life currently. The 2nd Gen seemed more likely to "die" from use, roughly reversing the 1:3 ratio from the 1st Gen. 3rd Gen has notably bucked this trend; not a single failed drive within the first year. Not saying they won't all die tomorrow, but the first year trends are different with statistical significance. – Chris S May 15 '13 at 14:30
  • Great answer and very interesting results. It seems fair to say that the life of SSD drives is not quite on par yet with mechanical drives. That being said, it would appear that the trend is toward longer life and of course higher capacity. – hookenz May 15 '13 at 21:26
  • @matt I think that's wrong. For server applications, enterprise mechanical disks tend to fail more often than SSDs of the equivalent tier. But as I said, it depends on the type of drive being used. – ewwhite May 15 '13 at 21:47
  • I've been using 2 x Intel X25M 80Gb G1's since 2010. My work involves heavy read/write processes. Every few months or so I reset the drive (I don't have to) to bring back performance, but otherwise zero problems. I added a 256Gb Samsung 230 last year. No problems thus far! – Antillar Maximus May 16 '13 at 11:51
  • We're now in May 2014, so a little over two years for the oldest Gen 3 drive. How is it holding up? – user May 17 '14 at 20:03
  • Brief update.. I'll see about graphing the data on Monday when I'm at work (all of the data is from my work experience, none of my home SSDs have ever failed, knock on wood) – Chris S May 18 '14 at 02:39
  • In the update you wrote that a third of the 1 gen drives were working? wouldn't this be ~66% cumulative failure rate at 6 years? And what is the unit, percent? It would be nice to see graphs if you have time :) – Nisse Jun 13 '14 at 03:30
  • @Nisse That's 1/3 of the original "2nd Gen" drives. The "1st Gen" drives are all dead except 2 that refuse to die. Unit counts are about 50 drives for each Gen. – Chris S Jun 13 '14 at 14:16
  • @alastair Why would they be more likely to suffer from firmware bugs, given that storing data on ICs is dramatically simpler than storing data on rotating metal platters? Do you have a source link for that info, because I've heard exactly the opposite is true. – NickG Feb 10 '15 at 13:07
  • @NickG Two years ago, when I wrote that comment, the firmware in SSDs (which is more complicated, not less, because it has to run a wear-levelling algorithm as well as handling the fact that Flash erase block size is different from Flash page size) was relatively new, and the evidence I saw (I run a disk utility company) was that SSDs were more likely to exhibit problems. Now, I would say, that's no longer the case; SSD firmware is typically as reliable as HDD firmware. – al45tair Feb 11 '15 at 11:19
  • OK understood :) The algorithms for controlling rotary drives are far from simple though. Working out exactly when to start writing data on a rapidly spinning disk with several platters/faces/heads and handling legacy protocols (cylinders/heads/sectors & LBA) as well as the more modern addressing mechanisms is pretty complex. You also have to optimise when to read it (for when the right bit of the platter comes around) which isn't an issue on SSD. I think SSD firmware was only less reliable then because it was less bedded-in - I very much doubt it's truly more complex. – NickG Feb 11 '15 at 11:29
  • @ChrisS It has been another year, how are your SSDs faring? – Erbureth Jul 22 '15 at 11:15
18

In my experience, the real problem is dying controllers, not the flash memory itself. I've installed around 10 Samsung SSDs (830, 840 [not Pro]) and none of them has caused any problems so far. The total opposite is drives with Sandforce controllers: I had several problems with OCZ Agility drives, especially freezes at irregular intervals, where the drive stops working until I power the computer off and on again. I can give you two pieces of advice:

  1. If you need high reliability, choose a drive with MLC or, better, SLC flash. Samsung's 840, for example, has TLC flash and a short warranty; I think that's not without reason ;)

  2. Choose a drive with a controller that is known to be stable.

klingt.net
  • Reminds me of the dying controllers in current LED-based lighting. The LEDs last a very long time but the controllers don't seem to. – hookenz May 16 '13 at 02:53
  • Who knows, but maybe it's part of the industry's planned obsolescence :) – klingt.net May 16 '13 at 07:52
11

www.hardware.fr, one of the biggest French hardware news sites, is partnered with www.ldlc.com, one of the biggest French online resellers. They have access to LDLC's return stats and have been publishing failure rate reports (motherboards, power supplies, RAM, graphics cards, HDDs, SSDs, ...) twice a year since 2009.

These are "early death" stats, 6 months to 1 year of use. Also returns direct to the manufacturer can't be counted, but most people return to the reseller during the first year and it shouldn't affect comparisons between brands and models.

Generally speaking, HDD failure rates show less variation between brands and models. The rule is bigger capacity > more platters > higher failure rate, but nothing dramatic.

SSD failure rates are lower overall, but some SSD models were really bad, with around 50% returns for the infamous ones during the period you asked about (2013). That seems to have stopped now that the infamous brand has been bought.

Some SSD brands are "optimising" their firmware just to get slightly higher benchmark results, and you sometimes end up with freezes, blue screens, ... This also seems to be less of a problem now than it was in 2013.

Failure rate reports are here:
2010
2011 (1)
2011 (2)
2012 (1)
2012 (2)
2013 (1)
2013 (2)
2014 (1)
2014 (2)
2015 (1)
2015 (2)
2016 (1)
2016 (2)

Sacha K
  • Here is a link to an automatic translated version of the french article http://translate.googleusercontent.com/translate_c?act=url&depth=1&hl=de&ie=UTF8&prev=_t&rurl=translate.google.de&sl=auto&tl=en&u=http://www.hardware.fr/articles/893-7/ssd.html&usg=ALkJrhjrDde7MRnf8NWJEm3UYQrrseA0Nw – SDwarfs May 14 '13 at 09:37
0
toffitomek