Does the SSD cache in an SSHD compromise its lifespan compared to a standard HDD?

10

3

The limited write endurance and capacity/price ratio of SSDs are known drawbacks, as is the sensitivity of HDDs to shocks.

A hybrid SSHD (standard HDD + SSD cache) combines a classic HDD with a small SSD used as a cache and managed by the SSHD's firmware.

Now I want to know what happens when the SSD cache reaches the write limit for all of its cells. The two possible alternatives are:

a) the firmware simply stops using the SSD cache and the SSHD becomes a standard HDD

b) the SSHD becomes unusable

What is the right answer? (Alternative b would make the SSHD the less durable option and the worst possible choice for a server.)

I have searched for reliable sources about this, but I haven't found anything.

Mechanical HDD problems are very rare if the drive is not subjected to shocks during read/write operations; under standard conditions the MTBF of a modern HDD ranges from 1 million to 1.5 million hours. In SSDs, particularly TLC SSDs, wear is a real problem: typical maximum P/E cycles per block for MLC range from 1,500 to 10,000 (commonly 5,000). Reaching 5,000 cycles is relatively easy if the storage is used intensively (especially in a server). So the durability of the SSD cache really matters, and it must also be considered that the SSD-cache sectors in an SSHD are used even more intensively than those of a standalone SSD, which can distribute the wear over a larger space.
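To make the concern concrete, here is a back-of-envelope estimate of how long a flash cache's write budget could last. All the numbers (8 GB cache, 5,000 P/E cycles, 100 GB/day hitting the cache, perfect wear leveling) are illustrative assumptions, not vendor specifications:

```python
# Rough estimate of flash-cache lifetime under assumed write load.
# Assumes ideal wear leveling across the whole cache; real firmware
# behavior and write amplification would change the result.

def cache_wearout_days(cache_gb, pe_cycles, writes_gb_per_day):
    """Days until the cache's total write budget is exhausted."""
    total_write_budget_gb = cache_gb * pe_cycles
    return total_write_budget_gb / writes_gb_per_day

# 8 GB MLC cache rated for ~5,000 P/E cycles, 100 GB/day written to it
print(cache_wearout_days(8, 5000, 100))  # → 400.0 days
```

Of course, as the answers below note, firmware is free to write far less than this to flash, which stretches the real lifetime enormously.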

Silverstorm

Posted 2013-04-11T02:09:25.083

Reputation: 531

1I've never had a modern hard drive fail on me without some extenuating circumstance (i.e. shock to the drive or something) -- but I bought two SSHDs about a year ago (Seagate), and one of them, with only standard usage (no shocks, etc.) has failed already. :( -- I'm thinking ISRT with regular drives is the way to go. – BrainSlugs83 – 2015-09-12T08:09:22.103

1@BrainSlugs83 Could you tell what exactly happened to your broken SSHD? (If bad sector, power issue or whatever else) – Silverstorm – 2015-09-12T22:57:48.253

Should probably point out that this will probably never happen, due firstly to the algorithms in use, and secondly because the cache in an SSHD uses a different type of memory than standard SSDs, with hundreds of times higher lifespan - i.e. SLC or high-endurance MLC. – qasdfdsaq – 2015-10-16T19:07:41.697

Answers

5

Cache

@David Schwartz, when you mention that the data is already in the OS cache, I couldn't agree more.

But the problem is that size matters: if the file cache managed by the operating system is smaller than the SSD cache, the SSD cache in the SSHD can still save you time on disk reads.

In my situation, I am running openSUSE 12.3 x64 on a Lenovo ThinkCentre Edge 72z with 16 GB of RAM. My file cache is about 3 GB after 14 hours. If your computer has 64 GB of RAM, the file cache might be more than 8 GB. As mentioned above, an 8 GB SSD cache is then less useful than the file cache. That's why Seagate offers the Seagate Enterprise Turbo SSHD with a 32 GB SSD cache.

Wear Out

Toshiba provides an FAQ explaining what happens when the SSD cache wears out: the drive should continue to function as a normal hard drive.

Conclusion

Before the SSD cache wears out completely, more and more flash blocks will become damaged, which means the available SSD cache keeps shrinking. The user will notice the performance declining slowly, without any warning from S.M.A.R.T.

You may check my question SSHD will function as normal Hard Drive when the SSD Cache wear out for details and my update on this matter.

Amigo

Posted 2013-04-11T02:09:25.083

Reputation: 201

1

Given the write speed of a typical HDD, the write endurance of a typical SSD, and the logic of a typical SSHD, this is an almost impossible failure mode to trigger. Long before you hit the write endurance of the SSD, the HDD would likely have mechanically failed. Honestly, this is basically the last thing you should worry about.

Update: Unlike a standard SSD, an SSHD never has to write anything to flash; it only writes to flash if its firmware decides to. If the write volume is high, there's no point in using the flash to buffer writes (it would just fill up eventually and stop providing any benefit). If the write volume is low, then it won't age the flash significantly. Similarly for reads from the HDD: it only makes sense to cache data that is frequently read and rarely changed, and there can't be much of that - it's mathematically impossible. Because all modern OSes access their drives through a cache, there's also no point in caching data that has just been read or written; the OS will not need to read it back from the drive soon, since it's already in the OS cache.
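The admission logic described above can be sketched roughly as follows. This is purely illustrative (no vendor publishes its actual firmware policy); the threshold value and the per-block read counter are assumptions chosen to show the idea that write-heavy or cold data never touches the flash at all:

```python
from collections import Counter

# Hypothetical sketch of a "promote on repeated reads" cache policy:
# writes bypass the flash entirely, and a block is only promoted to
# flash after it has been read from the platters several times.
READ_THRESHOLD = 3  # assumed value, purely for illustration

read_counts = Counter()

def should_cache_in_flash(lba, is_write):
    if is_write:
        read_counts[lba] = 0   # block changed: it is no longer "hot" read data
        return False           # writes go straight to the HDD
    read_counts[lba] += 1
    return read_counts[lba] >= READ_THRESHOLD  # promote frequently read blocks
```

Under a policy like this, sequential streams and write-heavy workloads generate essentially zero flash wear, which is why the endurance limit is so hard to reach in practice.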

David Schwartz

Posted 2013-04-11T02:09:25.083

Reputation: 58 310

Except the SSHD cache can be exposed to the OS, and the OS can manage it as an extension of its own cache (it's in the UEFI options for SSHD drives). -- Honestly though, I saw worse performance that way than letting the drives manage their own caches in hardware... – BrainSlugs83 – 2015-09-12T08:14:04.383

One thing an SSHD could do, if it were smart, is to be aware of the disk blocks that are most commonly read during boot-up, assuming the user reboots often enough for it to matter, and keep those in cache. On systems with 8 GB or more of page cache in RAM, boot time on mechanical HDDs is still a bottleneck, even if normal operation is blazing fast after the RAM page cache is warm. – allquixotic – 2015-10-14T15:29:36.843

Mechanical HDD problems are very rare if the drive is not subjected to shocks during read/write operations (under standard conditions the MTBF of a modern HDD ranges from 1 million to 1.5 million hours). For SSDs, especially MLC, typical maximum P/E cycles per block range from 1,500 to 10,000 (commonly 5,000). Reaching 5,000 cycles is relatively easy if the storage is used intensively (especially in a server). So the durability of the SSD cache really matters (consider that SSD-cache sectors are used even more intensively than in a standalone SSD, which can distribute the wear over a larger space). – Silverstorm – 2013-04-11T15:40:21.927

@Silverstorm: See updates to my answer. – David Schwartz – 2013-04-11T19:56:14.343

0

I don't have a definitive reference yet, but I am pretty sure the answer is A. As an SSD cell is used over time, electrons slowly build up in the insulator layer, shrinking the voltage range that can be used for programming. This results in the controller either performing multiple read/write retries (when it can't determine a value), returning errors (when the wrong value is read), or marking blocks as unusable. The SSD as a whole wouldn't cease to work, but it may not work as well.

As a side note, wear-leveling algorithms and firmware controllers keep getting better at preventing this. TechReport just did a review of the new Seagate drive here.

Seagate doesn't publish an endurance specification for the Laptop Thin SSHD's flash component, but the drive is covered by a three-year warranty. Even for worst-case workloads, Burks says there's a "really high level of wear-level margin."

Brad Patton

Posted 2013-04-11T02:09:25.083

Reputation: 9 939

You would hope that they programmed the firmware to fail over to HDD only usage in the event of worn flash, but economic pressures may compel them to behave otherwise. No way to really know if Seagate doesn't say without disassembling and analyzing the original firmware. – LawrenceC – 2013-04-11T15:06:55.913

Thanks for the answer; unfortunately I have tried to contact Seagate support, but nobody seems to know reliable info about this. – Silverstorm – 2013-04-11T15:51:56.667

Another resource: http://www.storagesearch.com/ssdmyths-endurance.html – Brad Patton – 2013-04-11T15:57:11.830