45

We (and by we I mean Jeff) are looking into the possibility of using consumer MLC SSDs in our backup data center.

We want to try to keep costs down and usable space up - so the Intel X25-Es are pretty much out at about $700 each for 64GB of capacity.

What we are thinking of doing is to buy some of the lower-end SSDs that offer more capacity at a lower price point. My boss doesn't think spending about $5k for disks in servers running out of the backup data center is worth the investment.

These drives would be used in a 6 drive RAID array on a Lenovo RD120. The RAID controller is an Adaptec 8k (rebranded Lenovo).

Just how dangerous of an approach is this and what can be done to mitigate these dangers?

Jeff Atwood
Zypher
  • What is the rationale for using SSD instead of spinners? The folk wisdom on SSD performance is "pay up or don't bother", but there certainly are other aspects that might be an advantage. – peterchen Feb 02 '11 at 09:57
  • I'm curious about the problem that you're trying to solve here. If it's simply one of costs why are SSDs being considered in place of conventional drives? – John Gardeniers Feb 02 '11 at 23:28
  • @peterchen, you can use either a couple of SSDs or fifty 15K spindles. – Mircea Chirea Feb 17 '11 at 04:33
  • @iconiK - do you mean "for a server, you need to spend a lot of money anyway"? If so - yes, that's why I was wondering, too. – peterchen Feb 17 '11 at 08:19

9 Answers

62

A few thoughts:

  • SSDs have 'overcommit' memory. This is the memory used in place of cells 'damaged' by writing. Low-end SSDs may only have 7% of overcommit space, mid-range around 28%, and enterprise disks as much as 400%. Consider this factor (see the sketch after this list).
  • How much will you be writing to them per day? Even middle-of-the-range SSDs such as those based on Sandforce's 1200 chips rarely tolerate more than around 35GB of writes per day before seriously cutting into the overcommit memory.
  • Usually, day 1 of a new SSD is full of writing, whether that's OS or data. If you have significantly more than 35GB of writes on day one, consider copying it across in batches to give the SSD some 'tidy up time' between batches.
  • Without TRIM support, random write performance can drop by up to 75% within weeks if there's a lot of writing during that period - if you can, use an OS that supports TRIM.
  • The internal garbage collection that modern SSDs perform is specifically done during quiet periods, and it stops on activity. This isn't a problem for a desktop PC where the disk could be quiet for 60% of its usual 8-hour duty cycle, but you run a 24hr service... when will this process get a chance to run?
  • It's usually buried deep in the specs, but like cheapo 'regular' disks, inexpensive SSDs are also only expected to have a duty cycle of around 30%. You'll be using them for almost 100% of the time - this will affect your MTBF.
  • While SSDs don't suffer the same mechanical problems regular disks do, they do have single and multiple-bit errors - so strongly consider RAIDing them even though the instinct is not to. Obviously it'll impact all that lovely random write speed you just bought, but consider it anyway.
  • It's still SATA, not SAS, so your queue management won't be as good in a server environment, but then again the extra performance boost will be quite dramatic.
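
To put rough numbers on the overcommit point above, here is a minimal Python sketch. The raw-NAND sizes are illustrative assumptions (not taken from any specific drive); the idea is just that the gap between raw flash (binary GiB) and advertised capacity (decimal GB), plus any deliberately reserved flash, becomes the spare area:

    # Rough spare-area ("overcommit") estimate. Raw NAND is sized in binary GiB,
    # the advertised capacity in decimal GB; whatever is left over is reserved
    # for wear levelling and bad-block replacement. Figures are assumptions --
    # check your drive's datasheet for the real raw-NAND size.
    def spare_area_pct(raw_nand_gib, advertised_gb):
        usable_gib = advertised_gb * 10**9 / 2**30  # decimal GB -> binary GiB
        return (raw_nand_gib - usable_gib) / usable_gib * 100

    print(f"Consumer 128 GB drive on 128 GiB of NAND:   ~{spare_area_pct(128, 128):.0f}% spare")
    print(f"Enterprise 100 GB drive on 128 GiB of NAND: ~{spare_area_pct(128, 100):.0f}% spare")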

Good luck - just don't 'fry' them with writes :)

TristanK
Chopper3
  • there is the SF-1500 though http://www.anandtech.com/show/3661 – Jeff Atwood Feb 01 '11 at 21:21
  • Do you mean 400% for the extra space, or 40%? I was going to edit your answer but couldn't find a citation, so I suppose it could be 400%. (It's a very good point, by the way) – ChrisInEdmonton Feb 01 '11 at 21:48
  • It's also not always clear if TRIM is supported on a RAID configuration. Remember, the SSDs are abstracted away from the OS with RAID. Be sure to check with the RAID vendor. – Matt Sherman Feb 01 '11 at 21:51
  • I meant 400 Chris, specifically the ones used in FC SANs, very spendy though, very. – Chopper3 Feb 01 '11 at 21:51
  • didn't know about overcommit...nice and dandy +! – iamgopal Feb 02 '11 at 05:51
  • One trick to get more reserve space out of a drive is to do the secure erase, then partition it with a large fraction unused. This free space will add to the SSD's performance and lifetime. – Zan Lynx Feb 02 '11 at 06:18
  • Just want to +1 with @ZanLynx .. I usually only partition about 80% of the drive when I'm using SSD + RAID. – Tracker1 Jan 04 '13 at 18:16
12

I did find this link, which has an interesting and thorough analysis of MLC vs SLC SSDs in servers:

In my view using an MLC flash SSD array for an enterprise application without at least using the (claimed) wear-out mitigating effects of a technology like Easyco's MFT is like jumping out of a plane without a parachute.

Note that some MLC SSD vendors claim that their drives are "enterprisey" enough to survive the writes:

SandForce aims to be the first company with a controller supporting multi-level cell flash chips for solid-state drives used in servers. By using MLC chips, the SF-1500 paves the way to lower cost and higher density drives servers makers want. To date flash drives for servers have used single-level cell flash chips. That's because the endurance and reliability for MLC chips have generally not been up to the requirements of servers.

There is further analysis of these claims at AnandTech.

Additionally, Intel has now gone on the record saying that SLC might be overkill in servers 90% of the time:

"We believed SLC [single-level cell] was required, but what we found through studies with Microsoft and even Seagate is these high-compute-intensive applications really don't write as much as they thought," Winslow said. "Ninety percent of data center applications can utilize this MLC [multilevel cell] drive."

.. over the past year or so, vendors have come to recognize that by using special software in the drive controllers, they're able to boost the reliability and resiliency of their consumer-class MLC SSDs to the point where enterprises have embraced them for high-performance data center servers and storage arrays. SSD vendors have begun using the term eMLC (enterprise MLC) NAND flash to describe those SSDs.

"From a volume perspective, we do see there are really high-write-intensive, high-performance computing environments that may still need SLC, but that's in the top 10% of even the enterprise data center requirements," Winslow said.

Intel is feeding that upper 10% of the enterprise data center market through its joint venture with Hitachi Global Storage Technologies. Hitachi is producing the SSD400S line of Serial Attached SCSI SSDs, which has 6Gbit/sec. throughput -- twice that of its MLC-based SATA SSDs.

Intel, even for their server-oriented SSDs, has migrated away from SLC to MLC with very high "overprovisioning" space in the new Intel SSD 710 series. These drives allocate up to 20% of overall storage for redundancy internally:

Performance is not top priority for the SSD 710. Instead, Intel is aiming to provide SLC-level endurance at a reasonable price by using cheaper eMLC HET NAND. The SSD 710 also supports user-configurable overprovisioning (20%), which increases drive endurance significantly. The SSD 710's warranty is 3 years or until a wear indicator reaches a certain level, whichever comes first. This is the first time we've seen SSD warranty limited in this manner.

Jeff Atwood
7

Always base these sorts of things on facts rather than supposition. In this case, collecting facts is easy: record longish-term read/write IOPS profiles of your production systems, and then figure out what you can live with in a disaster recovery scenario. You should use something like the 99th percentile as your measurement. Do not use averages when measuring IOPS capacity - the peaks are all that matter! Then buy the capacity and IOPS you need for your DR site. SSDs may be the best way to do that, or maybe not.

So, for example, if your production applications require 7500 IOPS at the 99th percentile, you might decide you can live with 5000 IOPS in a disaster. But that's at least 25 15K disks required right there at your DR site, so SSD might be a better choice if your capacity needs are small (sounds like they are). But if you only measure that you do 400 IOPS in production, just buy 6 SATA drives, save yourself some coin, and use the extra space for storing more backup snapshots at the DR site. You can also separate reads and writes in your data collection to figure out just how long non-enterprise SSDs will last for your workload based on their specifications.
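
If you want a starting point for the measurement itself, here is a minimal Python sketch that pulls the 99th percentile out of a log of IOPS samples. The CSV file name and column names are hypothetical - substitute whatever your collector (perfmon, sar, iostat, etc.) actually exports:

    import csv
    import math

    # Nearest-rank percentile over a list of samples
    def percentile(samples, pct):
        ordered = sorted(samples)
        rank = max(math.ceil(pct / 100 * len(ordered)) - 1, 0)
        return ordered[rank]

    # Hypothetical per-minute samples exported from your monitoring tool
    with open("prod_iops.csv") as f:
        rows = list(csv.DictReader(f))

    reads = [float(r["read_iops"]) for r in rows]
    writes = [float(r["write_iops"]) for r in rows]
    totals = [r + w for r, w in zip(reads, writes)]

    print("99th percentile total IOPS:", percentile(totals, 99))
    print("99th percentile write IOPS:", percentile(writes, 99))  # writes are what wear the SSDs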

Also remember that DR systems might have smaller memory than production, which means more IOPS are needed (more swapping and less filesystem cache).

rmalayter
6

Even if the MLC SSDs only last for one year, in a year's time the replacements will be a lot cheaper. So can you cope with having to replace the MLC SSDs when they wear out?

Ian Ringrose
5

As the original question is really interesting but all answers are quite old, I would like to give an updated answer.

As of 2020, current consumer SSDs (or at least the ones from top-tier brands) are very reliable. Controller failure is quite rare, and they correctly honor write barriers / syncs / flushes / FUAs, which means good things for data durability. Albeit using TLC flash, they sport quite good endurance ratings.

However, by using TLC chips, their flash page size and program time are much higher than on older SLC or MLC drives. This means that their private DRAM cache is critical to achieving good write performance. Disabling that cache will wreak havoc on the write IOPS of any TLC (or even MLC, albeit with lower impact) drive. Moreover, any write pattern which effectively bypasses the write-combining function of the DRAM cache (i.e. small synchronous writes done by fsync-rich workloads) is bound to see very low performance. At the same time, write amplification will skyrocket, wearing out the SSD much faster than expected.

A practical example: my laptop has the OEM variant of a Samsung 960 EVO - a fast M.2 SSD. When hammered with random writes it provides excellent IOPS - unless using fsync'ed writes: in that case it is only good for ~300 IOPS (measured with fio), which is a far cry from the 100K+ IOPS delivered without forcing syncs.
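
If you don't have fio handy, a crude Python probe along these lines shows the same effect (the scratch-file name is arbitrary; this is a single-threaded, sequential-offset stand-in for a proper fio run, so treat the result as a rough floor rather than an exact figure):

    import os
    import time

    PATH = "fsync_test.bin"   # scratch file on the SSD under test
    BLOCK = b"\0" * 4096      # 4 KiB per write
    SECONDS = 10

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    writes = 0
    start = time.monotonic()
    while time.monotonic() - start < SECONDS:
        os.write(fd, BLOCK)
        os.fsync(fd)          # force the write (and the drive cache) to be made durable
        writes += 1
    os.close(fd)
    os.remove(PATH)

    print(f"~{writes / SECONDS:.0f} fsync'd 4 KiB write IOPS")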

The point is that many enterprise workloads (i.e. databases, virtual machines, etc.) are fsync-heavy, which is unfavorable for consumer SSDs. Of course, if your workload is read-centric this does not apply; however, if running something like PostgreSQL on consumer SSDs, you may be disappointed by the results.

Another thing to consider is the possible use of a RAID controller with a BBU (or power-loss-protected) write-back cache. Most such controllers disable the SSD's private DRAM cache, leading to much lower performance than expected. Some controllers support re-enabling it, but not all of them pass down the required syncs/barriers/FUAs to get reliable data storage on consumer SSDs.

For example, older PERC controllers (e.g. the 6/i) announced themselves as write-through devices, effectively telling the OS not to issue cache flushes at all. A consumer SSD connected to such a controller can be unreliable unless its cache is disabled (or the controller takes extra, undocumented care), which means low performance.

Not all controllers behave in this manner - for example, newer PERC H710+ controllers announce themselves as write-back devices, enabling the OS to issue cache flushes as required. The controller can ignore these flushes unless the attached disks have their cache enabled: in that case, it should pass down the required syncs/flushes.

However, this is all controller (and firmware) dependent; since HW RAID controllers are black boxes, one cannot be sure about their specific behavior and can only hope for the best. It is worth noting that open-source RAID implementations (i.e. Linux MD RAID and ZFS mirroring/RAIDZ) are much more controllable beasts, and generally much better at extracting performance from consumer SSDs. For this reason I use open-source software RAID whenever possible, especially when using consumer SSDs.

Enterprise-grade SSDs with a power-loss-protected write-back cache are immune to all these problems: having an effectively non-volatile cache, they can safely ignore sync/flush requests, providing very high performance and low write amplification irrespective of HW RAID controllers. Considering how low the prices for enterprise-grade SATA SSDs are nowadays, I often see no value in using consumer SSDs in busy servers (unless the intended workload is read-centric or otherwise fsync-poor).

shodanshok
4

A whitepaper on the differences between SLC and MLC from SuperTalent puts the endurance of MLC at a tenth of that of an SLC SSD, but the chances are the MLC SSDs will outlive the hardware you are putting them into anyway. I'm not sure how reliable those statistics/facts are from SuperTalent though.

Assuming you get a similar level of support from the supplier of the MLC SSDs, the lower price point makes it worth a shot.

Jeff Atwood
chunkyb2002
  • 5 year lifetimes for typical desktop use have been mentioned. If that is an accurate estimate then they are not going to outlive the server in a datacenter environment! – JamesRyan Feb 01 '11 at 21:24
  • @JamesRyan: Although not shown in most calculations, the lifetime is very dependent on the fraction of free space. – Ben Voigt Feb 02 '11 at 00:00
  • In the organisations I've worked for we've always put server hardware refresh at 3 years. I was under the impression that was generally accepted best practice but do correct me if I'm wrong. – chunkyb2002 Feb 02 '11 at 21:10
3

If we set the write-quantity problem aside (or prove that consumer-level SSDs can handle it), I think SSDs are a good thing to add to enterprise-level environments. You will probably be using the SSDs in a RAID array, RAID 5 or RAID 6, and the problem with these is that after a single drive failure the array becomes increasingly vulnerable to a second failure, and the time to rebuild it depends heavily on the size of the array. A several-TB array can take days to rebuild while being constantly accessed. With SSDs, the arrays will a) inevitably be smaller and b) rebuild drastically faster (see the rough comparison below).
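
As a rough illustration of the rebuild-time difference, here is a sketch with assumed sustained rebuild rates - real controllers throttle rebuilds while the array is being accessed, so actual times are usually worse:

    # Very rough rebuild-time comparison: capacity divided by an assumed
    # sustained rebuild rate.
    def rebuild_hours(capacity_gb, rebuild_mb_per_s):
        return capacity_gb * 1024 / rebuild_mb_per_s / 3600

    print(f"2 TB 7.2K SATA @ 60 MB/s sustained:  {rebuild_hours(2000, 60):.1f} h")
    print(f"256 GB SSD @ 250 MB/s sustained:     {rebuild_hours(256, 250):.1f} h")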

Vlad
3

You should just calculate the amount of daily writes you have with your current set-up and compare that with what the manufacturer guarantees their SSD drives can sustain. Intel seems to be the most up-front about this - for example, take a look at their mainstream SSD drive datasheets: http://www.intel.com/design/flash/nand/mainstream/technicaldocuments.htm

Section 3.5 (3.5.4, specifically) of the specs document says that you're guaranteed to have your drive last at least 5 years with 20GB of writes per day. I assume that's being calculated when using the entire drive capacity and not provisioning any free space for writes yourself.
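
A back-of-envelope way to turn that rating into an expected lifetime for your own workload (all numbers below are illustrative assumptions - substitute your measured daily writes and your drive's datasheet figures, and note the comment below about write amplification):

    # Lifetime estimate from a "GB of writes per day" endurance rating
    rated_gb_per_day = 20     # the "20GB/day for 5 years" guarantee mentioned above
    rated_years = 5
    measured_gb_per_day = 35  # host writes you actually observe in production
    pessimism_factor = 2.0    # assumed margin for write amplification beyond whatever
                              # usage pattern the vendor's rating already assumes

    rated_total_gb = rated_gb_per_day * 365 * rated_years
    effective_gb_per_day = measured_gb_per_day * pessimism_factor

    print(f"Rated endurance:   ~{rated_total_gb / 1000:.1f} TB of host writes")
    print(f"Expected lifetime: ~{rated_total_gb / (effective_gb_per_day * 365):.1f} years")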

Also interesting is the datasheet regarding using mainstream SSDs in an enterprise environment.

cearny
  • Unfortunately it is not at all that simple because wear leveling amplifies writes (remember it is designed to spread writes, not reduce them) in a manner that is proprietary and can vary hugely in its effectiveness based on the usage pattern. – JamesRyan Feb 02 '11 at 17:41
  • Hm, very good point. Also, losing the TRIM command when using the drives in a RAID setup should increase the write amplification further. I guess it all comes down to each manufacturer's idea of the typical usage pattern. – cearny Feb 02 '11 at 20:56
2

I deployed a couple of 32GB SLC drives a couple of years ago as a buffer for some hideously poorly designed app we were using.

The application was 90% small writes (< 4k) and was running consistently (24/7) at 14k w/s once moved to the SSD drives. They were configured as RAID 1; everything was rosy, latency was low!

However, roughly one month in, the first drive packed up; literally within 3 hours, the second drive had died as well. RAID 1 was not such a good plan after all :)

I would agree with the other posters on some sort of RAID 6; if nothing else, it spreads those writes out across more drives.

Now bear in mind this was a couple of years ago, these things are much more reliable now, and you may not have a similar I/O profile.

The app has since been re-engineered. However, as a stop-gap which may or may not help you, we created a large RAM disk, wrote some scripts to rebuild/back up the RAM disk, and took the hit of an hour or so of data loss/recovery time.

Again, the life cycle of your data may be different.

sysboy