The original question is really interesting, but all the existing answers are quite old, so I would like to give an updated one.
As of 2020, current consumer SSDs (or at least the ones from top-tier brands) are very reliable. Controller failures are quite rare and they correctly honor write barriers / syncs / flushes / FUAs, which is good news for data durability. Despite using TLC flash, they also sport quite good endurance ratings.
However, by using TLC chips, their flash page size and program time are much higher than on older SLC or MLC drives. This means that their private DRAM cache is critical to achieve good write performance: disabling that cache will wreak havoc on the write IOPS of any TLC drive (and even MLC ones, albeit with lower impact). Moreover, any write pattern which effectively bypasses the write-combining function of the DRAM cache (i.e., the small synchronous writes issued by `fsync`-rich workloads) is bound to see very low performance. At the same time, write amplification will skyrocket, wearing out the SSD much faster than expected.
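If you want to see the effect yourself on a directly attached SATA drive, the state of its volatile write cache can be queried and toggled with `hdparm`; a minimal sketch follows (the device name is just a placeholder, and toggling should only be done for testing):

```
# Show the current state of the drive's volatile write cache
hdparm -W /dev/sdX

# Disable it (expect TLC write IOPS to drop sharply), then re-enable it
hdparm -W0 /dev/sdX
hdparm -W1 /dev/sdX
```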
A practical example: my laptop has the OEM variant of a Samsung 960 EVO, a fast M.2 SSD. When hammered with random writes it provides excellent IOPS, unless `fsync` is issued after each write: in that case it is only good for ~300 IOPS (measured with `fio`), a far cry from the 100K+ IOPS delivered without forcing syncs.
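For the curious, this kind of comparison can be reproduced with `fio` runs along these lines (the target file path is just a placeholder; adjust size and runtime to taste):

```
# Random 4K writes with an fsync after every write (sync-heavy pattern)
fio --name=syncwrite --filename=/mnt/test/fio.dat --size=1G \
    --rw=randwrite --bs=4k --ioengine=psync --fsync=1 \
    --runtime=60 --time_based

# Same pattern without forced syncs, letting the drive's DRAM cache combine writes
fio --name=plainwrite --filename=/mnt/test/fio.dat --size=1G \
    --rw=randwrite --bs=4k --ioengine=psync \
    --runtime=60 --time_based
```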
The point is that many enterprise workloads (i.e., databases, virtual machines, etc.) are `fsync`-heavy, which is unfavorable to consumer SSDs. Of course, if your workload is read-centric this does not apply; however, if you run something like PostgreSQL on a consumer SSD, you may well be disappointed by the results.
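Incidentally, PostgreSQL ships a small utility, `pg_test_fsync`, that gives a quick idea of how a given disk behaves under its sync-heavy commit pattern; a minimal invocation (the output path is just a placeholder) looks like this:

```
# Run PostgreSQL's own fsync micro-benchmark against a file on the SSD under test
pg_test_fsync -f /mnt/test/pg_test_fsync.out
```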
Another thing to consider is the possible use of a RAID controller with a BBU-backed (or powerloss-protected) writeback cache. Most such controllers disable the SSD's private DRAM cache, leading to much lower performance than expected. Some controllers support re-enabling it, but not all of them pass down the required syncs/barriers/FUAs to get reliable data storage on consumer SSDs.
For example, older PERC controllers (e.g., the 6/i) announced themselves as write-through devices, effectively telling the OS not to issue cache flushes at all. A consumer SSD connected to such a controller can be unreliable unless its cache is disabled (or the controller takes extra, undocumented care), which in turn means low performance.
Not all controllers behave in this manner; for example, the newer PERC H710+ controllers announce themselves as write-back devices, enabling the OS to issue cache flushes as required. The controller can ignore these flushes unless the attached disks have their private cache enabled: in that last case, it should pass down the required syncs/flushes.
However, this is all controller (and firmware) dependent; since HW RAID controllers are black boxes, one cannot be sure about their specific behavior and can only hope for the best. It is worth noting that open-source RAID implementations (i.e., Linux MDRAID and ZFS mirroring/RAIDZ) are much more controllable beasts, and generally much better at extracting performance from consumer SSDs. For this reason I use open-source software RAID whenever possible, especially when using consumer SSDs.
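As a minimal sketch (device and pool names are placeholders), setting up such a software mirror is a one-liner in both cases:

```
# Linux MDRAID mirror of two SSDs
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Equivalent ZFS mirrored pool
zpool create tank mirror /dev/sda /dev/sdb
```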
Enterprise-grade SSDs with a powerloss-protected writeback cache are immune to all these problems: having an effectively non-volatile cache, they can safely ignore sync/flush requests, providing very high performance and low write amplification irrespective of the HW RAID controller. Considering how low the prices of enterprise-grade SATA SSDs are nowadays, I often see no value in using consumer SSDs in busy servers (unless the intended workload is read-centric or otherwise `fsync`-poor).