I see no reason not to use SAS SSDs over SAS HDDs. However, if presented with the choice between a SAS HDD and a SATA SSD, my enterprise choice might well be the SAS drive.
Reason: SAS has better error recovery. A SATA HDD that is not a RAID edition (i.e. without time-limited error recovery) might hang the whole bus when it dies, possibly denying use of the whole server. A SAS-based system would just lose the one disk. If that disk is part of a RAID array, then there is nothing stopping the server from being used until end of business, followed by a drive replacement.
Note that this point is moot if you use SAS SSDs.
[Edit] I tried to put this in a comment, but I have no markup there.
I never said that the SAS controller will connect to another drive. But it will handle failure more gracefully and the other drives on the same backplane will remain reachable.
Example with SAS:
SAS HBA ----- [Backplane]
               |  |  |  |
              D1 D2 D3 D4
If one drive fails, it will get dropped by the HBA or the RAID card.
The other 3 drives are fine.
Assuming the drives are in a RAID array, the data will still be there and will remain accessible.
Now with SATA:
SATA port ----- [port multiplier]
                 |  |  |  |
                D1 D2 D3 D4
One drive fails.
The communication between the SATA port on the motherboard and the other three drives will likely lock up. This can happen because either the SATA controller hangs or the port multiplier has no way to recover.
Although we still have 3 working drives, we have no communication with them.
No communication means no access to the data.
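The difference between the two examples above can be sketched as a toy model. This is illustrative only: it assumes the worst case described above, where a hung drive behind a SATA port multiplier blocks the single shared host link, while a SAS backplane isolates the failure to one drive. The topology names and drive labels are made up for the example.

```python
def reachable_drives(topology, failed):
    """Return the set of drives still reachable after `failed` dies.

    Toy model of the failure modes described above (not real hardware
    behavior detection): SAS gives each drive an effectively independent
    link through the backplane; a SATA port multiplier shares one host
    link among all drives.
    """
    drives = {"D1", "D2", "D3", "D4"}
    healthy = drives - {failed}
    if topology == "sas":
        # Failure is isolated: only the dead drive drops off.
        return healthy
    if topology == "sata-pmp":
        # Worst case: the hung drive locks up the shared link, so the
        # healthy drives go offline with it.
        return set()
    raise ValueError(f"unknown topology: {topology}")

print(sorted(reachable_drives("sas", "D2")))       # ['D1', 'D3', 'D4']
print(sorted(reachable_drives("sata-pmp", "D2")))  # []
```

With a RAID array on top, the SAS case still has three reachable members and the data stays online; the SATA case has three healthy drives that nothing can talk to.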
Powering down and pulling a broken drive is not hard, but I prefer to do that outside business hours. SAS makes it more likely that I can do that.
4
Even cheapo desktop motherboards support multi-tier storage, using an SSD to cache one or more spinning disks. Random reads should be better on a 10k HDD than on an SSD-cached 7.2k HDD, since random reads will generally miss the cache a lot. Besides that, I can't think of any other reasons. – Mark K Cowan – 2014-11-02T18:47:28.657
8
Not all workloads are random. Think about a CCTV setup where the 20 streams are written so that C1 is on B1, B21, B41, etc., hence no random access in normal usage. – Ian Ringrose – 2014-11-03T19:47:11.213
2
@IanRingrose has a point. You can build a very large RAID array (a ton of up-to-6TB 3.5" drives) with lots of streaming I/O capacity out of HDDs, like AWS's HS1 instance type (http://aws.amazon.com/ec2/instance-types/#HS1) -- some applications like analytics databases (think Amazon Redshift) or genomic sequencing do a ton of I/O and need a ton of space, but it's all streaming, and a big spinning-disk array is perfect. (With enough drives, 10K is still unnecessary, though: 100MB/s per "regular" drive * lots of drives will still max out the I/O interface, or you'll hit other bottlenecks.)
– twotwotwo – 2014-11-05T05:28:24.537
2
Another way of spinning (ha) this: for your desktop, the price of a 256GB SSD is a fraction of the whole system's cost and the performance difference is huge; for a 48TB RAID array for an analytics database, the cost difference is bigger and there's less performance difference because it's mostly sequential access. Again, though, I'm really talking about whether regular HDDs (7.2K RPM) still have a niche in high-performance applications at all, not whether 10K RPM VelociRaptors are a good deal. For your desktop, I'd say definitely not. – twotwotwo – 2014-11-05T17:16:23.663
1
Can't add this as as answer, so would just say that there's an article on The Register - "Why solid-state disks are winning the argument" (http://www.theregister.co.uk/2014/11/07/storage_ssds/) that covers the issues and (ignoring costs) finishes by saying "so long as you follow the instructions on the tin when selecting the right SSD for the job, there is absolutely no reason not to buy one." Of course, there's quite a discussion in the comments about some of the issues that may not have been addressed, but I felt it worth mentioning here.
– Gwyn Evans – 2014-11-08T22:43:44.913
@DragonLord: What about a 30K RPM drive (yes, they do exist)??? – user2284570 – 2015-02-12T12:41:39.007