7

I've got an HP D2700 enclosure that I'm looking to shove some 2.5" SSD drives in. Looking at the prices of HP's SSD drives vs something like an Intel 710 and even something less 'enterprisey', there's quite a difference in price.

I know the HP SSDs will obviously work, but I've heard rumours that buying an Intel/Crucial/whatever SATA SSD, bunging it in an HP 2.5" caddy and putting it in a D2700 won't work.

Is there an enclosure / disk compatibility issue I should watch out for here?

On the one hand, they're all just SATA devices, so the enclosure should treat them all the same. On the other, I'm not particularly well-versed in the various different SSD flavours to know whether there's a good technical reason why one type of drive would work, yet another one wouldn't. I can also imagine that HP are annoying enough to do firmware checks on any disks and have the controller reject those it doesn't like.

For background, the D2700 already has 12x 300GB 10k SAS drives in it, and I was planning on getting 8x 500GB (or thereabouts) SSDs to create another zpool. Whole thing is connected to an HP X1600 running Solaris 11.
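
To make the plan concrete, the new pool would be something along these lines; the vdev layout isn't decided yet, and the device names below are made up purely for illustration:

    # eight ~500GB SSDs as four mirrored pairs (could equally be raidz2)
    zpool create ssdpool \
      mirror c4t10d0 c4t11d0 \
      mirror c4t12d0 c4t13d0 \
      mirror c4t14d0 c4t15d0 \
      mirror c4t16d0 c4t17d0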

ewwhite
growse

4 Answers

10

Well, I use a D2700 for ZFS storage and put some work into getting the LED and sesctl features working on it. I also have SAS MPxIO multipathing running well.

I've done quite a bit of SSD testing on ZFS and with this enclosure.

Here's the lowdown.

  • The D2700 is a perfectly-fine JBOD for ZFS.
  • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.
  • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.
  • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.
  • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing (a rough sketch of enabling MPxIO follows this list). If not, you're possibly taking a bandwidth hit due to the oversubscription of the single SAS link to the host.
  • I tend to leave the SSDs meant for ZIL and L2ARC inside of the storage head. Coupled with an LSI 9211-8i, it seems safer.
  • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.
  • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.
  • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.
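
If you do decide to use multipathing, here's a minimal sketch of what enabling MPxIO looks like on Solaris 11. The mpt_sas driver name matches the LSI HBAs; the disk path in the last command is invented purely for illustration:

    # Enable MPxIO for devices attached through the LSI SAS HBAs (mpt_sas driver);
    # stmsboot rewrites the device paths and prompts for a reboot.
    stmsboot -D mpt_sas -e

    # After the reboot, each dual-ported D2700 disk should report two operational paths.
    mpathadm list lu
    mpathadm show lu /dev/rdsk/c0t5000C500335F95E3d0s2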

Which controllers are you using? I probably have detailed data for the combination you have.

ewwhite
  • Controller I believe is a SmartArray P212 (will double-check), which is also potentially on the cards for an upgrade. I'm not using multipathing (at the moment), and I'm conscious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example)? I appreciate there's a redundancy argument here as well, but leave that aside for a moment.... – growse Apr 17 '12 at 18:43
  • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better off with an LSI SAS HBA for compatibility and performance reasons. You don't *need* multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For [ZFS, basic SAS controllers](http://serverfault.com/questions/84043/zfs-sas-sata-controller-recommendations) are preferred. You *will* have problems with low-end SSDs and the HP controllers. – ewwhite Apr 17 '12 at 19:28
  • Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris. – growse Apr 17 '12 at 19:34
  • Yes, lots of suggestions. They may be better suited to Server Fault chat, though. – ewwhite Apr 17 '12 at 20:17
5

Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?

If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.

Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.

Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.

Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.
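
As a rough illustration of the cache-device approach (pool and device names here are placeholders, not taken from the asker's system), attaching log and cache devices to an existing pool looks like this:

    # SLC log (ZIL) device - ideally mirrored - plus an MLC cache (L2ARC) device
    zpool add tank log mirror c2t0d0 c2t1d0
    zpool add tank cache c2t2d0

    # Confirm placement and watch the cache warm up
    zpool status tank
    zpool iostat -v tank 5

If smartmontools happens to be available on the host, `smartctl -A` against the MLC cache device will show Intel's media wear-out indicator (SMART attribute 233), which is what lets you track usage against the design write endurance.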

Skyhawk
  • How is the I/O latency for synchronous writes to an SSD (as ZIL) in comparison with the RAM of a BBU hardware controller? – 3molo Apr 17 '12 at 18:16
  • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first. – growse Apr 17 '12 at 18:27
  • Of course the SATA zpool will benefit more from caching, but you also have the option of [dividing your L2ARC and ZIL devices between the two arrays](http://mail.opensolaris.org/pipermail/zfs-discuss/2011-January/046975.html) (rough sketch after these comments). If you buy a 20GB SLC SSD for your ZIL, you can format it into two slices and assign them as 10GB ZIL devices, one for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2. – Skyhawk Apr 17 '12 at 19:18
  • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for *sequential* writes to the ZIL device itself. – Skyhawk Apr 17 '12 at 19:49
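
A sketch of the split-ZIL idea from the comment above, assuming the 20GB SLC SSD shows up as c3t5d0 and has already been carved into two ~10GB slices with format(1M); all names are illustrative:

    # one slice as the log (ZIL) device for each pool
    zpool add saspool log c3t5d0s0
    zpool add satapool log c3t5d0s1
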
3

First, the enclosure firmware may (and almost certainly will) notice non-HP-branded disks, but in practice it won't impact you too much. I doubt the HP hardware will reject your drives (I've never seen that on HP before), so I'd give it a try.

But when it comes to updates (mainly new enclosure firmware), HP will fix issues with their own branded hardware, not with third-party drives.

Despite the price, HP-labelled hardware is much more robust (I have seen several non-enterprise SSDs die after being put under enterprise load - decide whether you want to accept that extra risk, or at least ALWAYS keep backups), so it may be worth over-paying.

You may also want to consider FusionIO cards, as SATA bandwidth (not only the disk-to-controller path, but also the controller-to-bus-to-CPU path) may limit you, while PCIe cards can be faster.

Alexander
  • I'll take a look at FusionIO, thanks. My original idea was to use SSDs as a not-much-more-expensive-but-faster version of 10k 2.5" SAS drives. With HP pricing, I think that spindles come in at a much better price/performance point for my needs. – growse Apr 17 '12 at 15:12
  • I've seen a company that lost all of the previous week's new files due to infrequent backups and cheap SSDs. You won't go their way, I believe :) – Alexander Apr 17 '12 at 15:19
  • By the way, you won't need a separate zpool for performance – Alexander Apr 17 '12 at 15:21
  • You can simply add an inexpensive SSD to your ZFS pool as cache - you'll see a nice performance boost without risking your data. – Alexander Apr 17 '12 at 15:38
  • I'm going to get some spindles and one of the cheap SSDs and see if they (a) work and (b) are viable as ZFS cache devices. – growse Apr 17 '12 at 15:43
  • One SSD is not enough. You need separate ZIL (SLC, small, 10+ GB) and L2ARC (MLC, large, 100+ GB) devices. I would suggest Intel if you want to go with "cheap" SSDs because Intel offers small SLC drives that are meant for use as cache as well as MLC drives that allow you to track usage against design wear limits. – Skyhawk Apr 17 '12 at 17:18
  • Of course, I'm not suggesting just one SSD going forward, I meant just one to test compatibility. If it works and gives decent performance in testing, I'll up that number. For L2ARC, would I be better off with SLC? – growse Apr 17 '12 at 18:18
  • I use MLC for L2ARC. But at this point, I'll only use SAS SSDs. Maybe SATA SSDs for pure SSD zpool scenarios, but it's worth trying to use enterprise disks where you can. – ewwhite Apr 17 '12 at 19:08
  • By the way, it's unclear to me how you'll add SSDs to your enclosure without the right brackets. It may be better to install the SSDs in the server itself; that way you don't need to care about the enclosure controller and/or ports (if I recall correctly, you'll find some free SATA ports there). – Alexander Apr 18 '12 at 06:08
3

If it's not on the list of supported drives (configuration information, step 4), don't install it. It may or may not work, but it would be a fairly expensive experiment if it failed in a way that broke something.

They have five SSDs listed for this box, two SLC and three MLC. SLC lasts longer, but tends to be more expensive.

Basil
  • I take your point, but I'd have a hard time believing that I can break a SATA/SAS host using a regular off-the-shelf SATA disk. That would indicate a broken host to me :( – growse Apr 17 '12 at 15:11
  • I think @Basil means to say that, if you buy thousands of dollars in SSDs and they subsequently turn out to be unreliable or they don't play well with the RAID controller, you're back to square one with a hit to your reputation and no way to un-spend the money. It is critically important to involve business decision makers in choices that involve saving money at the possible expense of operational reliability. If your boss is a cheapskate and he tells you not to buy what you need to make a system reliable, that's one thing. If you voluntarily design around cheap stuff that fails, you're fired. – Skyhawk Apr 17 '12 at 18:13
  • Agreed. It's about managing the risk/performance/budget triumvirate. I came into this question thinking that the cost/performance for SSDs was a lot better than it actually appears to be (cheap SSDs are worse than I thought, good SSDs are more expensive than I thought). Management wouldn't agree that the performance benefit of using lots of expensive SSDs as a zpool is worth the cost. However, adding caching is an easier sell. – growse Apr 17 '12 at 18:21
  • And that's why we test. There are certain solutions that work well. Others that simply don't. A pool of cheap SSDs is okay. Cheap SSDs in L2ARC or ZIL are bad. I tend to use PCIe ZIL and MLC SAS SSD for L2ARC. This is after breaking lots of lower-cost SATA units... – ewwhite Apr 17 '12 at 22:28
  • If your box is under support (which you paid for), then there are no situations where it's worth installing anything that's not supported. – Basil Apr 18 '12 at 13:59