
We recently purchased a few HP 380p G8 servers to add some VM capacity, and decided to add a pool of SSDs to our standard build to create a "fast" RAID 1+0 array for some of our VMs that have higher performance requirements (e.g., log servers and databases). Since the HP drives are super-duper expensive, we went with Plextor PX-512M5Pro SATA SSDs; we'd had good luck with Plextor SSDs in our previous G7 servers.

However, in 3 out of 3 servers, 3 of the 4 drives have entered the failed state shortly after being configured, before we even attempted to put them to use. The consistency of the failures leads me to believe it's an incompatibility between the RAID controller and the drives, and when the RMA replacements arrive, I'm assuming they'll fail too. Any tips or tricks that might help with this issue, besides just buying the official HP drives?

righdforsa
    I suggest the official HP drives. There are deals to be had, and it's possible to buy the right equipment at favorable prices... [Or go with PCIe-based SSDs](http://serverfault.com/questions/556265/force-renegotiation-of-pci-express-link-speed-x2-card-reverts-to-x1-width). – ewwhite Dec 03 '14 at 03:55

2 Answers


You can't use non-HP SSDs in HP ProLiant servers like this. Just because this worked on your G7 server doesn't mean it is okay for your Gen8 ProLiant servers.

(basically, why buy enterprise gear, then cripple it with incompatible components?)

Please see:

  • 3rd party SSD drives in HP Proliant server - monitoring drive health
  • Third-party SSD solutions in ProLiant Gen8 servers
  • Third-party SSD in Proliant g8?
  • HP DL380p Gen8 and PCIe SSD?

ewwhite

I suspect this will also solve the issue you are having:

https://serverfault.com/a/1048582/590139

I'm running an ML310e Gen8 v2 with SATA SSDs on the built-in B120i ports 5/6 (no arrays/JBOD; I'm just using the controller to make port 6 bootable) and have found a solution to stop the SSD drives showing as failed on reboot! This is a homelab running Linux/ZFS, but I would certainly use it in production.

  1. Open Smart Storage Administrator (SSA). I did this during POST using F5 once the controller was detected; you can also get there through Intelligent Provisioning (F10).

  2. Go to Modify Controller Settings and disable "Surface Scan Analysis Priority". This stops the controller from trying to surface scan your SSDs, which in turn stops them from being marked as failed.

  3. Enjoy 3rd-party SATA SSDs without them being disabled as failed on boot.

The HP Smart Storage Administrator User Guide hinted at the solution: it appears the surfacescanmode setting is what causes SSDs to be marked as failed. That manual shows other ways of accessing SSA, and I believe surfacescanmode can also be turned off per slot using the ssacli tool (perhaps via serial/iLO as well) if you want to keep surface scanning for your other drives. (I didn't try this, since I don't use any of the array controller functionality; I use SMART monitoring instead.)
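For reference, the same change can be sketched from the command line with ssacli. This is a hedged sketch only (I haven't verified it on this exact controller, and the `slot=0` value is an assumption — check your own controller listing first):

```shell
# List controllers and their slot numbers before changing anything.
ssacli ctrl all show config

# Disable surface scan analysis on the controller in slot 0.
# "slot=0" is an assumption -- substitute the slot shown above.
ssacli ctrl slot=0 modify surfacescanmode=disable
```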

Per the SSA user guide, this setting should also work on Gen9, and on Gen6/Gen7 (access SSA through the downloadable image). I suspect Gen5 would work too, since it supports ssacli.

deeess