4

I'm currently speccing a Hyper-V host for my company and would like to use solid-state drives for local storage. The problem is that most OEM drives carry a hefty premium compared to retail server-grade drives. I'm currently leaning towards 4x Samsung 845DC EVOs in RAID 10 with one hot spare.

Are there any downsides aside from the drives not being included in the server's warranty?

Edit: The server is a Dell T320 and will host a few Linux and Windows VMs. The most performance-intensive tasks are all disk-related: WSUS, redirected folders, and a file share with large SolidWorks assemblies, among other things.

IsAGuest
  • You mean other than that they might not work at all or not very well, or that you might lose your warranty? But at the very least you have to tell us what kind of server you are considering, because this depends on the manufacturer. – Sven Dec 05 '14 at 14:57
  • 3
    What is the cost to the business if the Hyper-V host was down due to a problem with non-OEM or non-supported drives? – joeqwerty Dec 05 '14 at 15:06
  • Side note: the Dell H3x0 RAID controllers have poor performance. You almost certainly want H7x0. – Dan Pritts Dec 05 '14 at 20:49

6 Answers

7

In addition to the other valid remarks:

That particular drive, the Samsung 845DC EVO, is in the words of the manufacturer "designed for read intensive, <10% write content" and has a rated write lifetime of 600 TB which, depending on the IO profile of your VMs, may result in an early death that is not covered by the 5-year warranty.

Server SSDs are typically specified for a particular IO workload because of the finite number of write cycles NAND cells can support. A common metric is the total write capacity, usually in TB.
To allow a more convenient comparison between different makes and differently sized drives, that write capacity is often converted into a daily write capacity expressed as a fraction of the disk capacity (drive writes per day).

Assuming that a drive is rated to live as long as it's under warranty, a 100 GB SSD with a 3-year warranty and a write capacity of 50 TB works out to:

        50 TB
---------------------  ≈ 0.46 drive writes per day
3 * 365 days * 100 GB

The higher that number, the better suited the disk is for write-intensive IO. At the moment, value server-line SSDs sit at roughly 0.3-0.8 drive writes per day, mid-range drives run from about 1-5, and high-end models sky-rocket to write-endurance levels of up to 25 times the drive capacity per day for 3-5 years.

Those Samsung drives come in at a daily write capacity of 600 / (5 * 365 * 0.960) ≈ 0.34 drive writes per day.
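
If you want to repeat that arithmetic for other candidate drives, here is a minimal sketch of the same calculation in Python (mine, not part of the original answer); the endurance, warranty and capacity figures are simply the ones quoted above, not values I have verified against a datasheet.

    def drive_writes_per_day(endurance_tb, warranty_years, capacity_gb):
        """Rated endurance spread over the warranty period, as full-drive writes per day (DWPD)."""
        endurance_gb = endurance_tb * 1000      # decimal TB -> GB, as drive vendors count it
        days = warranty_years * 365
        return endurance_gb / (days * capacity_gb)

    # Generic example from above: 100 GB drive, 3-year warranty, 50 TB write capacity
    print(round(drive_writes_per_day(50, 3, 100), 2))    # ~0.46

    # Samsung 845DC EVO 960 GB as quoted above: 600 TB over 5 years
    print(round(drive_writes_per_day(600, 5, 960), 2))   # ~0.34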

HBruijn
  • I'd add though that practical real-world lifespans *may* be higher - someone put an 840 Pro (the consumer version of this drive) through *2 PB* of writes. Quality modern SSDs (and Samsung is amongst the best here) do actually often outlast their rated life cycles. – Journeyman Geek Dec 06 '14 at 01:07
6

This all depends.

If HP or IBM, I'd say use their respective drives. (just because)

If Dell, probably use their drives... If you can't afford the Dell-spec'd disks, look harder. Buy refurbished Dell disks if you have to in order to save money and retain support.

But also know that Dell PERC RAID controllers are manufactured by LSI, and LSI controllers have a very wide compatibility list. Given that, it's acceptable to use whatever disks you want (within reason) on LSI-based RAID controllers. Just know the drawbacks of self-supporting your system. It's a cost-benefit analysis: less expensive disks, but you need to keep a spare or two, versus more expensive disks with 4-hour or NBD support...

ewwhite
4

The manufacturers have spent time validating OEM drives, and possibly creating custom firmware, to deal with compatibility/optimisation issues specific to their RAID controllers. There is some value in that, but it is very intangible.

Some products simply won't accept non-proprietary drives.

Also, until very recently server-grade SSD products simply were not available directly to the consumer, or at least not at a lower price. Notably, consumer-grade SSDs don't have capacitors to allow all the buffered data to be written out in case of power loss.

Also, you will have to source drive caddies; they are not officially available separately.

If you are prepared for the possibility of issues and for having to fall back to proprietary drives, then there are certainly savings to be had by trying this kind of drive. If you need to go by the book and have someone else to blame if there is an issue, then you may be more comfortable taking the traditional route and sticking to officially supported drives.

JamesRyan
  • I've always found the "validation/custom firmware" argument hard to swallow, for low-end commodity servers like the poster has. The corollary to that argument is that, for instance, LSI doesn't test its controllers with Seagate OEM enterprise hard drives. – Dan Pritts Dec 05 '14 at 20:33
2

I'll play devil's advocate here.

If you are considering name-brand ENTERPRISE GRADE drives, with appropriate specifications, then in general they will be fine in commodity x64 servers.

If you are considering CONSUMER GRADE drives, you are taking your chances.

Other posters and commenters have explained the difference in quality and performance between enterprise-grade and consumer-grade SSDs.

For magnetic disks, there's one big thing to know about. Consumer drives might "stall" occasionally when trying really hard to read an iffy sector. This is the right behavior for a standalone consumer drive. It's not the right behavior when using a RAID controller - it will cause the controller to fail the drive. This is the purpose of "RAID edition" drives. In general, Enterprise-branded drives will work well.
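
The timeout behaviour described above is what SCT Error Recovery Control (ERC, the mechanism behind TLER and "RAID edition" firmware) is meant to bound. As a rough sketch only, assuming smartmontools is installed, the drive is directly visible to the OS and actually supports SCT ERC, and /dev/sda is just a placeholder, you could inspect or cap the timeout like this:

    import subprocess

    def show_erc(device):
        """Print the drive's current SCT ERC read/write timeouts (if supported)."""
        subprocess.run(["smartctl", "-l", "scterc", device], check=True)

    def cap_erc(device, deciseconds=70):
        """Cap error recovery to e.g. 7.0 s (70 x 100 ms) so a RAID controller
        is less likely to drop the drive while it retries a marginal sector."""
        subprocess.run(["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
                       check=True)

    show_erc("/dev/sda")    # many consumer drives report ERC as disabled
    # cap_erc("/dev/sda")   # needs root; on most drives the setting does not survive a power cycle

Drives sitting behind a hardware RAID controller like a PERC may need smartctl's pass-through options instead, and many consumer drives simply don't support ERC at all, which is part of the point here.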

You also have to consider the consequences of drive failure. Are there life/safety implications? Huge financial penalties? In those cases playing it safe with the Dell drive may be the right call. I'm guessing in your case there aren't, but think about the big picture.

Dan Pritts
0

OEMs test some drives with their servers (or change the firmware slightly and get them rebranded as theirs), and that can give you peace of mind. I typically use regular drives in my servers and have come across a couple of issues: using drives > 2 TB in an HP system didn't work, and using regular consumer-grade drives in an Intel server was excruciatingly slow. Swapping these out for server-class drives and drives < 2 TB fixed my problems.

Note that we don't rely on our suppliers for much support after we have purchased our systems. I'd imagine that vendors might try not to support systems which don't have their drives, but their policies no doubt vary.

davidgo
  • 5,964
  • 2
  • 21
  • 38
  • Your HP issue was probably a controller firmware problem. Most modern HP systems (2006 and beyond) should be able to address 2TB+ disks. – ewwhite Dec 05 '14 at 20:58
  • Quite probably. We just banged 2 TB disks in it when we worked out what was going on (it was a while ago, so I can't recall exactly what happened - I think the drives appeared to work, but very slowly, or something along those lines). – davidgo Dec 05 '14 at 21:17
0

My company has three datacenters. They bought drives from Newegg and stuck them in one of the Dell servers to see what would happen. It's been fine for two years.

The only problem is that they can't reboot it remotely. If you reboot it, it stays stuck on its POST screen until you clear the warnings complaining that it doesn't have real Dell drives in it.

qel