2

I have an HP ProLiant DL160 Gen9 server with an HP H240 Host Bus Adapter and 6x 1 TB Samsung SSDs configured in a RAID 5, directly using the internal storage of the machine. After installing a VM on it using VMware (6.0), I ran a benchmark with the following result:

Benchmark internal storage

After some research I came to the following conclusion:

A controller without cache will struggle to calculate the RAID 5 parity, and I pay for this in write performance. But 630 MB/s read and 40 MB/s write still seem rather poor. In any case, I found others with the same problem.

Since I can't change the controller today, is there a way to test whether the controller is the bottleneck? Or do I really have to try a better one and compare the results? What are my options? I am fairly new to server hardware and installation, since at my previous company this was handled by an outsourced hosting provider.
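
For reference, the kind of test I have in mind is something like the following fio run (a rough sketch only; it assumes fio is available inside a Linux guest, and the file path and sizes are placeholders - on Windows the equivalent would be ioengine=windowsaio or a tool like diskspd). It compares large sequential writes against small random writes with OS caching bypassed:

    # sequential write, 1 MiB blocks, file larger than any cache
    fio --name=seqwrite --filename=/mnt/test/fio.bin --rw=write --bs=1M --size=16G --iodepth=32 --ioengine=libaio --direct=1

    # small random writes, where a missing controller/disk cache hurts the most
    fio --name=randwrite --filename=/mnt/test/fio.bin --rw=randwrite --bs=4k --size=16G --iodepth=32 --ioengine=libaio --direct=1

If the sequential number looks fine but the random write number collapses, that would point at the controller (or the disabled disk caches) rather than the SSDs themselves.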

EDIT UPDATE

Here is the performance with the write cache enabled. The read speed went up even before I made the change; I'm not sure what happened, as I had only played around with the BIOS settings of the Windows machine. Today I will update the firmware to the latest version; let's see what that gives us. Bench with cache

Here is a screenshot of a benchmark with the new P440 controller and its 4 GB cache activated. (Enabling HP SSD Smart Path didn't bring a performance improvement, by the way.) With a cache we get much better results. Of course I tested with files > 4 GB, to make sure I was testing the disks and not the cache.

Bench with new Controller

RayofCommand
  • Try it with RAID 1+0. – ewwhite Oct 13 '15 at 12:10
  • @ewwhite, please see my update; now with the new controller and its active 4 GB cache. – RayofCommand Oct 28 '15 at 11:02
  • Very good. Are you happy with the performance? – ewwhite Oct 28 '15 at 12:36
  • More than happy; I am currently experimenting to learn all the settings. By the way, my RAID was not dropped during the change of controller and firmware. My machines are still the same; I didn't rebuild the RAID. HP support told me the RAID configuration is saved on the controller and would definitely be dropped if I changed it. Maybe because I always had an H240 card in there the whole time? No idea... – RayofCommand Oct 28 '15 at 12:54
  • RAID metadata is saved on the disks and is mostly compatible across Smart Array controllers. – ewwhite Oct 28 '15 at 12:55
  • Just a note that some newer enterprise-quality SSDs (e.g. Samsung's 845 DC, where DC means "data center") have a capacitor that holds enough charge to let the SSD flush its internal volatile RAM cache to the underlying non-volatile memory in the case of an unplanned power failure (e.g. accidentally unplugging the box). – Ben Slade Mar 04 '16 at 20:08

4 Answers

3

The HP H240 is not a RAID controller. It's a host bus adapter intended to provide direct disk access to the host operating system. This applies to people using software RAID, ZFS, Hadoop, Windows Storage Spaces, etc. It has some limited RAID capability, but as you can see, it's not sufficient.

For VMware purposes, you want an HP Smart Array RAID controller like the HP Smart Array P440.
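
If you want to see what the H240 is actually doing before swapping it out, HP's command-line utility can show the controller, cache, and logical drive state (a sketch only; this assumes the hpssacli/ssacli package is installed, slot 0 is a placeholder, and the exact parameter names can vary with the utility and firmware version):

    # controller model, cache size/status, logical drives and their RAID level
    hpssacli ctrl all show config detail

    # enable the physical drives' own write cache (volatile - risk of data loss on power failure)
    hpssacli ctrl slot=0 modify drivewritecache=enable

The second command trades safety for speed; with a P440 and its flash-backed cache, the protected controller cache does that job and the drive cache is usually left disabled.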

ewwhite
  • Thank you, sir; the disks are directly attached. Do you really think creating a RAID 10 will give much better results with the same HBA? Or would it be better to upgrade the HBA by buying a P440 with cache? Of course I will try RAID 10 as soon as I can get into the server room. – RayofCommand Oct 13 '15 at 12:22
  • I'd try RAID 1+0 first... test... then see what the results are. – ewwhite Oct 13 '15 at 12:23
  • Thanks, I will come back with results tomorrow, or in two days at the latest, and update you. – RayofCommand Oct 13 '15 at 12:27
  • The HP H240 is not a "real" RAID controller with its own RAID chip, so it's commonly used for software RAID. Its main benefit is connecting a bunch of disks to your system (JBOD). RAID 5 in particular needs a lot of performance while writing because it writes to all disks at the same time. I think RAID 10 will be much better with your HBA configuration. But if you need native hardware RAID performance, you should upgrade your controller as ewwhite said. – DjangoUnchained Oct 13 '15 at 12:40
  • Today I enabled the Physical Drive Write Cache and the write performance went up to 170 MB/s for SEQ Q32T1. – RayofCommand Oct 14 '15 at 14:36
  • Just for info, today I purchased a new Smart Array P440/4GB; as soon as it arrives I will test it and post results. – RayofCommand Oct 16 '15 at 14:00
2

As you already discovered, the low write speed had nothing to do with slow parity calculation (modern CPUs are very fast at that), but was due to the disks' private DRAM cache being disabled, and more precisely to how badly flash memory needs that cache to deliver good sustained performance.

I'll quote myself:

Even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance of over 3 GB/s on a single execution core.

The advantage that hardware RAID maintains today is the presence of a power-loss-protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random write access (and for reads that hit it) and basically transforms random writes into sequential writes. A RAID controller without such a cache is nearly useless. Moreover, some low-end RAID controllers not only come without a cache, but also forcibly disable the disks' private DRAM cache, leading to slower performance than with no RAID card at all. An example is DELL's PERC H200 and H300 cards: unless newer firmware has changed this, they totally disable the disks' private cache (and it cannot be re-enabled while the disks are connected to the RAID controller). Do yourself a favor and do not, ever, buy such controllers. While even higher-end controllers often disable the disks' private cache, they at least have their own protected cache, making the HDDs' (but not SSDs'!) private cache somewhat redundant.

This is not the end of the story, though. Even capable controllers (those with a BBU- or NVRAM-protected cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient flash page programming/erasing. And while some (most?) controllers let you re-enable the disks' private cache (e.g. the PERC H700/710/710P lets the user re-enable it), if that private cache is not write-protected you risk losing data in case of a power loss. The exact behavior is really controller- and firmware-dependent (e.g. on a DELL S6/i with a 256 MB WB cache and the disks' cache enabled, I had no losses during multiple planned power-loss tests), which leaves uncertainty and plenty of concern.

and some more info:

Some RAID cards will forcibly disable the disks' private cache. This kills performance for consumer-level SSDs, as they make heavy use of their private DRAM cache both to cache their indirection table and to mask the high latency involved in erasing/programming MLC NAND. For example, an otherwise very fast Crucial M550 240GB drive writes at the incredibly slow rate of 5 MB/s when its internal cache is disabled.

Bottom line: while enabling the disks' private cache can greatly increase your I/O speed, please make sure (by means of testing) that a power outage will not cause any unexpected data loss.
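
A quick way to see how much the private cache matters for synchronous writes (just a sketch; fio must be installed and the file path is a placeholder) is to run an fsync-heavy workload once with the disk cache enabled and once with it disabled, and compare:

    # 4K random writes with an fsync after every write - worst case for a cache-less setup
    fio --name=synctest --filename=/mnt/test/fio.bin --rw=randwrite --bs=4k --size=1G --fsync=1 --ioengine=libaio --direct=1

Note that this only measures performance; the only way to really verify that enabling the cache does not eat your data is a planned pull-the-plug test against disposable data, as described above.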

shodanshok
1

RAID 5 always has poor write performance. I suggest using RAID 10, but in any case, did you install the drivers for VMware ESXi from the HP website? Also consider doing a firmware update. If the array is still building/initializing, performance is temporarily degraded; this can sometimes take up to a couple of days if it's a full initialization.

http://h20565.www2.hpe.com/hpsc/swd/public/readIndex?sp4ts.oid=7553524&swLangOid=18&swEnvOid=4183
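
To check whether the array is still initializing (and what firmware the controller runs), something along these lines can be used on the host (a sketch; it assumes HP's hpssacli utility is installed and slot 0 is a placeholder):

    # logical drive status: OK, recovering, or parity initialization in progress
    hpssacli ctrl slot=0 ld all show status

    # controller details including firmware version and cache status
    hpssacli ctrl slot=0 show detail

As long as a parity initialization or rebuild is shown, benchmark numbers will not be representative.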

0

Does the H240 have a real ROC processor? You don't need an FBWC for RAID 5 with SSDs, because the cache RAM is slower than the SSD RAID. With my 8x 256 GB 850 Pro drives, I get 2.9 GB/s with an old LSI 9260 and the write cache disabled. With the write cache enabled, I get only 900 MB/s.

weby