
Our SQL server is becoming pretty heavily loaded, and all indications point to the disk channel being the bottleneck. The current HP server has a fairly low-end array card, and we're looking to augment this server with a Smart Array card and an external storage array with SSD drives.

Current config is:

  • DL360 G7
  • Smart Array P410i
  • Windows Server 2008R2
  • 32GB RAM
  • Current array is 2 x 300GB SAS in a RAID1 logical drive for boot/OS and 1 x 120GB SATA SSD for data.

The database server hosts one fairly large database (~100GB), containing both live and historical data. For many reasons, splitting the database isn't an option, so the current thinking is to have multiple logical drives on the new array, each on its own channel, and then split the database into logical SQL partitions.

For example, the array might have:

  • 2 x SSD (RAID1)
  • 2 x SSD (RAID1)
  • 4 x SSD (RAID1+0)

Currently, we're looking at something like a D2600 with a high-end Smart Array card.

In order to get the maximum performance, we really need each logical drive to run as fast as possible. HP's specs suggest that their top-end SSDs could come close to maxing out the 6Gbps connection that the Smart Array cards support.

However, some of the bigger SA cards suggest they support "multiple channels"; what's not clear to me is how this works. Does this mean that, with a single cable from SA to D2600, each RAID set could be configured to get its own 6Gbps channel? Or is 6Gbps the limit on the interconnection, and if so, is there any configuration option (or even a different HP product - not trying to get around the "no subjective questions" rule, honest :) ) that would overcome this limit?

EDIT: I can't see any HP server that will do it, but if there is a decent Proliant box that will allow me to split the internal drive cage into two (or more) channels, that might be a "Plan B" - does anyone know of such a server?

KenD
    You can't do RAID 1+0 with 2 drives. – Grant Aug 09 '13 at 19:07
  • My mistake, I had 4 x SSD originally on each line before I saw the price of them :) – KenD Aug 09 '13 at 19:08
  • @Grant Though, oddly, HP refers to RAID1 on 2x drives as RAID1+0 – Dan Aug 09 '13 at 19:28
  • @dan I don't even know what to say to that. Good job hp. Keep making things even MORE confusing. – Grant Aug 09 '13 at 19:46
  • Information I need: server model and generation, what type of smart array controllers are involved, which operating systems, and how much raw/usable disk capacity do you need? What is the current drive setup, and why do you think it's a bottleneck? – ewwhite Aug 09 '13 at 20:33
  • Added spec above, thanks. SSMS Activity Monitor and Perfmon show that most of the time the server is waiting for the disk. Realistically we need to plan on the database being around ~200GB. Also, see my note at the bottom of the question: if buying a new server to get the disk "throughput" is a cost-effective option, we could go down that route ... – KenD Aug 09 '13 at 20:40
  • @kend I see the spec. I'm mobile now. There's a lot to say, but I'll have to write later. That 120GB SSD is SATA, correct? – ewwhite Aug 09 '13 at 20:45

3 Answers


Okay. This is an interesting question, as there are a number of options available to you.

Some concepts to clarify and understand, as they relate to this situation:

  1. Perceptions of "speed" or "fast".
  2. RAID controller performance.
  3. SAS topology.
  4. Benchmarking a system and/or identifying bottlenecks.

In order to get the maximum performance, we really need each logical drive to run as fast as possible.

Storage performance is not always about bandwidth!! Latency, I/O read and write patterns, queuing, application behavior, caching, etc. are all factors. Given what you've described, you're nowhere near saturating the link to your storage.
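To put rough numbers on that, here is a back-of-envelope sketch; the IOPS and I/O-size figures below are illustrative assumptions, not measurements of your system:

```python
# Why a random-I/O database workload rarely saturates a 6Gbps SAS link.
SAS_LINK_GBPS = 6.0        # line rate of one SAS lane
USABLE_FRACTION = 0.8      # 8b/10b encoding leaves ~80% of the line rate
link_mb_s = SAS_LINK_GBPS * USABLE_FRACTION * 1000 / 8   # ~600 MB/s usable

IO_SIZE_KB = 8             # typical SQL Server random page I/O
IOPS = 5000                # assumed busy OLTP load on one logical drive
workload_mb_s = IOPS * IO_SIZE_KB / 1024                 # ~39 MB/s

print(f"link: ~{link_mb_s:.0f} MB/s usable; workload: ~{workload_mb_s:.0f} MB/s "
      f"({workload_mb_s / link_mb_s:.0%} of the link)")
```

Even a heavily loaded random-I/O drive uses a small fraction of the link; latency per I/O, not link bandwidth, is what the workload actually waits on.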

The current HP server has a fairly low-end array card

No it doesn't. The Smart Array P410i controller is the onboard controller on the G6 and G7 ProLiant servers. It performs just fine, provided a battery-backed (BBWC) or flash-backed (FBWC) cache module is installed. It's limited to the internal bays of the server and has no SAS oversubscription. There are two SAS SFF-8087 4-lane connectors linking the motherboard to the backplane, with each lane providing 6Gbps of full-duplex bandwidth.

Currently, we're looking at something like a D2600 with a high-end Smart Array card.

The other RAID controllers in HP's portfolio for that server generation perform similarly (Smart Array P411 and P812). They differ in that they provide more flexible or external connectivity. The D2600 enclosure would potentially be a step-down in raw throughput, depending on its configuration. However, it's absolutely the wrong choice for this setup, as it only accommodates large-form-factor 3.5" disks. The D2700 enclosure is the variant that houses small-form-factor 2.5" disks.

SSMS Activity Monitor and Perfmon show that most of the time the server is waiting for the disk

This is an issue with the single 120GB SATA SSD you're using. I have one sitting here. It's a low-end, slow-ass SSD. That's all. It maxes out at ~180MB/s sequential and is just an overall poor performer. HP should not sell it! It's relatively low-latency compared to spinning disks, but it's terrible for what you're trying to do. Worse, you only have one of them; four would be acceptable.

I would recommend a pair of 400GB MLC HP Enterprise SSDs (made by Pliant/SanDisk) if you are not planning much growth beyond the ~200GB you're using now; otherwise, four disks would be better. Unfortunately, they are not cost-effective ($2,800US+ each).

When I don't use the HP Enterprise SSDs and need to consider cost, I purchase the SandForce-based OWC Mercury Extreme Pro drives and place them in HP drive carriers. They work great, are inexpensive, and are a much better deal for the generation of hardware you're using. Use RAID 1+0 and follow the P410 SSD configuration guidelines from HP. I spend a lot of time with SSDs...

   array B (Solid State SATA, Unused Space: 1012121  MB)

      logicaldrive 3 (400.0 GB, RAID 1+0, OK)

      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, Solid State SATA, 480.1 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, Solid State SATA, 480.1 GB, OK)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, Solid State SATA, 480.1 GB, OK)
      physicaldrive 2I:1:8 (port 2I:box 1:bay 8, Solid State SATA, 480.1 GB, OK)

   SEP (Vendor ID PMCSIERA, Model  SRC 8x6G) 250 (WWID: 500143802335E8FF)
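For reference, a logical drive like the one shown above can be created with hpacucli along these lines (a sketch only; the controller slot number is an assumption for your system, and the drive addresses are taken from the output above):

```shell
# Show the controller and attached drives (slot=0 assumed;
# confirm the slot with "hpacucli ctrl all show")
hpacucli ctrl slot=0 show config

# Create a RAID 1+0 logical drive across the four SSDs
hpacucli ctrl slot=0 create type=ld \
    drives=1I:1:3,1I:1:4,2I:1:7,2I:1:8 raid=1+0

# Verify the new logical drive
hpacucli ctrl slot=0 ld all show
```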

I have a few of these drives sitting here as I type...

Left to right: 400GB SAS MLC Enterprise SSD, 200GB SAS SLC Enterprise SSD, 120GB SATA MLC crap SSD

The rest of the items in your question are not an issue...

  • You don't need external storage. External storage actually shares a 4-lane SAS connection (24Gbps == 4 x 6Gbps) back to the controller. The "multiple channels" you refer to are the same as "dual domain" or simply multipath SAS links. In this context, that is more of a resiliency feature than a performance feature. See: Using both expanders in HP D2700
  • Internal disks are fine, as they each have dedicated 6Gbps links back to the P410i RAID controller.
  • Your problem here is the SSD you're using. Even four 300GB 10k RPM SAS drives would perform better than the one HP SATA SSD you have now.
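A rough sketch of the bandwidth arithmetic behind the first two points (the per-SSD throughput figure is an assumption, not a spec):

```python
LANE_GBPS = 6.0
USABLE_FRACTION = 0.8                      # 8b/10b encoding overhead
lane_mb_s = LANE_GBPS * USABLE_FRACTION * 1000 / 8   # ~600 MB/s per lane

# External enclosure: one 4-lane SAS cable shared by every drive in the shelf
shelf_budget_mb_s = 4 * lane_mb_s          # ~2400 MB/s for the whole shelf

# Internal bays: each drive gets its own lane back to the P410i
ssd_mb_s = 500                             # assumed sequential rate of a good SSD
drives = 4
internal_mb_s = drives * min(ssd_mb_s, lane_mb_s)    # no sharing between drives

print(f"shared shelf link: ~{shelf_budget_mb_s:.0f} MB/s; "
      f"{drives} internal SSDs on dedicated lanes: ~{internal_mb_s:.0f} MB/s")
```

Four decent internal SSDs don't come close to needing the shared shelf link, and internally they never contend with each other for a lane.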

Further reading:

HP D2700 enclosure and SSDs. Will any SSD work?

Third-party SSD in Proliant g8?

Why are enterprise SAS disk enclosures seemingly so expensive?

ewwhite
  • Fantastic - thank you very much for the detailed advice. If possible, we'd like to keep the 2 "spinning" disks in the server - which only has 4 drive bays. If we buy the SFF "Small Form Factor Hard Drive Backplane Kit" - which should give us 4 extra drive bays, at the expense of losing the optical drive (no hardship) - and fill that with 4 x decent SSDs, would this mean each drive would get its own 6Gbps link back to the P410? – KenD Aug 12 '13 at 18:47
  • Yes, that will work. Each disk gets a link back to the controller. No oversubscription. – ewwhite Aug 12 '13 at 18:49

The D2600/D2700 has dual 6Gbps SAS channels on the backplane. The cables you connect it with carry 4 x 6Gbps SAS channels, which allows you to daisy-chain another shelf off the first one without any port blocking when connected to a four-channel card such as a P812/P822.

By the way, if I were you, I'd simply create one large RAID 1+0 array and then carve the number of logical disks you need from that array - it'll perform far better than the R1+R1+R1+0 suggestion. Come back to us if you have further queries; this is right up my alley ;)
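A rough illustration of why pooling the drives helps (the per-SSD IOPS figure is an assumed number for the sake of the comparison):

```python
# Read IOPS available to a hot partition under the two layouts.
READ_IOPS_PER_SSD = 20000   # assumed random-read capability of one SSD

def raid10_read_iops(n_drives):
    # RAID 1+0 can service reads from every member of the set
    return n_drives * READ_IOPS_PER_SSD

# Split layout: a busy partition is pinned to one small 2-drive mirror
split_hot_partition = raid10_read_iops(2)

# Pooled layout: every logical disk carved from the array sees all 8 spindles
pooled_hot_partition = raid10_read_iops(8)

print(f"split: {split_hot_partition} IOPS ceiling; "
      f"pooled: {pooled_hot_partition} IOPS ceiling")
```

With separate small arrays, whichever partition happens to be hot is capped by its own few drives while the others sit idle; one big array lets every logical disk draw on the full set.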

Chopper3

To my understanding, the D2600 chassis (and the D2700) has a single backplane (channel), and the Smart Array cards with multiple channels allow you to chain multiple enclosures together to create very large arrays. As you suspect, this does not let you leverage the HBA's multiple-channel support within a single enclosure.

To get what you're looking for you'll need to look outside of HP.

sysadmin1138
  • All of the current HP external array enclosures support SAS multipath. This requires dual-ported drives and an HBA with two SAS SFF-8088 ports. HP refers to it as "dual-domain". – ewwhite Aug 09 '13 at 20:43