
Reading the D2600/D2700 User guide, there's a section called "Cabling examples", but none of the examples match what we intend to do.

I just want to make sure that it makes sense, before we do it.

So here's the plan:

We have 1xP822 in a Gen8 rack server and 2xD2700 enclosures, with 24 drives in each D2700. Both D2700s are configured with the exact same disk layout. We create one RAID10 array on top of the D2700 enclosures (meaning one side of the mirror is on each D2700 enclosure - the HP ACU/SSM automatically makes sure of this).
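Purely as an illustration of what "one side of the mirror on each enclosure" means (a sketch only - ACU decides the actual pairing, and the bay numbering here is made up):

```python
# Illustrative RAID10 layout across two D2700s: the drive in bay N of box1
# is mirrored with the drive in bay N of box2, so every mirror pair spans
# both enclosures. (Sketch only; ACU decides the actual pairing.)
bays = 24
mirror_pairs = [(("box1", bay), ("box2", bay)) for bay in range(1, bays + 1)]

# Each pair has one member per enclosure, so losing an entire D2700
# degrades the array but does not take it offline.
assert all(a[0] != b[0] for a, b in mirror_pairs)
print(f"{len(mirror_pairs)} mirror pairs, each spanning both enclosures")
```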

We then connect the P822 controller with 4 cables in total to the D2700 enclosures (NO cascading):

P822 Port 1E: D2700 box1, IO Module A
P822 Port 2E: D2700 box2, IO Module A
P822 Port 3E: D2700 box1, IO Module B
P822 Port 4E: D2700 box2, IO Module B
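To sanity-check that plan, here's a minimal sketch (plain Python, and the port/box names are just the labels from the list above) that records the port-to-module mapping and confirms each box ends up with one path through each of its two IO modules, i.e. dual-domain wiring with no cascading:

```python
# Proposed cabling: 4 cables from the P822, no cascading.
# Each entry maps a P822 external port to (enclosure, I/O module).
cabling = {
    "1E": ("box1", "A"),
    "2E": ("box2", "A"),
    "3E": ("box1", "B"),
    "4E": ("box2", "B"),
}

# Group paths by enclosure to verify every box is reachable through
# both I/O module A and I/O module B.
paths = {}
for port, (box, module) in cabling.items():
    paths.setdefault(box, []).append(module)

for box in sorted(paths):
    print(f"{box}: {len(paths[box])} paths via I/O modules {sorted(paths[box])}")
# box1: 2 paths via I/O modules ['A', 'B']
# box2: 2 paths via I/O modules ['A', 'B']
```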

Leaving the expansion options aside for now, would this be the correct way of cabling in terms of getting maximum performance from the drives/enclosures/P822 controller?

Thanks :)

EDIT: So what I'm reading from the comments so far is that this approach is not "WRONG/INCORRECT", it's just not really beneficial in any way?...

  • I don't like this design. It's not ideal unless you *realllllly* think you need that many drives in a mirror. I would like some more information about the OS and filesystems that will be in use and the specific disks you're planning to use for this. Also, what is your performance *metric*? IOPS? Storage capacity? Raw sequential throughput? – ewwhite Mar 20 '14 at 18:36
  • We will probably have more than one RAID-10 spanning the two enclosures, so don't worry too much about the disk layout. It was just to simplify the explanation. We use 300GB 15K drives, typically. What I'm looking for is IOPS and throughput in terms of MB/s, I guess.. – N-3 Mar 21 '14 at 13:57
  • I should mention that typically, the RAID-10 arrays will host SQL Server data drives. But we also have similar setups for just regular SMB file shares (on Windows Server 2008 R2/Server 2012) – N-3 Mar 21 '14 at 14:05
  • You can do this. It's not necessarily *wrong*. But what do you hope to achieve? If this is a capacity thing, use larger disks. If this is a performance thing, use SSD. If you're looking to create 24-disk RAID0 per enclosure and mirror the enclosures at the host controller, understand that your environment would be extremely susceptible to drive/cable/controller failures. – ewwhite Mar 21 '14 at 14:46
  • I don't see how the environment will be "extremely susceptible to drive/cable/controller failures". We've done this type of installation on MSA70s for years. Now we're moving to D2700s, and I was just curious how to maximize the investment in the new enclosures. But what I'm hearing so far is that I gain NOTHING from hooking each enclosure up with dual connections to the controller... Which I guess wasn't the answer I was hoping for - but nonetheless, it was the one I expected ;) – N-3 Mar 27 '14 at 11:18
  • Using dual cables to the enclosures is for resiliency. You get the benefit of protection against the D2700's controller failure and cable failure. – ewwhite Mar 27 '14 at 12:34
  • Thanks. I get that - I just don't get WHY it won't increase performance as well as add resiliency. – N-3 Mar 28 '14 at 08:21

2 Answers

1

I have lots of full D2700 enclosures... You will be oversubscribed at the enclosure level, due to the SAS expander backplane in the D2700. You'll have either 4 or 8 lanes of 6Gbps bandwidth available to you.

24 x 6Gbps-linked SAS disks, each realistically capable of ~2Gbps, add up to ~48Gbps of sequential capability (minus overhead).

That's versus your 4 x 6Gbps = 24Gbps SAS SFF-8088 link to the host.
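As a back-of-the-envelope check of those numbers (the ~2Gbps per-disk figure above is an estimate of realistic sequential throughput, not the 6Gbps link rate):

```python
# Rough oversubscription estimate for a single D2700.
disks = 24
per_disk_gbps = 2            # realistic sequential throughput per disk (estimate)
lane_gbps = 6                # SAS 2.0 lane rate
lanes_per_cable = 4          # one SFF-8088 cable = 4 lanes

drive_aggregate_gbps = disks * per_disk_gbps      # ~48 Gbps the disks can source
single_path_gbps = lanes_per_cable * lane_gbps    # 24 Gbps per 4-lane uplink

print(f"disks can source ~{drive_aggregate_gbps} Gbps")
print(f"one 4-lane uplink carries {single_path_gbps} Gbps")
print(f"oversubscription ~{drive_aggregate_gbps / single_path_gbps:.0f}:1")
```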

You should be looking into a Dual-Domain configuration, where you're leveraging the multipath SAS connections between the host and the array and disks. This also provides some resiliency.

IOPS will be a function of workload and array layout, not the cabling arrangement.

Max throughput will be well below the PCIe 3.0 full-duplex 8 Gigabytes/second capability of the PCIe slot. The bottlenecks in raw throughput will be your D2700 enclosure, followed by the RAID controller.

There's no cabling arrangement that will yield an appreciable difference in that throughput, short of going to a dual-domain multipath configuration.
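For context on the PCIe point, a rough comparison (this assumes the P822 sits in a PCIe 3.0 x8 slot; the controller's own RAID-on-chip limit isn't modeled here):

```python
# Why the PCIe slot is not the bottleneck in this setup (rough numbers).
gtps_per_lane = 8            # PCIe 3.0: 8 GT/s per lane
pcie_lanes = 8               # assuming an x8 slot for the P822
encoding = 128 / 130         # PCIe 3.0 uses 128b/130b encoding

pcie_gbps_per_dir = gtps_per_lane * pcie_lanes * encoding   # ~63 Gbps (~7.9 GB/s)
enclosure_uplink_gbps = 4 * 6                               # 24 Gbps per D2700 path

print(f"PCIe 3.0 x8, per direction: ~{pcie_gbps_per_dir:.0f} Gbps"
      f" (~{pcie_gbps_per_dir / 8:.1f} GB/s)")
print(f"D2700 expander uplink     : {enclosure_uplink_gbps} Gbps"
      f" (~{enclosure_uplink_gbps / 8:.1f} GB/s)")
# The enclosure-side SAS path saturates long before the PCIe slot does.
```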

ewwhite
  • But if I have 8 lanes available (which I would/do have, if I connect two cables to each enclosure), wouldn't it be a better "match" to the drives? Because then I would (this is all theoretical, of course) have 48Gbps of SAS bandwidth to the controller (because the disks distribute their active paths between each controller in the enclosure) - allowing me to (again, theoretically) maximize the capabilities of the drives. But you're saying that the backplane of the D2700 would be the hard limit then? What are the capabilities of the backplane? - I take it that it's not 48Gbps then? – N-3 Mar 27 '14 at 11:24
  • @n-3 24 x 6Gbps-linked devices == 144Gbps. Your SAS drives aren't capable of 6Gbps throughput, though. With a full enclosure of disks and RAID0 and a sequential workload, you'll hit the enclosure's expander limit (4 x 6Gbps = 24Gbps) before anything else. Having dual-domain doesn't mean your bandwidth magically doubles. It's for resiliency more than anything. – ewwhite Mar 27 '14 at 12:28
  • Thanks! That's all I wanted to know ;) So even though there are two IO modules in the D2700 (compared to the MSA70's one module), the backplane still only supports 4x6Gbps?... That seems kind of foolish?... – N-3 Mar 27 '14 at 12:33
  • That's 4 x 6Gbps per path. But your Smart Array isn't doing round-robin I/O. I have LSI controllers connected to D2700's that *DO* have round-robin I/O policies... so there's a performance increase and better resiliency in that configuration. – ewwhite Mar 27 '14 at 12:36
  • What do you mean by round-robin? I can see that when we attach the D2700 enclosures with two cables to the P812/P822 (one to each IO module) the active path of each drive alternates between IO module A and IO module B. This is all done "automatically" by the controller, so there must be full 4x6Gbps per IO module through the backplane. Otherwise alternating between paths wouldn't make any sense...? – N-3 Mar 27 '14 at 12:51

-1

Depends on how you set it up and what exactly you mean by "performance"...

When connecting disk enclosures, the all-around "best" option is to create a loop. In your situation that can be either 1 link "to" each enclosure and 1 link "from" each (note: SAS 2.0+ is bi-directional and routed; the "to" and "from" terminology is a leftover from SAS 1.0), or a pair of links to the first enclosure, two cascaded to the second, and a loop from the second back to the HBA card.

The latter topology (2 links HBA->EncA->EncB->HBA) would allow all 16 channels to be used by one enclosure, or split between the enclosures. The former topology (2 links HBA->EncA & HBA->EncB) allows just 8 channels to each enclosure. If your load is split pretty evenly then either topology works equally well, and both are redundant.
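To make those channel counts concrete (illustrative only; whether traffic is actually spread this way depends on the expanders and the controller's multipath policy):

```python
# Illustrative lane counts for the two topologies described above.
lanes_per_cable = 4                 # one SFF-8088 link = 4 x 6 Gbps lanes
hba_cables = 4                      # four external P822 ports in use either way
host_lanes_total = hba_cables * lanes_per_cable        # 16 lanes at the HBA

# Star / dual-domain: 2 cables run directly to each enclosure.
star_lanes_per_enclosure = 2 * lanes_per_cable         # 8 lanes, fixed per box

# Loop: HBA -> EncA -> EncB -> HBA.  The 16 host-facing lanes are shared,
# so a single busy enclosure can in principle use all of them.
loop_lanes_one_busy_enclosure = host_lanes_total       # up to 16 lanes

print(f"star: {star_lanes_per_enclosure} lanes per enclosure (hard cap)")
print(f"loop: up to {loop_lanes_one_busy_enclosure} lanes for one enclosure,"
      f" or split between the two")
```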

Another thing to consider: the RAID functionality of the P822 is limited to 8 channels at a time. If you're intending to do a bunch of large hardware RAID configurations, your bandwidth may be limited by this card. This is unlikely to be an issue unless you're pushing a lot of data all at once.

Chris S
  • I hear you, which is also what I read from the HP guidelines. I just can't seem to wrap my brain around why a loop would be preferred over "shortest path" routing. But it's likely because I don't have a thorough understanding of how SAS actually works. What we actually see, when connecting two enclosures with 4 cables to one P812/P822, is that the active path to each drive alternates between each link. We figured this surely must be the optimal way to use the full link bandwidth of each cable, but I guess the controller is the real bottleneck here - not the cables/links? – N-3 Mar 21 '14 at 14:00
  • As for the shortest path thing, SAS is an order of magnitude or two faster than the disks, so taking an extra hop or three makes no difference overall. The round-robin approach simply minimizes downtime in the event of a link failure. The controller might be the bottleneck, depending on how many disks you'll be looking to access at any given time (more than 8-16 disks at a time and that RAID chip is going to be the limiting factor). – Chris S Mar 21 '14 at 14:24