
Before going forward, I want to know what loss of performance (if any) I can expect from creating Windows spanned volumes out of LUNs delivered by a SAN.

I don't know what kind of SAN it is (we don't administer it), but we've been given ten 300 GB LUNs for our Windows 2008 R2 server (on VMware). We need a larger volume, so we're thinking of spanning some of the disks, but we're wary of potential performance issues.

Any input?

Regards.

1 Answer


Time for some science, bitches. The test setup:

  • Windows 7 x64
  • 2 GB RAM
  • Virtual machine on ESXi 5.0
  • LUN 1: 5 GB, thick provisioned, on HP P4000 LeftHand Cluster 1 (2 nodes), exposed via iSCSI (2× 1 Gb MPIO)
  • LUN 2: 5 GB, thick provisioned, on HP P4000 LeftHand Cluster 2 (2 nodes), exposed via iSCSI (2× 1 Gb MPIO)

We have a total of two LUNs on two separate clusters. I have artificially limited the maximum throughput on these LUNs so that I don't impact the real systems running on the arrays, but for the purpose of comparing the outputs, that should be enough.

Step 1: Benchmark the LUNs individually

Created two individual simple volumes, each formatted NTFS with a 4 KB cluster size.

ATTO Disk Benchmark, with transfer sizes from 512 B to 4096 KB:

[Image: ATTO benchmark results for LUN1 and LUN2]

Both LUN1 and LUN2 consistently average a maximum of 1 Gbps of throughput (LUN2 is ever so slightly slower, as it runs on SATA disks rather than SAS disks).
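If you want to sanity-check numbers like these without ATTO, a few lines of Python can approximate the same sweep: sequential writes at doubling transfer sizes, reported in MB/s. This is only a rough sketch under assumptions of mine (the file path and the 64 MB pass size are made up), and Python's per-call overhead plus the OS page cache mean it won't match a proper unbuffered benchmark at small sizes:

```python
# Rough sketch of an ATTO-style sweep: sequential writes at doubling
# transfer sizes. TEST_FILE and TOTAL_BYTES are assumptions; point the
# path at the volume under test. The OS page cache makes these numbers
# optimistic compared to a real unbuffered benchmark.
import os
import time

TEST_FILE = r"E:\bench.tmp"       # hypothetical path on the LUN under test
TOTAL_BYTES = 64 * 1024 * 1024    # 64 MB per pass, kept small for a sketch

def bench_write(block_size: int) -> float:
    """Sequentially write TOTAL_BYTES in block_size chunks; return MB/s."""
    buf = os.urandom(block_size)
    count = TOTAL_BYTES // block_size
    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(count):
            f.write(buf)
        os.fsync(f.fileno())      # flush through the OS cache before timing ends
    elapsed = time.perf_counter() - start
    return (count * block_size) / elapsed / (1024 * 1024)

if __name__ == "__main__":
    size = 512
    while size <= 4096 * 1024:    # 512 B up to 4096 KB, like the run above
        print(f"{size:>8} B: {bench_write(size):8.1f} MB/s")
        size *= 2
    os.remove(TEST_FILE)
```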

If I look at the data collected from each SAN cluster itself, we see a similar story:

[Image: SAN-side throughput graphs from both clusters]

Both LUNs output approximately 1 Gbps of traffic during each test.

Step 2: Benchmark the LUN as a spanned volume

OK; so far everything is as expected. Now we convert those two disks to dynamic disks, create a single spanned 10 GB volume, and run the same benchmark:

[Image: the spanned volume configuration]

[Image: ATTO benchmark results for the spanned volume]

And what do you know: an ever-so-slight drop in performance, but all in all we can call that identical to the first two tests. Most importantly, though, looking at the data collected from the SAN, only one LUN was ever active:

[Image: SAN-side throughput showing only one LUN active during the spanned test]

One would assume the second LUN only becomes active once the first LUN is full. Hence, a span.
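To make that concrete, here's a toy model (pure Python, with made-up block counts) of how a spanned volume maps logical blocks to member disks: the second extent is never touched until the first is exhausted, which is exactly why only one LUN showed traffic:

```python
# Toy model of spanned-volume addressing: logical blocks fill the first
# disk's extent completely before any land on the second. Block counts
# are invented for illustration.
def span_lookup(lba, disk_sizes):
    """Map a logical block to (disk_index, physical_block) on a span."""
    for disk, size in enumerate(disk_sizes):
        if lba < size:
            return disk, lba
        lba -= size
    raise ValueError("LBA beyond end of volume")

disks = [1000, 1000]              # two equal extents, like the two 5 GB LUNs
print(span_lookup(0, disks))      # (0, 0)   -- first block, first LUN
print(span_lookup(999, disks))    # (0, 999) -- still the first LUN
print(span_lookup(1000, disks))   # (1, 0)   -- LUN2 only sees I/O from here on
```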

Step 3: For shits and giggles

I have limited the bandwidth here so that I don't impact our live systems, but I suggest you do all of this again on your own servers to see what kind of performance you get. If it's not enough, then I suggest trying a striped set. Normally I would never suggest this, because if you lose a LUN you're screwed, but if you're confident that your SAN provider can keep both LUNs online (in this example, each LUN is a fault-tolerant cluster, so the chances of it going offline are slim), then you might want to try striping and benchmarking to see if you get the performance you require. And let's be honest: striping or spanning, if you lose one disk you lose the lot anyway. So the risk factor is pretty high either way.
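For comparison, the same toy model for a striped set shows why it can aggregate throughput: consecutive stripe-sized chunks rotate across the disks, so both LUNs take I/O simultaneously on any sequential workload (the 64-block stripe size and block counts are again invented for illustration):

```python
# Toy model of striped-set addressing: stripe-sized chunks rotate across
# the member disks, so sequential I/O hits every LUN at once. The 64-block
# stripe size is an assumption for illustration.
from collections import Counter

def stripe_lookup(lba, num_disks, stripe=64):
    """Map a logical block to (disk_index, physical_block) on a stripe set."""
    chunk = lba // stripe          # which stripe-sized chunk holds this block
    disk = chunk % num_disks       # chunks rotate round-robin across disks
    row = chunk // num_disks       # completed rotations before this chunk
    return disk, row * stripe + (lba % stripe)

# 256 sequential blocks split evenly across two disks:
print(Counter(stripe_lookup(lba, 2)[0] for lba in range(256)))
# Counter({0: 128, 1: 128}) -- both LUNs active, unlike the span above
```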

That's all for now; let me go and clear all the network alerts that have been triggered because a single initiator is consuming more than its fair share of bandwidth...

Mark Henderson
  • You know, I didn't read your answer in detail (I'm sure it's very thorough and concise and I will get around to reading it in detail), but +1 for your opening line. Science, bitches. Word. – joeqwerty Nov 16 '12 at 02:22
  • Mark, this is better than what I expected. Thanks for taking the time to test all of this. As you suggested, I'll test on my own server. – Patrick Pellegrino Nov 16 '12 at 04:14
  • @Mark, striping can be restrictive in the future, since we cannot expand striped volumes. – Patrick Pellegrino Nov 16 '12 at 04:17
  • @PatrickPellegrino - yes, that's very true, and I guess that's why Microsoft included spans instead of just stripes. – Mark Henderson Nov 16 '12 at 04:19