
We're doing a virtual desktop pilot, and I was wondering what kind of SAN storage is commonly used. I've heard conflicting recommendations: SAS or SATA disks, SSDs, or even large read/write cache configurations.

Requirements:

  • 50 Seats
  • Non-persistent
  • Windows XP, 20 GB of storage per seat
  • IBM nSeries 6070 (a rebranded NetApp)
  • Streaming video and audio are a must (the Wyse C50LE is a model we're looking at)
  • Medium sized workload (similar to what VMware's doc describes as a knowledge worker)

I've reviewed VMware's Server and Storage Sizing For VMware VDI: A Prescriptive Approach, and they use 7,200 RPM SATA drives for a similar workload, while colleagues in similar situations report poor performance:

We are using 60 SATA spindles for 20 concurrent connections. We had it running on 20 spindles but it was too slow.
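My own back-of-envelope on those two data points only sharpens the conflict. Assuming roughly 75 random IOPS per 7,200 RPM SATA spindle (a common planning figure, and my assumption, not one from either source), and ignoring cache and RAID write penalties:

    # Implied per-user capacity in the colleague's setup; the 75 IOPS
    # per 7,200 RPM SATA spindle figure is an assumption of mine.
    SPINDLE_IOPS = 75

    print(60 * SPINDLE_IOPS / 20)  # working setup: ~225 IOPS available per user
    print(20 * SPINDLE_IOPS / 20)  # "too slow" setup: ~75 IOPS available per user
    # VMware's document implies something closer to 5 IOPS per user.

So the two reports differ by well over an order of magnitude.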

I don't know who or what to believe. Are there any other good resources out there? What are others' experiences?

andyhky

3 Answers


Well, it's a pilot; this is where you get to discover things about your use case.

I'd simply go with what you've got, see how fast or slow it is, and extrapolate from there - there's no rule of thumb for this one; only you can decide what you need.

Come back to us when you have some data.

Chopper3

Chopper3 is dead on here. Until you know more about what kind of I/O your VDI needs, you can't accurately predict what kind of storage will fill the need. If you need to know before purchasing everything, get a few thin clients, give them to people, and monitor what they require I/O-wise. Then extrapolate out to 50.
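As one concrete way to do that measurement, here's a minimal sketch (the file names and the 50-seat extrapolation are mine): log each pilot desktop's transfer rate with Windows' built-in typeperf, e.g. typeperf "\PhysicalDisk(_Total)\Disk Transfers/sec" -si 5 -o pilot_user1.csv, then summarize the logs and scale up:

    import csv
    import glob

    SEATS = 50  # target deployment size

    def iops_samples(path):
        # typeperf CSVs have a header row, then timestamp + counter columns.
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        return [float(r[1]) for r in rows[1:] if len(r) > 1 and r[1]]

    averages, peaks = [], []
    for path in glob.glob("pilot_user*.csv"):  # one log per pilot user
        samples = sorted(iops_samples(path))
        averages.append(sum(samples) / len(samples))
        peaks.append(samples[int(0.95 * (len(samples) - 1))])  # 95th percentile

    avg = sum(averages) / len(averages)
    peak = sum(peaks) / len(peaks)
    print(f"steady-state estimate for {SEATS} seats: {avg * SEATS:.0f} IOPS")
    print(f"busy-period estimate for {SEATS} seats: {peak * SEATS:.0f} IOPS")

Averages alone will understate what you need; keeping a high percentile per user is what catches the login and application-launch spikes.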

Part of the reason there aren't any good guideposts for sizing VDI storage is that storage environments are so variable it's hard to give general guidance. If the storage is being used for other things (MS-SQL databases, for instance), you have less headroom than if it is dedicated to the VDI. The performance of each storage subsystem itself also affects things, which further muddies the waters.

So try it on what you have, and make contingency plans for improving your storage environment should it become obvious that it needs improvement.

sysadmin1138

The reason for the confusion is that documents like VMware's indicate that 5 IOPS per user is reasonable, others out there estimate 10-15, and I've seen evidence of a small-scale pilot where the actual load was over 40.

Testing/piloting really is the only option - just choose a storage solution that allows you to scale up in response, or aim high for the pilot so you can get good data on the worst case. The worst case here is a VM's IOPS when there is effectively no limit applied by the storage. Once you know what that is, you can make better estimates of how the patterns will scale as you add users.
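For example, turning a measured worst case into an array size is then simple arithmetic. A sketch with my own planning assumptions (~75 random IOPS per 7,200 RPM SATA spindle, standard RAID write penalties, and an assumed 50% write mix, which VDI steady state often meets or exceeds):

    # Spindles needed to absorb a measured worst-case load. All planning
    # figures here are common rules of thumb, not vendor specifications.
    SPINDLE_IOPS = 75                            # ~7,200 RPM SATA, random I/O
    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4}  # back-end ops per front-end write

    def spindles_needed(users, iops_per_user, write_fraction, raid):
        total = users * iops_per_user
        backend = (total * (1 - write_fraction)
                   + total * write_fraction * WRITE_PENALTY[raid])
        return int(-(-backend // SPINDLE_IOPS))  # ceiling division

    # 50 seats at a measured worst case of 40 IOPS per user:
    for raid in ("RAID 10", "RAID 5"):
        print(raid, spindles_needed(50, 40, 0.5, raid), "spindles")

That prints 40 spindles for RAID 10 and 67 for RAID 5, which makes it obvious why the 5 IOPS-per-user and 40 IOPS-per-user estimates lead to such different arrays.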

There is quite a comprehensive VDI IOPS estimation article over at the Citrix Community site that covers a lot of the issues, like estimating concurrent peak loads in addition to steady-state averages, and I/O peaks due to concurrent login/boot storms. He points out that most generic average estimates of user IOPS fail to recognise the intensity of many users' workloads - while 4-5 IOPS might be fine for a very generic user on a VM with lots of RAM, it will not be accurate for anyone who pushes the system in any way. I prefer his way of breaking out users, with power users and high-end users consuming 25 and 50 IOPS for significant periods of time. Those numbers aren't excessive, as these are the sort of users you would be providing with 7,200 RPM drives on physical systems.

He comes up with a peak calculation of around 77,000 IOPS for 3,500 users, or about 20 IOPS per user - a long way from VMware's 5.
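As an illustration of that class-weighted style of estimate scaled down to a 50-seat case like the one in the question (the seat mix below is my invention; only the 25 and 50 IOPS per-class figures echo the article):

    # Class-weighted peak estimate in the style of the Citrix article.
    # The seat mix is an illustrative assumption, not from the article.
    user_mix = {
        "light":    {"seats": 15, "iops": 5},
        "normal":   {"seats": 20, "iops": 10},
        "power":    {"seats": 10, "iops": 25},
        "high-end": {"seats": 5,  "iops": 50},
    }

    peak = sum(c["seats"] * c["iops"] for c in user_mix.values())
    seats = sum(c["seats"] for c in user_mix.values())
    print(f"{peak} IOPS peak for {seats} seats ({peak / seats:.1f} IOPS/user)")

Even with most seats in the light and normal buckets, that works out to 775 IOPS, or 15.5 IOPS per user - three times VMware's figure.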

Helvick