
Obviously there's an infinite variety of ways to carve up your raw SAN LUNs to provide VM boot and data vDisks - but what methods do you use, what are their pros/cons, and are there any good 'best-practice' docs you've come across (other than the very generic VMware ones, I mean)? Thanks in advance.

Chopper3

3 Answers


As you've already presumed, this isn't really a VMware question but a SAN question. TR-3428, and its VDI cousin TR-3705, do a great job of outlining a VMware implementation on a NetApp SAN. Those docs are of debatable use on a non-NetApp SAN, as they don't account for the strengths and shortcomings of your particular array. Having said that, here are my views on the matter.

There are many reasons to break up your datastores. The most prevalent on ESX 3.5 is, without a doubt, locking. Some choose to have OS/boot datastores and separate application datastores, but I've found any performance boost this offers to be negligible. I have seen a real performance boost from segregating temp data from the OS onto a separate datastore, but for this technique to be viable you must redirect the temp data to a genuinely high-performance datastore.
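
If you want to script that layout, here's a rough sketch of attaching a second, thin-provisioned temp vDisk from a different datastore to a VM via the vSphere API (pyVmomi; the vCenter address, credentials, VM name and datastore name are placeholders, not anything from my environment):

```python
# Sketch: attach an extra vDisk, backed by a separate (fast) datastore, to an
# existing VM. All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "web01")          # placeholder VM
temp_ds = find(vim.Datastore, "fast-temp-ds")   # placeholder high-performance datastore

# Reuse the VM's existing SCSI controller and pick a free unit number (7 is reserved).
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))
used_units = [d.unitNumber for d in vm.config.hardware.device
              if getattr(d, "controllerKey", None) == controller.key]
unit = next(u for u in range(16) if u not in used_units and u != 7)

disk = vim.vm.device.VirtualDisk(
    capacityInKB=20 * 1024 * 1024,              # 20GB temp disk
    controllerKey=controller.key,
    unitNumber=unit,
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        datastore=temp_ds,
        fileName="[fast-temp-ds]",              # let vSphere pick the vmdk path
        diskMode="persistent",
        thinProvisioned=True))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```

You'd still have to repoint the pagefile/temp directories inside the guest at the new disk, of course.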

In the end I reverted to one VMDK per VM and handle the exceptions. I did this for two reasons. First, I found the best-practice implementation far too complicated: not only was it a bear to set up, but the system SA would often screw it up, negating any possible gain and frequently introducing a drag on the VM's performance. Second, with Storage vMotion available today, this just isn't as important as it once was. Previously you needed to know all the answers up front; nowadays you can just take a swing and, if you like the results, keep going in that direction. VMs are gnarly animals and no two virtual infrastructures look alike, so best practices are of limited use; trial and error will ultimately leave you with the best solution for your environment.
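
And the "take a swing" part is cheap to automate, too. Reusing the connection and find() helper from the sketch above, a Storage vMotion boils down to a single call (VM and datastore names are again placeholders):

```python
# Reuses si/content/find() from the previous sketch.
vm = find(vim.VirtualMachine, "web01")              # placeholder VM
target_ds = find(vim.Datastore, "bulk-datastore")   # placeholder destination datastore

# Storage vMotion: relocate the VM's config file and vDisks onto target_ds.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))
```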

All that being said, I do use NetApp over NFS (with dedupe, in production) and run a 600GB datastore averaging 60 VMs. For me, the consolidation ratio NFS offered outweighed the performance hit. The few (fewer than 10) VMs I manage that demand more than NFS can offer sit on a 250GB iSCSI datastore.

Glenn Sizemore

Where I work we have been presenting the ESX servers with Fibre Channel-attached LUNs of 500GB each. Due to SCSI reservation issues, 500GB seems to be the optimal LUN size for FC-attached storage on ESX.

A big trend coming up is using NFS-mounted storage for your ESX datastores, especially now that 10Gbps Ethernet is becoming mainstream. NFS offers many advantages over traditional fibre-attached storage in an ESX environment.
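
As a rough sketch of how little plumbing that takes (the filer hostname, export path and datastore name below are placeholders, not a recommendation for any particular array), mounting an NFS export as a datastore on every ESX host via the vSphere API looks something like this:

```python
# Sketch: mount the same NFS export as a datastore on each host managed by vCenter.
# NFS datastores are mounted per host, so the loop repeats the mount everywhere.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

spec = vim.host.NasVolume.Specification(
    remoteHost="filer01.example.com",   # placeholder NFS server (e.g. a NetApp filer)
    remotePath="/vol/esx_datastore1",   # placeholder exported volume
    localPath="nfs_datastore1",         # datastore name as seen by ESX
    accessMode="readWrite")

for host in hosts.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```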

It would help to know what type of storage you are using, as different arrays have different features and options that can be leveraged in VMware environments.

WerkkreW
  • We're an HP XP/EVA house - interesting what you say about NFS/10G; how does that perform? Presumably you're doing that from NetApp? – Chopper3 May 13 '09 at 19:16
  • We have done some testing on it, but have not moved anything to production yet. Initially the performance is at least equal to that of direct fiber-attached disk, and it allows you much more flexibility from a deduplication angle. Yes, it is on NetApp. – WerkkreW May 13 '09 at 19:33

In case you are using NetApp storage, a very informative guide that I've used quite a bit in the past is TR-3428. It also contains some general ESX/SAN information that might be useful even if you use another storage vendor.

Dave K