3

I'm looking to set up a SAN for a VMware vCenter cluster of ESX hosts. I'd like it to be the primary storage device for VMs. There will be up to 10 hosts in the cluster eventually.

I'm creating a test environment with FreeNAS installed on a PowerEdge R610. If it goes well, we may be able to buy an R720 or something with a lot of drive bays. I have a RAID-Z pool on the R610's drives with a 120GB SSD cache. I plan to connect this server to a dedicated/isolated gigabit switch with jumbo frames enabled, and all the hosts will connect to this switch for iSCSI.

That's what I have so far.

  1. Is FreeNAS a good option for this?
  2. How do I set this up on the VMware side? Do I attach each host individually to the SAN via iSCSI, or do I add it directly to vCenter? I would like the ability to load balance VMs across hosts easily without having to transfer VMDKs, which I assume is fairly implicit since they will all be on the SAN anyway.

Any help, tips, things you've learned from experience would be greatly appreciated!

I should note that I have never set up a SAN/iSCSI before, so I'm treading into new waters.

EDIT: This post has helped me realize that I am going to need to invest a lot in some higher-end networking gear to make this perform the way I want. Time to re-evaluate!

computmaxer
  • Use the software initiator, save yourself some headaches. – SpacemanSpiff Mar 22 '13 at 04:20
  • With 10 Hosts you'll want multiple 10 Gig network connections going to the storage. If you don't and you have everything properly load balanced you'll end up with 100 Megs of bandwidth per host. – mrdenny Mar 22 '13 at 14:44
  • How about 7 1GB lines to the switch? That's the best I can do at the moment. – computmaxer Mar 22 '13 at 15:50
  • I'll post a potential design later... – ewwhite Mar 22 '13 at 22:02
  • That's still only 700 Megs of traffic. If you've got a bunch of VMs (I'm assuming that you'll have a lot of VMs here as you'll have 10 hosts) that become very chatty (paging to the page file for example) then you'll start running out of bandwidth really fast. – mrdenny Mar 23 '13 at 01:02
  • @mrdenny I have to say though.. I haven't worked with many installations that actually needed more than 1gbps traffic to their SAN (except on peak occasions).. the 95th percentile is usually at 100-200mbps.. – pauska Mar 23 '13 at 12:27
  • @pauska that is true. Even on my 20-host clusters, I'm still using 1Gbps links to the host servers. – ewwhite Mar 24 '13 at 00:29

4 Answers

7

This depends heavily on your VMware vSphere licensing tier, the applications and VMs you intend to run, and the amount of storage space and the performance profile you need.

Answer that first, as an organization that can afford to properly license ten ESXi hosts in a vSphere cluster should be prepared for a higher-end storage solution and networking backbone than what you're planning. (Ballpark price for the right licensing level for that many hosts is ~$100k US.)

I think the scope of this question is a bit too broad, since there are established design principles for VMware installations of all sizes. I'll try to summarize a few thoughts...

ZFS-specific notes:

  • RAIDZ is a poor choice for virtual machine use in just about every use case. It also won't allow you the expansion you may need over time. Mirrors or triple mirrors are preferred for handling the random read/write I/O patterns of virtual machines (a pool-layout sketch follows this list). For ZFS best practices, check out THINGS NOBODY TOLD YOU ABOUT ZFS.
  • SSD choice is important in ZFS-based solutions. You have L2ARC (read-optimized cache) and ZIL (write-optimized log) options, and the characteristics of the SSDs you'd use for each are different. Quality SAS SSDs for ZFS caching are $1500+ US.
  • FreeNAS is not a robust VMware target. It has a reputation for poor performance. If you're set on ZFS, consider something like NexentaStor, which can run on the same type of hardware, but is compatible with VMware's storage hardware acceleration (VAAI) and has some level of commercial support available.
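
As an illustration of the mirror recommendation, here is a minimal sketch of building a pool of striped mirrors with a dedicated log (ZIL/SLOG) SSD and a cache (L2ARC) SSD on FreeBSD/FreeNAS. The pool name and the da0–da9 device names are placeholders, not your actual disks:

    # Striped mirrors (RAID 10-style) handle random VM I/O far better than RAIDZ.
    # da6 = write-optimized SSD for the ZIL (SLOG), da7 = read-optimized SSD for L2ARC.
    zpool create tank \
      mirror da0 da1 \
      mirror da2 da3 \
      mirror da4 da5 \
      log da6 \
      cache da7

    # Capacity can later be grown one mirror pair at a time:
    zpool add tank mirror da8 da9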

VMware vSphere notes:

  • Identify which features you'll need in VMware. vMotion, Storage vMotion, High Availability, Distributed Resource Scheduling, etc. are all helpful cluster management features.
  • A quick licensing guide: the lowest VMware tier that provides vMotion is the Essentials Plus package at ~$5000 US, and it only accommodates three 2-CPU servers. Pricing jumps considerably from there: $10k or more, scaling with the number of host servers.

Networking notes:

I run large VMware clusters in my day-job. When I'm dealing with more than three hosts, I start to rely on 10GbE connections between the storage array and switch(es). My connections to individual hosts may remain 1GbE, but are often also 10GbE.

Isolate your storage network from your data network. Jumbo frames aren't always the key to performance, but if you DO enable them, make sure they're configured end-to-end on every device in the path.
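
For example, a minimal sketch of enabling and verifying jumbo frames on the ESXi side (the vSwitch name, VMkernel port, and target IP are assumptions; the physical switch and the storage array interfaces must also be set to MTU 9000):

    # Raise the MTU on the storage vSwitch and its VMkernel port
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # Verify end-to-end: 8972-byte payload + 28 bytes of headers = 9000,
    # sent with the don't-fragment bit; this fails if any hop lacks jumbo frames.
    vmkping -d -s 8972 192.168.100.10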

ewwhite
  • TBH, the whole "you need 10GbE connections with more than X hosts" is totally misguided. I am always amazed how people who actually work with VMware proclaim such things. Remember that virtualization has been around for a lot longer than 10GbE and it worked just fine before. I'd bet good money that most SMB production environments don't even come close to saturating one 1GbE link. I know that my 6 host (~70 VM) setup doesn't come close. The real performance bottleneck in many setups is SAN write speed. – Reality Extractor Apr 03 '13 at 07:03
  • @RealityExtractor In my environments, they do saturate Gigabit links... With more than three hosts, you're in a much different VMware licensing tier. If you're dropping >$10,000 US for VMware licensing and an appropriate SAN *today*, it's short-sighted to not plan on 10GbE to the array. Mind you, my environments use NFS, so the concept of MPIO isn't really there. – ewwhite Apr 03 '13 at 11:14
  • @ewwhite Is going with FreeNAS (now TrueNAS) still a no-go for virtualized environments? Quite a bit of time has passed and I find myself in a similar situation as the OP. Thanks! – Hunter M Sep 01 '21 at 10:03
  • TrueNAS is definitely fine these days. A bare Linux server may be a better idea if this is just going to be an NFS export to a VMware cluster. – ewwhite Sep 01 '21 at 14:04
5

FreeNAS for a production vSphere environment? I wouldn't recommend that.

Nevertheless, the setup on the VMware side looks like this (a command-line sketch follows):

  1. On each host, add the Software iSCSI Adapter (from the Storage Adapters node on the Configuration tab for that host).
  2. Configure the iSCSI adapter to connect to your iSCSI target.
  3. Once the adapter is connected to the target, rescan for new storage devices on one of your hosts and create a new datastore from the discovered iSCSI block device.
  4. Rescan on each of your other hosts and connect them to the newly created datastore.
  5. Move your VM storage to the iSCSI SAN.
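
The same per-host steps can also be done from the command line with esxcli; this is a minimal sketch, assuming the software iSCSI adapter comes up as vmhba33 and the FreeNAS target listens on 192.168.100.10:3260 (both names are examples):

    # Enable the software iSCSI initiator on the host
    esxcli iscsi software set --enabled=true

    # Find the name of the software iSCSI adapter (often vmhba3x)
    esxcli iscsi adapter list

    # Point dynamic (SendTargets) discovery at the iSCSI target
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10:3260

    # Rescan so the new block device (and later the datastore) shows up
    esxcli storage core adapter rescan --adapter=vmhba33

Creating the VMFS datastore itself is easiest from the vSphere Client on one host; the remaining hosts just need the rescan.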

You need a specific vSphere edition or kit (one that includes vMotion and Storage vMotion) in order to move powered-on VMs from one host to another and/or to move their storage.

joeqwerty
3
  1. For testing? Sure, why not.
  2. Each ESX host needs a connection to the iSCSI network, and yes, you attach each host to the SAN individually. Try not to share those iSCSI adapters with LAN traffic, so this may mean putting a bunch of NICs in each host once you start adding multipathing or aggregation (see the port-binding sketch after this list).
  3. The vCenter machine does not need any connection to the iSCSI network, unless you're doing something with it that specifically requires one.
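
As an illustration of the multipathing point, here is a minimal sketch of binding two dedicated VMkernel ports (each backed by exactly one active physical NIC) to the software iSCSI adapter so MPIO can use both paths; the vmhba33/vmk1/vmk2 names are assumptions:

    # Bind two storage VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # Rescan so both paths to each LUN are discovered
    esxcli storage core adapter rescan --adapter=vmhba33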
mfinni
1

Since you are using Dell servers, you might want to look into Dell's EqualLogic or lower-end MD series of iSCSI SANs. I'm not suggesting they work better with Dell servers, but I've been using them for over 4 years now, I really like them, and the support is good if you need it. They do have 10Gbps models, but then you need 10Gbps switches too. We have seen very good throughput with our newer-model EQLs that have 4 NICs each, and the couple of older ones with only 3 x 1Gbps NICs perform fine too with 4-6 Hyper-V and ESX hosts each, running a mix of application and DB servers on Windows and CentOS. Bottom line: 10Gbps isn't required; iSCSI is widely deployed over multiple 1Gbps links with MPIO.

Running FreeNAS on a standard server isn't bad, but if the server fails, your storage is down. A purpose-built array like an EQL or similar typically runs with dual controllers in addition to dual power supplies and RAID, so there's less chance of downtime.

Jumbo frames are good, but honestly we've got things running without them and there is no noticeable difference. A separate switch is OK but not required; you can use VLANs. If budget is a concern, I'd really recommend two switches for redundancy, with the storage traffic VLAN'd off.
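
On the ESXi side, putting the storage traffic on its own VLAN is just a tagged port group on the storage vSwitch; a minimal sketch, where the port group name, vSwitch, and VLAN ID 100 are examples and the corresponding physical switch ports must carry the same VLAN:

    # Create a dedicated port group for iSCSI and tag it with the storage VLAN
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=100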

Dan