
I'm replacing two old rack servers with a new one that has plenty of power to take over the functionality of both current servers. The new server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM.

My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I also need other VMs: a shell/git server, a mail server, etc. I'm wondering how to deal with the following issues:

  1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller through to the FreeBSD VM via PCI passthrough, or perhaps pass just the disks through one by one?

  2. I want to maximize the reliability of the setup. On what disks should I install the hypervisor and where would you put the VM system images?

For (2) I have the option of setting up a RAID volume on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, that makes PCI passthrough of the controller to the file server impossible. Another option is to use the two 2.5" bays.
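
To make (1) more concrete: if I end up with, say, Linux + KVM/libvirt as the hypervisor, my rough understanding is that handing the whole controller to the FreeBSD guest would look something like the sketch below (the PCI address and VM name are made up):

    # Assumes VT-d/IOMMU is enabled in the BIOS and on the host kernel.
    # Find the SAS HBA's PCI address (0000:03:00.0 below is just a placeholder):
    lspci -nn | grep -i -e sas -e lsi

    # Detach the device from the host so it can be assigned to the guest:
    virsh nodedev-detach pci_0000_03_00_0

    # Describe the device and attach it to the (hypothetical) FreeBSD VM:
    cat > hba.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device freebsd-storage hba.xml --persistent

On ESXi the equivalent would be DirectPath I/O, configured through the vSphere client rather than on the command line.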

Recommendations? I have yet to choose which virtualization platform to use, but this should not have much of an impact.

dst

1 Answer


Don't bother with using ZFS in this single-server hypervisor setup. Nothing to gain... lots to lose in terms of supportability and flexibility.

This is not to say it cannot be done. I've built and managed such solutions:

See: Hosting a ZFS server as a virtual guest

(The key is to use a small hardware RAID volume to protect the small footprint of the ZFS host's VM, then pass a PCIe storage adapter through to that VM to house the data disks. Share everything back to the host via NFS.)
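
As a rough sketch of that layout (pool, dataset and addresses are hypothetical; on ESXi you would add the export as an NFS datastore instead of running the mount command shown):

    # --- Inside the FreeBSD storage guest, using the passed-through disks ---
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    zfs create tank/vmstore

    # Export the dataset over NFS back to the hypervisor network:
    zfs set sharenfs="-maproot=root -network=192.168.10.0/24" tank/vmstore
    sysrc rpcbind_enable=YES mountd_enable=YES nfs_server_enable=YES
    service rpcbind start
    service mountd start
    service nfsd start

    # --- On a Linux hypervisor host (ESXi: add 192.168.10.5:/tank/vmstore as an NFS datastore) ---
    mount -t nfs 192.168.10.5:/tank/vmstore /var/lib/libvirt/images/vmstore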

Why do you think you need ZFS in this case? Use a proper hardware RAID controller with a battery- or flash-backed cache, set up RAID 1+0, and go from there... (this assumes the use of VMware ESXi)...

Your options change a bit if you're looking at Linux + KVM hypervisor + ZFS... but even then, just using ZFS doesn't mean you'll have a fast or well-engineered setup. There are caching and pool-layout designs that would need to be considered...
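
For a flavour of what I mean by caching and pool layout, a VM-oriented pool would typically be striped mirrors plus SSDs for the intent log and read cache, roughly like this sketch (all device names are placeholders):

    # Striped mirrors are generally preferred over RAIDZ for VM storage
    # because of the random-I/O workload:
    zpool create vmpool \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5

    # Mirrored SLOG to absorb sync writes (NFS and ESXi issue a lot of them),
    # plus an L2ARC device for read caching, both on SSDs:
    zpool add vmpool log mirror ada0 ada1
    zpool add vmpool cache ada2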

Edit:

ZFS isn't really as flexible about adding disks as you may think. A hardware RAID controller can expand an array the same way and will rebalance your data for you. With ZFS, adding disks doesn't mean the existing data is spread across the array. Also, certain ZFS RAID levels can't be expanded: RAIDZ1/Z2/Z3 vdevs can't be grown by adding disks to an existing RAIDZ set; they scale through the addition of similarly-sized RAIDZ sets.
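
To illustrate the expansion limitation (hypothetical device names):

    # Original pool: a single 6-disk RAIDZ2 vdev
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # This does NOT grow the RAIDZ2 vdev by one disk -- zpool refuses the
    # mismatched vdev type unless you force it with -f:
    zpool add tank da6

    # The supported way to grow the pool is to add another similarly-sized
    # RAIDZ2 vdev; existing data is not rebalanced onto it, only new writes
    # are spread across both vdevs:
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11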

While ZFS is great and a magical unicorn for many solutions, running a hypervisor is not the best use-case.

ewwhite
  • Basically, I was interested in the quotas, mirroring (hence relocation) and the flexibility of adding more disks. In addition, I saw it as a chance to learn ZFS, which is why I'm thinking this through before doing anything. – dst Sep 03 '13 at 19:39
  • @dst See my edits above. – ewwhite Sep 04 '13 at 12:19
  • For the record, probably the most common use case for NexentaStor appliances is as the backing storage for virtualization (all hypervisors: VMware, Xen, KVM, etc.). That may seem to argue against the advice here, but it doesn't: I agree with @ewwhite. Reason: in nearly all of those solutions, NexentaStor runs on separate hardware dedicated to its use, and separate boxes running hypervisors with VMs on top talk to it. Nesting ZFS within the virtualization stack, as opposed to under it, is fraught with gotchas, performance oddities and risk. It is /not/ impossible, but it is not turnkey easy. – Nex7 Sep 10 '13 at 23:56