
I'm considering virtualizing a number of guests onto a single server running a recent port of KVM to Illumos. It sounds like my two primary options will be OpenIndiana and SmartOS. The distribution I will end up using needs to meet the following requirements:

  • Need to be able to manage and customize via CLI (e.g. change ZFS filesystem/zvol options, attach an external drive and copy data to it, or automatically replicate data to an offsite server using zfs send/receive).
  • Need to implement automated ZFS snapshots (e.g. using zfs-auto-snapshot).
  • Need to be able to set up automatic email notifications if the health of the server degrades. Essentially: set up periodic ZFS scrubbing; monitor zpool status, the fault manager, and/or SMART; and send email when problems are detected. Manually setting this up is OK, as long as the OS lets me.
  • Should handle Debian, Ubuntu, and Windows 2008 Server R2 guests with good stability and reasonable performance. These guests will be used in production.
  • There should be a reasonable expectation that future releases will continue to be delivered - I don't want to get stuck using a dead-end product.
  • Would be nice if it was easy to setup and has some sort of GUI, but this is optional.
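The snapshot and replication requirements above come down to plain shell on any Illumos distribution. A minimal sketch, where the dataset names (`tank/vm`, `backup/vm`) and the host `offsite` are placeholders to substitute with your own:

```shell
#!/bin/sh
# Sketch of a timestamped snapshot plus zfs send/receive replication.
# Dataset names and the "offsite" host are placeholders, not real defaults.

# Build the snapshot name (pure string work; zfs-auto-snapshot uses a
# similar timestamped naming convention).
snap_name() {
    printf '%s@auto-%s' "$1" "$(date +%Y-%m-%d-%H%M)"
}

replicate() {
    ds=$1; prev=$2
    snap=$(snap_name "$ds")          # e.g. tank/vm@auto-2011-12-03-0200
    zfs snapshot "$snap"
    if [ -n "$prev" ]; then
        # Incremental: ship only the delta since the previous snapshot.
        zfs send -i "$prev" "$snap" | ssh offsite zfs receive backup/vm
    else
        # First run: full stream; -F lets the receiver roll back if needed.
        zfs send "$snap" | ssh offsite zfs receive -F backup/vm
    fi
}
```

Run `replicate tank/vm` once for the initial copy, then `replicate tank/vm tank/vm@auto-<last>` from cron for the incremental deltas.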

Based on these requirements, which distribution would you recommend?

You can assume that this environment won't be deployed until the upcoming OpenIndiana stable release is released. Also, you can assume the server will use a Sandy Bridge Xeon E3-1xxx CPU, so that should take care of KVM compatibility.

Also, how robust/stable is the KVM port to Illumos on either of these distributions? Should I even consider KVM/Illumos for a production environment for now?

Stefan Lasiewski
Alex

2 Answers


I'll ask, how important is it that you specifically use KVM?

My preference for the type of solution you're inquiring about is to build around VMware ESXi. You can build an all-in-one server running VMware ESXi booting from flash media (SDHC, USB, CF) and leverage the DirectPath I/O (PCI passthrough) available on current servers to present a SAS/SATA HBA to a virtualized ZFS-based OS (let's assume OpenIndiana, but I usually use NexentaStor Community Edition). From there, you can create a loopback vSwitch and present your ZFS storage back to ESXi as 10GbE NFS or iSCSI to house the guest virtual machines (Windows, Linux, etc.).

  • Using this, you have full access to ZFS features like compression, deduplication and snapshots. You can augment this setup with a ZIL and L2ARC quite easily.

  • If you choose NexentaStor for your ZFS solution, you'll also have a full GUI to manage autosnapshots/tiering. The monitoring tools for the disks are also built-in.

  • VMware handles a number of guest types very well, so you're well covered.

  • Nexenta, OpenIndiana and VMware are here to stay, so this isn't a poor technology decision.

  • Provided you have the budget for the hardware, ESXi, the ZFS OS and Linux are all free...
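The loopback-vSwitch arrangement above boils down to exporting a dataset from the ZFS guest over NFS and mounting it in ESXi as a datastore. A hedged sketch from inside the ZFS VM, where the pool name `tank` and the ESXi storage IP `10.0.0.1` are assumptions:

```shell
#!/bin/sh
# Build the sharenfs property value for a single NFS client (pure string work).
sharenfs_opts() {
    printf 'rw=@%s/32,root=@%s/32' "$1" "$1"
}

# Typical commands inside the ZFS guest (they need a real pool, so they are
# shown commented out here):
# zfs create tank/vmstore
# zfs set compression=on tank/vmstore
# zfs set sharenfs="$(sharenfs_opts 10.0.0.1)" tank/vmstore
```

ESXi then mounts `tank/vmstore` as an NFS datastore over the internal vSwitch, so storage traffic never touches a physical NIC.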

Also see:

http://blog.laspina.ca/ubiquitous/encapsulating-vt-d-accelerated-zfs-storage-within-esxi

http://www.napp-it.org/napp-it/all-in-one/index_en.html

ewwhite
    Thanks. I'm definitely open to other virtualization technologies. I've thought about an approach similar to the one you suggested, but felt uneasy about it since it seemed like it may not work reliably as it's more complex. For example, when ESXi reboots, it's not going to see that NFS/iSCSI storage because the ZFS VM hasn't booted yet. So it seems I'd have to fiddle around with it every time it boots: wait for the ZFS VM to boot, then instruct VMware to attach that storage, then manually boot the other VMs - am I wrong? Also, does VMware let you clone volumes natively via ZFS? – Alex Dec 03 '11 at 01:59
  • VMware allows you to set boot priority. So in this case, the ZFS VM boots first and shuts down last. There's no manual fiddling involved. The VMs boot once the storage is in place. From the ZFS perspective, the disks are pass-through, so you can even remove them and move to a different server and expect the same result. It's fairly portable. As for VMware cloning, I don't use it, but you're better off doing it either at the VMware VM level, or doing it from the ZFS/datastore level. – ewwhite Dec 03 '11 at 02:12
  • 2
    Keep in mind that in the free version of ESXi 5, you are capped to 32GB of RAM. – Jed Daniels Mar 20 '12 at 22:00

I've been using SmartOS and KVM in production for a few months now and am very happy with it. It sounds like it should suit your needs just fine. All the ZFS features you require are supported. For the monitoring, though, you'll need to set up some third-party tooling.
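For that monitoring gap, the usual approach on an Illumos system is a cron job wrapped around `zpool status -x` (which prints "all pools are healthy" when nothing is wrong) and `fmadm faulty`. A sketch, where the recipient address and the use of mailx are assumptions to adapt to your environment:

```shell
#!/bin/sh
# Cron-driven health check sketch. admin@example.com and mailx are
# placeholders; wire in whatever mailer you actually use.

# Pure helper: compare against zpool's all-clear message.
pool_unhealthy() {
    [ "$1" != "all pools are healthy" ]
}

check_and_mail() {
    status=$(zpool status -x)
    if pool_unhealthy "$status"; then
        printf '%s\n' "$status" | mailx -s "zpool problem on $(hostname)" admin@example.com
    fi
    # fmadm faulty prints nothing when no faults are outstanding.
    faults=$(fmadm faulty)
    if [ -n "$faults" ]; then
        printf '%s\n' "$faults" | mailx -s "FMA fault on $(hostname)" admin@example.com
    fi
}
```

Pair it with a periodic `zpool scrub` cron entry so there is actually fresh data to alarm on.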

I'm working on a couple of projects related to monitoring, specifically for doing the things you mentioned. Check them out and feel free to drop me a line.

https://github.com/gflarity/nervous
https://github.com/gflarity/response

gflarity