Can I virtualize Windows and Linux using VirtualBox on Solaris with ZFS zvols?

I am considering building a system to virtualise Windows Server 2016 and CentOS 6 using VirtualBox on Solaris (for home use), in order to take advantage of ZFS's reliability.

I was planning on getting a dual-processor workstation/server board with 1 TB of RAM, plus a stack of WD Red drives.

I am also interested in allowing Windows VMs access to GPU resources. Is this possible in this situation?

Is it possible to host a Windows/Linux VM that uses a ZFS zpool or vdev instead of going directly to the hardware for file system access?

I really only need Windows and Linux OSes, but was considering a Solaris host solely for the benefits of ZFS and its compatibility with VirtualBox.

Is there a better way of doing this, or have I picked the best option?

Whichever approach you suggest, whether this one or a better alternative, what are the gotchas involved?

I have a limited budget, and I would prefer spending my money on hardware rather than software if there is a free software option that will work.

My other option was to add a hardware SAS adapter with RAID 6 and use Windows Server 2016 as a host for VirtualBox and the Linux and any other VMs, but NTFS isn't as reliable as ZFS...

EDIT

My goals are:

  1. Have one physical machine.

  2. Minimize the potential for data loss as the result of hard drive failures and other file system problems.

  3. Run Windows Server 2016 plus some applications like Exchange and SQL Server. A GPU is required here.

  4. Run a modified CentOS system (FreePBX).

  5. Run some other virtual machines, preferably also with GPU support.

  6. Supplement and eventually replace a Synology RS812+ box.

  7. Minimize expenditure on software, allowing more/better hardware for my budget.

I am in the planning phase, so I can consider anything at this stage.

My thinking in having a Solaris host was that the entire file system would be ZFS, and hence better protected from failures than the VM guests might otherwise allow - unless I have misunderstood something somewhere. The alternatives would seem to result in at least some of the file system being non-ZFS, with lower reliability.

Monty Wild

Posted 2017-01-16T00:46:40.250

Reputation: 141

Answers

I am also interested in allowing Windows VMs access to GPU resources. Is this possible in this situation?

To directly passthrough a PCIe graphics card, you will need:

  • two PCIe graphics cards (one if certain tricks are used, like moving the card by scripts from the hypervisor system to the guest system on bootup)
  • a mainboard and CPU that support Intel VT-d or AMD-Vi (AMD's IOMMU implementation)
  • hypervisor software that supports it

Unfortunately, it seems that VirtualBox does not currently support it. If this is a hard requirement, you may need to use KVM on Linux or illumos, VMware ESXi or Microsoft Hyper-V, all of which support it (although with differing amounts of configuration work).
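
If you go the Linux+KVM route, it is worth confirming that the IOMMU is actually active before buying a second card. A rough sketch (the kernel parameter depends on your CPU: intel_iommu=on for Intel VT-d, amd_iommu=on for AMD-Vi, set on the kernel command line and followed by a reboot):

# dmesg | grep -e DMAR -e IOMMU
# ls /sys/kernel/iommu_groups
# virt-host-validate

The first command should report the IOMMU as enabled, the second should list at least one IOMMU group, and the third (part of libvirt, if installed) runs its own set of passthrough-related checks.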

Is it possible to host a Windows/Linux VM that uses a zfs zpool or vdev instead of going direct to the hardware for file system access?

Yes, it is possible. Here are the relevant commands, taken from Johannes Schlüter's blog post:

# zfs create -V 10G tank/some_name
# chown your_user /dev/zvol/rdsk/tank/some_name
# VBoxManage internalcommands createrawvmdk \
  -filename /home/your_user/VBoxdisks/some_name.vmdk \
  -rawdisk /dev/zvol/rdsk/tank/some_name
# VBoxManage registerimage disk /home/your_user/VBoxdisks/some_name.vmdk
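
To actually boot from the zvol, the registered VMDK is attached to the VM's storage controller like any other disk image. A minimal sketch, assuming a VM named "WinServer2016" with a controller named "SATA Controller" (both placeholders for whatever your VM uses); note that on newer VirtualBox versions the separate registerimage step may no longer be needed, as storageattach registers the medium itself:

# VBoxManage storageattach WinServer2016 \
  --storagectl "SATA Controller" \
  --port 0 --device 0 --type hdd \
  --medium /home/your_user/VBoxdisks/some_name.vmdk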

Alternatively you could use COMSTAR to serve the zvol over iSCSI.
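
A rough sketch of that route on Solaris/illumos, assuming the COMSTAR/iSCSI target packages are installed (pool and zvol names are examples, and the GUID passed to add-view is the logical unit name printed by create-lu):

# zfs create -V 10G tank/iscsi_vol
# svcadm enable stmf
# stmfadm create-lu /dev/zvol/rdsk/tank/iscsi_vol
# stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target

Any iSCSI initiator can then connect to the target like a normal LUN (VirtualBox also has a built-in iSCSI initiator that can be attached via VBoxManage storageattach).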

While this has only slight additional overhead and no direct advantage in the local case, you may profit from it when you want to spread out and for example add another (redundant) storage server, or when moving the storage to a separate box.

In your specific case I would not do this, but the option exists (also with NFS instead of iSCSI, although when using zvols rather than file systems there is no immediate advantage if both are properly configured).

Is there a better way of doing this, or have I picked the best option?

  • If you want to use VirtualBox, this is what I would do.
  • If you are flexible with regards to the hypervisor, you may have a look at SmartOS (ZFS, Zones and KVM in a small almost stateless server operating system built especially for hosting virtual machines)
  • If you require PCIe passthrough for graphics cards, you may need to use Linux+KVM, ESXi or Hyper-V as the hypervisor, run Solaris/illumos as a virtualized storage VM, pass the disks through to it, and serve them back over NFS or iSCSI to the hypervisor, where the storage is then used normally (a minimal NFS sketch follows below). This is also known as an All-in-One (AiO) storage appliance; I suggest reading about the concept in gea's excellent manual (see linked PDF at the top). It sounds complicated, but once set up it is surprisingly simple and flexible: you can spread it out from the virtual network to the physical network at any time, you can replace hardware as usual, and the whole approach is layered. It has some downsides, but I will only go into them if you are interested, as they are quite niche.
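
For the NFS variant of that all-in-one loop-back, the storage VM side is little more than a shared dataset; a minimal sketch on Solaris/illumos (pool and dataset names are examples):

# zfs create tank/vmstore
# zfs set sharenfs=on tank/vmstore

The hypervisor then mounts this export over the internal virtual network as its VM datastore (an NFS datastore in ESXi, a plain NFS mount under Linux/KVM) and places the guest disk images on it.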

Regarding your edit:

  1. Have one physical machine.
  2. Minimize the potential for data loss as the result of hard drive failures and other file system problems.
  3. Run Windows Server 2016 plus some applications like Exchange and SQL Server. A GPU is required here.
  4. Run a modified CentOS system (FreePBX).
  5. Run some other virtual machines, preferably also with GPU support.
  6. Supplement and eventually replace a Synology RS812+ box.
  7. Minimize expenditure on software, allowing more/better hardware for my budget.

In broad terms, you have two possible All-in-One setup options - storage itself virtualized (as in the napp-it readme I've linked) or storage on the hypervisor. I will call them A and B to compare them against your points.

  1. A and B are equal, because both are on the same physical machine.
  2. A and B are almost equal, because both systems can use ZFS (a redundant pool layout is sketched after this list). With A, you are free to choose your storage OS (Solaris, illumos, Linux, BSD); with B, you have to choose something that supports both ZFS and PCIe passthrough for VGA (currently only Linux and FreeBSD). This also affects your choice of hypervisor (ESXi, Hyper-V or KVM with A; only KVM with B).
  3. A and B are equal. Note, however, that a single GPU can only be passed through to a single running VM, which occupies it completely; switching GPUs requires shutting down the affected VMs. If you require shared GPU support, your options are limited: Nvidia Tesla/Grid (very expensive) or the new Intel Skylake shared GPUs (not very powerful, still experimental, see the KVMGT presentation).
  4. No problem in either case, as a virtual VGA card is sufficient.
  5. See point 3; depending on the number of VMs, it may be OK to buy multiple cards or a single Grid card, or to wait until sharing is implemented properly for all cards in KVM.
  6. Both cases support the use of iSCSI and NFS for internal and external (meaning real network) use; it depends on your preference for administration. Also, both can use storage from the NAS/SAN (judging from the datasheet; I don't have the system myself).
  7. Regardless of your choice, all software can be run without any license cost, except of course Windows Server plus the needed CALs (which could be replaced with Samba 4) and Solaris (which can be replaced by an illumos distribution like OmniOS, OpenIndiana or SmartOS). The Grid solution may have additional license costs; I did not look into it because the hardware itself is so expensive that it is not useful for these cases.
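
Regarding point 2, the redundancy itself is just a matter of pool layout; a minimal sketch for a six-disk double-parity pool with periodic scrubbing (the cXtYdZ device names are placeholders for your actual disks):

# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zpool scrub tank
# zpool status tank

Combined with regular snapshots (and zfs send/receive to a second pool if you ever add one), this covers both whole-disk failures and silent corruption.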

So, it largely comes down to preference:

  • If you are comfortable with Linux administration (including ZoL) and KVM setup (which can be a bit tricky depending on hardware and distribution), you can bypass the additional storage VM and the need for a small SSD/HDD by going for solution B.
  • If, on the other hand, you want to choose from the full range of options and use the best system for every case, you might profit from the flexibility of solution A (although slight internal-network overhead may occur here).

user121391

Posted 2017-01-16T00:46:40.250

Reputation: 1 228

Have a look at the edits to my question. – Monty Wild – 2017-01-16T11:16:34.740

@MontyWild see my updated answer – user121391 – 2017-01-16T13:23:46.423

If I used the zfs-host option, could I use the motherboard VGA for the host, and a desktop GPU for one guest? – Monty Wild – 2017-01-17T06:16:52.997

@MontyWild Yes, that is the normal way and it works as you would expect (it actually works in both cases because the storage VM does not need a real VGA in any case, only the hypervisor does). If using KVM, you could even reattach the VGA to a second VM after boot (with the downside that the host system dom0 can only be managed remotely via SSH or serial console). – user121391 – 2017-01-17T08:33:55.060

My intention is to attach to the VMs and the host by RDP, HTTP or SSH most of the time. This is more a glorified NAS and application server than a desktop host, and a GPU would be used more for things like video transcoding than for desktop or gaming applications. It'll spend most of its time sitting unattended in my rack case. As to my last comment, I meant that I might use Solaris with ZFS and a hypervisor as the host and let it use the motherboard VGA, and run a Windows Server guest and let that use the GPU - if I can find a suitable hypervisor that will run on Solaris. – Monty Wild – 2017-01-17T14:59:25.580

@MontyWild Yes, that combination is the one thing you will not get (at least not at this point in time), unfortunately. See https://forums.servethehome.com/index.php?threads/smartos-kvm-pci-passthrough.10301/#post-97074 So you have two options (if you still need PCIe passthrough): use Linux+ZoL (it has gotten better over time) or use a separate Solaris storage VM (my personal choice, but both are valid). – user121391 – 2017-01-17T15:43:27.303