I am considering building a system to virtualise Windows Server 2016 and CentOS 6 using VirtualBox on Solaris (for home use), in order to take advantage of ZFS's reliability.
I was planning on getting a dual-processor (DP) workstation/server board with 1 TB of RAM, plus a stack of WD Red drives.
I am also interested in allowing Windows VMs access to GPU resources. Is this possible in this situation?
Is it possible to host a Windows or Linux VM that uses a ZFS zpool or vdev for its storage, instead of going directly to the hardware for file system access?
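To illustrate what I mean, the sketch below is roughly what I was picturing, assuming it is possible at all: a zvol carved out of the pool and handed to VirtualBox as a raw disk, so the guest's storage sits on ZFS rather than on a plain image file. The pool name (tank), volume name, file paths and controller name ("SATA") are just placeholders.

    # Carve a 100 GB zvol out of the pool to back the guest's virtual disk
    zfs create -V 100G tank/winserver-disk0

    # Wrap the raw zvol device in a VMDK descriptor that VirtualBox can use
    # (on Solaris the zvol device appears under /dev/zvol/rdsk/<pool>/<volume>)
    VBoxManage internalcommands createrawvmdk \
        -filename /vmconfig/winserver-disk0.vmdk \
        -rawdisk /dev/zvol/rdsk/tank/winserver-disk0

    # Attach the wrapped zvol to the VM's storage controller
    VBoxManage storageattach winserver --storagectl "SATA" \
        --port 0 --device 0 --type hdd --medium /vmconfig/winserver-disk0.vmdk

If that is not workable, even just keeping ordinary VDI files on a ZFS dataset would presumably give me checksumming underneath the guests, but the zvol route seemed cleaner.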
I really only need Windows and Linux OSs, but was considering a Solaris host solely for the benefits of ZFS and its compatibility with VirtualBox.
Is there a better way of doing this, or have I picked the best option?
Whichever method you suggest, be it this one or a better alternative, what are the gotchas involved?
I have a limited budget, and I would prefer spending my money on hardware rather than software if there is a free software option that will work.
My other option was to add a hardware SAS adapter with RAID 6 and use Windows Server 2016 as a host for VirtualBox and the Linux and any other VMs, but NTFS isn't as reliable as ZFS...
EDIT
My goals are:
Have one physical machine.
Minimize the potential for data loss as the result of hard drive failures and other file system problems.
Run a Windows Server 2016 OS plus some applications such as Exchange and SQL Server; GPU support is required here.
Run a modified CentOS system (FreePBX).
Run some other virtual machines, preferably also with GPU support.
Supplement and eventually replace a Synology RS812+ box.
Minimize expenditure on software, allowing more/better hardware for my budget.
I am in the planning phase, so I can consider anything at this stage.
My thinking in having a Solaris host was that the entire file system would be ZFS, and hence better protected from failures than the VM guests might otherwise allow - unless I have misunderstood something somewhere. The alternatives would seem to result in at least some of the file system being non-ZFS, with lower reliability.
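As a concrete (hypothetical) example of the layout I was picturing on the Solaris host, with placeholder pool and device names and double-parity raidz2 across the WD Reds:

    # Double-parity pool across six drives (device names are examples only)
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # A compressed, checksummed dataset to hold the VM storage
    zfs create -o compression=on tank/vm

    # Periodic scrubs to surface latent disk errors before they become data loss
    zpool scrub tank
    zpool status -v tank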
Have a look at the edits to my question. – Monty Wild – 2017-01-16T11:16:34.740
@MontyWild see my updated answer – user121391 – 2017-01-16T13:23:46.423
If I used the ZFS-host option, could I use the motherboard VGA for the host, and a desktop GPU for one guest? – Monty Wild – 2017-01-17T06:16:52.997
@MontyWild Yes, that is the normal way and it works as you would expect (it actually works in both cases because the storage VM does not need a real VGA in any case, only the hypervisor does). If using KVM, you could even reattach the VGA to a second VM after boot (with the downside that the host system dom0 can only be managed remotely via SSH or serial console). – user121391 – 2017-01-17T08:33:55.060
My intention is to attach to the VMs and the host by RDP, HTTP or SSH most of the time. This is more a glorified NAS and application server than a desktop host, and a GPU would be used more for things like video transcoding than for desktop or gaming applications. It'll spend most of its time sitting unattended in my rack case. As to my last comment, I meant that I might use Solaris with ZFS and a hypervisor as the host and let it use the motherboard VGA, and run a Windows Server guest and let that use the GPU - if I can find a suitable hypervisor that will run on Solaris. – Monty Wild – 2017-01-17T14:59:25.580
@MontyWild Yes, that combination is the one thing you will not get (at least not at this point in time), unfortunately. See https://forums.servethehome.com/index.php?threads/smartos-kvm-pci-passthrough.10301/#post-97074 So you have two options (if you still need PCIe passthrough): use Linux+ZoL (it has gotten better over time) or use a separate Solaris storage VM (my personal choice, but both are valid). – user121391 – 2017-01-17T15:43:27.303