This is ultimately a question about how to optimize a physical-to-virtual machine move. I've found a number of posts that are somewhat related, but nothing (yet?) that addresses it directly. Feel free to point and blame if I didn't search well enough... ;-)
My physical hardware consists of a half-dozen 2TB storage drives, an 8-core/32GB Supermicro Xeon board, and a couple of reasonably fast SATA SSDs.
At the moment I have one physical machine that acts as a dedicated file server, running Ubuntu with ext4 over mdadm in RAID 5 mode. This is really the only "important" machine in my setup: it holds my local home backups and also doubles as a compute server for transcoding videos and the like. I also have a couple of other small Linux machines that I use for random, mostly unimportant stuff; I rebuild these often, and if they break I don't really care, since the important data is on the file server. Finally, I have two older Windows machines that I'd love to virtualize, partly to save space and power, but just as much to give them access to much higher-performance hardware.
I'd like to move the file server over to a more recent Linux distro and also move the bulk of the storage over to a ZFS-based filesystem. I'm familiar with ESXi from a work environment, but there I don't manage the storage, so that part is a little opaque to me. I'm trying to figure out how to handle the storage in this setup. I can see at least these options:
- Run Linux on the bare metal. Set that machine up as the file server, with ZFS across the physical disks just as it would be on any other machine, then run KVM on that same machine and host the VMs from there (sketched first below this list).
- Run Linux on the bare metal, run KVM there, and use this instance as basically just a hypervisor. From there, build a VM that runs the file server, plus VMs for all the other machines. In this case I need some help figuring out how I'd expose the storage to the ZFS/file-server VM. Do I just pass all of these disks through to the ZFS server (sketched second below this list)? This would preclude any other VM from using that space directly; they'd have to reach it over the network via the file-server VM instead.
- Run ESXi on the bare metal, and launch one VM to be the file server and others for the remaining machines. Here, mostly the same questions apply about how I'd expose the storage to the file server. Is performance likely to be better on ESXi than on KVM? What about expanding the storage at some future time? I'm sure there's other stuff I'm not considering... but what is that other stuff?
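For concreteness, here's a rough sketch of what the first option might look like. The pool name, disk IDs, and VM details are placeholders for illustration, not my actual setup:

```
# Build the pool on the host across the six 2TB drives (whole disks,
# referenced by-id so device reordering doesn't matter). raidz2 here
# is illustrative; raidz1 would match the current RAID 5 layout.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Carve a zvol (block device) out of the pool to back one guest disk:
zfs create -V 40G tank/vm-win1

# Hand the zvol to a KVM guest as its virtual disk:
virt-install --name win1 --memory 8192 --vcpus 4 \
    --disk path=/dev/zvol/tank/vm-win1,bus=virtio \
    --cdrom /path/to/windows.iso
```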
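And for the second option, passing whole physical drives through to the file-server guest so ZFS inside that VM sees raw disks might look roughly like this (again, the VM name and device paths are placeholders):

```
# Attach each whole drive to the file-server guest as a virtio disk.
# --persistent writes the change into the domain definition so it
# survives guest restarts.
virsh attach-disk fileserver /dev/disk/by-id/ata-DISK1 vdb \
    --targetbus virtio --persistent
virsh attach-disk fileserver /dev/disk/by-id/ata-DISK2 vdc \
    --targetbus virtio --persistent
# ...and so on for the remaining drives.
```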
The only way I would run something like Samba within a virtual machine is if I were running a Type 1 hypervisor and had configured volumes on a SAN; otherwise the benefits of any other configuration wouldn't outweigh the risk of some problem making the files on the file share inaccessible. – Ramhound – 2017-09-20T22:03:30.153
One thing to keep in mind is that ZFS should have direct access to entire drives for optimal performance and reliability. That leaves you with either running ZFS on the host or passing whole drives through. I generally suggest running ZFS on the host; from there you can run KVM, or look at container solutions like LXD, which reduce overhead compared to traditional full VMs (see the sketch below). Unfortunately, this whole thing is rather opinion-based (and therefore a poor fit for Q&A), but I'd be happy to discuss further in chat.
– Bob – 2017-09-21T07:29:27.760
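A rough sketch of the host-ZFS-plus-LXD layout Bob describes, assuming a pool named tank already exists on the host (the pool, dataset, and container names are hypothetical):

```
# Give LXD its own dataset on the existing pool, then point an LXD
# storage pool at it; containers created there live on ZFS directly.
zfs create tank/lxd
lxc storage create default zfs source=tank/lxd

# Launch a container on that storage pool:
lxc launch ubuntu:16.04 files -s default
```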