7

We are moving our virtualization platform from Citrix's XenServer to Hyper-V on Windows Server 2008 R2. As part of this project I need to migrate, in one form or another, some Debian Linux servers over to Hyper-V. I have successfully built a Debian-based server on our new Hyper-V platform and I'm beginning to test it.

Debian 6 (Squeeze) uses the 2.6.32 kernel, which includes the Hyper-V synthetic drivers, but it is not considered a supported guest operating system by Microsoft. I'm a little hesitant to use them unless there's a compelling reason to, as other folks have had trouble (here, and here).

  • What advantages do the Hyper-V synthetic drivers offer over the emulated drivers?
  • For those of you who have experience with the Xen hypervisor, is using the synthetic drivers analogous to para-virtualizing a guest operating system?
  • Are there any noteworthy dangers or drawbacks to NOT using the synthetic drivers?

Why should I bother to either a) deal with the reported instability of the Hyper-V drivers currently in the kernel, b) try to build a newer kernel, or c) try to make the Virtual Machine Additions work with a distribution they weren't designed for when everything seems to "just work"?

EDIT: To add a little to the answers... Clock drift seems to be a significant issue (as in so bad that NTP can't keep the clock in time) unless you are using the Linux Integration Services. See KB918461. Apparently using the vmbus components included in the Linux Integration Services resolves this. My testing bears this out as a problem.

3 Answers

7

The synthetic drivers 'talk' more directly to the actual hardware, bypassing most of the hypervisor (for common data operations). This dramatically cuts down on the hypervisor overhead related to most network activity.

If your server doesn't communicate much on the network, or if your hardware is well undercommitted, you should be fine with the emulated drivers. There's definitely a performance penalty for doing this, however.

Chris S
    Chris, a few of your facts don't match. I'm curious why you think what you think. Adding an emulated NIC to a virtual switch doesn't really affect any other port of the switch. Furthermore, synthetic drivers don't talk directly to the hardware, they send messages to drivers running in the Hyper-V management OS, which talk to hardware. On the whole, though, your point about performance is correct. – Jake Oshins Feb 17 '11 at 17:37
  • @Jake, on the drivers, I was over-simplifying; you are correct that even the synthetic drivers have to go through the hypervisor. On the mixing of emulated and synthetic drivers, I heard that a few times (though I'm having a hard time finding a reference on the web). It's possible I've been told wrong, or that it applies to an older version of Hyper-V. I'll remove it from my answer until I can find a reference. – Chris S Feb 17 '11 at 18:09
  • so they function in a similar manner to Xen's para-virtualized guest operating system; they allow privileged "streamlined" access to the Hypervisor (Ring -1 in Hyper-V and Ring 0 in Xen) instead of executing the same tasks at the level of abstraction that the virtual machine/guest operating system lives in. –  Feb 17 '11 at 18:48
    It's something like that. (Oversimplifying again) Synthetic: Guest hands packets to the hypervisor, which runs them through the NIC driver and out the wire. Emulation: Guest uses PCI-type commands to manipulate a fake NIC; the hypervisor interprets that to figure out what the guest wants, runs it through the NIC driver, and out the wire. – Chris S Feb 17 '11 at 18:52
  • Yes. Chris's last answer is right on. The only thing that I would add is to point out that when the guest OS thinks it's manipulating a real PCI device, sending a packet can involve at least several hypervisor traps. Sending a whole list of packets through the synthetic/paravirtualized stack just involves updating some pointers, and, if the system has gone idle, a lightweight signal. Overall, the total cost of emulation is hundreds of times that of a synthetic driver. This may or may not matter. If your workload does little networking, then the cost is irrelevant. – Jake Oshins Feb 18 '11 at 01:48
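The exchange in the comments above can be summarized with a toy cost model. The per-operation costs below are invented purely for illustration; only their relative magnitudes matter, but they show why per-packet trapping (emulation) scales so much worse than batched ring updates (synthetic):

```python
# Toy model of the two I/O paths discussed above. Costs are in arbitrary
# units; the numbers are made up, chosen only so the ratio lands in the
# "hundreds of times" range mentioned in the comments.

def emulated_cost(packets, traps_per_packet=4, trap_cost=100):
    # Every fake-PCI register poke traps into the hypervisor,
    # and each packet needs several such pokes.
    return packets * traps_per_packet * trap_cost

def synthetic_cost(packets, ring_update_cost=1, signal_cost=100):
    # A whole batch of packets is queued with cheap pointer updates,
    # plus at most one lightweight signal if the host had gone idle.
    return packets * ring_update_cost + signal_cost

batch = 64
print(emulated_cost(batch))   # 25600 units
print(synthetic_cost(batch))  # 164 units, roughly 150x cheaper
```

The fixed signal cost also explains why emulation hurts most on chatty, small-packet workloads and barely matters on idle guests.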
2

When your hypervisor is emulating hardware, there are lots of registers, timing constraints, and other details that the guest's driver expects to manipulate when it is doing things like putting packets into the NIC's buffer or writing data to a block on a disk drive.

When you use the synthetic driver, you skip all the "fiddle with this register (that's emulated by the hypervisor anyhow)" and skip straight to the "here's the data -- do the right thing with it" stage.

So the whole process is far more efficient.

chris
0

I don't have a full answer for you, but some experience that might help round out the discussion. We initially used the emulated drivers on our Red Hat machines, but the Linux admin complained that network performance was abysmal. Eventually we got the synthetic drivers working via the Virtual Machine Additions and that made a big difference (I don't have proof or details, so take that with a grain of salt).

Separately, we sometimes image VMs over the network, and when we do that we must use the emulated NIC on a Windows box because the synthetic NIC doesn't support PXE booting. Once the imaging is complete, we replace the emulated NIC with a synthetic one. Again, I'm talking about Windows here (not Linux) but it's another difference.

In general, my understanding is that the emulated devices emulate older, more established, or more generic devices that pretty much every OS or distribution will have built-in support for. In this regard they are more universal. The synthetic devices don't emulate any device that your OS or distribution would recognize, and thus you need the Microsoft-provided drivers for them, which you get by installing the VM Additions.

icky3000