Hardware-assisted virtualization

In computing, hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to simulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) effectively executes in complete isolation. Hardware-assisted virtualization was added to x86 processors with Intel VT-x in 2005 and AMD-V in 2006.

Hardware-assisted virtualization is also known as accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization.

History

Hardware-assisted virtualization first appeared on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. With increasing demand for high-definition computer graphics (e.g. CAD), virtualization of mainframes lost some attention in the late 1970s, when emerging minicomputers, and later commodity microcomputers, fostered resource allocation through distributed computing.

IBM offers hardware virtualization for its POWER CPUs under AIX (e.g. System p) and for its System z mainframes. IBM refers to its specific form of hardware virtualization as logical partitioning, or more commonly as LPAR.

The increase in compute capacity per x86 server (and in particular the substantial increase in modern networks' bandwidth) rekindled interest in data-center computing based on virtualization techniques. The primary driver was the potential for server consolidation: virtualization allowed a single server to cost-efficiently consolidate the compute power of multiple underutilized dedicated servers. The most visible hallmark of this return to the roots of computing is cloud computing: data-center-based (or mainframe-like) computing delivered through high-bandwidth networks, which is closely connected to virtualization.

The initial implementation of the x86 architecture did not meet the Popek and Goldberg virtualization requirements to achieve "classical virtualization":

  • equivalence: a program running under the virtual machine monitor (VMM) should exhibit a behavior essentially identical to that demonstrated when running on an equivalent machine directly
  • resource control (also called safety): the VMM must be in complete control of the virtualized resources
  • efficiency: a statistically dominant fraction of machine instructions must be executed without VMM intervention

This made it difficult to implement a virtual machine monitor for this type of processor. Specific limitations included the inability to trap on some privileged instructions.[1]
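One classic illustration is the SIDT instruction: it is sensitive (it reveals the location of the interrupt descriptor table, which is privileged machine state) yet unprivileged, so on pre-VT-x processors it executes in user mode without faulting, and a trap-and-emulate VMM never gains control. Below is a minimal sketch for GCC or Clang on x86-64 Linux; note that recent CPUs with UMIP enabled may fault on or emulate this instruction, a protection that did not exist on the hardware described here.

    /* SIDT is sensitive (it exposes where the kernel's interrupt
     * descriptor table lives) but not privileged: on classic x86 it
     * completes at CPL 3 without trapping, so a VMM cannot intercept it. */
    #include <stdio.h>
    #include <stdint.h>

    struct __attribute__((packed)) dtr {
        uint16_t limit;
        uint64_t base;
    };

    int main(void) {
        struct dtr idtr;
        __asm__ volatile("sidt %0" : "=m"(idtr));  /* no fault, no VMM exit */
        printf("IDT base: %#llx (read from user mode)\n",
               (unsigned long long)idtr.base);
        return 0;
    }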

To compensate for these architectural limitations, designers have accomplished virtualization of the x86 architecture through two methods: full virtualization or paravirtualization.[2] Both create the illusion of physical hardware to achieve the goal of operating system independence from the hardware but present some trade-offs in performance and complexity.

  1. Full virtualization was implemented in first-generation x86 VMMs. It relies on binary translation to trap and virtualize the execution of certain sensitive, non-virtualizable instructions. With this approach, critical instructions are discovered (statically or dynamically at run-time) and replaced with traps into the VMM to be emulated in software. Binary translation can incur a large performance overhead in comparison to a virtual machine running on a natively virtualizable architecture such as the IBM System/370. VirtualBox, VMware Workstation (for 32-bit guests only) and Microsoft Virtual PC are well-known commercial implementations of full virtualization.
  2. Paravirtualization is a technique in which the hypervisor provides an API and the guest operating system is modified to call that API rather than executing privileged instructions directly, which requires changes to the guest OS (a conceptual sketch follows this list).
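The following is a conceptual sketch of a paravirtual hypercall, not any real hypervisor's ABI: the hypercall number and register convention are invented for illustration, and real interfaces differ (classic pre-VT-x Xen used a software interrupt and hypercall page, while hardware-assisted hypervisors such as KVM use VMCALL/VMMCALL).

    /* Hypothetical paravirtual guest kernel code: instead of writing CR3
     * (a privileged register) and relying on the VMM to trap the access,
     * the modified guest asks the hypervisor explicitly. The hypercall
     * number and register ABI below are invented for illustration. */
    #include <stdint.h>

    #define HC_SET_PAGE_TABLE 1UL   /* hypothetical hypercall number */

    static inline long hypercall1(unsigned long nr, unsigned long arg0) {
        long ret;
        /* VMCALL exits to the hypervisor on Intel VT-x; AMD-V uses
         * VMMCALL. Classic Xen PV used int $0x82 instead. */
        __asm__ volatile("vmcall"
                         : "=a"(ret)
                         : "a"(nr), "D"(arg0)
                         : "memory");
        return ret;
    }

    /* Switch address spaces via the hypervisor rather than MOV to CR3. */
    static long pv_set_page_table(uint64_t root_pa) {
        return hypercall1(HC_SET_PAGE_TABLE, root_pa);
    }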

In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture called Intel VT-x and AMD-V, respectively. On the Itanium architecture, hardware-assisted virtualization is known as VT-i. Software can probe for these extensions at run time via CPUID, as sketched after the list below. The first generation of x86 processors to support them was released in late 2005 and early 2006:

  • On November 13, 2005, Intel released two models of Pentium 4 (Model 662 and 672) as the first Intel processors to support VT-x.
  • On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support this technology.
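A minimal sketch of such a probe (GCC or Clang on x86): CPUID leaf 1 reports Intel VMX support in ECX bit 5, and leaf 0x80000001 reports AMD SVM in ECX bit 2. Note that firmware can still disable VT-x through the IA32_FEATURE_CONTROL MSR, so a set CPUID bit is necessary but not sufficient.

    /* Probe for the x86 hardware virtualization extensions via CPUID. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID.01H:ECX bit 5 = Intel VMX (VT-x). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT-x (VMX) supported");

        /* CPUID.80000001H:ECX bit 2 = AMD SVM (AMD-V). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("AMD-V (SVM) supported");

        return 0;
    }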

Well-known implementations of hardware-assisted x86 virtualization include VMware Workstation (for 64-bit guests only), Xen 3.x (including derivatives like Virtual Iron), Linux KVM and Microsoft Hyper-V.

Advantages

Hardware-assisted virtualization reduces the maintenance overhead of paravirtualization because it reduces (ideally, eliminates) the changes needed in the guest operating system. It also makes it considerably easier to attain good performance. Practical benefits of hardware-assisted virtualization have been cited by VMware engineers[3] and Virtual Iron.

Disadvantages

Hardware-assisted virtualization requires explicit support in the host CPU, which is not available on all x86/x86_64 processors.

A "pure" hardware-assisted virtualization approach, using entirely unmodified guest operating systems, involves many VM traps, and thus high CPU overheads, limiting scalability and the efficiency of server consolidation.[4] This performance hit can be mitigated by the use of paravirtualized drivers; the combination has been called "hybrid virtualization".[5]

In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.[6]

References

  1. Adams, Keith. "A Comparison of Software and Hardware Techniques for x86 Virtualization" (PDF). Retrieved 20 January 2013.
  2. Chris Barclay, New approach to virtualizing x86s, Network World, 20 October 2006
  3. See "Graphics and I/O virtualization".
  4. See "Hybrid Virtualization: The Next Generation of XenLinux". Archived March 20, 2009, at the Wayback Machine
  5. Jun Nakajima and Asit K. Mallick, "Hybrid-Virtualization—Enhanced Virtualization for Linux" Archived 2009-01-07 at the Wayback Machine, in Proceedings of the Linux Symposium, Ottawa, June 2007.
  6. Keith Adams and Ole Agesen, "A Comparison of Software and Hardware Techniques for x86 Virtualization", VMware, ASPLOS'06, October 21–25, 2006, San Jose, California, USA. "Surprisingly, we find that the first-generation hardware support rarely offers performance advantages over existing software techniques. We ascribe this situation to high VMM/guest transition costs and a rigid programming model that leaves little room for software flexibility in managing either the frequency or cost of these transitions."
