59

This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)?

Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)?

If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running 64-bit Ubuntu Enterprise Server. On top of that, just one virtual machine allowed to use all available resources.

Michal Illich
  • Ubuntu Enterprise Cloud is based on KVM, not Xen. – Antoine Benkemoun Apr 24 '10 at 09:15
  • Antoine, you are right – "The core virtualization strategy has always been KVM-based, although with the development of libvirt, the management of KVM and Xen hosts is unified." – I will edit out the mention of Xen. – Michal Illich Apr 24 '10 at 11:57

12 Answers

32

The typical experience for a general-purpose server workload on a bare metal / Type 1 hypervisor is around 1-5% CPU overhead and 5-10% memory overhead, with some additional overhead that varies depending on overall IO load. That is pretty much consistent in my experience for modern guest OSes running under VMware ESX/ESXi, Microsoft Hyper-V and Xen where the underlying hardware has been appropriately designed. For 64-bit server operating systems running on hardware that supports the most current CPU hardware virtualization extensions, I would expect all Type 1 hypervisors to be heading for that 1% overhead number. KVM's maturity isn't quite up to Xen (or VMware) at this point, but I see no reason to think that it would be noticeably worse than them for the example you describe.
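
If you want to see that CPU overhead from inside a KVM guest (which is what Ubuntu Enterprise Cloud uses), one rough approach is to sample the "steal" column of /proc/stat, which counts time the hypervisor spent running something other than your vCPU. A minimal sketch, assuming a Linux guest and the standard /proc/stat column order; a few percent of steal is a hint about hypervisor overhead and neighbour load, not a benchmark:

    #!/usr/bin/env python3
    """Rough sketch: what fraction of CPU time does a KVM guest lose to 'steal'?

    Run inside the guest. Assumes the usual /proc/stat column order:
    user nice system idle iowait irq softirq steal guest guest_nice.
    """
    import time

    def cpu_counters():
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]        # first line is the aggregate "cpu" row
        values = list(map(int, fields))
        total = sum(values)
        steal = values[7] if len(values) > 7 else 0  # 8th column is steal time
        return total, steal

    t0, s0 = cpu_counters()
    time.sleep(10)                                   # sample over a 10-second window
    t1, s1 = cpu_counters()

    delta_total, delta_steal = t1 - t0, s1 - s0
    print(f"steal: {100.0 * delta_steal / delta_total:.2f}% of CPU over the window")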

For specific use cases though, the overall/aggregate "performance" of a virtual environment can exceed bare metal / discrete servers. Here's an example of a discussion on how a VMware clustered implementation can be faster/better/cheaper than a bare metal Oracle RAC. VMware's memory management techniques (especially transparent page sharing) can eliminate the memory overhead almost entirely if you have enough VMs that are similar enough. The important thing in all these cases is that the performance/efficiency benefits that virtualization can deliver will only be realised if you are consolidating multiple VMs onto hosts; your example (1 VM on the host) will always be slower than bare metal to some degree.

While this is all useful, the real issues in terms of server virtualization tend to be centered on management, high availability techniques and scalability. A 2-5% CPU performance margin is not as important as being able to scale efficiently to 20, 40 or however many VMs you need on each host. You can deal with the performance hit by selecting a slightly faster CPU as your baseline, or by adding more nodes in your clusters, but if the host can't scale out the number of VMs it can run, or the environment is hard to manage or unreliable, then it's worthless from a server virtualization perspective.

Helvick
  • You use outdated tech – especially the 5% to 10% memory overhead is old hardware. The newer hardware chips have an overhead of about 2% to 3% if the hypervisor supports it – and we're talking about stuff a year old being new. AMD and Intel improved their APIs for hypervisor memory mapping by then. As you said later, they're getting pretty close to transparent (the 1% target). +1 for pointing out the real benefits. – TomTom Apr 25 '10 at 15:46
  • I based the 5-10% on what I've seen with VMware, and it is based on pre-EPT/RVI kit. It makes sense that the improved hardware-based virtual memory management in the most recent CPUs would reduce the RAM overhead. – Helvick Apr 25 '10 at 16:10
  • Concerning transparent page sharing, it sucks when you have large memory pages, which all new CPUs support. You essentially gain nothing in this case. – tony roth Jul 19 '10 at 21:45
  • @Tony that's only true if you are not overcommitted – if you are, then ESX/ESXi 4 will opt to use small pages and TPS will kick in. I haven't pushed this to the limit so I can't confirm that it really does work as advertised, but it is a sensible approach that should allow over-commit when absolutely required without sacrificing performance when it's not. See http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021095 – Helvick Jul 19 '10 at 21:58
  • @Helvick, if you run a win7 or w2k8r2 guest, TPS doesn't do much since those guests aggressively precache things. – tony roth Jul 19 '10 at 22:17
  • @tony roth, that's true, but you're probably being pedantic since that's not the problem that TPS was designed to address. The performance impact of aggressive precaching is handled by the balloon driver in your hypervisor's guest additions, not by zero page sharing through TPS. – jgoldschrafe Feb 18 '11 at 15:08
  • When we say "overhead is only 1-2%" – what about when we add 10-50 virtual machines to the one physical machine? Now we are adding 10-100% overhead. Right? – trilogy Jul 10 '19 at 19:45
  • Not necessarily. Hypervisor context switching adds the overhead, but it is only relevant if you are doing a lot of context switching. If you have a 10-core system and it is running 10 single-vCPU VMs, then there will be very little context switching. If you are running 50 busy VMs on that same box, then there will be a lot of context switching, but your bigger problem then is that you are trying to run 5 times too many VMs. I wrote that 9 years ago and the pervasive adoption of virtualization today bears out the fundamental argument – better manageability is more important than raw performance. – Helvick Jul 11 '19 at 12:20
23

"Performance" has many aspects. The n00bs measure the boot time of an OS, and say e.g. Windows 2012 is sooooooo great because it boots in 12 sec on real HD, maybe 1 sec on SSD.
But this sort of measure not very useful: performance is equal to OS boot time, but the OS boots once a month so optimizing that doesn't make much sense.

Because it's my daily business, I'd point out the following four parts which make up "performance":

  1. CPU load
    This should be comparable, meaning a task taking 1000 ms on bare metal will execute in 1000 ms of process time and probably around 1050 ms of wall-clock time in an idle VM environment on the same hardware (some details later). Search MSDN for GetProcessTimes and QueryPerformanceCounter and you can build a little tool that shows how much of your CPU time the VM eats up (see the sketch after this list).

  2. SQL performance
    SQL performance relies highly on IO to the datastore where the SQL data is stored. I've seen 300% difference between first-generation iSCSI, which you may find on a Buffalo home NAS, then iSCSI with DCE, and a real old-school FC environment, on all levels. FC still wins nowadays, because FC latency is the lowest achievable, which led to a "copy" of the FC protocol for TCP/IP datacenter enhancements. IOPS and latency are vital here, but so is IO bandwidth from the server process to the media – it depends on whether the app tends towards NoSQL or data warehousing, or sits in the middle of that, like ERP systems... Sage KHK for small enterprises, SAP for the huge ones. Both have a CEO view of enterprise financial statistics, and when the CEO hits the button he effectively grants a few days of vacation if the IO subsystem of the database has weaknesses.

  3. Filesystem Access
    Some applications, like video streaming, rely on a guaranteed minimum bandwidth; others rely on maximum IO throughput, like just opening large files in a hex editor or loading a video project into your favorite movie-making program. Not a typical situation on a VM... IOPS may also be important to developers. Developers often make use of VMs because development environments are very sensitive, so the temptation to do that in a VM is high. Compiling a large project often means reading tons of small files, doing the compiler work, and building an EXE and the accompanying components.

  4. Network latency to the client
    Here the usability of WYSIWYG programs like Word 2010, OpenOffice Writer, LaTeX, GSview and others relies highly on speed – how fast a mouse action gets from the client to the server. Especially in CAD apps this is important... but it's not a LAN issue; it's remote access over a WAN where this matters.
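
For point 1, a quick way to approximate the process-time vs. QueryPerformanceCounter comparison without writing Win32 code is to time a fixed CPU-bound task twice – once with a wall-clock timer and once with a process-CPU-time timer – and run the same script on bare metal and in the VM. This is only a sketch of the idea, not a finished tool; time.perf_counter() gives a high-resolution wall clock (backed by QueryPerformanceCounter on Windows) and time.process_time() gives CPU time charged to the process:

    #!/usr/bin/env python3
    """Sketch: compare wall-clock time vs. process CPU time for a fixed workload.

    Run the same script on bare metal and inside the VM. The gap between the two
    numbers includes whatever kept this process off the CPU - other processes,
    and, in a guest, the hypervisor.
    """
    import time

    def busy_work(n=5_000_000):
        # Pure CPU work so that disk and network don't blur the picture.
        acc = 0
        for i in range(n):
            acc += i * i
        return acc

    wall_start = time.perf_counter()   # high-resolution wall clock
    cpu_start = time.process_time()    # CPU time charged to this process

    busy_work()

    wall_ms = (time.perf_counter() - wall_start) * 1000
    cpu_ms = (time.process_time() - cpu_start) * 1000
    print(f"wall clock: {wall_ms:.1f} ms   process CPU: {cpu_ms:.1f} ms")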

But – and I speak from the perspective of years of consulting – there are users who have the admin password (and they're often employees of a BIG company with a BIG budget and a BIG pocketbook) complaining about this and that, but it must be clarified which performance component is important to them and which is important from the perspective of the application they use.
It's most likely not Notepad, but a highly sophisticated application for engineering this and that, which was also very expensive and should be moved onto VMware, Hyper-V or XenApp, and it doesn't perform as expected.

But they don't keep in mind that it might be running on 1.5 GHz Xeons on blades not made for pure CPU performance; they're built for an average, let's say "optimized for $ per CPU cycle" or "CPU cycles per watt".

And when we talk about tradeoffs and economizing – that mostly leads to overcommitment. Overcommitment leads to a lack of resources. CPU can be handled pretty well, but a lack of memory leads to paging, a lack of IO in the core routers leads to increased response times on everything, and transactional overload on any kind of storage might stop every useful app from responding quickly enough. Here monitoring is required, but many software vendors are not able to provide such information... on the other hand, a host with the resources of 3 physical servers can most likely handle 8 virtual machines of the same layout as the physical ones...

The CPU tradeoffs on idle systems often lead to systems performing 50% slower than physical systems; on the other hand, nobody is able to install the "real world" OS and the "real world" app that the customer's IT guys want to move into the VM box. And it takes days (maybe weeks, but for sure 42 meetings) to make clear that VM technology can offer flexibility by trading away pure CPU speed. This is just built into the CPUs on the blade systems that host larger VM environments nowadays. Also, the memory won't be comparable; some tradeoffs apply there too. DDR3 1600 CL10 will have higher memory bandwidth than DDR2 800 ECC LLR – and everyone knows that Intel CPUs profit from this in a different way than AMD CPUs. But they're rarely used in production environments, more in whiteboxes or in datacenters hosted in third-world countries that offer datacenter service for 10% of the price a datacenter in your own homeland may bill you. Thanks to Citrix, a datacenter can be anywhere if there's less than 150 ms of latency between the end user and the datacenter.

And the home user's perspective...

Last but not least, some people want to throw away Win7 or XP and trade it for Linux, and then the gaming question comes up, because only a few games are available for both Linux and Windows. Gaming relies highly on 3D acceleration. VMware Workstation 6.5 and the connected free Player can handle DirectX 9, meaning Doom 3 in a VM can run on the host graphics card in full screen. Games are mostly 32-bit apps, so they won't eat up more than 3 GB and mostly not more than 3 CPUs (seen on Crysis). Newer VM players and Workstation versions can handle higher DirectX versions and probably OpenGL as well... I gamed UT and UT2004 on VMware 6.5; the host had an ATI Radeon 2600 mobile and a T5440 CPU. It was stable at 1280x800 and playable even in network games...

voretaq7
9

I would point out that virtualisation can exceed physical performance in certain situations. Since the network layer is not limited to gigabit speed (even though the hardware emulation is of a specific LAN card), VMs on the same server can communicate with each other at speeds beyond that of multiple physical servers with average network equipment.
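
If you want to check that claim on your own guests, one rough way (short of installing a proper tool such as iperf) is to push a stream of bytes over a TCP socket between two VMs and see whether you land above gigabit line rate (roughly 110-120 MB/s in practice). A minimal sketch with arbitrary, illustrative port and address values; run the receiver in one VM and the sender in the other, and note that Python itself may become the bottleneck well below what the virtual NIC can do, so treat the result as a floor:

    #!/usr/bin/env python3
    """Crude TCP throughput check between two VMs (or a VM and a physical box).

    Usage (addresses are examples only):
        python3 throughput.py server                 # on VM A
        python3 throughput.py client 192.168.1.10    # on VM B
    """
    import socket
    import sys
    import time

    PORT = 5001                    # arbitrary test port
    CHUNK = 1024 * 1024            # move data 1 MiB at a time
    TOTAL = 1024 * 1024 * 1024     # 1 GiB in total

    def server():
        with socket.socket() as s:
            s.bind(("0.0.0.0", PORT))
            s.listen(1)
            conn, addr = s.accept()
            with conn:
                received = 0
                start = time.perf_counter()
                while received < TOTAL:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                secs = time.perf_counter() - start
                print(f"received {received / 2**20:.0f} MiB at "
                      f"{received / 2**20 / secs:.0f} MiB/s from {addr[0]}")

    def client(host):
        payload = b"\0" * CHUNK
        with socket.socket() as s:
            s.connect((host, PORT))
            sent = 0
            while sent < TOTAL:
                s.sendall(payload)
                sent += len(payload)

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])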

  • Two pieces of software running on two VMs on the same server won't communicate faster than two pieces of software under the same OS on one bare-metal server. – bokan Dec 13 '14 at 08:03
8

Yes. But that is not the question. The difference is normally negligible (1% to 5%).

TomTom
  • I believe you. But still: can you link a benchmark where someone actually measured it? – Michal Illich Apr 24 '10 at 08:02
  • It depends on so many factors that nobody can answer your question. It depends on which hypervisor you have, the server spec, the storage and, most importantly, what else is going on with the host at the time in question. – Chopper3 Apr 24 '10 at 08:05
  • Actually it does not. Naturally, if you do a lot of things, the physical machine is shared. But the overhead of the hypervisor is pretty constant by now, given hardware virtualization. Naturally, if you start loading multiple VMs the resulting available power is shared, but it is – in total – still only slightly less than what the server has. – TomTom Apr 24 '10 at 12:08
  • Citation needed. – Zoredache Apr 25 '10 at 21:26
  • the overhead of the hypervisor depends on how much the OS can be enlightened and this does not mean paravirtualized. – tony roth Jul 19 '10 at 22:08
  • If you run Hyper-V with a Linux guest, I'd take the number of guests and multiply by 8% to get the load factor. For a w2k8r2 guest I'd multiply by 1%. Run ESXi with a Linux guest and multiply the load by 4%, and with any version of Windows, 5%. As you know, nobody can publish test numbers officially for VMware products, so this is all seat-of-the-pants testing so far. – tony roth Jul 19 '10 at 22:14
  • @TomTom: Something you need to consider is the impact of virtualized workloads on storage. Specifically, when you take multiple sequential workloads and run them concurrently, you end up with an I/O profile that's essentially random, not sequential. This can have major, major impacts on your performance. – jgoldschrafe Feb 18 '11 at 15:11
  • @tony roth: I'd imagine it would be the other way around: paravirtualized components don't need the static overhead that emulated virtual hardware does. Why do you feel that "enlightening" (which tends to be rather Hyper-V/Windows-specific as a concept) as an approach has more of an impact on reducing hypervisor overhead? – jgoldschrafe Feb 18 '11 at 15:14
1

You're trying to compare an operating system, software, and data installed on a certain physical hardware to that same operating system, software, and data installed by itself inside a hypervisor on the same original hardware. This comparison is just not valid, because almost no one does this (at least at first). Of course that would likely be slower. Thankfully, it completely misses the most common point of why you virtualize servers at all.

A better example here is to look at two (or more!) older servers in your data center. Look for servers that are performing reasonably well, but are old now and coming up on their refresh cycle. These servers already perform well on older hardware, and so thanks to Moore's law anything new you get is gonna be way over-spec'd.

So what do you do? It's simple. Rather than buying two new servers you buy just one, and then migrate both of your old servers to the same physical new device. When preparing to purchase your new server, you plan so that you have enough capacity to not only handle the load from both older servers but also any load from the hypervisor (and maybe a little extra, so that you can still get a performance boost and can allow for growth).

In summary: virtual machines provide "good enough" performance for most situations, and help you make better use of your servers to avoid "wasted" computing power.

Now let's stretch this a little further. Since these are old servers, perhaps you were looking at a couple of simple $1500 pizza-box servers to replace them. Chances are, even one of these pizza boxes could still easily handle the load from both hypothetical older machines... but let's say you decide to spend $7500 or more on some real hardware instead. Now you have a device that can easily handle as many as a dozen of your existing servers (depending on how you handle storage and networking), at an initial cost of only five of those pizza boxes. You also have the benefits of only managing one physical server, decoupling your software from your hardware (i.e., a hardware refresh is now less likely to need a new Windows license or cause downtime), you save a ton on power, and your hypervisor can give you better information on performance than you've had in the past. Get two of these and, depending on how big you are, maybe your entire data center is down to just two machines, or perhaps you want to use the second server as a hot standby to tell a better high-availability story.

My point here is that it's not just about performance. I would never take a perfectly good production server and virtualize it alone to equivalent hardware just because. It's more about cost savings and other benefits you can gain from consolidation, such as high availability. Realizing these benefits means you're moving servers to different hardware, and that in turn means you need to take the time to size that hardware appropriately, including accounting for the hypervisor penalty. Yes, you might need slightly more computing power in total than if each of those machines were on their own physical device (hint: you actually probably need much less total computing power), but it's going to be a whole lot cheaper, more energy efficient, and easier to maintain to run one physical server than it is to run many.

Joel Coel
  • It's not always about consolidation and cost savings. A hypervisor is a product with many features, many of which have the potential to add business value independently of the reasons that most people virtualize. Consolidation and cost savings may be part of that business value, or they may not. Snapshots, live migration, Storage vMotion, and hardware abstraction may all be part of the business IT strategy. – jgoldschrafe Feb 18 '11 at 15:18
  • @jgold Point taken. You even forgot a big one: high availability. In my defense, I did mention hardware abstraction (sort of) in my last edit, and for someone who's just exploring virtualization from the angle of the original question I think consolidation/cost is the really big point to convey. – Joel Coel Feb 18 '11 at 15:22
  • The question asked about a comparison of performance which is an entirely valid aspect to want to investigate about virtualisation, not about why virtualisation is or is not useful. – Nick Bedford Nov 26 '18 at 22:43
1

I have been doing some test comparisons of the same software running the same test (.NET-based web application with high volumes of web traffic and considerable SQL Server access). Here's what I've seen:

  • The physical machine is better at instantiating classes (which translates to allocating memory at the system level) – this makes sense to me because physical machines do this through memory-management hardware while VMs do it through software (with partial hardware assist). On the VM, the app spent a significant amount of time in its constructors (where the memory is allocated and nothing else is done); on the physical machine, the constructors weren't even included in the top 1000. (See the sketch after this list.)
  • When you are in the middle of a method, the two are about equivalent - this is probably how most of the benchmarks are constructed that show the two "being the same"
  • When you access a network controller, the physical machine beats out the VM a little – again, the physical machine doesn't have very much sitting between the .NET process and the hardware, while the VM adds other "stuff" that each transaction needs to travel through.
  • Really, the same thing applies to disk access (the SQL Server was on another machine) – the difference is very small, but when you add them all up, it is noticeable. This could have been caused by the slower network access or by slower disk access.
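
The original tests were .NET, but the shape of the finding is easy to probe with a rough analogue in another language: time an allocation-heavy loop and a compute-heavy loop separately on both machines and compare the ratios rather than the absolute numbers. A sketch (Python, purely illustrative; it does not model the CLR's constructor behaviour, only the split between "mostly allocates" and "mostly computes"):

    #!/usr/bin/env python3
    """Sketch: separate an allocation-heavy phase from a compute-heavy phase.

    Run on the physical box and in the VM; per the answer above, the
    allocation-heavy phase is where a gap (if any) should show up first.
    """
    import time

    class Widget:
        def __init__(self, i):
            # The constructor does nothing but claim memory, as in the case described.
            self.a, self.b, self.c = i, i * 2, [i] * 8

    def allocation_heavy(n=500_000):
        return [Widget(i) for i in range(n)]

    def compute_heavy(n=5_000_000):
        acc = 0
        for i in range(n):
            acc += (i * 31) ^ (i >> 3)
        return acc

    for name, fn in [("allocation-heavy", allocation_heavy),
                     ("compute-heavy", compute_heavy)]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {(time.perf_counter() - start) * 1000:.0f} ms")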

I can easily see how someone could build benchmarks that prove the two are 1% different or the same, or where VMs are faster. Just don't include anything where your process takes advantage of local hardware support that the VM has to simulate in software.

slm
rchutch
0

I have just upgraded to an SSD (OCZ Vertex 2) and I run my XP VM development environment on it; I am a software developer. One thing I have noticed is that when I launch a program (one big enough to take time to load), one core of the virtual CPU pegs out. This happens when loading IE also. Since the CPU pegs out, I assume the bottleneck is the CPU and not the SSD. But it seems odd; I have a feeling that if the same thing were done on a physical machine it would load faster, and that there is some extra processing overhead VMware is doing that consumes CPU on disk access.

One example: I use Delphi, and on a physical machine with a regular HDD it can take 20 seconds to start from a cold boot. In the VM running off an SSD, it loads in 19 seconds from a cold start. Not much difference; I bet if the SSD were in the physical machine it would load faster. However, I did not check the CPU usage on the physical machine; it's possible the CPU was the bottleneck there as well.

But the feel of the VM is that disk access taxes the VM.

0

Obviously a virtual machine is slower than the physical machine. But when you're in this scenario you have to evaluate what is optimal to cover your needs. If you need only one system and you need it to be fast, then install it directly on the hardware. On the other hand, if you need flexibility, scalability (and all the other virtualization benefits :P), deploy a VM. It will be slower, but IMHO in some cases that's justified and the performance hit is not significant.

boris quiroz
0

It seems Microsoft has done some benchmark testing using BizTalk Server and SQL Server in different configurations in this regard. See the link below:

http://msdn.microsoft.com/en-us/library/cc768537(v=BTS.10).aspx

  • Please cite the conclusions in your answers or this is little more than SPAM for the provided link. Thank you. – Chris S Jul 06 '11 at 14:08
  • The SQL Server virtual-to-physical performance ratio (using the BizTalk:Messaging Documents processed/Sec metric, which seems reasonably real-world) is quoted to be 88% – using Hyper-V. Doesn't look good. – deadbeef Jan 24 '12 at 15:54
  • Oh my god, is that a 250MB PDF file? O_O – David Balažic Aug 05 '18 at 15:08
-1

Ideally Virtual PC performance is at:

CPU: 96-97% of host

Network: 70-90% of host

Disk: 40-70% of host

-2

Sorry to disagree with TomTom.

I've been using VMware Workstation for a while mainly on Windows XP, Windows Vista and now Windows Seven native systems to run different Windows flavors as well as Ubuntu.

Yes, a virtualized environment is slower than a native system, and that may be in a range of 5% up to 100%.

The main problem isn't so much the CPU load but the lack of physical memory.

Let's say you have Windows 7 64-bit Ultimate running on a 4 GB system that when idle needs almost 1.5 GB and uses ~10% of the CPU. Launching the extra layer of VMware will cost you ~300 KB, and the CPU load will climb up to ~20%. Then launching a virtual system within VMware will request at a minimum the amount of memory you defined for that virtual machine, which is a minimum of 1 GB for any decent system. Then you'll see the CPU load at ~60% if the virtual machine is Ubuntu and ~80% for any flavor of recent Windows OS.

Now, you'll start different apps within that virtual machine.

If the amount of memory you've set for that virtual machine is not enough, the virtualized system will start to swap, dramatically slowing down its overall performance and responsiveness.

If the amount of memory you've set for that virtual machine plus the amount of memory needed by your native system exceeds the total memory of your machine, then it's your native system that is going to swap, slowing down both the native and the virtualized system.

So, it first depends on the balance of the memory needed for both the native and the virtualized machines.
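
That balance is easy to sanity-check before you create the VM: add up the host's baseline use, the virtualization layer's own overhead and each guest's configured RAM, and keep a margin; if the total exceeds physical RAM, something will swap. A back-of-the-envelope sketch with illustrative figures loosely based on this answer (every number is an assumption to be replaced with your own):

    #!/usr/bin/env python3
    """Back-of-the-envelope memory budget for a desktop virtualization setup.

    Every number below is an assumption for illustration; substitute your own.
    """
    physical_ram_gb = 4.0        # RAM in the physical machine
    host_idle_gb = 1.5           # what the native OS uses when idle
    vmm_overhead_gb = 0.3        # assumed cost of the virtualization layer itself
    guest_ram_gb = [1.0]         # configured RAM for each guest you plan to run
    safety_margin_gb = 0.5       # headroom for caches, spikes, new apps

    committed = host_idle_gb + vmm_overhead_gb + sum(guest_ram_gb)
    free = physical_ram_gb - committed

    print(f"committed: {committed:.1f} GB of {physical_ram_gb:.1f} GB, "
          f"{free:.1f} GB left before the {safety_margin_gb:.1f} GB margin")
    if free < safety_margin_gb:
        print("=> expect the host (or the guests) to start swapping")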

Now it's almost the same with the CPU load. If a virtualized app needs a huge amount of CPU and a native app also needs a huge amount of CPU, your native system will have to manage the priorities and balance the CPU load between its different apps, the virtualized system being nothing but an app. That phenomenon is a classic CPU load problem that you can work around with app priorities.

So, my first advice if you need to use virtualization is to put a bunch of memory in your machine, whatever the OS you use natively or within a virtual machine.

Just my 2 cents.

Best regards.

Dopey
  • Imagine this configuration: 12 GB memory, two quad-core processors. On top of that, just 1 virtual machine with 11.5 GB memory and all the CPU power. Will there still be some noticeable slowdown? – Michal Illich Apr 24 '10 at 08:08
  • How would Win7 x64 need 1.5 GB (or any CPU time at all) when idle? More like 384-512 MB in my experience – the rest is just reserved for I/O caching and will be released if needed elsewhere ^^ – Oskar Duveborn Apr 24 '10 at 08:33
  • But you are talking about workstation virtualization, not a bare metal hypervisor, which has a fraction of the overhead compared to virtualizing on Windows. Ubuntu cloud might not quite be a bare metal hypervisor, but it hardly uses the resources of Windows – it runs on Ubuntu Server, which doesn't have a GUI, for instance. – Jon Rhoades Apr 24 '10 at 09:03
  • Notice Michal said *noticeable* slowdown, also. I don't notice a huge slowdown for the tasks we use our servers for. And it depends on system load, network load, etc. but for the most part it's subjective unless something is really rocking the system. I had this conversation with someone re: SSD drive. "They're really really fast," he said. "But after a short time, it becomes the norm for you." Perception. Unless the VM is utterly crawling you probably won't notice much difference. If it's utterly crawling, you have a problem to troubleshoot. – Bart Silverstrim Apr 24 '10 at 11:36
  • But it makes perfect sense that, logically, there's *some* slowdown. It's not running natively on bare hardware. The percent slowdown depends on the hypervisor and *what* it's doing, as some techniques affect the performance of what you're doing (some hypervisors may be more efficient handling CPU-bound tasks, others disk tasks, depends on the disk subsystem drivers, etc.) so really you can't find solid numbers on performance until you test your hardware specifically. – Bart Silverstrim Apr 24 '10 at 11:41
  • You make totally crap arguments, Dopey. First, you really come up with a crap technical platform – try using a hypervisor, as the poster asked about. Second, you load a lot of things – so you SHARE the machine. This was not my answer. A good hypervisor will only use about 3% to 5% of the resources. This means your machine capacity is 97% – and if you insist on loading 10 VMs, that naturally gets divided. Common sense. Does not change that the overhead of virtualization is very little. – TomTom Apr 24 '10 at 12:10
  • For some corner-case applications where software scales badly, I've seen benchmarks where the virtualized solution offers better performance than running on the physical hardware. You can run 2 or more VMs with the badly scaling software and get better performance than one instance on the physical hardware. – xenny Apr 24 '10 at 14:38
  • -1: Very poor comparison. VM Workstation is NOT a hypervisor. Secondly, you're talking about running high loads on the host; of course that's going to have an impact on the guest VM. – gravyface Apr 24 '10 at 15:46
  • @Oskar > How would Win7 x64 need 1.5 GB (or any CPU time at all) when idle? More like 384-512 MB in my experience. Take a look at this picture http://theliberated7dwarfs.as2.com/pictures/png/W7-Mem.png Windows 7-64, 4 GB of RAM, fresh reboot, no application running but MSFT Essential Security and Kaspersky! Oops: 1.53 GB of RAM used and an average of 7% of CPU load! @TomTom & gravyface 1 – The initial question was about a generic VM, not a hypervisor! 2 – My crappy technical platform makes the fortune of both MSFT and VMware. You might like it or not and I won't blame you ;) Kind regards – Dopey Apr 24 '10 at 17:39
  • So, no more comment about Oskar's purported memory need under Windows 7 (which I love a lot for its new interface as well as its capability to perform nicely when compared to Vista, which by the way I used for 3 years without any damned problem). – Dopey Apr 26 '10 at 22:56
  • Likewise, no more comment from TomTom with regard to my initial post, which responded to a generic virtual machine performance question that was later edited to mean hypervisors? – Dopey Apr 26 '10 at 22:58
  • Win7 precaches things/applications etc., thus it appears to use more RAM; once again this screws with TPS since most modern CPUs enable large memory pages. – tony roth Jul 19 '10 at 21:57
  • @Dopey, terrible comparison, like comparing a bike to a jumbo jet. Workstation is very convenient (I use it myself for testing) but it's Fisher-Price virtualisation compared to ESXi. – Chopper3 Jul 06 '11 at 19:48
-2

In my experience virtual machines are always a lot slower than physical ones OUT OF THE BOX.

You will only notice it when running applications that hit the disk and tax the CPU a lot. I have run many databases and web servers on virtual machines, and as an end user, and from the feedback of other end users (i.e. accessing the app from a remote web browser), there is quite a big lag when using virtual machines.

Of course a properly configured virtual machine may come to 80% (I don't know the real number) or whatever of the physical machine's speed, but you end up having to really dig deep into what the application is doing and how the virtual machine works. So I guess it is a cost equation of how valuable your time spent configuring VMs is versus just buying and hosting a new server.

For me virtual machines are NOT ABOUT PERFORMANCE, but about being easier to manage and, of course, for hosting several low-performance VMs.

yazz.com
  • You seem to be running a really crap virtualization technique. Seriously ;) MS did performance comparisons with Hyper-V and SQL Server – and came up with numbers that are around 3% overhead relative to the bare metal machine. Naturally this means running only one virtual machine, or accepting that the performance is split – but the overhead of virtualization is really low. And it is not ONLY about hosting several low-performance VMs. It can also be about ease of maintenance – moving a VM to new hardware is easy, a physical machine may be more complicated. – TomTom Apr 24 '10 at 12:12
  • @TomTom. I would like to believe you, but Microsoft of course has an interest in telling everyone that their hypervisor is super fast. I know from companies that have tried Microsoft virtualisation AND VMware that what Microsoft is saying is just "marketing". Have you actually benchmarked it yourself? If you get 3% overhead then please let me know your setup as I would like to try it. – yazz.com Apr 24 '10 at 18:56
  • Crap out, Zubair. I am no idiot – I was running tests before. I have been moving a lot of stuff over to VMs and barely run anything physical these days. I did a lot of benchmarking myself. Naturally hypervisors are tricky – people put a lot of servers on a machine and overload it. Most likely actually in the IO area (disk performance). But all that is not intrinsic to a hypervisor. Same with RAM – yes, you need a lot, and yes, the simulated machines still need their amount of RAM to be efficient. But that is not a hypervisor problem. – TomTom Apr 25 '10 at 15:43
  • I run a lot of stuff on a hypervisor by now, including an 800-gigabyte SQL Server application that is pretty active recording financial market data. No problem inherent to the hypervisor. This one has 4 virtual CPUs and currently 8 GB RAM for the Windows instance ;) And the disk IO subsystem is non-typical (talk about nearly a dozen physically mapped disks). – TomTom Apr 25 '10 at 15:44
  • @TomTom. Do you have any links I could read to learn more about these virtual vs. physical performance tests? – yazz.com Apr 25 '10 at 16:49
  • @TomTom I don't think MS published anything that shows a 3% difference comparing bare metal vs. hypervisor. They did show that VHDs can perform within 3% of a physical disk when used as a native disk. I think you MAY have confused that benchmark with a hypervisor comparison. – tony roth Jul 19 '10 at 21:53
  • VMware, on the other hand, posted a few performance benchmarks showing that on systems with very high core counts, running several smaller virtualized SQL Server instances and partitioning the database across them ended up being substantially faster than trying to scale the application across all the cores on the physical machine. While this isn't something you see often with things like SQL Server for complexity reasons, this is actually a very common approach with certain applications like Citrix. – jgoldschrafe Feb 18 '11 at 15:20
  • @Zubair – Although I'm a 100% VMware man myself, I have to agree with TomTom on this. I see very little performance drop-off for CPU and memory operations on modern, well-configured hardware; yes, heavy concurrent mixed read/write IO can be noticeably more impacted than CPU & memory, but we're still talking single-digit percentage loss across the board. I manage nearly 1,000 ESXi hosts in a company with over 8,000 and we're confident that only a handful of very heavily IO-bound applications are a bad fit for ESXi. – Chopper3 Jul 06 '11 at 19:46