13

Virtualization has some great benefits, but there are times when a virtualized server needs more performance and should be moved to physical.

My question is, how do you tell when these times are? I'm looking for measurable data and metrics that show moving a server to its own physical box would make a significant difference to performance. Personally I'm interested in Windows but presumably the essentials are the same across all platforms.

Alex Angas

9 Answers

4

I disagree that a virtual server would need to be moved to physical because of performance. Hypervisors are now so close to the metal that there is virtually (pun intended) no performance hit, especially now that many board makers are including hypervisors on the chipset. If you took two servers with identical hardware, one running a single guest and the other running an exact copy of that guest directly on the physical hardware, I think you would be hard pressed to notice a difference in performance.

There are other reasons, though, why you may need a physical server rather than a virtual one. One of them is hardware compatibility: if your application requires non-standard hardware with its own unique bus, you may not be able to run it in a virtual machine.

I'm anxious to hear what others have to say. Great question.

NOTE: We have servers that were virtualized and then put back on the same hardware just to have the snapshot/vmotion capabilities we love.

Daniel Lucas
3

I'm not an expert on this subject, but generally speaking: very I/O-hungry applications (especially those that do many small, fast writes) are the ones that get their own physical server.

They're not very hard to find, either: just run Performance Monitor and look for high I/O wait times.
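
For a quick look from inside the guest, here is a minimal sketch that samples system-wide disk throughput and IOPS, assuming the third-party psutil package is installed (the thresholds you compare these numbers against are up to you; Performance Monitor's PhysicalDisk counters give the same data on Windows):

```python
# Minimal sketch: sample system-wide disk I/O over a short window with psutil
# (third-party package: `pip install psutil`). Reports throughput and IOPS;
# deeper counters such as queue length live in Performance Monitor.
import time
import psutil

def sample_disk_io(interval=5):
    """Return (read MB/s, write MB/s, read IOPS, write IOPS) over `interval` seconds."""
    before = psutil.disk_io_counters()
    time.sleep(interval)
    after = psutil.disk_io_counters()

    read_mb_s = (after.read_bytes - before.read_bytes) / 1024 ** 2 / interval
    write_mb_s = (after.write_bytes - before.write_bytes) / 1024 ** 2 / interval
    read_iops = (after.read_count - before.read_count) / interval
    write_iops = (after.write_count - before.write_count) / interval
    return read_mb_s, write_mb_s, read_iops, write_iops

if __name__ == "__main__":
    r, w, riops, wiops = sample_disk_io()
    print(f"read {r:.1f} MB/s ({riops:.0f} IOPS), write {w:.1f} MB/s ({wiops:.0f} IOPS)")
```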

Also, high-end databases usually get their own dedicated servers, for several reasons:

  1. They want to cache everything they can, so RAM usage is enormous.
  2. They perform better with threading across several cores (8-way is normal), and you generally do not want to assign more than one virtual CPU to any guest because of blocking.
  3. They are very I/O-hungry when loading data into cache; low I/O latency is key.
pauska
  • I'll disagree with the "usually" comment. There are papers all over the VMWare site about setting up database servers on VMWare, and I don't believe this to be any sort of an issue. But you make great points about what needs to be evaluated when considering the move to a virtualized server. – SpaceManSpiff Jun 22 '09 at 15:21
  • We will soon be virtualizing our four database servers because of the benefits we will see with snapshots/vMotion/etc. I agree with LEAT that database servers have been and will be virtualized in the future. – Daniel Lucas Jun 22 '09 at 15:33
  • I think you misunderstood me. I didn't say that database servers should not be virtualized (all my DBs are). I said that high-end servers usually stay on separate physical servers because of the limitations virtualization brings (like only having 4 virtual CPUs available). – pauska Jun 22 '09 at 15:42
  • Hungry I/O apps are probably going to be hitting a SAN, though, which means that you're back to Fibre Channel or InfiniBand speed and not a raw disk under the VM. I've not (personally) seen an app that needs to be on physical hardware, outside of timing-critical systems or where it's officially unsupported by the vendor. – warren Jul 01 '09 at 13:12
  • I get the feeling that people are arguing with me for the sake of arguing. I did *not* say that it *needs* a physical host. I said that extremely resource-hungry applications/usages *usually* get their own dedicated server so they don't block other VMs. – pauska Jul 01 '09 at 19:16
3

The one case where I had to carry out a V2P was for an MS SQL box that had been running on dual 3.2GHz dual-core CPUs (total CPU 14.4GHz) that we migrated to an ESX 2.5 cluster where the underlying hardware was newer but had more, slower (2.4GHz IIRC) cores. Adding in the ~10% overhead, even with 4 vCPUs this VM could only ever get an effective 8-8.5GHz aggregate CPU. 60% peak CPU before migration became 90-100% post-migration; the customer wanted headroom, so we reverted to physical. To answer your question specifically, we saw that the box was running at 100% CPU across the board in Perfmon and in the VI client. A better solution (in my view) would have been to upgrade to faster CPUs, but there are edge cases like this where that's not economical, especially with the trend towards slower CPUs with more cores that we saw with the introduction of the Opteron/Core CPUs.

With ESX 4 we could bump a box like this up to 8 vCPUs, but that wasn't an option at the time.
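
The headroom arithmetic above is easy to reproduce. A minimal sketch, using the figures from this answer (4 vCPUs, 2.4GHz cores, an assumed ~10% hypervisor overhead):

```python
# Rough headroom arithmetic: best-case aggregate clock a VM can draw,
# given its vCPU count, the host's per-core clock, and an assumed overhead.
def effective_aggregate_ghz(vcpus, core_ghz, overhead=0.10):
    return vcpus * core_ghz * (1 - overhead)

# Figures from the case above: 4 vCPUs on 2.4 GHz cores, ~10% overhead
print(f"{effective_aggregate_ghz(4, 2.4):.1f} GHz")   # ~8.6 GHz, in line with the 8-8.5 GHz seen
```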

As far as looking for performance ceilings that might indicate you need to abandon your VM: with a Windows guest in a VMware environment, the combination of Perfmon and the VI Client should be more than up to the task of finding any VMs that are performance-limited. Add some SAN analytics to that if you can, but if the SAN shows an issue then you will almost certainly be just as well off reworking the storage to isolate and/or enhance the volumes that the VMs' virtual disks are stored on. The same applies to any other OS/hypervisor combination: get whatever internal stats you can, but correlate them with the hypervisor's view of what's going on, because 100% CPU reported within a VM (for example) does not necessarily mean that the hypervisor could never deliver more performance, just that it didn't at that point.
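
As a sketch of that correlation step, with purely hypothetical numbers (guest CPU % as you'd read it from Perfmon, and a "CPU ready"-style contention figure as you'd read it from the VI Client or esxtop; the data format here is invented for illustration):

```python
# Hypothetical per-VM samples: guest-reported CPU % vs. hypervisor CPU-ready %.
samples = [
    {"vm": "sql01", "guest_cpu": 98, "cpu_ready": 22},
    {"vm": "web01", "guest_cpu": 95, "cpu_ready": 1},
]

for s in samples:
    if s["guest_cpu"] > 90 and s["cpu_ready"] > 10:
        # Guest looks pegged but is mostly waiting for a physical core:
        # host contention, not a true ceiling for the workload itself.
        print(f"{s['vm']}: starved by the host - fix placement/scheduling first")
    elif s["guest_cpu"] > 90:
        print(f"{s['vm']}: genuinely CPU-bound - more/faster vCPUs, or physical hardware")
```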

Helvick
1

This very much depends on the service the guest is performing.

I typically look at the resources that are being used and determine whether they are indeed bottlenecks for this guest and the services it provides.

Think of it this way:

If you have a dual-core (2 vSMP), 4GB RAM guest running a web server (IIS) and you're not maxing out CPU and RAM, then the guest probably doesn't need more hardware.
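
A quick in-guest check along those lines, as a minimal sketch (assumes the third-party psutil package; the 80% thresholds are illustrative, not a recommendation):

```python
# Is this guest anywhere near its CPU and RAM allocation?
import psutil

cpu_pct = psutil.cpu_percent(interval=5)        # averaged over 5 seconds
mem_pct = psutil.virtual_memory().percent

if cpu_pct < 80 and mem_pct < 80:
    print(f"CPU {cpu_pct:.0f}%, RAM {mem_pct:.0f}%: headroom left, sizing is not the constraint")
else:
    print(f"CPU {cpu_pct:.0f}%, RAM {mem_pct:.0f}%: look closer at this guest's resources")
```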

We have run into cases where running an Oracle database on a virtualization platform comes close to the performance of a similarly sized physical server.

Obviously, if you wanted to have a 16-core server as a VM, you may have some trouble seeing it perform as well as dedicated hardware.

Mike Fiedler
1

When the VM is starved for resources (or perhaps starving other VMs for resources), e.g.:

  1. When the VM's I/O can't be satisfied through the host
  2. When the VM needs more network bandwidth than is possible sharing the trunk
  3. When the VM's processes want more CPU than it can get, e.g. if there is a single process that is maxing out a virtual CPU
  4. If it's Linux and it needs very precise time (Linux guests under VMware drift time; this can be alleviated by using NTP, but for apps which require very precise time, e.g. Kerberos, you might consider real hardware - see the drift check sketched after this list)
  5. When it's Linux and needs a very reliable disk (VMware has had, and I believe still has, SCSI problems under certain conditions; a fix was put out but it still occurs, although much less often)
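
For point 4, a minimal sketch of checking the guest's clock offset against an NTP server, assuming the third-party ntplib package is installed (pool.ntp.org and the 0.5s tolerance are just examples; Kerberos' default clock-skew tolerance is considerably larger):

```python
# Measure this guest's clock offset against an NTP server.
import ntplib

MAX_OFFSET_SECONDS = 0.5   # illustrative tolerance

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print(f"clock offset: {response.offset:+.3f}s")
if abs(response.offset) > MAX_OFFSET_SECONDS:
    print("drift beyond tolerance - check NTP/VMware Tools time sync, or consider physical hardware")
```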
Jason Tan
0

I'd say it's when the server is at the point where it consumes enough of the host's resources that it can no longer reasonably share the hardware.

ESX, ESXi and Windows Hyper-V should all give you near-native performance. So as long as one of the machines is not using 90% of the host's resources on its own, you shouldn't need to move to real hardware.
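
As a trivial illustration of that rule of thumb, with made-up host and guest numbers (sustained usage, not allocation):

```python
# What share of the host does one guest actually consume? Numbers are invented.
host = {"cpu_ghz": 8 * 2.4, "ram_gb": 64}           # 8 cores at 2.4 GHz, 64 GB RAM
guest = {"cpu_ghz": 4 * 2.4 * 0.95, "ram_gb": 48}   # sustained usage of one guest

cpu_share = guest["cpu_ghz"] / host["cpu_ghz"]
ram_share = guest["ram_gb"] / host["ram_gb"]

if max(cpu_share, ram_share) > 0.9:
    print("this guest effectively owns the host - little left to gain from sharing it")
else:
    print(f"CPU share {cpu_share:.0%}, RAM share {ram_share:.0%}: room to share the box")
```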

The exception being that you wouldn't want things like your two domain controllers on the same box, in case the hardware fails.

SpaceManSpiff
  • I'd disagree here too. While the cost of running a single VM on a single host is high considering licensing costs, etc., there are distinct advantages to it in terms of disaster recovery and hardware failover. I think even in this case it is worth virtualizing. – Kevin Kuphal Jun 22 '09 at 15:39
  • You are quite correct. With ESXi being free now, one could virtualize a single server, your Exchange server for instance, and have that on a machine by itself. When it's time to upgrade the hardware, just copy the VM to the new machine. – SpaceManSpiff Jun 22 '09 at 16:06
0

I doubt there's a generic answer for this, but if you're worried about performance, then that's what you have to look at. The obvious first step would be to check whether you are maxing out CPU, I/O, ...

Performance testing and benchmarks would also help you decide whether there is any penalty for being virtual and whether having a single VM on the host is sensible or not.
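
A very rough sketch of that kind of comparison: time the same CPU-bound and disk-bound loops on the VM and on comparable physical hardware, then compare the numbers (your real workload, or a proper tool such as fio or diskspd, will tell you much more):

```python
# Crude CPU and sequential-write timings for a side-by-side comparison.
import os
import time

def cpu_benchmark(n=5_000_000):
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return time.perf_counter() - start

def disk_benchmark(path="bench.tmp", size_mb=256):
    data = os.urandom(1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed   # MB/s

print(f"CPU loop: {cpu_benchmark():.2f}s, sequential write: {disk_benchmark():.0f} MB/s")
```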

Toto
0

You first need to identify which resource is the bottleneck.

Windows Performance Monitor (perfmon) provides lots of counters for various aspects such as disk queue length, virtual memory stats, etc.

If you are disk-bound, giving the virtual machine direct access to a disk instead of a virtual disk file (VMDK) with VMware could help a lot.
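
To grab a couple of the disk counters mentioned above from the command line, a small sketch using the built-in Windows typeperf tool (Windows only; counter names assume an English locale):

```python
# Sample two Perfmon disk counters five times, one second apart, via typeperf.
import subprocess

counters = [
    r"\PhysicalDisk(_Total)\Current Disk Queue Length",
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
]

result = subprocess.run(
    ["typeperf", *counters, "-sc", "5"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```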

Kyle Brandt
0

I think it all depends on two factors:

  • Resource sharing: does the guest consume so many resources that the other guests' performance is affected?
  • Security: if it's a very critical service, you should probably not use virtualization, as the more layers you add between the software and the hardware, the less secure you might be.

Just my 2 cents.

Maxwell