
VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables.

I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Unfortunately, many vendor requirements and people unfamiliar with virtualization request more resources than necessary... I'm interested in quantifying the impact of this decision.


Some examples from a "problem" cluster.

Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.

Resource allocation - The Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.

The real-time memory utilization graph of the top VM in the listing above. 4 vCPUs and 64GB RAM allocated. It averages under 9GB use.

Summary of the same VM.


  • What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments?

  • Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they actually need?

  • What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?" E.g. do customers need to be educated that VMs are not the same as physical hardware?

  • What specific metric(s) should be used to meter RAM usage? Tracking the peaks of "Active" versus time? Watching "Consumed"?


Update: I used vCenter Operations Manager to profile this environment and get some detail on the cluster stats listed above. While things are definitely overcommitted, the VMs are actually so overconfigured with unnecessary RAM that the real (tiny) memory footprint shows no memory contention at the cluster/host level...

My takeaway is that VMs should really be right-sized with a small buffer for OS-level caching. Overcommitting out of ignorance or to satisfy vendor "requirements" leads to the situation presented here. Memory ballooning seems to be bad in every case, as there is a performance impact, so right-sizing can help prevent this.
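To make that right-sizing concrete, here's a rough sketch of the kind of check I'm describing: compare each VM's configured RAM against its observed "Active" peak plus a buffer for guest caching. The VM names, sample values and thresholds below are made up purely for illustration, not taken from this cluster:

```python
# Rough right-sizing check: flag VMs whose configured RAM is far above
# their observed "Active" memory plus a buffer for OS-level caching.
# All VM names, samples and thresholds are hypothetical.
from statistics import quantiles

BUFFER = 1.25  # ~25% headroom over the observed peak for guest caching

# Hourly "Active" memory samples in GB, e.g. exported from vCenter/vCOps
vms = {
    "app01": {"configured_gb": 64, "active_gb": [7.2, 8.1, 8.9, 7.5, 8.4]},
    "db01":  {"configured_gb": 16, "active_gb": [11.0, 12.5, 13.8, 12.9, 13.1]},
}

for name, data in vms.items():
    p95 = quantiles(data["active_gb"], n=20)[-1]  # smooth out one-off spikes
    suggested = p95 * BUFFER
    if data["configured_gb"] > suggested * 2:
        print(f"{name}: {data['configured_gb']} GB configured, "
              f"p95 active {p95:.1f} GB -> oversized, consider ~{suggested:.0f} GB")
    else:
        print(f"{name}: {data['configured_gb']} GB configured looks reasonable")
```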

Update 2: Some of these VMs are beginning to crash with:

kernel:BUG: soft lockup - CPU#1 stuck for 71s! 

VMware describes this as a symptom of heavy memory overcommitment. So I guess that answers the question.



vCops "Oversized Virtual Machines" report... enter image description here

vCops "Reclaimable Waste" graph...


ewwhite
  • If you configure the VMs with `thin provisioning`, shouldn't the situation resolve itself automatically, as the VMs would use only what they need? – boardrider Dec 24 '20 at 20:05

2 Answers


vSphere's memory management is pretty decent, though the terms used often cause a lot of confusion.

In general, memory over-commit should be avoided as it creates exactly this type of problem. However, there are times when it cannot be avoided, so forewarned is forearmed!

What are the downsides of overcommitting and over-configuring resources (specifically RAM) in vSphere environments?

The major downside of over-committing resources is that, should you have contention, your hosts will be forced to balloon, swap or intelligently schedule/de-duplicate behind the scenes in order to give each VM the RAM it needs.

For ballooning, vSphere will inflate a "balloon" of RAM within a chosen VM, then give that reclaimed RAM to the guest that needs it. This isn't really "bad" - VMs are effectively lending each other RAM, so there's no disk swapping going on - but it can lead to mis-fired alerting and skewed metrics if these rely on analysing the VM's RAM usage, as the ballooned RAM won't be marked as "ballooned" inside the guest - it simply shows as "in use" by the OS.

The other feature that vSphere can use is Transparent Page Sharing (TPS) - which is essentially RAM de-duplication. vSphere will periodically scan all allocated RAM, looking for duplicated pages. When it finds them, it will collapse the duplicates into a single shared copy and free the redundant pages.
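If it helps to picture what TPS is doing, here's a toy sketch of the idea. It has nothing to do with ESXi's actual implementation (which works on memory pages with hashing and copy-on-write inside the hypervisor) - it just illustrates why identical pages across VMs cost almost nothing:

```python
# Toy illustration of page sharing: identical pages are detected by
# hashing their contents and collapsed to a single backing copy.
# This is conceptual only - not how ESXi actually implements TPS.
import hashlib

def share_pages(pages):
    """Return (shared store, number of pages reclaimed by sharing)."""
    store = {}   # content hash -> single shared copy of the page
    saved = 0
    for page in pages:
        digest = hashlib.sha1(page).hexdigest()
        if digest in store:
            saved += 1            # duplicate: no extra physical page needed
        else:
            store[digest] = page  # first time this content has been seen
    return store, saved

# VMs cloned from the same template share many identical (often zeroed) pages
pages = [b"\x00" * 4096] * 6 + [b"kernel code page"] * 3 + [b"unique data"]
store, saved = share_pages(pages)
print(f"{len(pages)} guest pages backed by {len(store)} physical pages "
      f"({saved} reclaimed)")
```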

Take a look at vSphere's Memory Management whitepaper (PDF) - specifically "Memory Reclamation in ESXi" (page 8) - if you need a more in-depth explanation.

Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need?

There's no visible overhead - you can allocate 100 GB of RAM to VMs on a host with 16 GB (however, that doesn't mean you should, for the reasons above).

Total memory in use by all of your VMs is the "Active" curve shown in your graphs. Of course, you should never rely on just that figure when calculating how much you would like to overcommit, but if you have historical metrics, as you do, you can analyse and work it out based on actual usage.

The difference between "Active" and "Consumed" RAM is discussed in this VMware Community thread.

What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?" E.g. do customers need to be educated?

The short answer to this is yes - customers should always be educated in best practices, regardless of the tools at their disposal.

Customers should be educated to size their VMs according to what they use, rather than what they want. A lot of the time, people will over-specify their VMs just because they might need 16 GB of RAM, even if they're historically bumbling along on 2 GB day after day. As a vSphere administrator, you have the knowledge, metrics and power to challenge them and ask them if they actually need the RAM they've allocated.

That said, if you combine vSphere's memory management with carefully-controlled overcommit limits, you should rarely have an issue in practice; the likelihood of running out of RAM for an extended period of time is relatively remote.

In addition to this, automated vMotion (driven by VMware's Distributed Resource Scheduler, DRS) is essentially a load-balancer for your VMs - if a single VM is becoming a resource hog, DRS should migrate VMs around to make the best use of the cluster's resources.

What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" versus time?

Mostly covered above - your main concern should be "Active" RAM usage, though you should carefully define your overcommit thresholds so that you don't exceed a certain ratio (this is a decent example, though it may be slightly outdated). Typically, I would certainly stay within 120% of total cluster RAM, but it's up to you to decide what ratio you're comfortable with.
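As a very rough illustration of that kind of threshold check - all of the host and VM figures here are hypothetical, not drawn from your cluster:

```python
# Quick overcommit-ratio check against a chosen comfort threshold.
# Host counts, VM sizes and the 120% figure are illustrative only.
OVERCOMMIT_LIMIT = 1.20   # stay within 120% of total physical cluster RAM

cluster_physical_gb = 4 * 256                             # four 256 GB hosts
vm_configured_gb = [64, 64, 32, 32, 16, 16, 16, 8] * 12   # configured RAM per VM

allocated = sum(vm_configured_gb)
ratio = allocated / cluster_physical_gb

print(f"Allocated {allocated} GB against {cluster_physical_gb} GB physical "
      f"-> {ratio:.0%} committed")
if ratio > OVERCOMMIT_LIMIT:
    print("Over the comfort threshold - expect ballooning/swapping under load")
```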


Craig Watson
  • My understanding is that more RAM allocated to a VM means that it's harder for DRS to migrate the VM - it takes longer to migrate between nodes because it takes longer to copy the RAM across; and the more RAM is required, the less likely it is that DRS is going to be able to find a big enough chunk that's free. This can be particularly troublesome (I've been led to believe) if you do have an event (eg, hardware failure) that reduces capacity in the cluster. Small VMs are easy to shuffle and not likely to notice much outage, big VMs can be tricksy. Have I been informed correctly? – James Polley Aug 03 '13 at 12:53
  • @James - only active (i.e. in use) memory is migrated during vMotion, so the amount of RAM you allocate to your VMs doesn't matter as much. Reference: http://www.vmware.com/files/pdf/VMware-VMotion-DS-EN.pdf – Craig Watson Aug 03 '13 at 15:04
  • Great answer. I've updated my question with more detail from this particular cluster. Your points are good, though. It turns out that the VMs in this setup are heavily overconfigured. Active RAM usage is well below the cluster's physical resources, so there's no contention... Just heavy ballooning/swapping/ugliness. I suspect right-sizing the VMs will alleviate this pressure. – ewwhite Aug 05 '13 at 01:55

In addition to the excellent answer from Craig Watson, I would like to add the following:

Over-committing memory in VMware is not something you should do on purpose. It generally shows that either you or your customer is oversubscribing the hardware.

If over-committing is the only choice, then I strongly advise that you enforce priority rules. If someone is bent on giving a non-critical VM 16GB of vRAM when it only needs 4GB, at least put that VM in a low-priority resource pool or give it low memory shares. You really do not want a critical production database to be swapped out by the hypervisor. Not only will the performance go down the drain, it will also eat up the I/O queues against your backend storage.

If you are running on blazing-fast storage (FusionIO, Violin, local SSDs, etc.) then swapping might not be a big concern, but with traditional SAN storage you will eventually affect every single VM and host connected to the same array/controller.
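As a back-of-the-envelope sketch of why that hurts: hypervisor swap traffic is mostly small random I/O, so even a modest swap rate eats a surprising share of a shared array's capability. All of the numbers here are hypothetical:

```python
# Back-of-the-envelope: what sustained hypervisor swapping costs a shared
# array in IOPS. All figures are hypothetical and purely illustrative.
page_kb = 4
swap_rate_mb_s = 50            # sustained host swap-in/out under contention
array_iops_budget = 10_000     # what the SAN controller comfortably serves

swap_iops = (swap_rate_mb_s * 1024) / page_kb   # mostly random 4 KB I/O
print(f"{swap_rate_mb_s} MB/s of 4 KB swap traffic ~ {swap_iops:,.0f} IOPS, "
      f"{swap_iops / array_iops_budget:.0%} of the array's budget")
```

That budget has to be shared with every other VM and host on the same controller, which is why a single swapping host can drag down an entire cluster.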

pauska
  • Good observation on the storage impact of swapping. This explains some of the VNX performance issues I've seen.... – ewwhite Aug 02 '13 at 17:41
  • Brilliant point, I've never thought to take the storage I/O argument. – Dan Aug 06 '13 at 20:20