7

I have a VM running on vSphere 6.5 which has 24 vCPUs. The host has two physical CPUs (Xeon E5-2699 v4) with 22 cores each, and Hyperthreading is enabled.

How exactly are the vCPUs scheduled on the physical CPUs? Would it be better to reduce the vCPUs to 22 so that the VM could run on one physical CPU, or would vSphere use multiple physical CPUs in that case anyway?

user3235860
  • 71
  • 1
  • 2

4 Answers

3

A single VM must never have more virtual CPUs than there are logical cores available.

With Hyperthreading enabled you are at 44 logical cores, so this should be fine. However, this heavily depends on how many other VMs are running on that host. One thing you have to keep in mind is how the CPU scheduler of the ESXi host works: for every CPU cycle it waits until there is a core available for each virtual CPU of the VM. So in your case it will always wait until 24 cores are available before a CPU cycle can be processed. If you have many more VMs on that host, that can lead to a high CPU ready time and a very slow VM.
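
You can keep an eye on this with the CPU Ready counter (a summation in milliseconds in the performance charts, or %RDY in esxtop). As a minimal sketch of the usual conversion from the summation value to a percentage, with the sample value purely invented:

    # Convert a CPU Ready summation value (ms accumulated per sampling
    # interval) into a percentage. Real-time charts sample every 20 s;
    # the ready value below is hypothetical, not from a real host.
    SAMPLE_INTERVAL_MS = 20 * 1000
    ready_summation_ms = 4400        # made-up value for one interval
    vcpus = 24

    ready_pct_total = ready_summation_ms / SAMPLE_INTERVAL_MS * 100
    ready_pct_per_vcpu = ready_pct_total / vcpus
    print(f"CPU ready: {ready_pct_total:.1f}% total, "
          f"{ready_pct_per_vcpu:.1f}% per vCPU")

A common rule of thumb is that sustained ready time of roughly 5% or more per vCPU is a sign the VM is spending significant time waiting on the scheduler.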

Personally, I always try to keep the number of vCPUs at 8 or less. If you can, scale your VMs out rather than up.

Another consideration: with the current state of mitigations against the Spectre and Meltdown attacks it is generally recommended to disable Hyperthreading, because this reduces the possible attack vectors. If you decide to disable Hyperthreading, your current configuration will most probably no longer be workable.

Gerald Schneider
  • 19,757
  • 8
  • 52
  • 79
1

I can't think of a situation where you'd want a single VM to have more vCPUs allocated than there are physical cores in a server.

Benchmark your workload with the current VM configuration, and then see what happens as you gradually lower the number of vCPUs. Take note both of execution speed for your workload and of actual CPU usage on the host/VM from the hypervisor's perspective rather than that of the guest OS.

Usually when setting up VMs it's beneficial to start with a rather low number of vCPUs and then work your way up until the performance increase flattens out. For many workloads you don't necessarily need to stick to even numbers of vCPUs, though there are exceptions to this principle. Again, a good test run should show how your application deals with its environment.
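
As a rough sketch of that procedure (the throughput figures are invented; substitute your own benchmark results per vCPU count):

    # Find the point where adding vCPUs stops paying off.
    # vCPU count -> measured throughput (e.g. requests/s); values are made up.
    results = {2: 100, 4: 195, 6: 270, 8: 310, 12: 325, 16: 330}

    prev_vcpus, prev_tput = None, None
    for vcpus, tput in sorted(results.items()):
        if prev_tput is not None:
            gain = (tput - prev_tput) / prev_tput * 100
            print(f"{prev_vcpus} -> {vcpus} vCPUs: +{gain:.0f}% throughput")
            if gain < 10:   # arbitrary "flattened out" threshold
                print(f"Diminishing returns above {prev_vcpus} vCPUs")
                break
        prev_vcpus, prev_tput = vcpus, tput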

Mikael H
  • 4,868
  • 2
  • 8
  • 15
  • My current situation is an appliance that only comes in specific sizes (in this case vCenter as an X-Large instance). – user3235860 Feb 22 '19 at 08:13
  • 1
    Is it for a lab or for a production environment? In production you should probably stick to a supported configuration (which includes the hardware on which the machine runs), but if it's for lab use or for a test environment it should be possible to turn down the number of vCPUs a notch after deploying the appliance. Again - if you actually need that kind of power, it's probable you'll get less overhead and better total performance by not exceeding your number of physical cores. – Mikael H Feb 22 '19 at 08:21
  • This two-socket box has 44 cores. It is still a bad idea to cross NUMA nodes, though, especially when you could reduce the vCPU count or get a processor with enough cores. – John Mahowald Feb 23 '19 at 14:43
1

As per the VMware Configuration Maximums (https://configmax.vmware.com/) you can have up to 32 vCPUs per physical core, but according to best practices you should not assign more cores than you actually have.
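
As a quick back-of-the-envelope check against those two limits (host figures taken from the question, the VM list is hypothetical):

    physical_cores = 44           # 2 x Xeon E5-2699 v4, 22 cores each
    logical_cores = 88            # with Hyperthreading enabled
    vm_vcpus = [24, 8, 8, 4, 4]   # hypothetical VMs on the host

    total = sum(vm_vcpus)
    print(f"vCPU : physical core ratio = {total / physical_cores:.2f} : 1")
    print(f"Within the 32-per-core maximum: {total <= 32 * physical_cores}")
    print(f"Largest VM exceeds logical core count: {max(vm_vcpus) > logical_cores}")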

Keep in mind though that you can limit, reserve and prioritize according to your workloads and needs.

You can read another answer posted about the same topic here.

Sir Lou
  • 36
  • 3
  • Regarding the best practice not to assign more cores than you actually have: is this referring to logical or physical cores? – user3235860 Feb 22 '19 at 10:28
  • It refers to logical cores. You can read more here: http://www.techiessphere.com/2016/02/how-to-choose-right-number-of-virtual.html?m=1 – Sir Lou Feb 23 '19 at 11:47
1

1) Hyperthreaded cores aren't real cores and shouldn't be counted as such. Estimates vary, but I've seen figures suggesting that enabling Hyperthreading gives you as little as 10-30% additional performance in vSphere.

2) Assigning more vCPUs to a VM should always be considered carefully, especially at higher numbers. The reason (drastically simplified) is that the resource scheduler has to find a time slot where enough cores are available to execute all of the VM's vCPUs simultaneously. So on a simplified, hyper-unrealistic example host with, say, 10 cores and 10 VMs with 2 vCPUs each, you'd have 5 of the VMs waiting (i.e. halted) half the time and 5 VMs executing, alternating between the two states. This is alright since all VMs are getting CPU time, and everything is dandy. Now we introduce an 11th VM with 10 vCPUs. Suddenly you have 10 VMs waiting while the big VM gets its stuff done, then 5 of them execute, and then the 5 others. So now your VMs are running 33% of the time instead of 50%. In a complex environment, allocating relatively huge amounts of vCPUs can lower performance, especially if the VM doesn't run anything that can actually use all the vCPUs.
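
If you want to play with that arithmetic, here's a toy simulation of the simplified scheduler described above (strict co-scheduling, FIFO order, no skipping). It reproduces the 50% and 33% figures, but it is nothing like the real ESXi scheduler:

    from collections import deque

    def simulate(vm_sizes, cores=10, slots=3000):
        """Strict co-scheduling toy: a VM runs in a slot only if all of its
        vCPUs fit at once; VMs are served FIFO with no skipping."""
        queue = deque(range(len(vm_sizes)))
        runtime = [0] * len(vm_sizes)
        for _ in range(slots):
            free = cores
            ran = []
            while queue and vm_sizes[queue[0]] <= free:
                vm = queue.popleft()
                free -= vm_sizes[vm]
                runtime[vm] += 1
                ran.append(vm)
            queue.extend(ran)      # VMs that ran rejoin the back of the line
        return [r / slots for r in runtime]

    print(f"10 x 2-vCPU VMs:     each runs ~{simulate([2] * 10)[0]:.0%} of the time")
    print(f"plus one 10-vCPU VM: each runs ~{simulate([2] * 10 + [10])[0]:.0%} of the time")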

3) My personal best practice is to never give a VM more than half the logical cores of a single processor, which usually works out to quite a sane number with Xeon processors anyway. This avoids depending too much on HT "cores" and also makes your VMs fit on a single processor, making it easier for the scheduler.

There's also the concept of NUMA nodes to take into account: if you start giving a VM more vCPUs than a single processor in the host can provide, you're basically forcing vSphere to split the VM between two NUMA nodes, making memory access slower, since not all memory will be local to either processor.
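
A quick sketch of that check for the host in the question; it assumes the default behaviour of sizing NUMA clients by physical cores per node, and the half-a-socket limit is just the personal guideline from point 3, not a VMware rule:

    cores_per_socket = 22      # Xeon E5-2699 v4, two sockets in this host
    threads_per_core = 2       # Hyperthreading enabled
    half_socket = cores_per_socket * threads_per_core // 2

    for vcpus in (24, 22, 16, 8):
        spans_numa = vcpus > cores_per_socket
        print(f"{vcpus:2d} vCPUs: spans NUMA nodes: {spans_numa}, "
              f"within half-a-socket guideline: {vcpus <= half_socket}")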

There's a lot more magic behind how vSphere schedules VM resources, and what I wrote above is hugely simplified, but these are guidelines that have served me well for almost a decade.

Stuggi
  • 3,366
  • 4
  • 17
  • 34