
We are working with a server running ESXi 6.7 Enterprise Plus, and the mobo has two 10-core Xeon CPUs.

The host is moderately loaded, but strangely the ESXi monitoring screen shows MAX utilization of 87% for socket (package) 0 and 2.5% for socket (package) 1, and AVERAGE utilization of 20% for socket 0 and 1% for socket 1.

Is this normal? Should ESXi be balancing the load across the two CPUs, or does it fill one and then start using the other?

A license is installed and I think it should support 2 sockets (though I don't see a CPU limit on the licensing tab of the GUI). I didn't purchase the hardware/license, so I don't know much about what was bought, but I can see the license tab and it looks right-ish. I just don't see anything that says 2 SOCKETS, so I'm wondering if another license needs to be purchased to activate the second socket. Does anyone running ESXi 6.7 with Enterprise Plus have a line in their license tab showing the number of sockets licensed?

TSG

1 Answer


The ESXi scheduler is NUMA-aware. By default it prefers to keep a VM's vCPUs and memory on one socket's cores and local memory when possible. An overview of this is in the Resource Management Guide.

You can show both sockets getting used by putting more load on the host. If its usual workload isn't enough, create a 14-core VM and run something multi-threaded and CPU-intensive; a load generator like the sketch below works too. Have fun with it, maybe compile a very large software package, or donate some CPU cycles to science. Both sockets should be well over 2% utilized, because the VM is larger than one node.
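If you don't have a multi-threaded benchmark handy, a throwaway busy-loop script does the job. This is only a minimal sketch, assuming Python 3 is available inside the test VM: it starts one worker process per vCPU, so a 14-vCPU VM on two 10-core sockets forces the scheduler to place vCPUs on both NUMA nodes.

    # Minimal CPU load generator (assumes Python 3 in the guest).
    # Spawns one busy-loop worker per vCPU so a wide VM spills across
    # both NUMA nodes and both sockets show utilization on the host.
    import multiprocessing
    import time

    def burn(seconds):
        """Spin on cheap integer math for the given number of seconds."""
        end = time.time() + seconds
        x = 0
        while time.time() < end:
            x = (x * 31 + 7) % 1000003  # keeps the core busy without I/O
        return x

    if __name__ == "__main__":
        duration = 300  # five minutes; long enough to watch the host graphs
        workers = multiprocessing.cpu_count()  # one process per vCPU
        with multiprocessing.Pool(workers) as pool:
            pool.map(burn, [duration] * workers)

While it runs, watch the per-socket utilization on the host; both packages should climb well above idle.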

John Mahowald
  • That link was a tough but interesting read (I probably understood 50%). Since the mobo has a single DIMM, does that mean one CPU has lower latency to that DIMM, so all vCPUs are allocated on the most proximate CPU? Is it possible to adjust the scheduling algorithm to balance between CPUs? – TSG Aug 27 '18 at 12:23
  • It is highly likely the VMs are running on the socket with the memory. ESXi NUMA behavior does not need tuning for the majority of configurations. It would be better use of your time to install identical RAM modules in the other socket. – John Mahowald Aug 28 '18 at 00:17
  • I may have misunderstood (your message / the mobo manual). Does that mean the second CPU will not operate at all without a second DIMM in the system? (Mobo is Asus Z10PE-D16) – TSG Aug 28 '18 at 12:24
  • The remote memory can be accessed. ESXi is going to avoid doing so, because of the latency. If you actually want to balance between the nodes, it is better to put some memory on both. Then you can avoid the temptation to force a non-optimal configuration. – John Mahowald Aug 28 '18 at 23:35