
I am planning to build a multi-OS workstation overseen by (probably) KVM, on which I will do a variety of tasks. Some of these tasks lend themselves to multithreading better than others, so I want to maximize clock speed as much as possible. To this end, I am considering the pros and cons of a dual socket setup so that I can get more clock speed with the same number of cores. However, it is my understanding that the usefulness of dual socket builds is limited by slow communication between the CPUs. So my thought is that if I allot resources intelligently, dual socket might work well, but if not it could be a disaster.

So here are a few things that I'd like to understand:

  1. If the host OS is exclusively using one socket and the actively used guest is exclusively using the other socket, how much will those two sockets need to communicate?

  2. How much does the hypervisor benefit from having access to more cores?

  3. How smart is KVM (or other hypervisors) in terms of allotting resources between CPU sockets vs CPU cores? Are there some things I should set manually and others I should let be decided by the hypervisor?

An important consideration is that at any given time, only one or at most two VMs will be needing lots of resources, the other two or three should be pretty light at all times.

Stonecraft

1 Answer


Large performance improvements can be obtained by adhering to the system's specific NUMA topology. Using the pinning option will constrain the guest's vCPU threads to a single NUMA node; however, the threads will still be able to move between cores within that node. For tighter binding, use the output of the lscpu command to establish a 1:1 physical-CPU-to-vCPU binding with virsh vcpupin.
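As a sketch, assuming a two-socket host where NUMA node 1 holds logical CPUs 8–15 and a four-vCPU guest named myguest (both the domain name and the CPU numbering are illustrative, not from the question), the 1:1 binding might look like this:

```shell
# Inspect the host topology: which logical CPUs belong to which NUMA node.
lscpu | grep -i 'numa'
#   NUMA node0 CPU(s): 0-7      <- example output for this hypothetical host
#   NUMA node1 CPU(s): 8-15

# Pin each vCPU of the guest 1:1 to a physical CPU on node 1.
virsh vcpupin myguest 0 8
virsh vcpupin myguest 1 9
virsh vcpupin myguest 2 10
virsh vcpupin myguest 3 11

# Verify the current pinning.
virsh vcpupin myguest
```

These commands must be run on the libvirt host, and the CPU list on the right-hand side should come from your own lscpu output rather than the example numbers above.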

The RHEL 7 documentation describes pinning a guest's memory to a specific NUMA node with numatune and binding its vCPUs to their physical CPU counterparts with virsh vcpupin, using lscpu to identify which CPUs belong to which socket. Use lstopo (from the hwloc package) to visualize your NUMA node topology when setting up numatune.
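The same constraints can be made persistent in the guest's libvirt domain XML. The fragment below is a sketch for the same hypothetical four-vCPU guest bound to node 1; the cpuset values would need to match your own topology:

```xml
<domain type='kvm'>
  ...
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <!-- 1:1 pinning: each vCPU to one physical CPU on socket/node 1 -->
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
  </cputune>
  <numatune>
    <!-- Allocate guest memory only from node 1, keeping it local to the pinned CPUs -->
    <memory mode='strict' nodeset='1'/>
  </numatune>
  ...
</domain>
```

Keeping the memory allocation (numatune) on the same node as the pinned vCPUs (cputune) is the point of the exercise: it avoids exactly the cross-socket traffic the question is worried about.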

Troy Osborne