
We have an ASP.NET MVC site hosted on two VMware ESX hosts. Each host has two sockets with quad-core CPUs (8 logical cores per host). We run two VMs on each host. Initially, only one vCPU was allocated to each VM.

We increased the vCPUs per VM to 2, and then to 4, and at each stage measured roughly a 30% throughput increase in our load testing. The application is CPU-bound: there is not much caching (RAM) and very little disk activity.
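As a rough sanity check (not part of the original post), the observed ~1.3x speedup from doubling vCPUs can be fed into Amdahl's law to estimate the workload's parallel fraction; the helper names below are illustrative:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores for parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(observed_speedup, n):
    """Invert Amdahl's law: S = 1/((1-p) + p/n) => p = (1 - 1/S) / (1 - 1/n)."""
    return (1.0 - 1.0 / observed_speedup) / (1.0 - 1.0 / n)

# A 30% gain going from 1 to 2 vCPUs implies roughly 46% of the work
# is parallelizable, which caps the benefit of piling on more vCPUs.
p = parallel_fraction(1.3, 2)
print(round(p, 2), round(amdahl_speedup(p, 4), 2))
```

Under that fit, 4 vCPUs would predict about a 1.53x total speedup over 1 vCPU, so if gains keep compounding beyond that, the scaling is better than a simple Amdahl model suggests.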

I am wondering if we should expect different results with more VMs and fewer vCPUs each. I've been reading a bit about how ESX schedules vCPUs, and it appears that with fewer VMs carrying more vCPUs, the scheduling overhead may be holding us back.

Should we go with, for example, 4 VMs with 2 vCPUs each? Exactly what resource bottlenecks are we trading by adjusting VM count versus vCPU count?

Shane Madden
Aidan Ryan

1 Answer


I think you'd be better off with fewer VMs with more vCPUs each. Not all applications scale like yours; it's quite a luxury, in fact. I wish mine did that :)

Basically, don't worry about vCPU scheduling until you start seeing an actual problem; that's when it gets complex.

Chopper3
  • Thanks! Is there anything we can measure at the host level to understand if scheduling overhead is too high? – Aidan Ryan Jul 13 '12 at 20:40
  • 1
    @AidanRyan Watch the ready time metric in the CPU section of the performance tab - as the values get higher, it's indication of the CPU contention costing performance. – Shane Madden Jul 14 '12 at 00:33
  • If you have an eight-way server with two VMs, each with four vCPUs, you won't ever get into a situation where a vCPU is in a wait state, so this point is moot. If you have spare SAN ports, add hosts, or, if not, switch a host out for one with faster/more cores. Of course, if Oracle is involved, the latter won't necessarily be an option ;-) – Simon Catlin Jul 27 '12 at 21:16
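Following up on the ready-time suggestion: vCenter's real-time charts report CPU ready as a summation in milliseconds accumulated over a 20-second sampling interval, so the raw counter has to be converted to a percentage before comparing it to the commonly cited rule of thumb that values above roughly 5% per vCPU are worth investigating. A minimal sketch, with the function name and defaults being my own:

```python
def ready_percent(ready_ms, interval_s=20, num_vcpus=1):
    """Convert a CPU 'ready' summation (ms accumulated across all vCPUs
    during one sampling interval) into a per-vCPU ready-time percentage."""
    return ready_ms / (interval_s * 1000.0 * num_vcpus) * 100.0

# e.g. a 4-vCPU VM showing 4000 ms of ready time in one 20 s
# real-time sample averages 5% ready per vCPU
print(ready_percent(4000, num_vcpus=4))
```

For historical (rolled-up) stats the sampling interval is longer than 20 s, so pass the actual interval rather than relying on the default.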