
I just purchased a KVM VPS, and when I benchmarked it and compared it with my existing OpenVZ VPS, I found that it is slower than the OpenVZ VPS. When I checked the processor info, it looks a bit wrong, so I am wondering: can the provider control/share the CPU cache between different KVM guests or not?

On the KVM VPS: cat /proc/cpuinfo

vendor_id       : GenuineIntel
cpu family      : 6
model           : 79
model name      : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
stepping        : 1
microcode       : 0x1
cpu MHz         : 2199.996
cache size      : 4096 KB
bogomips        : 4399.99

I checked on Intel's site; this E5 processor should have about 30 MB of cache, but the guest reports only 4096 KB.
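For what it's worth, the cache size can also be cross-checked from inside the guest (assuming util-linux's lscpu is available; on most Intel CPUs the index3 entry under /sys is the shared L3 cache):

lscpu | grep -i cache
cat /sys/devices/system/cpu/cpu0/cache/index3/size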

On the OpenVZ VPS: cat /proc/cpuinfo

vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00GHz
stepping        : 2
microcode       : 46
cpu MHz         : 2999.918
cache size      : 20480 KB
bogomips        : 5999.83

The cache size is 20480 KB (20 MB), which matches the real CPU.

After that, I ran a CPU benchmark.

On the KVM VPS: sysbench --test=cpu --cpu-max-prime=20000 run

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          30.1875s
    total number of events:              10000
    total time taken by event execution: 30.1860
    per-request statistics:
         min:                                  2.57ms
         avg:                                  3.02ms
         max:                                  4.13ms
         approx.  95 percentile:               3.22ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   30.1860/0.00

On the OpenVZ VPS: sysbench --test=cpu --cpu-max-prime=20000 run

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          26.5902s
    total number of events:              10000
    total time taken by event execution: 26.5889
    per-request statistics:
         min:                                  2.64ms
         avg:                                  2.66ms
         max:                                  3.17ms
         approx.  95 percentile:               2.70ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   26.5889/0.00

So, as you can see, there is about a 4-second difference, which is a big deal.
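For reference, these runs are single-threaded by default. On a plan with more than one vCPU, a multi-threaded run would look like this (using legacy sysbench's --num-threads option; the thread count of 2 is just an example, adjust it to your vCPU count):

sysbench --test=cpu --cpu-max-prime=20000 --num-threads=2 run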

user889030
  • You aren't benchmarking on the same hardware, so this isn't a fair comparison; the OpenVZ plan could be more limited than the KVM one, so be happy with the KVM :) – djdomi Sep 02 '19 at 05:46
  • Please post the command line you used to start KVM. It's very easy to start kvm in emulation mode rather than (para)virtualization mode. – abligh Sep 02 '19 at 14:27

1 Answer


SHORT ANSWER: Your OpenVZ CPU is faster for single-threaded workloads than the one backing the KVM VPS. On top of that, OpenVZ is a lighter virtualization approach, so, all other things being equal, it is somewhat faster than KVM.

LONG ANSWER: The KVM CPU is a Broadwell Xeon with base/turbo clocks of 2.2/2.9 GHz. The OpenVZ host uses a Haswell Xeon with base/turbo clocks of 3.0/3.5 GHz. Considering that Haswell and Broadwell IPC is basically the same, it is no surprise that the faster-clocked CPU wins a single-threaded benchmark.

Regarding the virtualization platforms:

KVM is a full hardware virtualization platform (full HVM), while OpenVZ uses containerization; other platforms use para-virtualization.

The first approach, which basically emulates an entire virtual machine/platform, has the advantage of very high compatibility, even with operating systems not originally written with virtualization in mind (e.g. Windows). The cost is added overhead, which can be quite significant in some workloads. Specific para-virtualized drivers can be added to a full HVM setup, avoiding some of the overhead.
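For example, from inside a KVM guest you can usually tell whether such virtio paravirtualized devices are in use (assuming pciutils is installed; exact device names vary by distribution):

lspci | grep -i virtio
ls /sys/bus/virtio/devices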

Paravirtualization, on the other hand, requires guest OS collaboration (for example, in the form of hypercalls). In other words, the guest OS has to be adapted to run under the specific hypervisor/para-virtualizer, so a para-virtualized host cannot run arbitrary guest operating systems. The advantage is much lower overhead and, therefore, faster performance.
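As a side note, on systemd-based guests you can ask the OS which virtualization it detects (a quick sanity check, assuming the tool is present; it prints e.g. kvm, openvz or none):

systemd-detect-virt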

Containerization has even lower overhead than paravirtualization, as there is only a single OS instance; every VPS is a jail/chroot-on-steroids that duplicates the userspace parts while using the very same kernel as the "main" OS. This is at the same time its main strength and weakness: as it only duplicates user-space tools, overhead is very low. On the other hand, a single kernel is shared by all VPS images.
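You can observe this shared kernel directly: inside an OpenVZ container, uname -r reports the host node's kernel version, not one the container chose (a trivial check, assuming shell access):

uname -r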

Anyway, the differences between full HVM, paravirtualization and containerization are mainly visible in latency- and I/O-bound workloads. As your benchmark is a pure CPU stress test, the difference can be mainly attributed to the different CPU configurations (rather than to the different hypervisors).
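If you want to see where hypervisor overhead actually shows up, an I/O-bound test is more telling. A minimal sketch using the same legacy sysbench syntax as above (the 1G file size and 60-second runtime are arbitrary values; adjust them to your disk and patience):

sysbench --test=fileio --file-total-size=1G prepare
sysbench --test=fileio --file-total-size=1G --file-test-mode=rndrw --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-total-size=1G cleanup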

shodanshok
  • Interesting. It has been over 12 years since I last looked at OpenVZ. At that time, it used to be containerization, not paravirtualization; when did that change? – Jörg W Mittag Sep 02 '19 at 08:03
  • @JörgWMittag You are right: OpenVZ uses containerization rather than paravirtualization. I updated my answer; thanks. – shodanshok Sep 02 '19 at 08:15