2

I have a beefy Linux server (32 GB RAM) with good hard drives at one of my hosting providers. I run a lot of webapps which use Varnish, nginx, the Unicorn app server, the Thin app server, Redis, MongoDB, and PostgreSQL. I don't expect a huge amount of traffic to the webapps. So my question is: with a good enough server, should I just run all the services on the bare-metal OS, or should I set up VMs and run a few services on them? Setting up VMs doesn't seem like a good idea performance-wise.

EDIT: It'd be great if someone had some numbers on this. I would never have thought of putting databases on VMs, as they are more IO-intensive. I don't have any numbers to support that, but I wanted to know whether anyone has deployed databases on VMs.
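For what it's worth, one crude way to get such numbers is to time synchronous small writes, which roughly approximate database commit latency, once on the host and once inside a guest. A minimal Python sketch (the block size and write count are arbitrary choices for illustration, not from any benchmark suite):

```python
import os
import tempfile
import time

def fsync_iops(directory=None, writes=200, block=4096):
    """Time fsync'd 4 KiB writes -- a rough proxy for database commit
    latency. Run once on bare metal and once inside a VM to compare."""
    fd, name = tempfile.mkstemp(dir=directory)
    buf = b"\0" * block
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, buf)
            os.fsync(fd)  # force the write to stable storage, like a DB commit
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(name)
    return writes / elapsed  # fsyncs per second

if __name__ == "__main__":
    print(f"{fsync_iops():.0f} fsyncs/sec")
```

Point `directory` at the same filesystem your database would use; the host/guest ratio matters more than the absolute number.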

  • Just as a note: I would not call that powerful. As in: I just bought a micro-ATX machine that got 32 GB of memory. This is HALF (!) of what the motherboard can handle, and the memory cost not even 300 USD. It may seem powerful to you, but it is a lower-range machine these days. And the hard drives will be the bottleneck. I run a 64 GB virtualization server with 22 disks now (24 slots), and guess what overloads ;) The next hardware upgrade puts it into a 4U rack case with 72 disk slots. – TomTom Feb 09 '12 at 07:00
  • Not all of us are lucky to be working with huge hardware :) It's all relative, I moved from a small EC2 instance to this, so it's pretty big for me :) – Khaja Minhajuddin Feb 09 '12 at 07:57
  • Yeah. Just pointing out: 32 GB was very impressive when I got it on my first 64 GB machine, and it cost I think 2000 Euro or so for the RAM. These days it is nothing. Modern end-user motherboards go to 64 GB with inexpensive RAM, micro-ATX to 32 GB. – TomTom Feb 09 '12 at 08:27

2 Answers

5

My rule is simple: I virtualize everything except when the hypervisor gets in my way.

Even if I only put one VM on a box, at least I have the hardware abstracted, which comes in handy for high uptime (live migration to another machine), for disasters (cut down large servers, move others onto the same machine), and over the hardware's lifetime (upgrades don't involve low-level drivers; I can just move the VM to new hardware).

There are exceptions: systems that are time-sensitive. Data collection and decision-making in sub-millisecond space is not really workable on VMs, so certain workloads are off the table. Note that this does not include VoIP and the like; VoIP is mostly fine with some latency. Not so much when you start dealing with financial market data and trading, though.
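The clock instability that makes time-sensitive work painful on a guest is easy to observe directly. A minimal sketch (a hypothetical helper, not a proper test harness): compare wall-clock time against the monotonic clock; on a guest with an unstable virtual clock the divergence jumps around instead of staying near zero.

```python
import time

def worst_clock_drift(seconds=5.0, interval=0.5):
    """Sample wall-clock time against the monotonic clock and report the
    worst observed divergence in seconds. On bare metal this should stay
    tiny; an unstable guest clock shows visible jumps."""
    wall0, mono0 = time.time(), time.monotonic()
    worst = 0.0
    deadline = mono0 + seconds
    while time.monotonic() < deadline:
        time.sleep(interval)
        # how far wall-clock progress disagrees with monotonic progress
        drift = (time.time() - wall0) - (time.monotonic() - mono0)
        worst = max(worst, abs(drift))
    return worst
```

A skew of 1-2 seconds per minute, as described above, would show up here within a few samples; NTP corrections also appear, so run it with NTP paused to isolate the hypervisor's contribution.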


Update:

There is one other obvious case where you cannot virtualize: when your hardware is too powerful. At the moment, with Hyper-V for example, a machine needing more than 4 cores cannot be virtualized, as a VM only supports 4 cores. Simple decision. The next generation of Hyper-V will move to 32 virtual processors, but then when AMD comes out with 20-core CPUs and you have two of them... over the limit again. This IS mostly relevant for more powerful servers, agreed.

TomTom
  • 50,857
  • 7
  • 52
  • 134
  • I agree with TomTom: virtualize (although I disagree that there are some things you can't virtualize; there is simply no reason to run a bare-metal OS, as all modern hypervisors can handle RTOS requirements) – Jim B Feb 09 '12 at 05:50
  • 1
    ;) Sure about that? I run an NxCore / Nanex real-time financial feed, and on Hyper-V servers my VM-side clock was terribly unstable. Nanex never went into precision mode because it could not reliably calculate clock skew (jumping 1-2 seconds per minute forward and backward). I moved to physical hardware and now run a nice 37 ms/hour skew. – TomTom Feb 09 '12 at 06:04
  • The only other edge case is when you are using additional hardware to do things - support for various device passthrough is improving but in our experience often buggy and unstable. – Zanchey Feb 09 '12 at 06:56
  • Agreed, but this is quite rare. The more common scenarios are printers/scanners (get a proper one that is networked) and VoIP (but for that there are adapters that attach to the network). But you are right, this is another edge case when it comes up. Sadly. It would not be that hard to move USB devices over to a dedicated server; RDP does that with RemoteFX. That would fix most issues. – TomTom Feb 09 '12 at 06:58
  • I see the benefit of VMs in moving them around, which I don't think happens a lot (at least in my case). Has anyone done a performance comparison of a webapp running on VM services versus a webapp running on bare metal? – Khaja Minhajuddin Feb 09 '12 at 07:56
  • Overhead is minimal if you have proper hardware underneath; the overhead of virtualization is generally below 5%. The main issue is not "is it slower" but "how about the supporting systems". With multiple VMs, your disks will give up first: it's all random IO, with a lot of OSes hitting them at the same time. – TomTom Feb 09 '12 at 08:28
  • @TomTom - hmm, that's interesting. Most of the time, RTOS problems with virtualization are about "can I keep up with the incoming data without latency". Perhaps your time problem was an OS time-polling-interval problem, or a problem accessing the clock crystal via the hypervisor; I'll have to do some digging. I have a similar situation with manufacturing process software that adjusts the mix in real time, so I know that Hyper-V will allow the sensor feeds to come in in real time (I believe the latency was around 8-15 ms), but in that case time by itself is irrelevant; most important is synchronization. – Jim B Feb 09 '12 at 12:46
  • It could be, but anyhow, given the current limits on Hyper-V, we decided to go bare metal here on a 6-core hyper-threading machine for now ;) pending an upgrade to a 32-core AMD system, all beyond Hyper-V's capabilities at the moment. It is a single-use system anyway. – TomTom Feb 09 '12 at 12:49
0

Services on Host

  • No virtualization overhead (more raw performance, and direct hardware access);
  • Early startup during boot (e.g., DHCP or a firewall for the VMs);
  • Manual migration to another host;

Services on a VM

  • Virtualization overhead (but it's OK for "light" services);
  • Starts up only when the VMs start up;
  • Easy migration to another host;
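Whichever mix you choose, it helps to know which side of the boundary a given service landed on. A minimal Linux-only sketch (a heuristic, not authoritative: on x86 Linux guests the `hypervisor` CPU flag is typically exposed in `/proc/cpuinfo`):

```python
def running_in_vm():
    """Heuristic VM detection on x86 Linux: hypervisors set the
    'hypervisor' CPU flag, which the kernel reports in /proc/cpuinfo."""
    try:
        with open("/proc/cpuinfo") as f:
            return any(
                "hypervisor" in line
                for line in f
                if line.startswith("flags")
            )
    except OSError:
        # Not Linux, or /proc unavailable -- cannot tell
        return False

if __name__ == "__main__":
    print("guest" if running_in_vm() else "bare metal (or undetectable)")
```

Tools like `systemd-detect-virt` do this more thoroughly (DMI strings, container checks); this is just the one-flag version.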
Giovanni Toraldo
  • 2,557
  • 18
  • 27
  • -1. The first point under "Services on Host" is not relevant when the overhead is in the low single-digit percentages (2-3%), which is what you get with hardware virtualization. – TomTom Feb 09 '12 at 07:06
  • I was not questioning how much the overhead is, but the fact that there is one. Try to run a scientific data-mining app on an HPC cluster, and you will see that 3% of a 100-day workload unit is 3 days available for other purposes. You get the point? – Giovanni Toraldo Feb 09 '12 at 07:24
  • Yes, as in irrelevant. I generally ignore all fluctuations below 5% as not significant. In the example above, a little better programming might shave off 10 days ;) – TomTom Feb 09 '12 at 08:26