5

On my desktop computer I have VirtualBox, and I can run many concurrent VMs at near-native speed.

On my server, which is twice as powerful as my desktop computer, I have Debian + VMware Server 1.0 (because I don't like the Java bloat introduced with 2.0), and if I run a single VM, it runs at near-native speed. The real bottleneck is disk access speed: if I start TWO (yes, just 2!) VMs at the same time (read: whenever the server boots), the server is paralyzed for 40 minutes. 40 minutes to boot 2 Windows VMs! Completely useless! I had better performance when I installed Virtual PC on a 400 MHz Celeron! If I search for "vmware slow hdd access", I get tons of results, so I assume this is a huge VMware problem, right?

So I was considering one of these actions:

  1. Replace the server HDD with two SSDs in RAID 0
  2. Switch to Proxmox VE

Has anyone tried Proxmox? How much better is it? Will it fix the bottleneck? I don't have a spare server to experiment with, so if I wipe my server to play with Proxmox, I will lose at least 2 working days...
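
For reference, a quick way to confirm the bottleneck is I/O rather than CPU on the Debian host (just a sketch; iostat comes from the sysstat package and the 5-second interval is arbitrary):

vmstat 5       # the "wa" column is the share of CPU time spent waiting on I/O
iostat -x 5    # per-device utilisation (%util) and average wait times while the VMs boot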

Magnetic_dud
  • What does twice as powerful mean? What VMs are you starting (usage, not OS)? Is it I/O bound or CPU bound? Do you have the VMware drivers installed? Just remember RAID 0 is short for the number of files you can recover in case of a single failure – Martin M. Jul 21 '09 at 08:04
  • I am running empty WinXP machines (for testing server speed; a fresh install that would normally boot in 40 seconds); of course I have the VMware drivers installed. The HDD setup is RAID 1. CPU maxes out at 11%, so it is an I/O problem – Magnetic_dud Jul 21 '09 at 08:11
  • PS: "Replace the server HDD with two SSDs in RAID 0" is ironic :-P – Magnetic_dud Jul 21 '09 at 08:19
  • @Server Horror +1 for this. "Just remember RAID 0 is short for the number of files you can recover in case of a single failure" – egorgry Jul 21 '09 at 13:35
  • +1 to both, you are perfectly right about the RAID 0, I was just kidding. Every server should show a nag screen at every logon explaining the risks of RAID 0, haha – Magnetic_dud Jul 22 '09 at 19:46

8 Answers

9

I have seen this behaviour when I assign too much memory to the VMs. When I start a VM that grabs memory from the host OS above some threshold, everything dies except the hard drive LED. It takes an age just to shut down the VM.

Fine-tuning the memory footprint of the VMs has done wonders for me.
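
As a minimal sketch of what that tuning looks like (assuming VMware Server's per-VM .vmx configuration file; the 512 MB figure is just an example), the guest's memory is set with the memsize entry:

memsize = "512"

Keeping the total memory of all running guests comfortably below the host's physical RAM means the host never has to swap while the VMs boot.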

Hans Malherbe
  • I've had a very similar experience with Parallels on OS X. I dropped the memory to 640 MB and my Fedora VM install became usable. – egorgry Jul 21 '09 at 13:37
  • Fine-tuning the page file also helps. Size it close to what is actually being used. I've found "most" VMs run better with one vCPU as well (see the sketch after these comments). – Alan Jul 21 '09 at 15:28
  • I can second Alan's comment: one vCPU is better than two – Josh Jul 21 '09 at 18:12
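
As a sketch of the vCPU suggestion from the comments above (this is the standard VMware .vmx key; whether a single vCPU actually helps depends on the guest workload):

numvcpus = "1"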
6

It sounds like something is seriously wrong with your setup because there's just no way it should take 40 minutes for a couple of VMs to boot.

If disk I/O is the issue, your best bet is to add drives and dedicate a drive (or RAID array) to each VM.
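
As a sketch of what that looks like with VMware Server on Linux, assuming a second drive mounted at /mnt/vmdisk2 (a hypothetical path) and a SCSI virtual disk: move the .vmdk there and point the VM's .vmx entry at the new location (the key is ide0:0.fileName for IDE disks):

scsi0:0.fileName = "/mnt/vmdisk2/winxp/winxp.vmdk"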

John Gardeniers
5

Booting two VMs from the same hard drive will cause drive thrashing (the heads jumping from place to place, consuming more time than actually reading data), especially if the host OS is on the same drive. Boot them separately to avoid this thrashing, and your total boot time will be lower.

I always try to put my VMs on separate drives and then do not perform concurrent actions on any that share a drive (spindle) with other VMs/OSs.
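
A sketch of staggering the startup with the vmware-cmd utility that ships with VMware Server 1.x (the .vmx paths are hypothetical; adjust the delay to however long one guest needs to settle):

vmware-cmd /var/lib/vmware/vm1/vm1.vmx start
sleep 120   # let the first guest finish booting before the second one hits the disk
vmware-cmd /var/lib/vmware/vm2/vm2.vmx start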

Theune
3

Yes, VMware Server's disk I/O performance is generally pretty ordinary. I use KVM on my desktop for local virtualisation, and we use a mix of Xen and VMware ESX for datacentre virtualisation, while keeping a close eye on KVM for that role too.

womble
3

Have you installed the VMware drivers in the guest OS? If not, do so.

Thomas
  • The VMware drivers won't get loaded until fairly late in the boot process, if I remember correctly, and if it takes 40 minutes just to get control of the host back, I don't know how much this will help. It's still good practice to install them anyway, of course. – Mark Henderson Jul 22 '09 at 22:12
3

Make sure you have a fast disk subsystem. For years I was running four VMs on VMware Server 1.0 with no issues. I've just upgraded to 2.0, so I'll let you know how that goes, but so far no issues there either.

One thing that helped me considerably with I/O was switching from RAID 1 to RAID 10. Night and day difference.
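
If you are using Linux software RAID, a rough sketch of creating such an array (four hypothetical disks; with a hardware controller this is done in the controller's BIOS instead):

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde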

The other thing you could try is adding the following lines to the VMware Server config file:

prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"

And the following to your .vmx files:

sched.mem.pshare.enable = "FALSE" 
mainMem.useNamedFile = "FALSE" 
MemTrimRate = "0" 
MemAllowAutoScaleDown = "FALSE"

See this post on the VMware forums.

Josh
  • Yes, I saw and tried that; it made a noticeable difference, but it didn't solve the problem – Magnetic_dud Jul 22 '09 at 19:33
  • You may want to try RAID 5 or RAID 10 instead of RAID 1. I too had issues with RAID 1. If you can't add one or two extra disks, try staggering the boot times of the VMs by 5 minutes or so – Josh Jul 22 '09 at 20:04
3

Well, you might not believe it, but I wiped my server (it was only 4 days old, so there was no important data on it yet) and installed the Proxmox VE distribution (Debian 5.0 + QEMU/KVM + OpenVZ).

Wow! It is dramatically faster than VMware on Debian!

There is a difference, though; let me explain:

VMware is good at managing RAM: the unused RAM of one VM is left free for the other VMs. But disk I/O makes a VM hang while it waits for the emulator to write to the HDD. So, if your VMs actually use the HDD, unless you have a RAID 0+1 set or a dedicated physical HDD for each VM, you will be disappointed by the performance.

QEMU/KVM, on the other hand, doesn't share the unused RAM between guests, or does it far less effectively than VMware (as far as I can tell from the web UI of both), but I think QEMU caches the I/O in RAM and writes it to the HDD later (in the web UI there is a percentage indicator, "IO delay: 5%"). The performance gain is really significant!
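
The behaviour described here matches QEMU's writeback disk cache, which can be requested per drive. As a rough sketch of a manual invocation (the image path and memory size are placeholders, and Proxmox generates its own command line rather than this one):

qemu-kvm -m 512 -drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,cache=writeback

Note that writeback caching trades safety for speed: data still sitting in host RAM is lost if the host goes down before it reaches the disk.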

Magnetic_dud
0

I'm still working on the bottleneck myself, but I went into the VM's BIOS and disabled all the memory and legacy settings, and a 10-minute Vista boot dropped to almost normal (for a VM, anyway). It's still horribly laggy on disk reads and writes, but at least the machine works now. I also reduced the VM's memory from 1 GB to 512 MB. My guess is that the BIOS caching was the cause of the slow boots (the disk problem I'm still working on). The desktop works well, but disk access is... BAD.

mxdog