
The company I work for is currently using a bunch of "commercial VPSes" which, in my point of view, are really overpriced for what they offer.

Having some basic and limited knowledge of virtualization, I was wondering whether taking 6-7 dedicated servers, installing Proxmox on them, and then creating a bunch of VMs (which is, correct me if I'm wrong, a VPS) would be better. On the financial side this would be largely advantageous, and probably also on the technical side, since all of our current VPSes are unmanaged anyway. So I would like to give it a try by creating a bunch of VMs and testing the stability.

Now that I have the server and Proxmox up and ready, I was wondering how many VMs could be created on this server config without overloading it (something like what a VPS hosting company would do in the real world), without considering what would actually run on them, just "general" use.

If they are all "VPSes" (VMs) with 1 GB of RAM each, how many VMs will I be able to create? Do hosting companies count the RAM you get against the real RAM of the main host, or do they count it as virtual RAM?

If it is real RAM from the main host, will 32 VMs with 1 GB of RAM and ??? vCPUs each run nicely?

If it is virtual RAM, how many VMs can I create/run?

** Sorry for using the word VPS; I read that it is a marketing term for a VM, but since I'm not from this area of IT, I'm not 100% sure they are all the same thing.

Intel Xeon D-1540, 8c/16t @ 2 GHz, 32 GB DDR3 ECC 2133 MHz, SoftRAID 4x 2 TB SATA

Thanks a lot

Wtrnd
  • Is your company using VMs for its own services? Do they really need KVM-level isolation? Convert every service into a Docker container and run them under an orchestrator. Docker containers share memory, so you'll save a lot on hardware. Managing a container is less effort than managing a VM, so you'll save a lot on maintenance costs. – AlexD Dec 18 '21 at 09:37
  • Maybe you should start by asking why they're using these VPSes in the first place, rather than assuming you've found a problem and proposing a solution to something that may not be a problem. – joeqwerty Dec 18 '21 at 17:31

1 Answer


In general, the larger the hypervisors and the more diverse the workload, the more you can overcommit when assigning resources to customers/VMs.

On most hypervisors you can't really overcommit memory all that much, because things will fall apart somewhat catastrophically when all VMs try to actually use their assigned memory at the same time. That means that on a physical server with 32 GB of RAM you can reliably host roughly 16 VMs with 2 GB of RAM assigned to each, or two VMs with 16 GB each. Basic math. Maybe more if you have a much better idea of your average and peak workloads.
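As a rough illustration of that arithmetic, here is a minimal Python sketch. The 2 GB reserved for the hypervisor itself and the default overcommit ratio of 1.0 are assumptions for the example, not fixed rules:

```python
# Back-of-the-envelope memory sizing: how many VMs of a given size fit on a host.
# The 2 GB hypervisor reserve and the 1.0 overcommit ratio are assumptions.
def max_vms_by_memory(host_ram_gb, vm_ram_gb, hypervisor_reserve_gb=2, overcommit_ratio=1.0):
    usable_gb = (host_ram_gb - hypervisor_reserve_gb) * overcommit_ratio
    return int(usable_gb // vm_ram_gb)

print(max_vms_by_memory(32, 1))  # 30 x 1 GB VMs on the 32 GB host from the question
print(max_vms_by_memory(32, 2))  # 15 x 2 GB VMs, close to the ~16 mentioned above
```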

Whether overcommitting CPU makes sense really depends on your CPU load. With 16 threads available on the hypervisor you can create 16 VMs with 1 vCPU each, or two with 8 vCPUs each, and be sure that there will almost never be any resource contention. Again, the math is elementary.
But when the average CPU load of each VPS is low and their peak loads don't coincide, you can assign all of them two vCPUs (an overcommit ratio of 2) and that might still work out for you.
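The same back-of-the-envelope approach works for vCPUs. A minimal sketch, assuming the 8c/16t host from the question and treating each hardware thread as one schedulable vCPU (a simplification):

```python
# Back-of-the-envelope CPU sizing: how many VMs of a given vCPU count fit on a host
# at a chosen overcommit ratio (1.0 = no overcommit, 2.0 = twice as many vCPUs as threads).
def max_vms_by_vcpu(host_threads, vcpus_per_vm, overcommit_ratio=1.0):
    return int((host_threads * overcommit_ratio) // vcpus_per_vm)

print(max_vms_by_vcpu(16, 1))                        # 16 single-vCPU VMs, no contention expected
print(max_vms_by_vcpu(16, 2, overcommit_ratio=2.0))  # 16 dual-vCPU VMs, only works if peaks don't coincide
```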

Note that if you want to create your own hosting platform, then in addition to the hypervisors you generally also need a better management solution than just a collection of pets, as well as network and storage capacity.

Bob