1

I bought a Dell PowerEdge at a local auction, mainly because it was too good a deal to pass up, but I only have experience with virtual servers via AWS and Linode (i.e. I'm new, so this may not make much sense).

I plan to install KVM on Ubuntu to spin up many virtual Linux servers to replace my many paid VPS instances, but I'm not sure how to best utilize all the resources of the server hardware. In particular, the specs are: Dell R720, 2x E5-2640 (12 cores total, 2.5 GHz), 1 RAID card (PERC H710P Mini - Embedded), 128 GB RAM, 8.2 TB disk space (2 SSDs @ 185 GB ea.; 13 HDDs @ 558 GB ea.). From what little I know, it seems that creating a virtual disk using both SSDs and HDDs is not recommended. The best option I could think of was to forsake the performance of the 2 SSDs and just use all the drives to create a single virtual drive, then install and run a single Ubuntu instance (chosen arbitrarily), on which I install KVM, then spin up the virtual servers via KVM.

Again, I'm not sure if I'm even making sense here, but if someone could help me figure out a good way to maximize utilization of my hardware via KVM, I'd appreciate it.

Edit: the closest thing I could find on serverfault was this question: can-i-mix-ssd-with-spindle-hdds

Edit2: or maybe KVM itself provides ways of discovering, connecting, and/or managing hardware resources?

dan
  • 35
  • 5
  • I'd say use the SSDs for [caching](https://www.kernel.org/doc/Documentation/device-mapper/cache.txt). But you have much to learn before you can make this decision. Start [here](https://serverfault.com/q/339128/126632). – Michael Hampton Apr 14 '19 at 18:19

2 Answers

3

It's entirely up to you, depending on your requirements in terms of storage capacity, performance, and redundancy.

Assigning multiple virtual drives to a single VM is possible: the VM sees each virtual drive as a separate block device (like /dev/vda and /dev/vdb).

Let's say your VMs should have SSD performance for system boot and program execution, plus slower but bigger storage for files (like media).

You can assemble the two SSDs into a RAID-1 array and install the host system on it. You then have a choice: use the whole array for the host system and store the VMs' SSD-backed drives in a directory (not recommended), or install the host system in a smaller partition (as small as possible, but with a margin) and use the remaining space for another partition mounted at /mnt/guests-ssd/ (recommended).

You can assemble all the HDDs into a single RAID-10 or RAID-5 array, create a single but very large partition on that array, and mount it at /mnt/guests-hdd/, for example.
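A minimal sketch of the host-side layout, assuming the PERC exposes the SSD RAID-1 array as /dev/sda and the HDD array as /dev/sdb (device names, partition numbers, and sizes below are placeholders; check `lsblk` on your machine):

```
# Hypothetical layout: host OS already installed on the first ~60 GiB of
# /dev/sda (the SSD array); /dev/sdb is the big HDD array.

# Carve the remaining SSD space into a partition for guest images:
parted /dev/sda mkpart guests-ssd ext4 60GiB 100%

# One big partition spanning the whole HDD array:
parted --script /dev/sdb mklabel gpt mkpart guests-hdd ext4 0% 100%

mkfs.ext4 -L guests-ssd /dev/sda3   # partition number depends on your layout
mkfs.ext4 -L guests-hdd /dev/sdb1

mkdir -p /mnt/guests-ssd /mnt/guests-hdd
echo 'LABEL=guests-ssd /mnt/guests-ssd ext4 defaults 0 2' >> /etc/fstab
echo 'LABEL=guests-hdd /mnt/guests-hdd ext4 defaults 0 2' >> /etc/fstab
mount -a
```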

Both RAID arrays benefit from redundancy, and the RAID-10 or RAID-5 array made from HDDs will get better read/write performance than a single HDD.

A first advantage of this architecture: the guests' drives are stored as files in two partitions/directories, /mnt/guests-ssd/ and /mnt/guests-hdd/, so the images can easily be transferred for backups or migrations. A second advantage is that the real HDD capacity is abstracted away: virtual drives in /mnt/guests-hdd/ can be smaller or bigger than 558 GB.
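For instance, with `qemu-img` (the guest name and sizes below are arbitrary examples):

```
# Small, fast root disk on the SSD partition; big data disk on the HDDs.
qemu-img create -f qcow2 /mnt/guests-ssd/vm1-root.qcow2 20G
qemu-img create -f qcow2 /mnt/guests-hdd/vm1-data.qcow2 800G

# qcow2 images are sparse: the 800G file only consumes space as the guest
# writes to it, and its virtual size is independent of any single HDD.
qemu-img info /mnt/guests-hdd/vm1-data.qcow2
```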

A disadvantage of this scenario: you don't have a lot of SSD capacity. If the SSDs are in a RAID-1 array, you will only get 185 GB of SSD storage for the host and guest systems. That is not a lot given the number of VMs you can create with so much RAM. There might be a disproportion between your resources (RAM vs. SSD vs. HDD), but it depends on your needs. If you want to create multiple VMs for storage, they will only require small SSD drives (just for OS + NFS/FTP/...) and large HDD drives. If you want many VMs with databases (lots of IOPS) or other disk-intensive applications, you should replace the two SSDs with bigger ones, and probably replace a few HDDs with SSDs. Or use caching solutions, as suggested by other people here.

Knowing that you can pass block devices to VMs, and that RAID arrays are presented to the host system as block devices, you can give your VMs direct access to a RAID array. The VM will not be aware of the RAID mechanism behind the block device, but this method is less flexible than the previous one: the block's size will be a multiple of the HDD size. For example (see the sketch after this list):

  • If one VM requires medium storage (500 GiB), you can create a RAID-1 array out of 2 HDDs and pass this block/array to the VM.

  • If one VM requires large storage, you can build a RAID-10 array with 4 HDDs, so it will get a virtual 1.1 TiB drive with redundancy and improved read/write performance (2x faster than a single HDD).

  • If one VM requires XL storage, you can build a RAID-10 array with 8 HDDs, so you will get a 2.2 TiB block with redundancy and improved read/write performance (4x faster than a single HDD).
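A rough sketch of the passthrough itself, assuming the PERC presents one of these dedicated arrays to the host as /dev/sdc (hypothetical name) and the guest is called vm1:

```
# Attach the whole block device to the running guest as /dev/vdb,
# and keep the attachment in the domain XML (--persistent):
virsh attach-disk vm1 /dev/sdc vdb --targetbus virtio --persistent

# Inside the guest, /dev/vdb looks like a plain drive; the RAID layer
# underneath is completely invisible to it.
```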

You can see that there are more choices to make and more configuration to do. There are very few scenarios that require this kind of setup.

> or maybe KVM itself provides ways of discovering, connecting, and/or managing hardware resources?

KVM does not manage storage/drives on the host. Libvirt lets you configure storage pools (local, over the network, ...), but it will not configure RAID (hardware or software) and it will not build your architecture for you; it will not make decisions in your place about how to plan your storage, network, nodes, and other resources.
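That said, you can register your two directories as libvirt storage pools, so tools like `virt-install` and `virt-manager` can allocate volumes from them (pool names are arbitrary):

```
virsh pool-define-as guests-ssd dir --target /mnt/guests-ssd
virsh pool-define-as guests-hdd dir --target /mnt/guests-hdd

for pool in guests-ssd guests-hdd; do
    virsh pool-build "$pool"      # creates the target directory if needed
    virsh pool-start "$pool"
    virsh pool-autostart "$pool"  # start the pool on host boot
done
```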

You can only have fun with this beast, playing with KVM/libvirt ;)

Dylan
  • 441
  • 2
  • 6
2

I agree that you should not mix SSDs and HDDs in the same array (PERC controllers do not even allow it, if I remember correctly).

I would create two arrays: an SSD one for OS and caching and an HDD one for raw image storage. I can see two different setups:

  • an lvmthin (HDDs in RAID-6 or RAID-10) + lvmcache (SSDs in RAID-1) based one, with classic ext4 and/or XFS filesystems (see the sketch after this list);
  • a ZFS-based one with an L2ARC partition on the SSDs. NOTE: the PERC H710 does not support non-RAID/passthrough disks, which means such a setup would be somewhat restricted compared to a native disk-passthrough case. You can create a RAID-6/10 main array used as a single vdev, or multiple RAID-0 arrays to be used in 2-way mirrored vdevs.
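A minimal sketch of the lvmthin + lvmcache variant, assuming the PERC exposes the HDD RAID-6/10 array as /dev/sdb and a spare SSD RAID-1 partition as /dev/sda3 (all names and sizes are placeholders):

```
# Both arrays go into one volume group (lvmcache requires the cache and
# origin to live in the same VG):
pvcreate /dev/sdb /dev/sda3
vgcreate vg_guests /dev/sdb /dev/sda3

# Thin pool on the HDDs for sparse, overprovisionable guest volumes:
lvcreate --type thin-pool -l 90%PVS -n thinpool vg_guests /dev/sdb

# Cache pool on the SSDs, then attach it to the thin pool
# (default cache mode is writethrough, the safer choice on power loss):
lvcreate --type cache-pool -L 100G -n cpool vg_guests /dev/sda3
lvconvert --type cache --cachepool vg_guests/cpool vg_guests/thinpool

# Per-guest thin volume, handed to the VM as a block device:
lvcreate --thin -V 50G -n vm1-root vg_guests/thinpool
```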

Anyway, take the above as generic suggestions only: you can (and should) adapt them to your specific redundancy and performance requirements (which are not described in your original post).

shodanshok
  • 44,038
  • 6
  • 98
  • 162