
I currently have large KVM nodes using hardware RAID with RAID10 arrays of between 8 and 16 disks. We typically provision a single large volume (VMs are backed up off-site as well).

The KVM VMs use LVM-backed volumes.

Currently, on our 16-disk arrays, we're getting between 500 MB/s and 1.3 GB/s sequential write speeds at the VM level, using dd with file sizes of 512-2048. The host level is a consistent 1.4 GB/s, which, based on the write speeds of the individual disks, looks like it's maxing out the disks themselves.
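For reference, the dd runs look roughly like the sketch below; the test file path is a placeholder, and it assumes the 512-2048 file sizes above are in megabytes (the units aren't stated).

    # Sequential write test at the VM (or host) level, bypassing the page cache.
    # /mnt/test/ddfile is a hypothetical path; bs/count assume megabyte file sizes.
    dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=2048 oflag=direct conv=fsync
    rm /mnt/test/ddfile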

The hardware RAID cards have 2 GB of onboard RAM for caching.

///

To clarify, there aren't any performance issues in terms of disk I/O (nearly no I/O wait with about 15-20 VMs per node).
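For anyone wanting to check the same thing on their own hosts, a quick way to watch I/O wait and per-device utilization, assuming the sysstat package is installed:

    # Extended per-device stats every 5 seconds; watch %iowait in the CPU line
    # and %util per device.
    iostat -xz 5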

We're exploring adding PCIe SSD cards for caching, and would ideally like to be able to implement this on current systems as well as new ones.

We could go the LSI CacheCade route; no-brainer there. We could also go all-SSD; also a no-brainer. However, we'd like to add caching on top of the large SATA arrays, and would ideally like to use PCIe cards since they wouldn't require additional drive bays.

Any pointers on how to do this? There doesn't seem to be much info out there, and many vendor websites are terrible at describing how their products actually work.

Bjones

1 Answer


Also keep in mind the CPU overhead a software-based caching solution would add, since it has to keep maps of the hot data blocks in memory. It would also be dependent on the host system.
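For concreteness, a host-side (software) cache of this kind on the existing LVM-backed VM volumes would typically be built with lvmcache (dm-cache). A minimal sketch, using hypothetical names vg00 and lv_vm01 and a PCIe SSD at /dev/nvme0n1:

    # Add the PCIe SSD to the existing volume group.
    pvcreate /dev/nvme0n1
    vgextend vg00 /dev/nvme0n1

    # Create a cache pool on the SSD and attach it to one VM's logical volume.
    lvcreate --type cache-pool -L 200G -n lv_cache vg00 /dev/nvme0n1
    lvconvert --type cache --cachepool vg00/lv_cache vg00/lv_vm01

    # The cache can later be detached again, keeping the origin volume intact:
    # lvconvert --splitcache vg00/lv_vm01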

One may try using hybrid SSHDs to make the caching device/system independent.

I think the best option would be to use one or two SSDs for RAID-controller-managed caching, and a separate dedicated PCIe SSD for highly random read data.

Also make sure noatime/nodiratime are set on all host and VM filesystems. Directory structures benefit the most from being duplicated to flash, but flash hates small writes...
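A minimal example of those mount options; the fstab entry uses placeholder device and mount point names:

    # /etc/fstab entry (placeholder device):
    # /dev/vg00/lv_root  /  ext4  defaults,noatime,nodiratime  0  1

    # Apply to an already-mounted filesystem without rebooting:
    mount -o remount,noatime,nodiratime /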

Mark