It is entirely up to you, depending on your requirements in terms of storage capacity, performance and redundancy.
As you can guess, assigning multiple virtual drives to a single VM is possible. The VM sees each virtual drive as a separate block device (like /dev/vda and /dev/vdb).
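For illustration, here is one way to add a second drive with qemu-img and virsh; the guest name guest1, the image path and the 500G size are hypothetical, so adapt them to your setup:

```
# Create a new qcow2 image in libvirt's default image directory,
# then attach it to the guest as its second virtio disk.
qemu-img create -f qcow2 /var/lib/libvirt/images/guest1-data.qcow2 500G
virsh attach-disk guest1 /var/lib/libvirt/images/guest1-data.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent

# Inside the guest, the new drive appears as /dev/vdb.
```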
Let's say your VMs should have SSD performance for system boot and program execution, plus a slower but bigger storage for files (like media).
You can assemble the two SSDs in a RAID-1 array and install the host system on this array.
Here you can choose: use the whole array for the host system and store the VMs' SSD-backed images in a directory (not recommended), or install the host system in a smaller partition (as small as possible, with some margin) and use the remaining space for another partition mounted at /mnt/guests-ssd/ (recommended).
You can assemble all HDDs in a single RAID-10 or RAID-5 array, create a single but very large partition on the array, and mount it at /mnt/guests-hdd/, for example.
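A minimal sketch with mdadm, assuming the SSDs appear as /dev/sda and /dev/sdb and the eight HDDs as /dev/sdc through /dev/sdj (adjust to your real device names; the host install and SSD partitioning are usually done from your distribution's installer):

```
# RAID-1 across the two SSDs (host system + /mnt/guests-ssd partition)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# RAID-10 across all eight HDDs, one big filesystem for guests' images
mdadm --create /dev/md2 --level=10 --raid-devices=8 /dev/sd[c-j]
mkfs.ext4 /dev/md2
mkdir -p /mnt/guests-hdd
mount /dev/md2 /mnt/guests-hdd
```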
Both RAID arrays provide redundancy.
The RAID-10 or RAID-5 array made from HDDs will also deliver better read/write performance than a single HDD.
A first advantage of this architecture: guests' drives are stored as files in two partitions/directories, /mnt/guests-ssd/ and /mnt/guests-hdd/. The images can easily be transferred for backups or migrations.
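For example, a cold backup or a migration boils down to a file copy; guest1 and the backup-host destination below are hypothetical:

```
# Stop the guest so the image is consistent, then copy it elsewhere
virsh shutdown guest1
rsync -a /mnt/guests-ssd/guest1-root.qcow2 backup-host:/backups/
```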
A second advantage is that the real HDDs' capacity is abstracted away. Virtual drives in /mnt/guests-hdd/ can be smaller or bigger than 558 GB.
A disadvantage of this scenario: you don't have a lot of SSD capacity. With the SSDs in a RAID-1 array, you only get 185 GB of SSD storage for the host and the guests' systems. This is not a lot considering the number of VMs you can create with that much RAM.
There might be a disproportion between your resources (RAM vs SSD vs HDD), but that depends on your needs. If you want to create multiple VMs for storage, they will only require small SSD images (just for the OS + NFS/FTP/...) and large HDD ones.
If you want many VMs running databases (lots of IOPS) or other disk-intensive applications, you should replace the two SSDs with bigger ones, and probably replace a few HDDs with SSDs. Or use caching solutions, as suggested by other people here.
Knowing that you can pass block devices to VMs, and that RAID arrays are presented to the host system as block devices, you can give a VM direct access to a RAID array. The VM will not be aware of the RAID mechanism behind this block device, but this method is less flexible than the previous one: the block's size will be a multiple of the HDD size.
If one VM requires medium storage (500 GiB), you can create a RAID-1 array out of 2 HDDs and pass this block/array to the VM.
If one VM requires large storage, you can build a RAID-10 array with 4 HDDs, so it will get a virtual 1.1 TiB drive with redundancy and improved read/write performance (2x faster than a single HDD).
If one VM requires XL storage, you can build a RAID-10 array with 8 HDDs, so you will get a 2.2 TiB block with redundancy and improved read/write performance (4x faster than a single HDD).
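As a sketch of the XL case, assuming a dedicated array /dev/md3 built from eight HDDs not used elsewhere (device and guest names hypothetical):

```
# Build the dedicated RAID-10 array and hand the whole block device
# to the guest; no filesystem is created on the host side.
mdadm --create /dev/md3 --level=10 --raid-devices=8 /dev/sd[c-j]
virsh attach-disk guest1 /dev/md3 vdb --sourcetype block --persistent

# The guest sees /dev/vdb as a plain ~2.2 TiB disk, unaware of the RAID.
```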
You can see that there are more choices to make and more configuration to do. Very few scenarios actually require this kind of setup.
> or maybe KVM itself provides ways of discovering, connecting, and/or managing hardware resources?
KVM does not manage storage/drives on the host. Libvirt lets you configure storage pools (local, over the network, ...), but it will not configure RAID (hardware or software), it will not build your architecture for you, and it will not make decisions in your place about how you plan your storage, network, nodes and other resources.
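If you want libvirt to track the directories above, you can register them as directory-backed storage pools; the pool name is arbitrary:

```
# Define, start and autostart a pool backed by the HDD array's mount point
virsh pool-define-as guests-hdd dir --target /mnt/guests-hdd
virsh pool-start guests-hdd
virsh pool-autostart guests-hdd
```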
All that's left for you is to have fun with this beast, playing with KVM/libvirt ;)