You probably cut down your "cache time" when you shared the disks
Imagine that you have two applications "A" and "B":
Application "A" has a small database of only 40GiB, loads 1GiB/day, and most queries use data from the last few days. On a server with 20GiB of RAM dedicated to disk cache, nearly 20 days' worth of data will be in the disk cache and most reads will not even move a disk head.
Application "B", on the other hand, is a medium-sized archive of 2000GiB; it loads 20GiB of data every day and most queries read the whole thing sequentially. It is an archive used mostly for textual queries that are difficult to index, and the sequential read completes within a day anyway, which is enough for the application users. Like many archives, it is used only by auditors who do not need fast responses.
If you join the disks of these two servers on the same storage with the same 64GiB cache, applications "A" and "B" together move 21GiB of data per day, so the cache will hold at most 3 days of data. Before the merge, application "A" served most of its queries from RAM; now most of them need a physical disk read. Before the merge, application "B" had little concurrency with application "A" on disk accesses; now it has a lot.
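The back-of-the-envelope numbers above can be checked with a quick shell calculation (the figures are the ones from the example, not measurements):

```shell
# Application "A" alone: 20 GiB of cache / 1 GiB loaded per day
echo $(( 20 / 1 ))    # days of data the cache can hold before the merge

# Shared storage: 64 GiB of cache / (1 + 20) GiB loaded per day
echo $(( 64 / 21 ))   # days of data the cache can hold after the merge
```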
Got the idea?
Segmenting the disk caches is very important for performance because RAM is somewhere between 4 thousand and 4 million times faster than 15k disks for random access. Disks have to move the head to get the data; RAM does not. 15k RPM disks are a waste of money: they are about 2 times the speed of regular SATA drives for random access and cost way more than 2 times the price of SATA drives.
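As a rough sanity check on that ratio, assume a typical 15k disk random access of about 4 ms (4,000,000 ns) and RAM access somewhere between 1 ns and 1000 ns (these latency figures are assumptions, not measurements):

```shell
# 15k disk random access ~4 ms = 4,000,000 ns
echo $(( 4000000 / 1000 ))  # vs slow RAM path: ~4 thousand times faster
echo $(( 4000000 / 1 ))     # vs best-case RAM: ~4 million times faster
```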
About VMDK
My servers are very big and we had issues in the past with big VMs (700GiB RAM, for example) on VMware. We also had severe performance issues and unexplained crashes. For that reason we moved to KVM. I was not the manager of the virtualization server at the time, so I cannot say what was wrong with our VMware setup. But since we moved to KVM and I became the virtualization server manager, we have had no more issues.
I have some VM images on physical devices (SCSI passthrough) and some as .img image files (similar to VMDK with fixed size). People on the internet say SCSI passthrough is way faster, but for my usage patterns the performance is the same; if there is a difference, it is small enough that I cannot see it. The only catch is that when creating a new virtual machine we have to instruct KVM not to cache the disk access on the host operating system. I do not know whether VMware has a similar option.
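On KVM/QEMU that host-side caching is controlled by the disk cache mode; a minimal sketch of a guest launch with the host page cache disabled (the image path and memory size are placeholders):

```shell
# cache=none bypasses the host page cache for this guest disk,
# so the guest's own cache and the database's cache are not duplicated on the host
qemu-system-x86_64 \
  -m 4096 \
  -drive file=/var/lib/libvirt/images/dbserver.img,format=raw,if=virtio,cache=none
```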
My suggestions to you
1. Change storage strategy
Trade the storage arrays for internal disks. 24 internal SATA disks allow a big RAID 10 that will be way cheaper and faster than most storage arrays. There is a side benefit: for less cost you will have a surplus of disk space on those servers that can be used for cross backups and maintenance tasks.
But do not expose this surplus space to your users. Keep it to yourself. Otherwise it will be hell to make backups.
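As a sketch, a 24-disk RAID 10 can be built on Linux with mdadm; the device names (/dev/sdb through /dev/sdy) and mount point are assumptions, adapt them to your hardware:

```shell
# Build a RAID 10 array out of 24 disks (12 mirrored pairs, striped)
mdadm --create /dev/md0 --level=10 --raid-devices=24 /dev/sd[b-y]

# Put a filesystem on the array and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/data
```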
Use storages for things they are designed for:
- Centralized backup;
- Databases/archives that are too big to fit on internal disks;
- Databases/archives whose usage patterns are not accelerated by disk caches and that need more disk heads for performance than fit in internal disks or a dedicated storage array.
And... do not even bother to buy storage arrays with a lot of disk cache. Instead, put the money into increasing the RAM of the servers that use the storage.
2. Move RAM from the storage cache to the actual servers if possible
Assuming you have the same amount of cache RAM in your storage after the unification, you may have enough RAM overall. Try to move RAM from the storage cache to the actual servers in the proportions you had before, provided the RAM chips are compatible. That may do the trick.
3. No RAID 6 to mission critical databases
RAID 5 and 6 are the worst for database performance: every small write pays a read-modify-write penalty because the parity blocks must be recomputed. Move to RAID 10. RAID 10 doubles the reading speed because you have two independent copies of each sector that can be read in parallel.
4. Move the database log to a dedicated internal drive
I use Postgres, and moving the write-ahead log to a dedicated disk makes a lot of difference. The thing is, most modern database servers write the information to the log before writing it to the database data area itself. The log is usually a circular buffer and the writes are all sequential. If you have a dedicated physical disk, the head will always be in place for the next write, with almost no seek time even on a low-rotation drive.
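On Postgres this can be done by pointing the pg_wal directory (pg_xlog on versions before 10) at the dedicated disk with a symlink, with the server stopped; the paths and service name here are assumptions:

```shell
# Stop the server first, then relocate the WAL to the dedicated disk
systemctl stop postgresql
mv /var/lib/postgresql/data/pg_wal /mnt/waldisk/pg_wal
ln -s /mnt/waldisk/pg_wal /var/lib/postgresql/data/pg_wal
systemctl start postgresql
```

For a brand-new cluster, `initdb --waldir=/mnt/waldisk/pg_wal` achieves the same layout without the symlink.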
From what I read on the internet, MySQL uses the very same design.