I know I have had developers who dislike the notion of VMs, claiming major performance hits.
You may only hear about the developers because their machines were the low-hanging fruit.
There are many applications where the performance is simply not adequate. Ask someone running 200 concurrent users per Citrix server how they would like to virtualize. Sure, there are a few case studies where it works for lightly used, well-behaved applications, but all it takes is one IE process pegged at 100% utilization to take out a single-CPU guest.
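To see how little it takes, here is a minimal sketch of that failure mode. The busy loop stands in for the hung IE process; the 30-second window and the process names are just illustrative, not from any real incident:

```python
# Simulate the "one runaway IE process" scenario on a single-vCPU guest.
# On a one-CPU VM this busy loop starves every other process in the guest;
# the hypervisor cannot protect a guest from its own runaway workload.
import multiprocessing
import time

def busy_loop():
    while True:
        pass  # spin at 100% CPU, just like a hung browser process

if __name__ == "__main__":
    p = multiprocessing.Process(target=busy_loop)
    p.start()
    time.sleep(30)   # for these 30 seconds, everything else on a 1-vCPU guest crawls
    p.terminate()
```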
The same performance barrier exists for an Exchange server hosting 5,000+ mailboxes or a SQL Server processing hundreds or thousands of queries per second. How much would they benefit from virtualization? Probably not at all.
Also consider costs. How much more expensive is it to split one physical server into multiple guests just to achieve the same performance? Between the additional Windows OS licenses and any software that is licensed per server, the costs can be prohibitive, and more server instances mean more administrative effort.
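As a back-of-the-envelope illustration of the kind of arithmetic I mean, here is a rough sketch. Every figure is a hypothetical placeholder, not a real price quote:

```python
# Hypothetical cost comparison: one physical server vs. N guests delivering
# the same aggregate performance. All dollar figures are made-up placeholders.
HARDWARE = 8000          # one physical box, bought once either way
OS_LICENSE = 700         # per Windows Server license (hypothetical)
PER_SERVER_APP = 1500    # per-server-licensed software (hypothetical)
ADMIN_PER_SERVER = 500   # yearly admin effort per server instance (hypothetical)

def total_cost(guests):
    # Hardware is a one-time cost; OS, app, and admin costs scale per instance.
    return HARDWARE + guests * (OS_LICENSE + PER_SERVER_APP + ADMIN_PER_SERVER)

print("1 physical server:", total_cost(1))    # 10700
print("4 guests, same work:", total_cost(4))  # 18800 -- per-server licensing erases the savings
```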
Server virtualization was originally intended to consolidate servers that are lightly used or have copious amounts of idle time, and that is a good strategy. Anyone pursuing a broader strategy had better make sure it actually works. Metrics like "how many servers can we virtualize" ring false; the objectives should be finding a good fit and reducing costs.
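For the consolidation case the arithmetic runs the other way. A rough sizing sketch, assuming you have measured average CPU utilization on each candidate server (the utilization numbers and the 60% headroom budget are my own made-up examples):

```python
# Rough consolidation sizing: pack lightly used servers onto one host.
# Utilization figures are invented examples of "copious idle time".
avg_cpu = [0.05, 0.08, 0.03, 0.10, 0.06, 0.04]  # measured averages per candidate server

HEADROOM = 0.60  # never plan past 60% of the host; leave room for spikes

total = sum(avg_cpu)
print(f"combined average load: {total:.2f} of one host's CPU")
print("fits under headroom:", total <= HEADROOM)
# 0.36 <= 0.60 -- six mostly idle boxes comfortably become six guests on one host.
# The Citrix, Exchange, and SQL boxes above would blow this budget on their own.
```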