I'll try as hard as I can to word this so it is not considered a shopping list.
We have been successfully running a dev/test ESXi environment for some time, with a couple of Dell PE2950III servers attached to an HP MSA2012fc Starter Kit (with the Brocade-based HP Class B SAN switch). This has worked very well for us but, being dev/test, it comes with various caveats around uptime and performance.
In any case, the perceived success of the dev/test platform has led to calls for a more 'production-ready' virtualisation platform. We are drafting the recommendations at the moment.
However, one of the complaints levelled at the existing stack is a lack of support for other virtualisation technologies (Hyper-V, Xen, etc.), as the SAN LUNs are fully allocated and formatted as VMFS. This is something we have been told to overcome but, as is typical, there is no indication of the likely uptake of Hyper-V/Xen (and we don't particularly want to waste the 'expensive' storage resource by allocating LUNs to hypervisors that may never be used).
As such, our current line of thinking is to forego the traditional fibre SAN in favour of a straightforward CentOS box (probably a higher-end HP ProLiant DL380p Gen8), running NFS and Samba/CIFS daemons, connected via a 10GbE switch (probably a Cisco Nexus 5000/5500-series).
The reasoning is that the ESXi heads could talk NFS and the Hyper-V heads could talk CIFS, with both ultimately pointing at the same XFS/RAID1+0 volumes.
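For illustration, here is a minimal sketch of the dual-protocol export I have in mind (the path and subnet are placeholders, and the option choices are guesses on my part rather than a tested configuration):

    # /etc/exports -- the shared XFS volume, exported to the ESXi heads
    # (sync vs async is one of the trade-offs I'm unsure about)
    /srv/vmstore  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)

    # /etc/samba/smb.conf -- the same volume, shared to the Hyper-V heads
    [vmstore]
        path = /srv/vmstore
        read only = no
        browseable = no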
Now, I'm not green enough to think that 10GbE is going to allow me to get true 10 gigabits of I/O throughput between the heads and the disks, but I don't know the kinds of overheads I can expect to see from the NFS and CIFS implementations (and any other bits that might interfere when more than one host tries to talk to it).
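To give a concrete idea of where I expect tuning to come into it, this is the sort of thing I assume would be involved on the CentOS box (the interface name and buffer values are placeholders, not a tested recipe):

    # Jumbo frames on the 10GbE interface (assuming eth2 is the 10GbE NIC;
    # the Nexus ports would need a matching MTU)
    ip link set dev eth2 mtu 9000

    # Larger TCP buffers for a high-bandwidth link (example values only)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # On a Linux NFS client, I gather rsize/wsize matter too; a test mount
    # might look something like:
    mount -t nfs -o rsize=1048576,wsize=1048576,hard,tcp server:/srv/vmstore /mnt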
I am hoping to get at least near the sustained read/write speeds of direct-attached disks, though, for as many hosts as I can. Looking at various drive manufacturers' websites, I roughly anticipate this to be somewhere around the 140-160MB/s mark (if I am way off, please let me know).
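To check whether that ballpark holds, I was planning to baseline the array locally before putting NFS/CIFS in front of it, with something like this (file size and block size chosen arbitrarily):

    # Sequential write test, bypassing the page cache
    dd if=/dev/zero of=/srv/vmstore/testfile bs=1M count=16384 oflag=direct

    # Sequential read test on the same file
    dd if=/srv/vmstore/testfile of=/dev/null bs=1M iflag=direct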
What recommendations/guidelines/further reading can anyone offer with regards to Linux/NFS/Samba or 10GbE switch configuration that might help attain this?