Context:
I have a SAN operating system (Nexenta) that runs on ESXi. It has a couple of HBAs already passed through to it via VT-d, and all of those HBAs are full (no free connectors). I recently purchased some SATA SSDs for Nexenta and attached them directly to the motherboard's on-board SATA controller.
I can add those new SSDs to the Nexenta VM by adding them as "physical disks" (Raw Device Mappings) in the VM profile in vSphere. Alternatively, I could connect them to one of the HBAs, but I'd have to disconnect/move existing disks, which would be a considerable hassle.
Question:
My question is: assuming that my HBAs don't do any fancy caching and have the same available bus bandwidth and SATA specification as the onboard controller (the one now hosting the new SSDs), is there a performance difference between attaching the physical disks to the VM via vSphere's disk-add (RDM) functionality and attaching them to an HBA that is passed through to the VM via VT-d? Does the RDM path impose some relaying/request-forwarding behavior in the hypervisor that could hold disk performance below native speeds?
Anecdotes are welcome as answers, but statistics are better: I know I probably won't notice a performance difference at first, since SSDs are fast. But I've been in the field long enough to know that if there is a problematic performance difference, it will surface during production-critical activity, at the worst possible time :)
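For reference, here is the kind of micro-benchmark I would run from inside a guest to gather those statistics: attach one SSD via RDM and one via the passed-through HBA, then compare random-read latency on each. This is only a sketch: the device path is a placeholder, it assumes a Linux-style O_DIRECT flag (on Nexenta/illumos you would use directio(3C) instead), and it needs root to read the raw device.

```python
import mmap
import os
import random
import statistics
import time

DEV = "/dev/sdX"   # placeholder: the SSD under test (RDM or passthrough)
BLOCK = 4096       # 4 KiB reads, aligned as O_DIRECT requires
SAMPLES = 10_000

# O_DIRECT bypasses the guest's page cache so we time the actual I/O
# path, not RAM. Linux-specific; illumos uses directio(3C) instead.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)

# An anonymous mmap yields a page-aligned buffer, another O_DIRECT requirement.
buf = mmap.mmap(-1, BLOCK)

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK
    t0 = time.perf_counter()
    os.preadv(fd, [buf], offset)  # positional read into the aligned buffer
    latencies.append((time.perf_counter() - t0) * 1e6)  # microseconds

os.close(fd)
latencies.sort()
print(f"p50={latencies[len(latencies) // 2]:.1f}us "
      f"p99={latencies[int(len(latencies) * 0.99)]:.1f}us "
      f"mean={statistics.mean(latencies):.1f}us")
```

If the RDM path does add a hypervisor forwarding hop, I'd expect it to show up as a consistent gap in the p50/p99 latencies between the two configurations.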