Hmmmm.
Well, I have an interesting beast built on Citrix XenServer: FreeBSD 9.1 amd64 running with an HVM kernel.
I used PCI passthrough to expose the FC HBA and an Intel dual-port NIC to the FreeBSD HVM. The system boots from a small virtual disk provided by the hypervisor; everything else lives on LUNs provided by the SAN. Thus my zpools look like this:
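For anyone wanting to replicate the passthrough part, this is roughly what it looks like on the XenServer side. The PCI addresses and VM UUID below are placeholders, not my actual values; check `lspci` on your own host:

```shell
# On the XenServer host: find the PCI addresses of the HBA and the NIC
lspci | grep -i -e fibre -e ethernet

# Assign the devices to the HVM guest (comma-separated list of
# 0/DOMAIN:BUS:DEV.FN entries). Addresses here are examples only.
xe vm-param-set uuid=<vm-uuid> \
    other-config:pci=0/0000:04:00.0,0/0000:05:00.0

# Restart the VM for the change to take effect
xe vm-reboot uuid=<vm-uuid>
```

Once the VM comes back up, FreeBSD sees the HBA and the Intel ports as ordinary local hardware.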
  pool: local
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Feb 11 04:58:53 2013
config:

        NAME                     STATE     READ WRITE CKSUM
        local                    ONLINE       0     0     0
          raidz1-0               ONLINE       0     0     0
            multipath/DDN-v00p2  ONLINE       0     0     0
            multipath/DDN-v01p2  ONLINE       0     0     0
            multipath/DDN-v02p2  ONLINE       0     0     0

errors: No known data errors
  pool: nas
 state: ONLINE
  scan: scrub repaired 0 in 2h31m with 0 errors on Sun Feb 10 23:22:57 2013
config:

        NAME                   STATE     READ WRITE CKSUM
        nas                    ONLINE       0     0     0
          raidz1-0             ONLINE       0     0     0
            multipath/DDN-v03  ONLINE       0     0     0
            multipath/DDN-v04  ONLINE       0     0     0
            multipath/DDN-v05  ONLINE       0     0     0
            multipath/DDN-v06  ONLINE       0     0     0
            multipath/DDN-v07  ONLINE       0     0     0
          raidz1-1             ONLINE       0     0     0
            multipath/DDN-v08  ONLINE       0     0     0
            multipath/DDN-v09  ONLINE       0     0     0
            multipath/DDN-v10  ONLINE       0     0     0
            multipath/DDN-v11  ONLINE       0     0     0
            multipath/DDN-v12  ONLINE       0     0     0
          raidz1-2             ONLINE       0     0     0
            multipath/DDN-v13  ONLINE       0     0     0
            multipath/DDN-v14  ONLINE       0     0     0
            multipath/DDN-v15  ONLINE       0     0     0
            multipath/DDN-v16  ONLINE       0     0     0
            multipath/DDN-v17  ONLINE       0     0     0
          raidz1-3             ONLINE       0     0     0
            multipath/DDN-v18  ONLINE       0     0     0
            multipath/DDN-v19  ONLINE       0     0     0
            multipath/DDN-v20  ONLINE       0     0     0
            multipath/DDN-v21  ONLINE       0     0     0
            multipath/DDN-v22  ONLINE       0     0     0
          raidz1-4             ONLINE       0     0     0
            multipath/DDN-v23  ONLINE       0     0     0
            multipath/DDN-v24  ONLINE       0     0     0
            multipath/DDN-v25  ONLINE       0     0     0
            multipath/DDN-v26  ONLINE       0     0     0
            multipath/DDN-v27  ONLINE       0     0     0

errors: No known data errors
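The multipath/DDN-* devices come from GEOM_MULTIPATH, which collapses the two FC paths to each LUN into one stable node under /dev/multipath/. The setup is roughly as follows; the da device numbers are illustrative, not my actual layout:

```shell
# Label each LUN through one of its paths; gmultipath writes the label to
# the device and automatically attaches the second path when it shows up.
gmultipath label DDN-v03 /dev/da3
gmultipath label DDN-v04 /dev/da4
# ... repeat for the remaining LUNs ...

# Build the pool on the multipath nodes, one raidz vdev of five LUNs:
zpool create nas raidz multipath/DDN-v03 multipath/DDN-v04 \
    multipath/DDN-v05 multipath/DDN-v06 multipath/DDN-v07

# Further raidz vdevs are appended the same way:
zpool add nas raidz multipath/DDN-v08 multipath/DDN-v09 \
    multipath/DDN-v10 multipath/DDN-v11 multipath/DDN-v12
```

Building the pool on the multipath nodes rather than the raw da devices means a path failure is handled below ZFS and the pool never notices.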
And the NICs:
xn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=503<RXCSUM,TXCSUM,TSO4,LRO>
        ether f2:05:91:2c:bb:8a
        inet 10.1.3.6 netmask 0xffffff00 broadcast 10.1.3.255
        inet6 fe80::f005:91ff:fe2c:bb8a%xn0 prefixlen 64 scopeid 0x6
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet manual
        status: active
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
        ether 00:15:17:7d:13:ad
        inet 10.1.250.5 netmask 0xffffff00 broadcast 10.1.250.255
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        laggproto lacp lagghash l2,l3,l4
        laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
Notice the em interfaces in the lagg: those are the passed-through Intel ports. It's quite fast and works great. As long as your drives sit on a controller you can pass through to the VM, there's no real need to worry about the whole PVM situation.
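For completeness, a LACP lagg like the one above is set up in /etc/rc.conf along these lines (using the address from my output; adjust to taste):

```shell
# /etc/rc.conf fragment: bond the two passed-through Intel ports
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.1.250.5 netmask 255.255.255.0"
```

The switch ports on the other end need to be configured as an LACP (802.3ad) channel group for the laggports to reach the ACTIVE,COLLECTING,DISTRIBUTING state.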