Before I talk about LeftHand's VSA specifically, I'm going to zoom out and talk about iSCSI in general. If you're going to hook SQL Server up over 1 gig iSCSI, then your max storage bandwidth is roughly 100MB/sec - fairly low. (1 gigabit works out to 125MB/sec on paper, and protocol overhead eats a chunk of that.) It's pretty easy to saturate that with a handful of hard drives. SQL Server lives and dies by IO speed, so as a result, I don't see a lot of production SQL Servers running entirely on iSCSI. I love iSCSI, don't get me wrong, but it's pretty easy to hit the bandwidth ceiling.
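Here's the back-of-the-envelope math behind that ceiling, sketched in Python. The overhead percentage and the per-drive sequential throughput are rough assumptions for illustration, not benchmarks:

```python
# Back-of-the-envelope math for the 1 gig iSCSI bandwidth ceiling.

GIGABIT_BPS = 1_000_000_000                    # 1Gb Ethernet line rate, bits/sec

raw_mb_per_sec = GIGABIT_BPS / 8 / 1_000_000   # 125 MB/sec theoretical
overhead = 0.20                                # TCP/IP + iSCSI framing (rough assumption)
usable_mb_per_sec = raw_mb_per_sec * (1 - overhead)

drive_seq_mb_per_sec = 80                      # one SATA drive, sequential (assumption)
drives_to_saturate = usable_mb_per_sec / drive_seq_mb_per_sec

print(f"Usable iSCSI bandwidth: ~{usable_mb_per_sec:.0f} MB/sec")
print(f"Drives to saturate it:  ~{drives_to_saturate:.1f}")
```

Tweak the assumptions however you like - the point is that it only takes a drive or two doing sequential reads to fill the pipe.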
You can add multiple network ports and start doing multipathing, but you have to be careful: most multipathing solutions out there aren't really active/active - they're active/passive. You can do some delicate setup work to split out the load - for example, use one array for your data, one for your logs, and use a different network card as the "active" pipe for each array. However, that's a manual setup, and you have to keep staying on top of it by hand as your environment changes.
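To make the active/active-versus-active/passive distinction concrete, here's roughly what it looks like in a Linux dm-multipath config. This is a hypothetical /etc/multipath.conf fragment for illustration only - whether round-robin is actually safe depends on your array:

```
defaults {
    # Active/active: round-robin IO across every path.
    # Only works if the target actually supports it.
    path_grouping_policy multibus
    path_selector        "round-robin 0"

    # Active/passive alternative: one live path, the rest on standby.
    # path_grouping_policy failover
}
```

With failover, the second NIC sits idle until the first one dies - you get redundancy, but no extra bandwidth, which is why people resort to the manual data-on-one-array, logs-on-another split described above.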
Now, let's talk about LeftHand's VSA: not only are you facing these bandwidth limitations, but you're also going to lose some speed off the top due to the overhead of a software SAN implemented through virtualization. The network throughput is virtualized, the storage access is virtualized, and the CPU and memory are virtualized - whereas real SAN gear is built from the ground up for IO speed.
Does it work? Absolutely. Is it as fast as conventional SAN gear? No, and in some cases, it's not even as fast as direct attached storage.