I accidentally overwrote both ZIL devices on the last version of OpenSolaris, which caused the entire pool to be irrecoverably lost. (A really bad mistake on my part! I didn't understand that losing the ZIL would mean losing the pool. Fortunately I recovered from backup, at the cost of some downtime.)
Since version 151a, though (I don't know offhand what zpool version that corresponds to), this problem has been fixed, and I can testify that the fix works.
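For anyone who hits the same situation on a post-151a system: as far as I know, a pool whose separate log devices have disappeared can now be imported anyway with the -m flag. A rough sketch only; the pool name and device names below are just placeholders:

```
# Hypothetical pool "tank" whose separate log (ZIL) devices were lost.
# A plain import will complain about the missing log device(s):
zpool import tank

# Import anyway, accepting the loss of any un-replayed log records:
zpool import -m tank

# Then drop the dead log device from the config and add a replacement:
zpool remove tank <old-log-device>
zpool add tank log <new-log-device>
```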
Other than that, I've lost ZERO data on a 20 TB server, including through several further cases of user error, multiple power failures, disk mismanagement, misconfigurations, and numerous failed disks. Even though the management and configuration interfaces on Solaris change frequently and maddeningly from version to version and present a significant, ever-shifting skills target, it is still the best option for ZFS.
Not only have I not lost data on ZFS (after my terrible mistake), but it constantly protects me. I no longer experience the data corruption that has plagued me for the last 20 years, on any number of servers and workstations, with the kind of work I do. Silent (or just "pretty quiet") data corruption has killed me numerous times: the data rolls off the backup rotation but has in fact become corrupt on disk, or the backups faithfully back up the already-corrupt versions. This has been a far bigger problem for me than losing data in a big way all at once, which is almost always backed up anyway. For this reason, I just love ZFS and can't comprehend why checksumming and automatic healing haven't been standard features in file systems for a decade. (Granted, truly life-or-death systems usually have other ways of ensuring integrity, but still: enterprise data integrity matters too!)
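To illustrate what I mean by checksumming and automatic healing: a periodic scrub makes ZFS re-read every allocated block, verify it against its checksum, and rewrite any bad copies from a good replica in the mirror or RAID-Z group. A minimal sketch (the pool name is a placeholder):

```
# Walk every allocated block in the pool, verify checksums, and
# self-heal any corrupt copies from redundant replicas.
zpool scrub tank

# Show scrub progress plus per-device read/write/checksum error counts.
zpool status -v tank
```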
Word to the wise: if you don't want to descend into ACL hell, don't use the CIFS server built into ZFS. Use Samba instead. (You said you use NFS, though.)
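For what it's worth, the practical difference as I understand it: the built-in server is enabled per dataset with zfs set sharesmb=on and pulls you into Windows/NFSv4-style ACL semantics, whereas Samba is driven by a plain smb.conf and behaves like any other Unix daemon. A bare-bones Samba share might look something like this (the share name and path are made up):

```
# /etc/samba/smb.conf - minimal sketch; share name and path are hypothetical
[projects]
    path = /tank/projects
    read only = no
    valid users = @staff
```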
I disagree with the SAS vs. SATA argument, or at least with the suggestion that SAS is always preferred over SATA for ZFS. I don't know whether that comment was referring to platter rotation speed, presumed reliability, interface speed, or some other attribute (or maybe just "they cost more and are generally not used by consumers, therefore they are superior"). A recently released industry survey (still in the news, I'm sure) revealed that SATA actually outlives SAS on average, at least within the survey's significant sample size. (That shocked me, that's for sure.) I can't recall whether that covered "enterprise" versions of SATA, or consumer models, or what speeds - but in my considerable experience, enterprise and consumer models fail at the same statistically significant rates. (There is the problem of consumer drives taking too long to time out on failure, which is definitely important in the enterprise - but it hasn't bitten me, and I think it is more relevant to hardware controllers that could take the entire volume offline in such cases. That's not a SAS vs. SATA issue, though, and ZFS has never failed me over it. As a result of that experience, I now use a mix of 1/3 enterprise and 2/3 consumer SATA drives.)

Furthermore, I've seen no significant performance hit with this mix of SATA when it's configured properly (e.g. a stripe of three-way mirrors), but then again I have a low IOPS demand, so depending on how large your shop is and your typical use cases, YMMV. I've definitely noticed that per-disk built-in cache size matters more for latency than platter rotational speed does, in my use cases.
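To give a concrete picture of the "stripe of three-way mirrors" layout I mentioned, here's a rough sketch; the pool and device names are invented, and each top-level vdev can mix enterprise and consumer SATA drives:

```
# Hypothetical pool: three top-level vdevs, each a three-way mirror.
# ZFS stripes writes across the three mirror vdevs.
zpool create tank \
    mirror c1t0d0 c2t0d0 c3t0d0 \
    mirror c1t1d0 c2t1d0 c3t1d0 \
    mirror c1t2d0 c2t2d0 c3t2d0
```

The appeal of this layout is that you get the aggregate IOPS of three vdevs while any single mirror can lose two of its three disks before data is at risk.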
In other words, it's an envelope with multiple parameters: cost, throughput, IOPS, type of data, number of users, administrative bandwidth, and common use-cases. To say that SAS is always the right solution is to disregard a large universe of permutations of those factors.
But either way, ZFS absolutely rocks.