I know this thread is ancient, but things have changed quite a bit since then. (E.g. the state of ZFS-FUSE and in-kernel options, the arguable disappearance of "Open" Solaris, etc.)
First of all, the kernel port of ZFS won't necessarily perform much better than ZFS-FUSE "without a doubt". That reply echoes the common misconception that FUSE filesystems always perform worse than in-kernel ones. (In short: all else being equal, an in-kernel filesystem should perform better in theory, but many other factors affect performance with far bigger impact than kernel vs. user space.) That said, benchmarks do show ZFS-FUSE being significantly slower than native ZFS (or Btrfs) in some cases. For my uses, though, it is fine.
Ubuntu now has an "ubuntu-zfs" package through their PPA repository system, which is just a nice packaging, with automatic module building, of the native zfs-on-linux project. It runs in kernel space and currently supports a higher zpool version than zfs-fuse.
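If you want to try the kernel port on Ubuntu, the setup is roughly as follows. (A sketch only: the PPA and package names here reflect the zfs-native project as of this writing and may change, so verify them against the zfs-on-linux project page before running anything.)

```shell
# Add the zfs-native PPA and install the ubuntu-zfs meta-package,
# which builds the kernel modules automatically via DKMS.
# (PPA/package names are assumptions -- check the project page first.)
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs

# Confirm the module loads and see which zpool versions are supported.
sudo modprobe zfs
sudo zpool upgrade -v | head
```

The `zpool upgrade -v` output is an easy way to compare the supported pool version against what zfs-fuse offers.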
I used to run OpenSolaris on a big redundant 20 TB server, and now run Oracle Solaris 11 on it. Solaris brings significant problems and challenges (especially if you're more comfortable configuring and administering Linux than old-school UNIX), and Oracle has drastically changed many of the hardware-management and other configuration interfaces between OS versions and even updates, making it an often highly frustrating moving target (even after you finally master one version, the next upgrade changes things again). But with the right (compatible) hardware and a lot of patience for changes, learning, and tweaking, it can be an amazing choice in terms of the file system.
One more word of advice: don't use Solaris's built-in CIFS (in-kernel SMB) support; use Samba instead. The built-in support is broken and may never be ready for prime time. The last time I checked, there were plenty of enterprise installs using Samba, but not a single one using the built-in CIFS service, due to permissions-management nightmares.
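To make that concrete, here is what a minimal Samba share of a ZFS dataset might look like, instead of relying on the built-in service. (A sketch only: the share name, path, and user are made-up placeholders; adjust to your own dataset's mountpoint.)

```
# /etc/samba/smb.conf (fragment) -- share a hypothetical ZFS dataset
[tank_share]
    path = /tank/share        ; mountpoint of the ZFS dataset
    read only = no
    valid users = myuser      ; ordinary Unix permissions apply, no CIFS ACL headaches
```

Reload Samba after editing, and manage permissions with the usual Unix tools on the dataset's mountpoint.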
I also use ZFS-FUSE on Ubuntu on a daily basis (on a personal workstation), and have found it to be rock-solid and an awesome solution. The only problems I can think of that are specific to ZFS-FUSE are:
You can't disable the ZIL (the synchronous write log), at least not without setting a flag in the source code and compiling yourself. By the way, contrary to a common misconception, disabling the ZIL will not cause you to lose your pool on a crash; you just lose whatever was being written at the time, which is no different from most filesystems. That may not be acceptable for mission-critical server scenarios (in which case you should probably be running native Oracle Solaris anyway), but it's usually a very worthwhile tradeoff for workstation/personal use-cases. For a small-scale setup, the ZIL can be a huge write-performance problem, because by default it lives on the pool itself, which can be quite slow, especially with a parity-stripe layout (RAIDZx). On Oracle Solaris, disabling it is easy; IIRC it's the "sync" property (a dataset property, so setting it on the pool's root dataset covers everything that inherits from it). (I don't know whether it can be easily disabled on the native Linux kernel version.)
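On Solaris (and any port whose zpool/zfs version has the property), disabling it looks something like this. (A sketch: "tank" is a placeholder pool name.)

```shell
# Check the current setting, then disable synchronous write logging.
# WARNING: on a crash you lose the last few seconds of in-flight
# writes -- but not the pool itself.
zfs get sync tank
sudo zfs set sync=disabled tank   # applies to tank and inheriting child datasets
```

Setting it back is just `zfs set sync=standard tank`.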
Also, ZFS-FUSE's zpool version isn't high enough to support the better pool-recovery options of more recent versions, so if you do decide to offload the ZIL to, say, one or more SSDs or RAM drives, be wary (and always mirror it!). With older zpool versions, losing the ZIL almost certainly meant losing the entire pool (this happened to me, disastrously, back on OpenSolaris). More recent zpool versions on Oracle Solaris have mitigated that problem; I haven't been able to determine whether the kernel-level Linux port has incorporated that mitigation.
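If you do offload the ZIL to dedicated devices, the safe way is a mirrored log vdev, e.g. (pool and device names are hypothetical; double-check the device paths, since `zpool add` is hard to undo on old pool versions):

```shell
# Attach a mirrored pair of SSDs as a dedicated ZIL (log vdev).
sudo zpool add tank log mirror /dev/sdb /dev/sdc

# Verify: the devices should appear under a "logs" section.
sudo zpool status tank
```

With the mirror, a single failed log device no longer takes the pool's ZIL with it.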
You can also safely disregard the "ZFS ARC bug" alarm that one poster seemed to spam discussions with. My server gets hammered hard, as do countless production servers around the world, and I have never experienced it.
Personally, while I strongly dislike Solaris, ZFS is just amazing, and now that I've come to depend on its features, I can't do without it. I even use it on Windows notebooks (via a complex but very reliable virtualization setup, with USB drives velcroed to the lid).
Edit: A few minor edits for clarity, relevance, and acknowledging ZFS-FUSE performance limitations.