ZFS on Linux is unfortunately still not a viable solution, even if you set aside the fact that it runs as a FUSE module (which can seriously cramp performance on certain workloads). It simply isn't complete enough. Also, I don't think there's a debugfs-style tool for ZFS on Linux, which is a serious negative.
debugfs is the traditional name for the low-level filesystem repair tool on Unices. e2fsprogs includes one for ext2/3/4, the XFS tools have xfs_db, and so on. Other filesystems, especially long-established ones like FFS and JFS, have such tools too. It's basically a tool that lets you read and manipulate the data on a volume at a much lower level, which is especially useful in recovery.
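To give a feel for what I mean, here's a minimal sketch of scripting a read-only inspection with e2fsprogs' debugfs from Python; the device path /dev/sdb1 and the file being inspected are just placeholders, and you'd normally need root to open the device:

    # Read-only inode inspection via e2fsprogs' debugfs.
    # Assumes an ext2/3/4 volume at /dev/sdb1 (placeholder) and that debugfs
    # is installed; -R runs a single request, and without -w it opens read-only.
    import subprocess

    def stat_inode(device: str, path: str) -> str:
        result = subprocess.run(
            ["debugfs", "-R", f"stat {path}", device],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(stat_inode("/dev/sdb1", "/etc/fstab"))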
As for ext4, I'd suspect it's fairly usable in production, but I'd recommend actually simulating your workload on it first. Be wary of unsafe code paths in various applications that can corrupt data depending on ext4's settings (mind you, AFAIK those issues can happen on XFS and JFS as well).
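The unsafe pattern I have in mind (and I'm assuming this is the one most people hit) is applications that write a new file and rename it over the old one without an fsync in between, which can leave you with a zero-length file after a crash when delayed allocation is in play. A sketch of the safer version, in Python:

    # "Safe replace" pattern: fsync the data before the rename, then fsync
    # the containing directory so the new entry is durable too.
    import os

    def atomic_write(path: str, data: bytes) -> None:
        tmp = path + ".tmp"          # temp name alongside the target (illustrative)
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())     # make sure the data blocks hit the disk
        os.rename(tmp, path)         # atomic replace on POSIX filesystems
        dir_fd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dir_fd)         # persist the directory entry as well
        finally:
            os.close(dir_fd)

    atomic_write("config.json", b'{"example": true}')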
XFS is still a good, stable solution, though I'll admit I moved from XFS to ext4 because of XFS's lackluster create/unlink performance. It's still a very good choice if you don't have lots of small files being constantly created and deleted; hard numbers can be found in most benchmarks on the net. The slowdown is related to particular XFS optimizations that make certain journal operations (create/unlink) quite slow. It's very fast in metadata access and plain read/write, though. A good choice for big files, IMHO (multimedia editing?).
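If you'd rather get your own numbers than trust benchmarks on the net, a crude create/unlink microbenchmark takes a few minutes to throw together; the mount point, file count, and file size below are arbitrary, so point it at whatever filesystem you're testing:

    # Crude create/unlink microbenchmark: run it against a directory that
    # lives on the filesystem you want to test.
    import os
    import time

    def churn_small_files(directory: str, count: int = 10000, size: int = 1024) -> float:
        payload = b"x" * size
        start = time.monotonic()
        for i in range(count):
            path = os.path.join(directory, f"bench_{i}.tmp")
            with open(path, "wb") as f:
                f.write(payload)
            os.unlink(path)
        return time.monotonic() - start

    if __name__ == "__main__":
        elapsed = churn_small_files("/mnt/test")  # placeholder mount point
        print(f"10000 create+unlink cycles took {elapsed:.2f}s")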
Haven't really tested JFS, though I've heard rather good opinions about it - just check first that it has a debugfs-style tool you feel you can rely on.