I've been using FreeBSD 8.0, and subsequently the 8.0-stable February 2010 snapshot, to experiment with ZFS for a couple of months. The system has a couple of independent 4-disc RAIDZ1 pools. At first things seemed to go more or less perfectly, but I've since run into some increasingly disturbing problems which make me think that, under certain circumstances and configurations, it may be wise to avoid this setup.

My first problem is not necessarily with the stability or functionality of FreeBSD / ZFS themselves, but rather with the reliability and functionality of certain device drivers and disc drives under FreeBSD. I found that the default ata/ide driver didn't support the controller I'm using, but the siis Silicon Image storage driver had the SATA port multiplier support needed to make the drives work with FreeBSD 8. However, on closer inspection that driver code isn't really production ready IMHO: it didn't gracefully handle the first disc-related soft error / timeout / retry condition, which caused a drive in the array to do something like delay responding for a few dozen seconds. I don't know exactly what happened, but it took around a minute for the array to time out, reset, and re-establish operation, during which time every single drive in the array was 'lost' from operational status, resulting in an unrecoverable data fault at the higher filesystem level. AFAICT even the siis driver's maintainer says the driver's timeout / reset handling isn't fully completed / robust / optimized yet. Fair enough, but the point is that no matter how good the OS or ZFS is, an unreliable disc drive, controller, or driver can ruin overall operation badly enough to cause fatal errors and data loss despite ZFS. Also, SMART diagnostic requests don't seem to work with this particular controller driver. As for what caused the error: flaky Seagate drives / firmware? I don't know, but having one drive error take the whole array down despite the RAIDZ defeats the whole point of RAIDZ's reliability.
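For anyone with the same kind of controller, a minimal sketch of how I have the siis driver loaded on FreeBSD 8 (the dmesg check is just illustrative of confirming it attached):

    # /boot/loader.conf -- load the Silicon Image SATA (port multiplier capable) driver at boot
    siis_load="YES"

    # after reboot, confirm the controller and its ports actually attached
    dmesg | grep -i siis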
The behavior subsequent to the issue, with zpool scrub / zpool status etc., was also a bit suspicious, and it's not really clear whether that diagnostic / recovery process worked correctly at the ZFS / zpool level; certainly I got some mixed messages about error statuses and error clearing. The error indications more or less disappeared after a reboot despite the lack of an explicit zpool clear command; maybe that's intended, but if so it wasn't suggested by the zpool status output.
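For context, the sequence I ran was roughly the following ('tank' is a stand-in for my actual pool name):

    zpool scrub tank       # kick off a full read/verify pass over the pool
    zpool status -v tank   # watch scrub progress and per-device READ/WRITE/CKSUM error counters
    zpool clear tank       # explicitly reset the error counters once the cause is understood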
Potentially more seriously, something seems to have SILENTLY gone wrong during operation after a few days of uptime: large parts of the array, containing multiple ZFS filesystems, just "vanished" from directory listings (ls) and from normal I/O access. IIRC df -h, ls, etc. did not report the filesystems as even existing, whereas zpool list / zpool status continued to indicate the expected amount of consumed storage in the pool, storage that was not accounted for by any listed mounted or unmounted filesystem. /var/log/messages contained no error messages, and operations had been proceeding totally normally AFAICT prior to the problem. zpool list / zpool status did not indicate any problem with the pool.

A zfs unmount -a failed with a busy indication, for no obvious reason relating to interactive usage, for several minutes before the last of the mounted ZFS filesystems would unmount. Rebooting and re-checking /var/log/messages, zpool status, and zpool list revealed nothing. The previously missing filesystems did in fact remount when asked to do so manually, and initially appeared to have the correct contents, but after a minute or so of mounting various ZFS filesystems in the pool it was noticed that some had again disappeared unexpectedly.

It is possible that I've done something wrong in defining the ZFS filesystems and have somehow caused a problem, but at the moment I find it inexplicable that a working system doing I/O to various ZFS directories can suddenly lose sight of entire filesystems that were working fine minutes / hours / days earlier, with no intervening sysadmin commands modifying the basic zpool / zfs configuration.
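The checks I ran while the filesystems were missing looked roughly like this ('tank' again standing in for the real pool name):

    zfs list -o name,mounted,mountpoint   # every dataset the pool knows about, and whether ZFS thinks it is mounted
    df -h                                 # what the OS actually shows as mounted
    zpool list ; zpool status -v tank     # pool-level capacity and health, which looked fine throughout
    zfs mount -a                          # try to (re)mount anything not currently mounted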
Granted, I'm running the Feb '10 stable snapshot, which is NOT recommended for production use, but then again several relatively noteworthy fixes to known ZFS / storage issues have been committed to the stable branch since the 8.0 release, so running stock 8.0 might be unsatisfactory in terms of reliability / features for some people because of those issues.
Anyway, just a few weeks of fairly light testing have produced enough potentially disastrous reliability / functionality problems (not all of which seem attributable to the particular deficiencies of the storage drives / controller / driver) that I'm cautious about trusting FreeBSD 8.0 + ZFS for production / reliability use without a very carefully controlled hardware and software configuration and an offline backup strategy.
OpenSolaris is a no-go right now anyway IMHO, even if you wanted to run it: AFAICT there are serious known problems with ZFS deduplication that pretty much render it unusable, and that and other issues seem to have resulted in a recommendation to wait for a few more patch versions before trusting OpenSolaris + ZFS, especially with a dedup-enabled system. B135/B136 seem to have simply gone unreleased without explanation, along with the 2010.03 major OS release. Some say Oracle is just being tight-lipped about a schedule slip and that the expected code will be released belatedly, whereas others wonder whether we'll ever see the full set of features that were in development at Sun released as future open-source versions by Oracle, given the transition in ownership / leadership / management.
IMHO, for optimum ZFS reliability under FreeBSD 8 I'd stick with mirrors only, and only with very well vetted / stable storage controller drivers and disc drive models; I'd probably wait for 8.1 even so.
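As a rough illustration of what I mean by mirrors only (device names ada0..ada3 and the pool name are hypothetical), the layout would look something like this instead of a single raidz1 vdev:

    # two 2-way mirrors striped together; each mirror can lose one drive without data loss
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

Relative to a 4-disc RAIDZ1 you give up one disc's worth of usable capacity, but resilvering a mirror is a simpler, faster operation.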
This seems rather subjective and open-ended and would be better suited to a threaded discussion forum. Can you edit the question into a more concrete technical question? "Is it reliable" and "should I use" typically don't make for good questions here. – quack quixote – 2010-03-30T06:33:31.193