Feedback on Using ZFS and FreeBSD

6

I need to create a server that will be used solely for backing up files. The server will have 2TB of RAID-5 storage to begin with, but I may want to add additional storage later on. As such, I am currently considering using FreeBSD + ZFS as the OS and file system.

Can anyone point to scenarios where they are using ZFS in a production environment and are satisfied with their choice? I have read that ZFS should be used with OpenSolaris over FreeBSD as OpenSolaris is usually ahead of the curve with ZFS as far as version updates and stability. However, I am not interested in using OpenSolaris for this project.

An alternative option that I am considering is to stick with ext3 and create multiple volumes if need be, because I know that I will not need a single, contiguous volume larger than 2TB.
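For reference, this is roughly what I picture doing with ZFS if expansion becomes necessary (the pool name and device names below are just placeholders, not my actual hardware):

    # Initial pool: one raidz vdev (ZFS's RAID-5 analogue) across three disks.
    zpool create backup raidz /dev/ada0 /dev/ada1 /dev/ada2

    # Later expansion: add a second raidz vdev; the pool grows in place,
    # with no reformatting or data migration.
    zpool add backup raidz /dev/ada3 /dev/ada4 /dev/ada5

With ext3 I would instead end up managing several separate ~2TB volumes.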

Thanks in advance for your feedback.

ToiletOverflow

Posted 2010-03-30T06:20:47.010

Reputation:

this seems rather subjective and open-ended and would be better suited to a threaded discussion forum. can you edit the question into a more concrete technical question? "is it reliable" and "should i use" typically don't make for good questions here. – quack quixote – 2010-03-30T06:33:31.193

Answers

4

I've been using FreeBSD 8.0, and subsequently the 8.0-stable February 2010 snapshot, to experiment with ZFS for a couple of months. The system has a couple of independent 4-disc RAIDZ1 pools. At first things seemed to go more or less perfectly, but I've since run into some increasingly disturbing problems which make me think that, under particular circumstances and configurations, it may be wise to avoid this setup.

My first problem is not necessarily with the stability / functionality of FreeBSD / ZFS themselves, but rather with the reliability and functionality of certain device drivers and disc drives under FreeBSD. I found that the default ata/ide driver didn't support the controller I'm using, but the siis Silicon Image storage driver had the port multiplier SATA support needed to make the drives work with FreeBSD 8. However, upon closer inspection that driver code isn't really production ready IMHO -- it didn't gracefully handle the first disc-related soft error / timeout / retry condition, which caused a drive in the array to do something like delay responding for a few dozen seconds. I don't know exactly what happened, but it took around a minute for the array to time out, reset, and re-establish operation, during which time every single drive in the array was 'lost' from operational status, resulting in an unrecoverable data fault at the higher filesystem level. AFAICT even the siis driver's maintainer says the driver's timeout / reset handling isn't fully complete / robust / optimized yet. Fair enough, but the point is: no matter how good the OS or ZFS is, an unreliable disc drive, controller, or driver can ruin overall operations badly enough to cause fatal errors and data loss despite ZFS. Also, SMART diagnostic requests don't seem to work with this particular controller driver.

As for what caused the error .. flaky Seagate drives / firmware? I don't know, but having one drive error take the whole array down despite the RAIDZ defeats the whole point of RAIDZ's reliability. The behavior subsequent to the issue with zpool scrub / zpool status et al. was also a bit suspicious, and it's not really clear whether that diagnostic / recovery process worked correctly at the ZFS/zpool level; certainly I got some mixed messages about error statuses and error clearing. The error indications more or less disappeared after a reboot despite the lack of an explicit zpool clear command; maybe that's intended, but if so it wasn't suggested by the zpool status output.
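For what it's worth, the diagnostic sequence I was working through looked roughly like the following (the pool name is a placeholder):

    # Check pool health and per-device error counters.
    zpool status -v tank

    # Start a scrub, which re-reads all data and verifies it against checksums.
    zpool scrub tank

    # Explicitly clear the error counters once the underlying fault is resolved;
    # in my case the counters seemed to reset on reboot without this step.
    zpool clear tank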

Potentially more seriously, something seems to have SILENTLY gone wrong during operation after a few days of uptime, wherein large parts of the array containing multiple file systems (ZFS) just "vanished" from being listed (ls) and from normal I/O access. IIRC, df -h, ls, etc. did not report the file systems as even existing, whereas zpool list / zpool status continued to indicate the expected amount of consumed storage in the pool -- storage that wasn't accounted for by any listed mounted or unmounted filesystems. /var/log/messages did not contain any error messages, and operations had been proceeding totally normally, AFAICT, prior to the problem. zpool list / zpool status did not indicate problems with the pool. A zfs unmount -a failed with a 'busy' indication, for no reason obviously related to interactive usage, for several minutes before the last of the mounted ZFS filesystems would unmount.

Rebooting and rechecking /var/log/messages, zpool status, and zpool list turned up no indication of any problem. The previously missing file systems did in fact remount when asked to do so manually, and initially appeared to have the correct contents, but after a minute or so of mounting various ZFS filesystems in the pool it was noted that some had again disappeared unexpectedly. It is possible that I've done something wrong in defining the ZFS filesystems and have somehow caused a problem, but at the moment I find it inexplicable that a working system doing I/O to various ZFS directories can suddenly lose view of entire filesystems that were working just fine minutes/hours/days earlier, with no intervening sysadmin commands modifying the basic zpool/zfs configuration.
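The checks I was running to compare ZFS's view of the datasets against what was actually mounted were roughly these (pool/dataset names are placeholders):

    # List every dataset in the pool along with its mount state and mountpoint.
    zfs list -r -o name,used,mounted,mountpoint tank

    # Compare against what the kernel actually has mounted.
    df -h

    # Attempt to mount everything that should be mounted; this is what brought
    # the 'missing' filesystems back for me, at least temporarily.
    zfs mount -a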

Granted, I'm running the Feb '10 stable snapshot, which is NOT recommended for production use; but then again, several relatively noteworthy fixes to known ZFS/storage issues have been committed to the stable branch since the 8.0 release, so running stock 8.0 might be unsatisfactory in terms of reliability / features for some people because of those issues.

Anyway, just a few weeks of fairly light testing have turned up enough potentially disastrous reliability / functionality problems -- not all of which seem attributable to the particular deficiencies of my storage drives / controller / driver -- that I'm cautious about trusting FreeBSD 8.0 + ZFS for production / reliability use without a very carefully controlled hardware and software configuration and an offline backup strategy.

OpenSolaris is a no-go right now anyway, IMHO, even if you wanted to run it -- AFAICT there are serious known problems with ZFS deduplication that pretty much render it unusable, and that and other issues seem to have resulted in a recommendation to wait for a few more patch versions before trusting OpenSolaris + ZFS, especially with a dedup-enabled system. Builds 135/136 seem to have simply gone unreleased, without explanation, along with the 2010.03 major OS release. Some say Oracle is just being tight-lipped about a schedule slip and that the expected code will eventually be released, whereas others wonder whether, given the transition in ownership / leadership / management, we'll ever see the full set of features Sun had in development released by Oracle as future open-source versions.

IMHO I'd stick with mirrors only, and only with very well-vetted / stable storage controller drivers and disc drive models, for optimum ZFS reliability under FreeBSD 8 -- and I'd probably wait for 8.1 even so.
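A mirror-based layout along the lines I'd trust more would look roughly like this (pool and device names are placeholders):

    # Two two-way mirror vdevs instead of raidz.
    zpool create backup mirror /dev/ada0 /dev/ada1 mirror /dev/ada2 /dev/ada3

    # Capacity can still be grown later by adding another mirror vdev.
    zpool add backup mirror /dev/ada4 /dev/ada5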

zuser

Posted 2010-03-30T06:20:47.010

Reputation: 1

3

I've been using FreeBSD+ZFS with over 2TB of storage for the past two years without any issues. However, I suggest you use FreeBSD 8 (amd64), which supports ZFS version 13.
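If you want to confirm which ZFS version a given install supports, and what an existing pool is running, something like the following works (the pool name is a placeholder):

    # List the ZFS pool versions this system supports, with their features.
    zpool upgrade -v

    # Check the version of an existing pool, and upgrade it if desired.
    zpool get version tank
    zpool upgrade tank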

Babak Farrokhi

Posted 2010-03-30T06:20:47.010

Reputation: 11

What's the rationale behind your recommendation? – None – 2010-03-30T16:14:33.003

FreeBSD 8 supports ZFSv13, which adds ZFS operations by a regular user, L2ARC, a ZFS Intent Log on separate disks (slog), sparse volumes, and so on. I also experienced system crashes on ZFS operations with FreeBSD 7.2, which never happened once we upgraded to FreeBSD 8. – Babak Farrokhi – 2010-04-03T05:41:19.633
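As a rough sketch of what those ZFSv13 features look like in practice (pool, device, dataset, and user names are placeholders):

    # Add an L2ARC (read cache) device to the pool.
    zpool add tank cache /dev/ada4

    # Add a separate ZFS Intent Log (slog) device.
    zpool add tank log /dev/ada5

    # Create a sparse (thin-provisioned) 100G volume.
    zfs create -s -V 100G tank/vol0

    # Delegate dataset creation/mounting to a regular user.
    zfs allow someuser create,mount tank/home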

2

one thing you could base your decision on is: which features do you need, and which of them are present in the freebsd implementation of zfs?

the freebsd wiki page at http://wiki.freebsd.org/ZFS points out the open issues pretty well.

(eg, at the time of writing this, fbsd does not seem to support exporting zfs storage over iscsi. since you want to use your box as storage / backup, and maybe you have an apple machine floating around... they like iscsi, especially for time machine.)
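a rough sketch of the usual workaround in the meantime -- carve out a zvol and export it with a userland iscsi target such as net/istgt from ports (pool, dataset name, and size are placeholders; the target daemon's own configuration is assumed, not shown):

    # create a 500G zvol to hand to the iscsi target daemon.
    zfs create -V 500G backup/timemachine

    # on freebsd the zvol then appears as /dev/zvol/backup/timemachine
    # and can be configured as a LUN in the target's configuration.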

akira

Posted 2010-03-30T06:20:47.010

Reputation: 52 754