5

I created a ZFS pool on Ubuntu 14.04 without specifying RAID or redundancy options, wrote some data to it, and rebooted the machine; now the pool is no longer available (UNAVAIL). I don't have the exact error to hand, but it mentioned that there were insufficient replicas available. The pool consists of two 3 TB disks, and I created two datasets in it. ZFS was recommended to me for its deduplication abilities, and I'm not concerned with redundancy at this point.

I actually only want RAID0 (striping), so no mirroring or redundancy in the short term. Is there a way to do this with ZFS, or would I be better off with LVM?

Output of `zpool status -v`:

sudo zpool status -v
  pool: cryptoporticus
 state: UNAVAIL
status: One or more devices could not be used because the label is missing 
    or invalid.  There are insufficient replicas for the pool to continue
    functioning.
action: Destroy and re-create the pool from
    a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    cryptoporticus  UNAVAIL      0     0     0  insufficient replicas
      sda       ONLINE       0     0     0
      sdc       UNAVAIL      0     0     0

UPDATE

`zpool export cryptoporticus`, then `zpool import cryptoporticus` resolved this for now. Is this likely to happen again on reboot?

codecowboy
  • I'm not sure how it is on Linux, but on Solaris, if you create a pool without specifying the redundancy, a striped vdev pool (RAID0) gets created by default. It looks more like one of your disks is missing. Please provide the output of `zpool status -x`. A nice explanation of ZFS RAID levels is [here](http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/) – b13n1u Jun 24 '14 at 10:36
  • Exact error messages are also useful. – Sobrique Jun 24 '14 at 10:41
  • Also, are you using [`ZFS on Linux`](http://zfsonlinux.org/) or [`zfs-fuse`](http://zfs-fuse.net/)? Which versions? – the-wabbit Jun 24 '14 at 10:42
  • @syneticon-dj ubuntu-zfs from apt-add-repository --yes ppa:zfs-native/stable – codecowboy Jun 24 '14 at 10:47
  • Please show the output of the errors you're receiving. Namely `zpool status -v`. – ewwhite Jun 24 '14 at 11:44
  • possible duplicate of [Why did rebooting cause one side of my ZFS mirror to become UNAVAIL?](http://serverfault.com/questions/596712/why-did-rebooting-cause-one-side-of-my-zfs-mirror-to-become-unavail) – ewwhite Jun 24 '14 at 11:45
  • @ewwhite have added status output – codecowboy Jun 26 '14 at 21:16
  • @codecowboy Yes, it's likely to happen again. Please see the resolutions at the link I posted above. – ewwhite Jun 26 '14 at 22:29

2 Answers

3

You are likely seeing a situation where at least one of the disks in use became unavailable. This might be intermittent and resolvable; both Linux implementations (ZFS on Linux as well as zfs-fuse) seem to exhibit occasional hiccups which are easily cured by a `zpool clear` or a `zpool export` / `zpool import` cycle.
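For instance, with the pool name from the question (these are the standard zpool commands; nothing here is specific to this particular hiccup):

# Try clearing any transient device error state first
sudo zpool clear cryptoporticus

# If the pool is still UNAVAIL, export it and re-import it
sudo zpool export cryptoporticus
sudo zpool import cryptoporticus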

As for your question: yes, ZFS is perfectly capable of creating and maintaining a pool without any redundancy, just by issuing something like `zpool create mypool sdb sdc sdd`.
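A minimal sketch for a two-disk stripe matching the setup from the question; the by-id device names below are placeholders for your actual disks:

sudo zpool create cryptoporticus \
    /dev/disk/by-id/ata-EXAMPLE_DISK_1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_2

# Both disks appear as top-level vdevs, i.e. data is striped
# across them (RAID0) and there is no redundancy
sudo zpool status cryptoporticus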

But personally, I would not use ZFS just for its deduplication capabilities. Due to its architecture, ZFS deduplication requires a large amount of RAM and generates plenty of disk I/O on write operations. You will probably find it unsuitable for pools as large as yours, as writes will get painfully slow. If you need deduplication, you might want to look at offline dedup implementations with a smaller memory and I/O footprint, such as btrfs file-level batch deduplication using bedup or block-level deduplication using duperemove: https://btrfs.wiki.kernel.org/index.php/Deduplication
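If you want to sanity-check that claim against your own data before deciding, zdb can simulate deduplication on an existing pool without modifying it (a sketch; cryptoporticus is the pool name from the question and mydataset is a placeholder):

# Read-only simulation: prints a dedup-table histogram and the
# projected dedup ratio for the pool's current contents
sudo zdb -S cryptoporticus

# Only if the projected ratio justifies the RAM cost (each unique
# block costs on the order of a few hundred bytes of dedup-table
# space, which needs to stay in RAM/L2ARC to keep writes usable):
sudo zfs set dedup=on cryptoporticus/mydataset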

the-wabbit
  • Thanks! Do you think reads would be affected? I might store video content on the ZFS pool and want to stream from it over the local network. – codecowboy Jun 24 '14 at 11:44
  • @codecowboy if your access pattern is reads only, then performance will be quite fine (as much as your disks allow for). But writes will be sssllllloooooooooowwwww and create lots of disk read I/O as the pool fills up. If you are going to read off the zpool and try to write to it at the same time, you might experience significant I/O contention for your reads. – the-wabbit Jun 24 '14 at 15:18
1

This is a duplicate of: [Why did rebooting cause one side of my ZFS mirror to become UNAVAIL?](http://serverfault.com/questions/596712/why-did-rebooting-cause-one-side-of-my-zfs-mirror-to-become-unavail)

In your case, the device names or symbolic links in the /dev/disk/by-* directories on your system were either not present or were renamed.

It's best to use /dev/disk/by-id devices for your zpool instead of by-path, as the path names can change. (grrrr... Ubuntu udev)

In /dev/disk...

by-id/   by-path/ by-uuid/

So my zpools look like the following (note how the devices aren't sda, sdb, etc.):

[root@BigHomie ~]# zpool status -v
  pool: vol0
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Sat May 24 17:14:09 2014
config:

    NAME                                            STATE     READ WRITE CKSUM
    vol0                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        scsi-SATA_OWC_Mercury_AccOW140403AS1321905  ONLINE       0     0     0
        scsi-SATA_OWC_Mercury_AccOW140403AS1321932  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        scsi-SATA_OWC_Mercury_AccOW140403AS1321926  ONLINE       0     0     0
        scsi-SATA_OWC_Mercury_AccOW140403AS1321922  ONLINE       0     0     0
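
To move an existing pool over to the persistent names, export it and re-import it with -d pointing at the by-id directory (a sketch, using the pool name from the question):

sudo zpool export cryptoporticus
sudo zpool import -d /dev/disk/by-id cryptoporticus

# The vdevs should now be listed under their /dev/disk/by-id names
sudo zpool status -v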
ewwhite