I have installed ZFS (0.6.5) on CentOS 7 and I have also created a zpool. Everything works fine apart from the fact that my datasets disappear on reboot.
I have been trying to debug this issue with the help of various online resources and blogs, but couldn't get the desired result.
After reboot, when I issue the `zfs list` command I get "no datasets available", and `zpool list` gives "no pools available". After doing a lot of online research, I could make it work by manually importing the cache file using `zpool import -c cachefile`, but I still had to run `zpool set cachefile=/etc/zfs/zpool.cache Pool` before the reboot so that I could import it again afterwards.
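
For reference, the manual workaround I'm currently using looks roughly like this (my pool is named zfsPool):

    # before rebooting: record the pool in a cache file
    zpool set cachefile=/etc/zfs/zpool.cache zfsPool
    # after rebooting: import every pool listed in the cache file, then mount
    zpool import -c /etc/zfs/zpool.cache -aN
    zfs mount -a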

This is what `systemctl status zfs-import-cache` looks like:

zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
   Active: inactive (dead)

cat /etc/sysconfig/zfs

# ZoL userland configuration.

# Run `zfs mount -a` during system start?
ZFS_MOUNT='yes'

# Run `zfs unmount -a` during system stop?
ZFS_UNMOUNT='yes'

# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='yes'

# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='yes'

# Specify specific path(s) to look for device nodes and/or links for the
# pool import(s). See zpool(8) for more information about this variable.
# It supersedes the old USE_DISK_BY_ID which indicated that it would only
# try '/dev/disk/by-id'.
# The old variable will still work in the code, but is deprecated.
#ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# Should the datasets be mounted verbosely?
# A mount counter will be used when mounting if set to 'yes'.
VERBOSE_MOUNT='no'

# Should we allow overlay mounts?
# This is standard in Linux, but not ZFS which comes from Solaris where this
# is not allowed).
DO_OVERLAY_MOUNTS='no'

# Any additional option to the 'zfs mount' command line?
# Include '-o' for each option wanted.
MOUNT_EXTRA_OPTIONS=""

# Build kernel modules with the --enable-debug switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG='no'

# Build kernel modules with the --enable-debug-dmu-tx switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'

# Keep debugging symbols in kernel modules?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_DISABLE_STRIP='no'

# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'

# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

# List of additional datasets to mount after the root dataset is mounted?
#
# The init script will use the mountpoint specified in the 'mountpoint'
# property value in the dataset to determine where it should be mounted.
#
# This is a space separated list, and will be mounted in the order specified,
# so if one filesystem depends on a previous mountpoint, make sure to put
# them in the right order.
#
# It is not necessary to add filesystems below the root fs here. It is
# taken care of by the initrd script automatically. These are only for
# additional filesystems needed. Such as /opt, /usr/local which is not
# located under the root fs.
# Example: If root FS is 'rpool/ROOT/rootfs', this would make sense.
#ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/usr rpool/ROOT/var"

# List of pools that should NOT be imported at boot?
# This is a space separated list.
#ZFS_POOL_EXCEPTIONS="test2"

# Optional arguments for the ZFS Event Daemon (ZED).
# See zed(8) for more information on available options.
#ZED_ARGS="-M"

I am not sure whether this is a known issue. If it is, is there any workaround for it? Ideally an easy way to preserve my datasets across reboots, preferably without the overhead of a cache file.

Vishnu Nair
  • What do `zpool status -v` and `zpool import` say? – ostendali Oct 28 '15 at 10:28
  • Hi, `zpool status -v` gives `no pools available`. And `zpool import` gives me this: `pool: zfsPool id: 10064980395446559551 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: zfsPool ONLINE sda4 ONLINE` – Vishnu Nair Oct 28 '15 at 10:39
  • `zpool import` is how I could make it work, by setting the cachefile initially using the `set cachefile` command – Vishnu Nair Oct 28 '15 at 10:40
  • You missed /etc/init/zpool-import.conf; can you post the content of that file as well? – ostendali Oct 28 '15 at 11:19
  • Hi, I searched for the /etc/init/zpool-import.conf file on my system, but I cannot locate it anywhere; I don't think it exists on my system – Vishnu Nair Oct 28 '15 at 11:29
  • I see you are using CentOS, alright. I have ZFS set up on my Debian box, and if my memory doesn't betray me I had to create the init script myself. Here someone had the same issue with CentOS and they appear to have fixed it: https://github.com/zfsonlinux/zfs/pull/2766 – ostendali Oct 28 '15 at 11:44
  • I'm checking this now, thank you... will post here if I have any further issues, thanks again :) – Vishnu Nair Oct 28 '15 at 11:55
  • @ostendali Hi, I checked the link and some of the references mentioned there; I had already tried to implement those solutions before, but they didn't help. Actually, referring to those links is how I found out that we can import from a cachefile. However, I'm looking for a solution that preferably does not involve a cachefile; not sure whether this can be achieved or not. – Vishnu Nair Oct 28 '15 at 12:15
  • It seems like this happens after you upgrade the kernel. The same crew seems to have found a solution here: https://github.com/zfsonlinux/zfs/issues/2600 It is a bit of a long thread, but at the end they seem to have found the fix. Hope it helps... there is a bit of a mess in that thread about this issue, though... – ostendali Oct 28 '15 at 12:25
  • Is the ZFS target enabled? `systemctl status zfs.target` – Michael Hampton Oct 28 '15 at 12:44
  • Hi, I did not set any target. I ran the `systemctl status zfs.target` command and it gives me: `zfs.target - ZFS startup target Loaded: loaded (/usr/lib/systemd/system/zfs.target; disabled) Active: inactive (dead)` Any idea how I can get it up and running? Thanks. – Vishnu Nair Oct 28 '15 at 12:59

4 Answers

8

Please make sure the zfs service (target) is enabled. That's what handles pool import/export on boot/shutdown.

zfs.target loaded active active ZFS startup target
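
If yours instead shows disabled / inactive (as in the comment output above), enabling the startup target is usually all it takes. A sketch; on some 0.6.5.x packages the individual import/mount units are static and get wired up via `systemctl preset` instead (see the answer further down):

    # check the current state of the startup target
    systemctl status zfs.target
    # enable it so pools are imported and datasets mounted at boot
    systemctl enable zfs.target
    systemctl start zfs.target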

You should never have to struggle with this. If you have a chance, run an update on your ZFS distribution, as I know the startup services have improved over the last few releases:

[root@zfs2 ~]# rpm -qi zfs
Name        : zfs
Version     : 0.6.5.2
Release     : 1.el7.centos
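
If you installed from the zfsonlinux repository, the update itself is normally just a yum transaction; a sketch (package name as reported by rpm -qi above; reboot afterwards so the matching kernel module is loaded):

    yum clean all
    yum update zfs
    # reboot so the rebuilt/updated zfs kernel module is loaded
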
ewwhite
  • Hi, I had tested 0.6.5.3 as well, which happens to be the latest release I believe, but still faced this issue; with 0.6.5.3 I even had to run `modprobe zfs` every time I rebooted in order to load the modules. Btw, the target is not enabled; please check the output in the comments above (reply to Michael). May I know how to set one? Thanks. – Vishnu Nair Oct 28 '15 at 13:02
  • All you need to do is probably something like: `systemctl enable zfs.target` – ewwhite Oct 28 '15 at 13:19
7

OK, so the pool is there, which means the problem is with your zfs.cache: it is not persistent, and that is why it loses its config when you reboot. What I'd suggest you do is run:

      zpool import zfsPool 
      zpool list 

and check if the pool is available. Reboot the server and see if it comes back; if it doesn't, then perform the same steps and run:

      zpool scrub zfsPool

Just to make sure everything is alright with your pool etc.

Please also post the content of:

      /etc/default/zfs.conf
      /etc/init/zpool-import.conf

Alternatively, if you are looking for a workaround for this issue, you can of course set one up as follows.

Change the value from 1 to 0 in:

    /etc/init/zpool-import.conf

and add the following to your /etc/rc.local:

    zfs mount -a

That will do the trick.
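
If you go the rc.local route, a minimal sketch of what it could contain (assuming your pool is named zfsPool, as in the comments above; note that on CentOS 7 /etc/rc.d/rc.local has to be made executable for it to run at all):

    #!/bin/sh
    # import the pool by name if it is not already imported
    zpool list zfsPool >/dev/null 2>&1 || zpool import zfsPool
    # mount all ZFS datasets
    zfs mount -a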

ostendali
  • I ran `zpool import zfsPool`, which as expected imported my pool; then I did a reboot and ran `zfs list`, which gave me `no datasets`. I repeated the steps again and ran `zpool scrub`, which did not give me any output; I rebooted once more, and still the datasets are not preserved – Vishnu Nair Oct 28 '15 at 11:00
  • In case you haven't seen my request, I will post it again: "can you also post what is in /etc/default/zfs?" – ostendali Oct 28 '15 at 11:02
4

I also had the problem of the ZFS pool disappearing after a reboot, running CentOS 7.3 and ZFS 0.6.5.9. Re-importing brought it back (zpool import zfspool), but only until the next reboot.

Here's the command that worked for me (to make it persist through reboots):

systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target

(Found this at: https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS )
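
To confirm the preset actually left the units enabled, a quick check (plain systemctl, not from the linked wiki page):

    systemctl list-unit-files | grep -i zfs
    # zfs.target, zfs-import-cache (or zfs-import-scan), zfs-mount and zfs-zed
    # should now show "enabled" rather than "disabled"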

Jeff
1

In my case, ZFS was failing to import a zpool because it was on a cloud persistent volume that was not physically attached to the machine. I guess network volumes become available later than expected in the bootup process.

Running systemctl status zfs-import-cache.service after boot gave the following message:

● zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-09-07 18:37:28 UTC; 3 months 17 days ago
       Docs: man:zpool(8)
    Process: 780 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
   Main PID: 780 (code=exited, status=1/FAILURE)

Sep 07 18:37:26 ingress-zfs-2 systemd[1]: Starting Import ZFS pools by cache file...
Sep 07 18:37:28 ingress-zfs-2 zpool[780]: cannot import 'data': no such pool or dataset
Sep 07 18:37:28 ingress-zfs-2 zpool[780]:         Destroy and re-create the pool from
Sep 07 18:37:28 ingress-zfs-2 zpool[780]:         a backup source.
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: Failed to start Import ZFS pools by cache file.

The solution was to patch the zfs-import-cache.service service file to include the remote-fs.target dependency:

[Unit]
...
After=remote-fs.target
...

On Ubuntu 20.04, this file was located at: /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service.

I think specifying After=remote-fs.target is equivalent to using the _netdev option in an /etc/fstab file (see: https://unix.stackexchange.com/a/226453/78327).
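
If you would rather not touch the unit file that the package ships, the same dependency can be added through a standard systemd drop-in instead (an alternative, not what I actually did):

    # creates /etc/systemd/system/zfs-import-cache.service.d/override.conf
    sudo systemctl edit zfs-import-cache.service
    # put the following into the override:
    #   [Unit]
    #   After=remote-fs.target
    sudo systemctl daemon-reload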

ostrokach