I have Debian 10 (Buster) installed with ZFS added from backports. I have four iSCSI LUNs that I use as disks for ZFS; each LUN holds a separate zpool.
The ZFS setup itself works, but the system is not reboot-stable: sometimes after a reboot all ZFS volumes are imported and mounted correctly, sometimes not. I suspect this happens because ZFS does not wait for the iSCSI login to complete.
I tried:
$ cat /etc/systemd/system/zfs-import-cache.service.d/after-open-iscsi.conf
[Unit]
After=open-iscsi.service
BindsTo=open-iscsi.service
$ systemd-analyze critical-chain zfs-import-cache.service
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
zfs-import-cache.service +1.602s
└─open-iscsi.service @2min 1.033s +286ms
  └─iscsid.service @538ms +72ms
    └─network-online.target @536ms
      └─ifup@eth0.service @2min 846ms
        └─apparmor.service @2min 748ms +83ms
          └─local-fs.target @2min 745ms
            └─exports-kanzlei.mount @2min 3.039s
              └─local-fs-pre.target @569ms
                └─keyboard-setup.service @350ms +216ms
                  └─systemd-journald.socket @347ms
                    └─system.slice @297ms
                      └─-.slice @297ms
This does not solve my problem. Probably open-iscsi.service is already considered "started" by systemd before the LUN block devices are actually available, so ZFS still does not find its devices at import time.
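If the missing piece really is the device nodes, one way to test that theory is to block the import until the devices exist. The following is only a sketch: the function name and the example device path are my own placeholders, not anything from my setup.

```shell
# Hedged sketch: poll until every given path exists, or give up after
# a timeout (in seconds). Returns 0 on success, 1 on timeout.
wait_for_paths() {
    timeout_s="$1"; shift
    elapsed=0
    while [ "$elapsed" -le "$timeout_s" ]; do
        missing=0
        for p in "$@"; do
            [ -e "$p" ] || missing=1
        done
        if [ "$missing" -eq 0 ]; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}

# Example (placeholder LUN path; use your own stable /dev/disk/by-path names):
# wait_for_paths 60 /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.example:target-lun-0
```

Something like this could be wired into zfs-import-cache.service via an ExecStartPre= line in a drop-in, so the import only starts once the LUNs are visible.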
Currently the only (very dirty) workaround I have is to put some commands in /etc/rc.local:
systemctl start zfs-import-cache.service
systemctl start zfs-mount.service
systemctl start zfs-share.service
systemctl start zfs-zed.service
zfs mount -a
This works, but I want a clean solution.
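The cleanest variant of this workaround that I can think of would be a dedicated oneshot unit that delays the ZFS import until the LUNs are visible. This is only a sketch; the unit name and the device path are placeholders that would need to be adapted:

```ini
# /etc/systemd/system/wait-for-iscsi-luns.service  (hypothetical name)
[Unit]
Description=Wait for iSCSI LUNs to appear before ZFS import
After=open-iscsi.service
Requires=open-iscsi.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Placeholder device path; replace with a stable /dev/disk/by-path
# link for one of the iSCSI LUNs. The loop is bounded by the unit's
# normal start timeout.
ExecStart=/bin/sh -c 'until [ -e /dev/disk/by-path/YOUR-ISCSI-LUN ]; do sleep 1; done'

[Install]
RequiredBy=zfs-import-cache.service
```

After `systemctl daemon-reload` and `systemctl enable wait-for-iscsi-luns.service`, the Before=/RequiredBy= pair should force zfs-import-cache.service to wait for the devices. But I am not sure this is the intended way to solve it.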
What I really do not understand, and what drives me crazy, is that Debian ships both /etc/init.d/ scripts
and systemd
unit files for the same services. Which one is actually used: sysvinit or systemd? Why are both provided? Which ones should I prefer?
So for now I am left with a boot process that I cannot rely on.