
I rebooted my Ubuntu 16.04.3 system, and because my systemd startup jobs aren't well organized yet, the ZFS pool was not brought up automatically. (That's a different topic.) So I ran `zpool import sbn` and ended up with the following status:

# zpool status
  pool: sbn
 state: ONLINE
  scan: scrub repaired 0 in 20h49m with 0 errors on Sun Feb 11 21:13:58 2018
config:

    NAME        STATE     READ WRITE CKSUM
    sbn         ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdg     ONLINE       0     0     0
        sdh     ONLINE       0     0     0
        sdi     ONLINE       0     0     0
        sdj     ONLINE       0     0     0
        sdk     ONLINE       0     0     0
    spares
      sdk       FAULTED   corrupted data

errors: No known data errors

The array is working fine, but I would like to clear the FAULTED drive message and add the actual spares (sdl and sdm) back in. I am not sure how sdk got listed as a spare while also being an active member of the vdev, but how do I remove it from the spare list without removing it from raidz2-0?

Any suggestions?

AlanObject
  • You should not create zpools with `sd*` device names. These may appear in a different order each time you reboot the machine. And this may cause drives to get "lost". Instead, use the unique names which appear under `/dev/disk/by-id`, which do not change. – Michael Hampton Feb 13 '18 at 20:27
  • @MichaelHampton yes, I understand that, but that tip was in one of the tutorials I followed when I first set up this system. I didn't want to try to change that on a production system. – AlanObject Feb 13 '18 at 21:16
  • Hmm. There is probably a way to fix that but I don't know it offhand. – Michael Hampton Feb 13 '18 at 22:51
  • To reimport the pool with the correct device names, do the following: 1) export it; 2) delete the `/etc/zfs/zpool.cache` file; 3) execute `zpool import -d /dev/disk/by-uuid sbn`; 4) check the result with `zpool status` – shodanshok Feb 13 '18 at 23:01
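The four steps in that comment can be sketched as a shell session. This is a hedged sketch, not a tested procedure: it assumes the pool name `sbn` from the question, root privileges, and that no datasets on the pool are in use at export time. The `-d /dev/disk/by-uuid` directory follows the comment verbatim; `/dev/disk/by-id` (as suggested earlier in the comments) is a common alternative if `by-uuid` has no entries for these disks.

```shell
# 1) Export the pool (this fails if any dataset is mounted and busy)
zpool export sbn

# 2) Delete the cache file so the stale sd* device paths are forgotten
rm -f /etc/zfs/zpool.cache

# 3) Re-import, scanning only the persistent device names
#    (swap in /dev/disk/by-id if by-uuid is empty for these disks)
zpool import -d /dev/disk/by-uuid sbn

# 4) Check that the vdevs are now listed by their persistent names
zpool status sbn
```

Because the import scans only the given directory, the pool's vdev labels are rewritten to the persistent names found there, so they should survive future reboots regardless of `sd*` enumeration order.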

0 Answers