My backup NAS (Arch-based) reports a degraded pool. It also reports a degraded disk as "repairing". I'm confused by this. Presuming that faulted is worse than degraded, should I be worried?
zpool status -v:
  pool: zdata
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub in progress since Mon Dec 16 11:35:37 2019
        1.80T scanned at 438M/s, 996G issued at 73.7M/s, 2.22T total
        1.21M repaired, 43.86% done, 0 days 04:55:13 to go
config:

        NAME                            STATE     READ WRITE CKSUM
        zdata                           DEGRADED     0     0     0
          wwn-0x50014ee0019b83a6-part1  ONLINE       0     0     0
          wwn-0x50014ee057084591-part1  ONLINE       0     0     0
          wwn-0x50014ee0ac59cb99-part1  DEGRADED   224     0   454  too many errors  (repairing)
          wwn-0x50014ee2b3f6d328-part1  ONLINE       0     0     0
        logs
          wwn-0x50000f0056424431-part5  ONLINE       0     0     0
        cache
          wwn-0x50000f0056424431-part4  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        zdata/backup:<0x86697>
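Side note: as far as I understand, the <0x86697> is a ZFS object number rather than a path (probably because the file no longer exists under its original name). I haven't tried it yet, but zdb is supposed to be able to map such an object id back to a file. Untested sketch, using the id from the status output:

    # Untested: dump dnode 0x86697 of dataset zdata/backup; for a plain
    # file the output should include a "path" line naming the affected file.
    zdb -ddddd zdata/backup 0x86697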
Also, the failing disk reports far less allocated space than its peers. zpool iostat -v:
                                  capacity     operations     bandwidth
pool                            alloc   free   read  write   read  write
------------------------------  -----  -----  -----  -----  -----  -----
zdata                           2.22T  1.41T     33     34  31.3M  78.9K
  wwn-0x50014ee0019b83a6-part1   711G   217G     11      8  10.8M  18.0K
  wwn-0x50014ee057084591-part1   711G   217G     10     11  9.73M  24.6K
  wwn-0x50014ee0ac59cb99-part1   103G   825G      0     10      0  29.1K
  wwn-0x50014ee2b3f6d328-part1   744G   184G     11      2  10.7M  4.49K
logs                                -      -      -      -      -      -
  wwn-0x50000f0056424431-part5     4K   112M      0      0      0      0
cache                               -      -      -      -      -      -
  wwn-0x50000f0056424431-part4  94.9M  30.9G      0      1      0   128K
------------------------------  -----  -----  -----  -----  -----  -----
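To cross-check whether the drive itself is dying, I figure SMART data should help. A sketch; /dev/sdc is only my guess for the degraded disk's device node:

    # Assumption: wwn-0x50014ee0ac59cb99 maps to /dev/sdc on this box.
    # Non-zero reallocated or pending sectors would point at real media failure.
    smartctl -a /dev/sdc | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'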
[EDIT] As the hard disk kept reporting errors, I decided to replace it with a spare one. First I issued an add-spare command for the new disk, which was then included in the pool; after that I issued a replace command to swap the degraded disk for the spare (the exact commands are sketched after the status output below). It might not have improved things, as the pool now reads:
  pool: zdata
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Dec 22 10:20:20 2019
        36.5G scanned at 33.2M/s, 27.4G issued at 24.9M/s, 2.21T total
        0B resilvered, 1.21% done, 1 days 01:35:59 to go
config:

        NAME                              STATE     READ WRITE CKSUM
        zdata                             DEGRADED     0     0     0
          wwn-0x50014ee0019b83a6-part1    ONLINE       0     0     0
          wwn-0x50014ee057084591-part1    ONLINE       0     0     0
          spare-2                         DEGRADED     0     0     0
            wwn-0x50014ee0ac59cb99-part1  DEGRADED     0     0     0  too many errors
            wwn-0x50014ee25ea101ef        ONLINE       0     0     0
          wwn-0x50014ee2b3f6d328-part1    ONLINE       0     0     0
        logs
          wwn-0x50000f0056424431-part5    ONLINE       0     0     0
        cache
          wwn-0x50000f0056424431-part4    ONLINE       0     0     0
        spares
          wwn-0x50014ee25ea101ef          INUSE     currently in use

errors: No known data errors
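For reference, the commands I issued were roughly these (from memory, with the device names of my pool):

    # Add the new disk to the pool as a hot spare ...
    zpool add zdata spare wwn-0x50014ee25ea101ef
    # ... then replace the degraded disk with it, which starts the resilver.
    zpool replace zdata wwn-0x50014ee0ac59cb99-part1 wwn-0x50014ee25ea101ef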
What worries me is that the "to go" estimate keeps going up(!). In the time it took me to write this, it had risen to 1 days 05:40:10. I assume the pool is lost forever if another disk, the controller, or the power fails.
[EDIT] The new drive finished resilvering after four hours or so; ZFS's estimate was apparently not very accurate. After detaching the faulty drive, I now have the situation where the new drive shows only 103G used of a 1TB disk, just as the DEGRADED drive did. How do I get it to use the full 1TB?
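My untested guess is that expansion is governed by the autoexpand property and that the vdev can be told to grow to the full device size, along these lines (wwn-0x50014ee25ea101ef is the new drive):

    # Untested guesses: allow vdev expansion pool-wide ...
    zpool set autoexpand=on zdata
    # ... and ask ZFS to expand this vdev to the full size of the device.
    zpool online -e zdata wwn-0x50014ee25ea101ef

Is that the right approach, or is something else going on here?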