In a small server system, I have a ZFS file system on a mirrored pair of consumer-grade drives (Seagate Barracudas). Recently, a periodic scrub operation reported the following:
  pool: storage
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 10.9M in 44h14m with 0 errors on Tue Jun  6 00:11:23 2017
config:

        NAME          STATE     READ WRITE CKSUM
        storage       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            map2_sda  ONLINE       0     0     0
            map2_sdb  ONLINE       0     0    55

errors: No known data errors
There have been a few power failures and similar events between this scrub and the previous one, which seems a plausible cause of the errors, but I worry that this may instead be an impending hardware fault, particularly since one disk came through entirely clean while the other accumulated multiple errors.
smartctl reports that the suspect drive has logged a total of 117 errors over its lifetime (935 days), but the most obvious error indicators are all well clear of their threshold values:
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   109   081   006    Pre-fail  Always       -       22737688
  5 Reallocated_Sector_Ct   0x0033   092   092   010    Pre-fail  Always       -       9784
  7 Seek_Error_Rate         0x000f   083   060   030    Pre-fail  Always       -       213798923
  9 Power_On_Hours          0x0032   075   075   000    Old_age   Always       -       22599
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
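For what it's worth, on Seagate drives the raw values of attributes 1 (Raw_Read_Error_Rate) and 7 (Seek_Error_Rate) are widely reported to be packed counters rather than plain error counts: the low 32 bits count total operations and the high 16 bits count actual errors. A small sketch under that assumption (the encoding is commonly reported for Seagate firmware but not guaranteed for every model):

```python
# Sketch: decode Seagate-style packed raw values for SMART attributes 1 and 7.
# Assumption: the 48-bit raw value packs the actual error count in the high
# 16 bits and the total operation count in the low 32 bits. This is widely
# reported for Seagate drives but is not documented behavior for all firmware.

def decode_seagate_rate(raw: int) -> tuple[int, int]:
    """Return (errors, operations) from a packed 48-bit SMART raw value."""
    errors = (raw >> 32) & 0xFFFF       # high 16 bits: actual error count
    operations = raw & 0xFFFFFFFF       # low 32 bits: total operations
    return errors, operations

for name, raw in [("Raw_Read_Error_Rate", 22737688),
                  ("Seek_Error_Rate", 213798923)]:
    errors, ops = decode_seagate_rate(raw)
    print(f"{name}: {errors} errors in {ops} operations")
# Raw_Read_Error_Rate: 0 errors in 22737688 operations
# Seek_Error_Rate: 0 errors in 213798923 operations
```

Under that reading, both of the scary-looking raw values decode to zero actual errors, which would be consistent with the normalized VALUE columns sitting well above their THRESH values.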
Does anything here indicate that I should preemptively replace this disk? I don't need 100% uptime on this machine, but I'd rather not face the multiple days of resilvering that would be required if I had to replace the disk in an emergency.