
On our Solaris 11.3 ZFS storage server, one disk in the pool died and we replaced it. Everything is working fine now, but `iostat` still shows the old disk with a large error count, which is confusing our monitoring scripts. Any idea how to reset the disk error counters to reflect the current status?

# iostat -en
---- errors ---
  s/w h/w trn tot device
    0   2   0   2 c0t5000C500855433DFd0
    0   2   0   2 c0t5000C5008554369Bd0
    0   2   0   2 c0t5000C5008555AD6Bd0
    0   2   0   2 c0t5000C5008555EB27d0
    0   2   0   2 c0t5000C5008555EB53d0
    0   2   0   2 c0t5000C5008555EBDBd0
    0 294   6 300 c0t5000C5008554DC67d0
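Until the counters can be cleared, a workaround on the monitoring side is to parse the `iostat -en` output and only alert on devices whose total error count crosses a threshold. A minimal sketch (the `flag_errors` helper name, the threshold of 10, and the embedded sample lines are illustrative assumptions, not part of the original script):

```shell
#!/bin/sh
# Print devices from iostat -en style output whose "tot" column
# exceeds a given threshold. Columns are: s/w h/w trn tot device.
flag_errors() {
  awk -v t="$1" '
    # Only consider 5-field data lines whose 4th column is numeric
    NF == 5 && $4 ~ /^[0-9]+$/ && $4 + 0 > t { print $5, "tot=" $4 }'
}

# Sample lines copied from the question; in practice pipe in `iostat -en`.
flag_errors 10 <<'EOF'
    0   2   0   2 c0t5000C500855433DFd0
    0 294   6 300 c0t5000C5008554DC67d0
EOF
```

This prints only `c0t5000C5008554DC67d0 tot=300`, so the six disks with a handful of transient errors stay below the alert threshold.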
