FreeNAS supports S.M.A.R.T. monitoring, so if monitoring is enabled and notifications are configured correctly, the sysadmin will typically start receiving reports about bad/unusable sectors, overheating, and similar warnings before a drive actually fails.
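You can also pull the same S.M.A.R.T. data manually from the shell; the device name /dev/ada0 below is only an example and will differ on your system:
~# smartctl -a /dev/ada0 | egrep "Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|Temperature_Celsius"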
FreeNAS, as of version 9.2.1.8, does NOT support hot spares. Spares configured in a zpool can be manually pushed in to replace a failed drive, but nothing in the software automates the process.
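The manual push boils down to a single command; the disk names here are placeholders for whatever zpool status reports in your pool:
~# zpool replace dpool <failed-disk> <spare-disk>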
With 2 simultaneous failures in RAIDZ2 you are almost guaranteed to hit unrecoverable file errors. The culprit is silent corruption (bit rot) surfacing as unrecoverable read errors (UREs) once redundancy is exhausted. Contemporary drives are typically 3 TB+, and to get better than mirror space utilization one would construct a RAIDZ2 vdev from at least 6 drives. With one drive failed, the remaining stripe behaves like RAID 5 and holds more than 12 TB; at a vendor-quoted URE rate of 1 in 10^14 bits (roughly one unreadable sector per 12.5 TB read), you are highly likely to encounter a URE during the rebuild. Almost certain, if the drive vendors are right. With the second drive also gone there is no parity left to repair it, which results, at minimum, in a message like this:
~# zpool status -v
  pool: dpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
  scan: resilvered 6.90T in 52h5m with 313 errors on Wed Oct 22 17:44:25 2014
config:

        NAME                          STATE     READ WRITE CKSUM
        dpool                         DEGRADED     0     0 5.75K
          raidz2-0                    ONLINE       0     0    78
            c0t50014EE05807CC4Ed0     ONLINE       0     0     0
            c0t50014EE6AAD9F57Fd0     ONLINE       0     0     0
            c0t50014EE204FC5087d0     ONLINE       0     0     0
            c0t50014EE6AADA3B7Cd0     ONLINE       0     0     0
            c0t50014EE655849876d0     ONLINE       0     0     0
            c0t50014EE6AADA3DFDd0     ONLINE       0     0     0
            c0t50014EE6AADA38FFd0     ONLINE      39     0     0
          raidz2-1                    ONLINE       0     0 11.4K
            c0t50014EE6AADA45E4d0     ONLINE    1.69K     0     0
            c0t50014EE6AADA45ECd0     ONLINE     726     0     0
            c0t50014EE6AADA3944d0     ONLINE       0     0     0
            c0t50014EE204FC1F46d0     ONLINE       0     0     0
            c0t50014EE6002A74CEd0     ONLINE       0     0     0
            c0t50014EE2AFA6C8B4d0     ONLINE       0     0     0
            c0t50014EE6002F9C53d0     ONLINE       5     0     0
          raidz2-2                    DEGRADED     0     0     0
            c0t50014EE6002F39C5d0     ONLINE       0     0     0
            c0t50014EE25AFFB56Ad0     ONLINE       0     0     0
            c0t50014EE6002F65E3d0     ONLINE       0     0     0
            c0t50014EE6002F573Dd0     ONLINE       0     0     0
            c0t50014EE6002F575Ed0     ONLINE       0     0     0
            spare-5                   DEGRADED     0     0     0
              c0t50014EE6002F645Ed0   FAULTED      1    29     0  too many errors
              c0t50014EE2AFA6FC32d0   ONLINE       0     0     0
            c0t50014EE2050538DDd0     ONLINE       0     0     0
          raidz2-3                    ONLINE       0     0     0
            c0t50014EE25A518CBCd0     ONLINE       0     0     0
            c0t50014EE65584A979d0     ONLINE       0     0     0
            c0t50014EE65584AC0Ed0     ONLINE       0     0     0
            c0t50014EE2B066A6D2d0     ONLINE       0     0     0
            c0t50014EE65584D139d0     ONLINE       0     0     0
            c0t50014EE65584E5CBd0     ONLINE       0     0     0
            c0t50014EE65584E120d0     ONLINE       0     0     0
          raidz2-4                    ONLINE       0     0     0
            c0t50014EE65584EB2Cd0     ONLINE       0     0     0
            c0t50014EE65584ED80d0     ONLINE       0     0     0
            c0t50014EE65584EF52d0     ONLINE       0     0     0
            c0t50014EE65584EFD9d0     ONLINE       0     0     1
            c0t50014EE2AFA6B6D0d0     ONLINE       0     0     0
            c0t5000CCA221C2A603d0     ONLINE       0     0     0
            c0t50014EE655849F19d0     ONLINE       0     0     0
        spares
          c0t50014EE2AFA6FC32d0       INUSE     currently in use

errors: Permanent errors have been detected in the following files:
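zpool status -v then lists the affected paths. Assuming those files still exist in a backup, the usual recovery is to restore (or delete) them, then clear the error counters and run a scrub to confirm the pool is clean again:
~# zpool clear dpool
~# zpool scrub dpool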
The rebuild process, called "resilvering", depends on the speed of the individual drives and on how full they are. Think of roughly 25 MB/s as a top speed. Here is a real-life example of multiple failures with an actual speed of 5 MB/s, which means we are talking about week(s); these are 2 TB 7200 RPM WD drives:
~# zpool status
  pool: dpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Nov 13 10:41:28 2014
        338M scanned out of 48.3T at 5.72M/s, (scan is slow, no estimated time)
        32.3M resilvered, 0.00% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        dpool                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/9640be78-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0  (resilvering)
            gptid/97b9d7c5-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/994daffc-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/9a7c78a3-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/9c48de9d-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/9e1ca264-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0  (resilvering)
            gptid/9fafcc1e-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/a130f0df-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/a2b07b02-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/a44e4ed9-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/a617b0c5-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/a785adf7-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/a8c69dd8-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0  (resilvering)
            gptid/aa097d45-a3e1-11e3-844a-001b21675440  ONLINE       0     0     1  (resilvering)
            gptid/ab7e0047-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/acfe5649-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0  (resilvering)
            gptid/ae5be1b8-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/afd04931-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/b14ef3e7-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/b2c8232a-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/b43d9260-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/b5bd6d79-a3e1-11e3-844a-001b21675440  ONLINE       0     0     1  (resilvering)
            gptid/b708060f-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/b8445901-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/b9c3b4f4-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/bb53a54f-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/bccf1980-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/be50575e-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0  (resilvering)
            gptid/bff97931-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
            gptid/c1b93e80-a3e1-11e3-844a-001b21675440  ONLINE       0     0     0
        spares
          gptid/c4f52138-a3e1-11e3-844a-001b21675440    AVAIL
          gptid/c6332a6f-a3e1-11e3-844a-001b21675440    AVAIL

errors: No known data errors
Data protection in RAIDZ is NOT meant to replace backups. With a petabyte of storage protected only by RAIDZ2, you are statistically all but guaranteed to lose at least some files within the first 3 years. Hence replication to a second location is mandatory. FreeNAS supports ZFS send/receive as well as rsync.
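A minimal send/receive replication cycle looks like the sketch below; the dataset, snapshot name and target host are invented for illustration, and subsequent runs would ship only the increments with zfs send -i:
~# zfs snapshot dpool/data@repl-20141113
~# zfs send dpool/data@repl-20141113 | ssh backuphost zfs receive -F backup/data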
If monitoring is set up and you pay attention to the notifications, it is easy to push a spare into the zpool as shown above. However, the current FreeNAS version (9.2.1.8) does not provide an easy way to identify the slot/enclosure of the failed disk. You can check my answer on the topic:
How to determine which disk failed in a FreeNAS / ZFS setup
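In short, one workable approach (device names below are examples) is to map the gptid reported by zpool status back to its GEOM provider and then read the disk's serial number from it, so you can match the physical drive label:
~# glabel status | grep b5bd6d79-a3e1-11e3-844a-001b21675440
~# smartctl -i /dev/ada3 | grep -i serial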