Given the MTTF T of an individual drive (say, 100,000 hours) and the average time r it takes the operator to replace a failed drive and the array controller to rebuild the array (say, 10 hours), how long will it take, on average, for a second drive to fail while the first failure is still being repaired, thus dooming the entire N-drive RAID5?
In my own calculations I keep coming up with results of many centuries -- even for large values of N and r -- which suggests that using "hot spares" to reduce the recovery time is a waste. Yet so many people choose to dedicate a slot in a RAID enclosure to a hot spare (instead of increasing capacity) that it baffles me...
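For concreteness, here is a sketch of the calculation I'm doing. It's the usual back-of-the-envelope model -- independent drives with exponentially distributed lifetimes -- which gives MTTDL ≈ T² / (N·(N−1)·r). The helper name below is just for illustration:

```python
def raid5_mttdl(T: float, r: float, N: int) -> float:
    """Mean time to data loss, in the same units as T and r.

    First failure: N drives each failing at rate 1/T, so one fails
    every T/N hours on average. During the repair window r, the array
    is lost if any of the remaining N-1 drives also fails, which
    happens with probability ~ (N-1) * r / T (for r << T).
    MTTDL ~ (T / N) / ((N - 1) * r / T) = T**2 / (N * (N - 1) * r)
    """
    return T**2 / (N * (N - 1) * r)

HOURS_PER_YEAR = 24 * 365

# T = 100,000 h, r = 10 h, N = 8 drives:
print(raid5_mttdl(100_000, 10, 8) / HOURS_PER_YEAR)  # ~2038 years
```

With these numbers the expected time to a double failure comes out around two millennia, which is where my "many centuries" figure comes from.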