Obviously, if the entire drive dies, then RAID-Z on a single disk will not help. But what about other types of errors?
In my experience, I sometimes have a file that I cannot read. On Mac OS X, the system hangs for a while and then comes back with an error. I move the file somewhere out of the way, and I assume that the file sits on a bad sector or bad block, or perhaps even an entire bad track.
I date back to the floppy disk days, when managing disk failures by hand was a common activity. Of course you would replace the bad floppy as soon as possible, but sometimes you could not do that immediately, so the practice was to find the bad area, allocate it to a file, and then never delete that file.
The first question is: how do hard drives fail? Are my assumptions above valid? Is it true that a block goes bad while the rest of the drive remains mostly usable? If so, it seems like RAID-Z could repair the bad block or bad area of the disk using the parity from the other blocks (areas).
The use case is backup. If I push data off to an 8 TB drive once a week, would it make sense to treat it as 7 TB of data plus 1 TB of parity, in the hope that the extra parity lets me recover from bit rot, bad sectors, or other localized drive failures?
If the theory isn't technically flawed, can ZFS be configured to do this?
Edit: I saw the other question before posting this one. Splitting the disk into separate partitions and grouping them into one vdev is one option. But in concept, it should be possible to interleave the block maps of the N partitions so that a single stripe, while logically spread across N partitions, would physically sit very close together on the platter. That was the gist of my question "can ZFS be configured to do this?", i.e. just ZFS, not ZFS plus partition trickery.
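For concreteness, here is a minimal sketch of the partition workaround I mean, assuming a FreeBSD-style system with an 8 TB disk at da1 (the device name, labels, and partition size are placeholders; parted/gdisk on Linux or diskutil on OS X would be the analogue). My question is whether plain ZFS can achieve the same layout without this step.

```
# Carve one 8 TB disk into 8 roughly equal GPT partitions
# (~931 GiB each is about 1/8 of an 8 TB drive).
gpart create -s gpt da1
for i in 1 2 3 4 5 6 7 8; do
    gpart add -t freebsd-zfs -s 931G -l backup$i da1
done

# Build a single raidz1 vdev across the 8 partitions, all on the same
# spindle: roughly 7/8 of the space holds data and 1/8 holds parity,
# so a bad sector in one partition can be rebuilt from the other seven.
zpool create backup raidz1 \
    gpt/backup1 gpt/backup2 gpt/backup3 gpt/backup4 \
    gpt/backup5 gpt/backup6 gpt/backup7 gpt/backup8
```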