When an error occurs on a drive, is it safe to assume that it will always be detected and reported as a failed read to the OS (for software RAID such as mdadm) or to the RAID controller (for hardware RAID), i.e. that the drive won't silently return corrupted data, and that the RAID software/controller will then use the other drive(s) in the array to read the data instead (assuming a RAID level with redundancy)?
From what I understand, modern enterprise-grade drives have error detection schemes in place, so I'm assuming this is the case, but I had trouble finding anything conclusive online. I imagine the answer hinges to some degree on the quality of the error detection built into the drive, so if it matters, I'm most interested in this with regard to the Intel DC S3500 series SSDs.
EDIT 5-Jun-2015 - clarification:
Specifically, I'm wondering whether the algorithms used today for error detection are bulletproof. As a simple example, if error detection were just an XOR over all the bits in the sector, then if two bits got flipped, the error wouldn't be detected. I imagine real schemes are far more advanced than that, but I wonder what the odds are of an error going undetected, whether they're so low that we need not worry about it, and whether there's an authoritative source or trustworthy article on this that could be cited.
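To make the simple-XOR example above concrete, here is a small Python sketch (purely illustrative, not how real drive ECC works) showing that a single parity bit computed by XORing all bits catches any single-bit flip but misses any two-bit flip:

```python
from functools import reduce
from operator import xor

def parity_bit(data: bytes) -> int:
    """XOR every bit of the data down to a single parity bit."""
    b = reduce(xor, data, 0)   # XOR all bytes together
    b ^= b >> 4                # fold the remaining byte's bits
    b ^= b >> 2
    b ^= b >> 1
    return b & 1

sector = bytes([0b10110010, 0b01101100])
p = parity_bit(sector)

# Flip one bit: parity changes, so the error is detected.
single_flip = bytes([sector[0] ^ 1, sector[1]])
assert parity_bit(single_flip) != p

# Flip two bits: parity is unchanged, so the corruption
# goes completely undetected.
double_flip = bytes([sector[0] ^ 1, sector[1] ^ 1])
assert parity_bit(double_flip) == p
```

Real drives use much stronger codes (e.g. Reed-Solomon or LDPC) that detect and correct multi-bit errors, but the same question applies at a different scale: every code has some error pattern it cannot detect, and what I'm after is how probable that is in practice.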
EDIT 10-Jun-2015
Updated the question title and body to make the question about disk errors in general (rather than centered on mdadm as it originally was).