All the answers above are incorrect regarding the capabilities of RAID 6. RAID 6 algorithms operate byte by byte, just as RAID 5's do, and if a single byte on any one drive is corrupt, even with no error indicated by the drive, the corruption can be detected AND CORRECTED. The algorithm for doing so is explained completely in
https://mirrors.edge.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
In order to perform this check, the parity drives P and Q must also be read along with the data drives. If the computed parities P' and Q' differ from the stored values while no drive reports an error, an analysis can pinpoint which of the drives is incorrect and correct the data.
In addition, if the drive identification points to a drive that is not present (such as drive 137 when there are only 15 drives), more than one drive is providing corrupted data FOR THAT BYTE, signaling an uncorrectable error. When there are far fewer than 256 drives in the set, this is detected with high probability per byte, and since there are many bytes in a block, with extremely high probability per block. If the drive identification is not consistent across all bytes within the RAID block, then again more than one drive is providing corrupted data, and generally one might reject the block; but so long as all the drive identifications are valid, the block need not necessarily be rejected.
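To make the pinpointing concrete, here is a small Python sketch of the algebra from the raid6.pdf paper (this illustrates the math only, not mdadm's actual implementation; the 8-drive column, the corruption values, and all names are made up for the example). It uses GF(2^8) with the Linux RAID-6 field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) and generator g = 2:

```python
# Illustration of RAID-6 bad-drive identification (not mdadm's code).
# GF(2^8), polynomial 0x11D, generator g = 2.

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D  # reduce modulo the field polynomial
    return r

# exp/log tables for the generator g = 2 (primitive, period 255).
EXP, LOG = [0] * 255, [0] * 256
v = 1
for i in range(255):
    EXP[i], LOG[v] = v, i
    v = gf_mul(v, 2)

def syndromes(col):
    """P = XOR of the data bytes; Q = sum over i of g^i * D_i."""
    p = q = 0
    for i, d in enumerate(col):
        p ^= d
        q ^= gf_mul(EXP[i], d)
    return p, q

# One byte column across a hypothetical 8-data-drive set.
col = [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0]
P, Q = syndromes(col)

# Silent corruption on drive 5: the drive reports no error.
bad = list(col)
bad[5] ^= 0x42
P2, Q2 = syndromes(bad)

# P ^ P' = D_z ^ D_z'  and  Q ^ Q' = g^z * (D_z ^ D_z'),
# so z = log_g((Q ^ Q') / (P ^ P')) names the bad drive.
dp, dq = P ^ P2, Q ^ Q2
z = (LOG[dq] - LOG[dp]) % 255
assert z == 5        # the corrupt drive is identified...
bad[z] ^= dp         # ...and the byte is corrected
assert bad == col

# Two-drive corruption: the computed index is provably neither of the
# real culprits, and with far fewer than 256 drives it almost always
# names a drive that is not present, flagging an uncorrectable error.
bad2 = list(col)
bad2[2] ^= 0x03
bad2[6] ^= 0x05
P3, Q3 = syndromes(bad2)
z2 = (LOG[Q ^ Q3] - LOG[P ^ P3]) % 255
assert z2 not in (2, 6)
```

Note that z can take any value from 0 to 254, which is why the identification is only probabilistic for multi-drive corruption: with n data drives, a random index lands in the valid range roughly n/255 of the time per byte, which is why checking consistency across all the bytes of a block matters.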
This correction takes longer than the usual verification, but it only needs to be performed when the syndrome (P and Q) calculation shows an error.
All this being said, however, I have not examined the mdadm code to determine whether single-byte corruption is handled this way. I am aware that mdadm reports RAID 6 syndrome errors on the monthly scan, but from the error message it is not clear whether they are being corrected - it neither stops the drive array nor identifies any particular drive in the message.