As often as not with a RAID array, if you can't get it to rebuild itself, you're finished. It sounds like disk 6 might have failed as well. With the loss of three disks (even if the RAID controller is hallucinating that loss), your data is pretty much gone.
I see you have no backups. That's too bad, but for the rest of your career, I imagine you'll start using RAID properly. It is many things: a way to distribute workload across disks to improve performance, and a way to reduce the immediate operational impact of a failure that would otherwise require a restore from backup. It can even limit data loss in the event of a failure, in the short term (i.e. within your backup interval). But RAID is not:
- A substitute for backups. You may have a multi-disk failure, the RAID controller itself might fail, or your data could be destroyed by software, human error, or nature in innumerable other ways.
- A license to ignore disk failures or to use suspect disks. When you suspect a disk failure, you must correct it immediately.
When you design RAID arrays in the future, consider very carefully the odds of a catastrophic failure happening before you can correct it. With a RAID 1 array of two disks, the odds of both failing at the same time are pretty low, but in your setup only three out of 16 disks (19%) had to fail within one rebuild window. Basic probability says that the wider a single parity group is, the more likely it is that enough of its disks fail close together, so that array was fragile. Use arrays with fewer disks or a higher number of tolerable failures. Multiple volumes may help; aggregate RAID volumes using compound levels like RAID 10 and RAID 60. A RAID 60 array would have tolerated up to 4 failures (up to 2 in each half), and you would most likely have been OK.
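To make that fragility concrete, here is a rough sketch of the comparison, assuming independent disk failures and a hypothetical 5% per-disk failure probability within one rebuild window (both numbers are illustrative assumptions, not figures from your setup):

```python
import math

def p_at_least_k_failures(n, k, p):
    """Probability that at least k of n disks fail in a window,
    assuming independent failures with per-disk probability p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p = 0.05  # assumed per-disk failure probability within one rebuild window

# One 16-disk RAID 6 group: data is lost once a 3rd disk fails.
wide = p_at_least_k_failures(16, 3, p)

# One 8-disk RAID 6 half of a 16-disk RAID 60: lost once a 3rd disk
# fails in that half; either half failing kills the whole RAID 60.
half = p_at_least_k_failures(8, 3, p)
raid60 = 1 - (1 - half) ** 2

print(f"16-disk RAID 6 loss probability:  {wide:.4f}")
print(f"16-disk RAID 60 loss probability: {raid60:.4f}")
```

With these assumed numbers, the single 16-disk RAID 6 group is several times more likely to lose data than the same 16 disks split into two RAID 6 halves as RAID 60.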
To extend that concept a little: when you use RAID, consider hot spares. Hot spares are great because the array can start rebuilding immediately and get out of the degraded state that much faster. They effectively add to your array's failure tolerance, as long as the failures aren't so tightly clustered that the rebuild can't finish in time.
Also, consider the time it will take the array to rebuild. It takes a while to copy a 4TB disk, which is one reason disk arrays are usually built with smaller disks than that (there are other reasons).
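As a back-of-envelope example, assuming a hypothetical sustained rebuild rate of 150 MB/s (an assumption for illustration; real rates vary with the controller, workload, and disk):

```python
# Rough single-disk rebuild time; the 150 MB/s rate is an assumed
# sustained throughput, not a measured figure.
capacity_mb = 4 * 1_000_000   # 4 TB in decimal megabytes
rate_mb_s = 150               # assumed sustained rebuild rate

hours = capacity_mb / rate_mb_s / 3600
print(f"~{hours:.1f} hours at {rate_mb_s} MB/s")  # roughly 7.4 hours
```

And that is the best case: a rebuild competing with production I/O on a busy array can take far longer, and the array is vulnerable the whole time.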
Finally:
- Use high-quality disks. Check the MTTF, if quoted, and use enterprise-class drives. The premium price is there for a reason. Avoid "green" drives that spin down or park aggressively to save power, and the like.
- Label your disks. Then, you won't forget which order they go in.
Hopefully this lesson wasn't too expensive.