Something must have happened in July 2013 that degraded the RAID and left it running on a single drive. You did not notice this and take action, so the array has been at risk ever since.
The failing drive was not completely dead, but it was no longer fit to be part of the array.
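This is why monitoring matters: a degraded mirror keeps serving reads and writes, so nothing looks wrong from the outside. On Linux software RAID, `mdadm --monitor --scan` running as a daemon with working mail delivery is the usual answer, but even a tiny cron script watching `/proc/mdstat` would have surfaced the July 2013 failure. A minimal sketch, assuming Linux md arrays:

```python
#!/usr/bin/env python3
"""Minimal degraded-array check for Linux md RAID; run it from cron.

A sketch, not a replacement for `mdadm --monitor`.
"""
import re
import sys

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return names of md arrays whose status shows a failed member.

    /proc/mdstat prints a status like [2/1] [U_] for a two-disk
    array running on one member; '_' marks a missing/failed disk.
    """
    degraded = []
    current = None
    with open(mdstat_path) as fh:
        for line in fh:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
            # status line, e.g. "... blocks ... [2/1] [U_]"
            elif current and (status := re.search(r"\[([U_]+)\]", line)):
                if "_" in status.group(1):
                    degraded.append(current)
                current = None
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    if bad:
        print(f"DEGRADED: {', '.join(bad)}", file=sys.stderr)
        sys.exit(1)  # nonzero exit so cron/monitoring can alert
    print("all arrays healthy")
```

Any alerting mechanism would do; the point is that a degraded array should page someone the day it happens, not two years later.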
Then the second drive failed. At that point the RAID was not working at all, because it had no working disks left, and any read from it would likely fail with an I/O error. It is somewhat surprising that it could even produce a 500, but as long as the webserver itself can still process requests, a 500 is the correct status code to return on an I/O error.
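To make that concrete, here is a minimal sketch of a file-serving handler using Python's `wsgiref` (this is illustrative, not your actual web stack, and the document root is hypothetical): a file whose blocks cannot be read maps to 500, while a file that simply does not exist maps to 404.

```python
from wsgiref.simple_server import make_server

DOCROOT = "/srv/www"  # hypothetical document root

def app(environ, start_response):
    # No path sanitization here: illustration only.
    path = DOCROOT + environ.get("PATH_INFO", "/")
    try:
        with open(path, "rb") as fh:
            body = fh.read()  # raises OSError on a failed disk read
    except FileNotFoundError:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    except OSError:
        # The file exists in metadata but its blocks can't be read:
        # exactly what a dead RAID produces. The server is at fault,
        # not the client, hence 500.
        start_response("500 Internal Server Error",
                       [("Content-Type", "text/plain")])
        return [b"internal server error"]
    start_response("200 OK", [("Content-Type", "application/octet-stream")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()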
You have a decision to make: is it acceptable to restore from backup and thus lose the data created between the most recent backup and the failure of the second drive? If so, you can start that restore right away. But you should definitely not restore onto the same drives.
So your next step is to get a pair of new drives, configure a new RAID on them, and then restore to that new array.
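The exact mechanics depend on your setup, but with Linux md the sequence is roughly: create the array on the new drives, make a filesystem, mount it, restore. A sketch with hypothetical device names and backup path; verify every device name with `lsblk` first, and make absolutely sure none of them point at the old, failed drives:

```python
#!/usr/bin/env python3
"""Sketch: build a RAID-1 on two NEW drives and restore a backup onto it.

Device names and paths are hypothetical; adjust before running.
"""
import subprocess

NEW_DISKS = ["/dev/sdc1", "/dev/sdd1"]  # partitions on the two new SSDs
ARRAY = "/dev/md1"                      # new array, distinct from the old one
MOUNTPOINT = "/mnt/restore"
BACKUP_SRC = "/mnt/backup/latest/"      # wherever the most recent backup lives

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a fresh two-disk mirror on the new drives.
run(["mdadm", "--create", ARRAY, "--level=1",
     "--raid-devices=2", *NEW_DISKS])

# Filesystem, mountpoint, mount.
run(["mkfs.ext4", ARRAY])
run(["mkdir", "-p", MOUNTPOINT])
run(["mount", ARRAY, MOUNTPOINT])

# Restore from the most recent backup, preserving ownership, ACLs, xattrs.
run(["rsync", "-aHAX", "--info=progress2", BACKUP_SRC, MOUNTPOINT])
```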
Take the two faulty drives to a data recovery specialist, and don't make the situation any worse by writing anything to them in the meantime. A copy of the most recent backup will make the recovery task a bit easier for the specialist, so you will want to buy three new media in total: two for the new RAID, which would probably be SSDs, and one more drive to hold a copy of the most recent backup. That last one can be a hard disk, since it doesn't need the extra performance an SSD offers.
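Since that extra copy may end up being the authoritative one, it is worth verifying it bit for bit after copying. A sketch, assuming the backup is a directory tree and using hypothetical mount points:

```python
#!/usr/bin/env python3
"""Copy the latest backup to a spare drive and verify the copy by hash.

Paths are hypothetical; adjust to your mount points.
"""
import hashlib
import shutil
from pathlib import Path

SRC = Path("/mnt/backup/latest")
DST = Path("/mnt/sparedisk/backup-copy")  # must not exist yet

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

shutil.copytree(SRC, DST)

# Compare every regular file's digest between source and copy.
mismatches = [
    rel for rel in (p.relative_to(SRC) for p in SRC.rglob("*") if p.is_file())
    if sha256(SRC / rel) != sha256(DST / rel)
]
print("copy verified" if not mismatches else f"MISMATCHES: {mismatches}")
```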