I always hear people say "test your backups", but I have no idea how that is actually done when you have to deal with a complex infrastructure.
For personal backups it's easy to rely on something like checksums, because all you have to recover is your own files (pictures, zip archives, documents, etc.). If you recompute the checksums every once in a while and compare them against the ones recorded at backup time, you can be reasonably sure that your files are still readable and intact, and you can restore them manually if you ever need to.
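To be concrete, this is roughly the kind of check I mean for the personal case. It's a minimal sketch that assumes the checksums were recorded in a `sha256sum`-style manifest (one `<hex digest>  <relative path>` entry per line); that filename and layout are just my assumption, not any particular tool's format:

```python
#!/usr/bin/env python3
"""Verify a backup directory against a SHA-256 manifest."""
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large archives don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(backup_root: Path, manifest: Path) -> int:
    """Return the number of missing or corrupted files."""
    failures = 0
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        # sha256sum output format: "<digest>  <path>" (two spaces).
        expected, _, rel_path = line.partition("  ")
        target = backup_root / rel_path
        if not target.is_file():
            print(f"MISSING  {rel_path}")
            failures += 1
        elif sha256_of(target) != expected:
            print(f"CORRUPT  {rel_path}")
            failures += 1
    return failures

if __name__ == "__main__":
    root, manifest = Path(sys.argv[1]), Path(sys.argv[2])
    bad = verify(root, manifest)
    print(f"{bad} problem(s) found" if bad else "all files verified OK")
    sys.exit(1 if bad else 0)
```

Something like `python3 verify_backup.py /mnt/backup manifest.sha256` run periodically, with a non-zero exit code flagging missing or corrupted files, covers the personal case well enough.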
But when you need to back up data in a business or organization, you have to deal with a complex infrastructure: large offices, several servers, many machines, users with different roles, lots of configuration, far more files, and so on. You also need to avoid downtime, so I suppose the whole environment should be restorable quickly and automatically.

The question is: how do you make sure the whole backup process works and keeps working (generating valid backups that can actually be restored) in such a complex infrastructure? Testing with full restores doesn't seem feasible unless you have a "copy" of the whole company to use as a testing field (like a huge empty office full of machines just to simulate a restore). So I suppose backups have to be tested in other ways, but I have no idea what that testing process looks like.
So what I'm asking is: how are backups tested in practice at reasonably large companies, including all the necessary steps (what is usually backed up, what is usually not, where it is stored, how it is tested, how often, and which machines, software, and people are involved in the process)?