Here's something that has been bugging me ever since the NSA revelations came out. From what I've read, the NSA built a system that effectively sees most of the internet, made up of many subsystems that interact with the networks they tap. Judging by the leaked documents, these were pretty invasive, yet none of it was detected before the leaks. How is that possible?
I mean, I don't believe software is ever perfect on the first try, especially a system that processes a tremendous amount of varied data in real time. How could they design a system that achieves that without getting hacked, or at least without crashes producing observable anomalies?