I was reading an infosec blog recently, and I was caught off guard by the following statement:
"Sure you can run up to date software and firewalls and that network appliance in your data center that apparently solves the halting problem... or you could begin to think differently and push undecidability onto your attacker at the moment he tries to exploit your application."
I was aware of the halting problem but had never really paid much attention to it, so I did some follow-up reading. I understand the basic implications of the principle, but I wonder whether invoking the halting problem is just a convenient excuse for the dismal state of cybersecurity. What applicability does the halting problem actually have to cybersecurity tasks, such as malware detection or vulnerability detection?
The way I see it, the halting problem implies that it's not possible to write a general-purpose program which makes 100% accurate determinations about the behavior of every possible program. I'm thinking of a program that analyzes software to determine whether it's malicious, or one that analyzes software for buffer overflow or SQLi vulnerabilities.
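To make that connection concrete, here is a minimal sketch (my own hypothetical names, not from any real detector) of the standard reduction: if a detector could always decide "does this program ever run its payload?", it could be used to decide "does this arbitrary program halt?", which Turing proved undecidable.

```python
def make_wrapper(program, payload):
    """Return a program that runs `payload` iff `program` halts.

    A hypothetical perfect detector asked "does wrapped() ever
    execute payload()?" would thereby be answering "does program()
    halt?" -- so a perfect detector would solve the halting problem.
    """
    def wrapped():
        program()   # arbitrary computation; may or may not halt
        payload()   # reached only if program() halts
    return wrapped

# Usage: with a trivially halting program, the wrapper is "malicious".
ran = []
wrapped = make_wrapper(lambda: None, lambda: ran.append("payload"))
wrapped()
```

The same construction works for any behavioral property, not just maliciousness, which is the intuition behind Rice's theorem: every non-trivial semantic property of programs is undecidable in general.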
The proof relies on a special, contrived program to produce the paradox. That doesn't mean no program can be analyzed by another program with 100% accuracy, or even that the large majority of real-world programs can't be reliably analyzed that way.
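Indeed, halting is decidable for restricted classes of programs. As a toy illustration (an instruction set I made up for this example, not a real analyzer), here is an exact halting decider for a language whose control flow cannot depend on data:

```python
def halts(program):
    """Exact halting decider for a toy language of instructions
    ("set", var, n), ("add", var, n), and ("jmp", target).

    Control flow ignores data, so execution is a fixed walk over
    instruction indices: if the instruction pointer revisits an
    index it must loop forever; otherwise it falls off the end
    within len(program) steps.
    """
    seen, ip = set(), 0
    while 0 <= ip < len(program):
        if ip in seen:
            return False        # revisited an index: infinite loop
        seen.add(ip)
        op, *args = program[ip]
        ip = args[0] if op == "jmp" else ip + 1
    return True                 # fell off the end: halts
```

Real static analyzers work in the same spirit: they restrict the property, the language, or the precision (accepting some false positives/negatives) so that the question they actually answer is decidable.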
My final question: can an attacker use the halting problem as a form of attack? E.g., whatever analysis program you have written to detect the attacker's presence or activity, can the attacker write a program that provably makes your detection problem undecidable?
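In practice an attacker doesn't need the full diagonal construction; any semantics-preserving rewrite pushes a syntactic detector toward having to reason about semantics, which is where undecidability bites. A toy sketch (the signature string and detector are hypothetical):

```python
import base64

SIGNATURE = "evil_payload"   # hypothetical signature the scanner looks for

def signature_detector(source: str) -> bool:
    # Purely syntactic scanning: flag any source containing the signature.
    return SIGNATURE in source

plain = "evil_payload()"

# Same behavior, but the signature never appears literally in the source:
# the name is reconstructed at runtime from its base64 encoding.
obfuscated = (
    "import base64\n"
    "name = base64.b64decode('ZXZpbF9wYXlsb2Fk').decode()\n"
    "globals()[name]()\n"
)

print(signature_detector(plain))       # flagged
print(signature_detector(obfuscated))  # missed
```

This is the cat-and-mouse behind real malware packers and polymorphic engines: the defender can always add the new pattern, and the attacker can always rewrite again, precisely because no detector can decide the underlying semantic property for all programs.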
Any pointers to real-world research or established experts on this subject are much appreciated.