I'm curious to know if an attacker can fundamentally exploit the debugging process.
I'm not asking whether specific debugging tools have been exploitable (surely some have), but whether the process of debugging itself, with any and perhaps every debugging tool, is vulnerable to being lied to or used as an attack surface.
If the answer is "it depends on which OS we're talking about", let's focus on the Windows debugging API. And if the answer is that the way Windows does it isn't reliable or secure, but it fundamentally could be, then that's what I'd like to know.
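For concreteness, the flow I have in mind is the standard user-mode debug loop: attach to a running process and receive its debug events from the kernel. The sketch below is my own minimal illustration (error handling trimmed, the PID is a placeholder), not a claim about how a secure debugger would have to be built:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;                       /* placeholder: PID of the watched process */

    if (!DebugActiveProcess(pid)) {         /* become the debugger of that process */
        fprintf(stderr, "attach failed: %lu\n", GetLastError());
        return 1;
    }

    DEBUG_EVENT evt;
    for (;;) {
        if (!WaitForDebugEvent(&evt, INFINITE))   /* block until the kernel delivers the next event */
            break;

        switch (evt.dwDebugEventCode) {
        case CREATE_PROCESS_DEBUG_EVENT:
        case CREATE_THREAD_DEBUG_EVENT:
        case LOAD_DLL_DEBUG_EVENT:
        case EXCEPTION_DEBUG_EVENT:
            /* This is where state tracking / signature extraction would hook in. */
            printf("event %lu from pid %lu tid %lu\n",
                   evt.dwDebugEventCode, evt.dwProcessId, evt.dwThreadId);
            break;
        case EXIT_PROCESS_DEBUG_EVENT:
            return 0;
        }

        /* Resume the debuggee; a real debugger would sometimes pass
           DBG_EXCEPTION_NOT_HANDLED here instead of swallowing exceptions. */
        ContinueDebugEvent(evt.dwProcessId, evt.dwThreadId, DBG_CONTINUE);
    }
    return 0;
}
```

My question is essentially whether anything in that attach/wait/continue mechanism can be subverted or lied to by the debuggee.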
This relates to research I'm doing on the problem space of modern malware using polymorphism, emulator evasion, and obfuscation to defeat signature recognition.
It doesn't matter if the attacker knows it's running in a debugger
I'm fully aware that an attacker can detect that they're running in a modified environment (whitepaper by Blackthorne et al. on emulator detection; debuggers create the same scenario) (video) and use evasion strategies to hide the malicious behavior they would otherwise exhibit on a target machine. However, I'm interested in using a debugger (QIRA, a "timeless" debugger, is of interest, although not decidedly so) to track the state of a process on the target machine, so that I can experiment with applying machine learning to develop signatures for polymorphic, obfuscated, and evasive programs.
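To make the evasion side concrete, here is a hedged sketch (my own illustration, not taken from the whitepaper) of two of the well-known user-mode checks a sample might run; real samples layer many more tricks on top (timing, PEB flags, NtQueryInformationProcess, and so on):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL remote = FALSE;

    /* Reads the BeingDebugged flag from this process's PEB. */
    if (IsDebuggerPresent())
        puts("debugger detected via IsDebuggerPresent");

    /* Asks the kernel whether a debug port is attached to this process. */
    if (CheckRemoteDebuggerPresent(GetCurrentProcess(), &remote) && remote)
        puts("debugger detected via CheckRemoteDebuggerPresent");

    /* An evasive sample would branch to benign behavior here instead of printing. */
    return 0;
}
```

The point of my approach is that even if checks like these fire, it changes nothing, as explained next.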
Because we're watching a process on the actual target rather than within an AV emulator sandbox, it doesn't matter that the attacker knows they're being watched by a debugger. The attacker has to execute the malicious behavior path in their code in order for the payload to accomplish its goal.
My goal is signature detection; interrupting a detected process would be nice, but it's not mandatory. I'm looking at this problem space from the perspective of building a reliable way to create a signature that recognizes a polymorphic, obfuscated payload, so the threat can be identified and responded to rapidly, even if it has already done the bad thing.
Performance isn't of interest
(So long as it's still feasible to run basic office programs with acceptable performance)
I realize running a program in a debugger will cause a significant performance hit. That's an accepted trade-off of the research I'm interested in. We're assuming the defender is willing to pay higher hardware costs to run programs this way. If certain computationally expensive programs can't afford the overhead, the security measure simply wouldn't apply to the hosts running them; such hosts could be isolated in a dedicated network segment with that factor in mind.
My question is this:
Is there a fundamental flaw in the reliability of the way debugging can be accomplished? Is the way debugging happens vulnerable at the processor, kernel, or OS level? Or is it possible for a properly designed debugger to reliably watch the state of a program in a production environment as it executes?