
I'm curious to know if an attacker can fundamentally exploit the debugging process.

I'm not asking whether specific debugging tools have been exploitable - surely some have - but whether the process of debugging itself, in any and perhaps every debugging tool, is vulnerable to being lied to or used as an attack surface.

If the answer is "it depends on which OS we're talking about", let's focus on the Windows debugging API. And if the answer is that the way Windows does it isn't reliable or secure but it fundamentally could be, then that's what I'd like to know.

This relates to research I'm doing on the problem space of modern malware using polymorphism, emulator evasion, and obfuscation to defeat signature recognition.

It doesn't matter if the attacker knows it's running in a debugger

I'm fully aware that an attacker can detect that they're running in a modified environment (see the whitepaper by Blackthorne et al. on emulator detection; debuggers create the same scenario) and use evasion strategies to hide the malicious behavior they might otherwise demonstrate on a target machine. However, I'm interested in using a debugger (Qira, a "timeless" debugger, is of interest, although not decidedly so) to track the state of a process on the target machine so I can experiment with applying machine learning to develop signatures of polymorphic, obfuscated, and evasive programs.

Because we're watching a process on the actual target rather than within an EV emulator sandbox, it doesn't matter that the attacker knows they're being watched by a debugger. The attacker has to execute the malicious behavior path within their code in order for the payload to accomplish its goal.
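
For concreteness, the kind of state tracking I have in mind looks roughly like the sketch below (assuming the Win32 debugging API on x64): single-step the target by setting the x86 trap flag in the thread context and snapshot registers on every step as raw features for the ML experiments. The `record_state` sink is a hypothetical placeholder, and the surrounding debug event loop and thread-handle bookkeeping are omitted.

```c
#include <windows.h>
#include <stdio.h>

/* Hypothetical feature sink: here it only logs the instruction pointer,
   but in the experiment it would feed the ML pipeline. */
static void record_state(const CONTEXT *ctx)
{
    printf("rip=%llx rax=%llx\n",
           (unsigned long long)ctx->Rip, (unsigned long long)ctx->Rax);
}

/* Called from a standard Win32 debug loop whenever the debuggee reports
   EXCEPTION_SINGLE_STEP; hThread is the reporting thread's handle. */
static void on_single_step(HANDLE hThread)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_FULL;
    if (!GetThreadContext(hThread, &ctx))
        return;

    record_state(&ctx);   /* snapshot registers as one step of the trace */

    ctx.EFlags |= 0x100;  /* re-arm the trap flag (TF) for the next instruction */
    SetThreadContext(hThread, &ctx);
}
```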

My goal is detection of a signature - perhaps interruption of the detected process, but that's not mandatory. I'm looking at this problem space from the perspective of building a reliable way to create a signature for a polymorphic, obfuscated payload, so the threat can be identified and responded to rapidly, even if it has already done the bad thing.

Performance isn't of interest

(So long as it's still feasible to run basic office programs performantly)

I realize running a program in a debugger will cause a significant performance hit. That's an accepted trade-off of the research I'm interested in. We're assuming the defender is willing to pay higher hardware costs to run programs. If certain computationally expensive programs can't afford this overhead, the security measure simply wouldn't apply to the hosts running those programs; such hosts may be isolated in a specific network segment with that factor in mind.

My question is this:

Is there a fundamental flaw in the reliability of the way debugging can be accomplished? Is the way debugging happens vulnerable at the processor, kernel, or OS level? Or is it possible for a properly designed debugger to reliably watch the state of a program in a production environment as it executes?

J.Todd
  • I don't understand your question. Debuggers **can** contain vulnerabilities, yes, and they could be exploited. There's simply no reason to do so: the malware is already running, and it cannot count on being debugged to be effective. The program cannot "lie" to a debugger, but it could escape it. Most debuggers can hide from most malware (one common exception being GuLoader) just fine. Note that Emotet is no more modern than any other malware (and has already been shut down); also, last time I analyzed it, it didn't have any real polymorphic code (just the usual packer stage). – Margaret Bloom Jun 13 '21 at 15:12
  • Actually, I've yet to see a malware that really is polymorphic. Finally, a debugger is made for... debugging, which involves a human. You may want some kind of monitoring, but beware that in order to see the changes to memory you need something that is essentially an optimized, automated debugger. ML algorithms focus on behavior, not code; it doesn't matter which code produces the behaviour, and they usually monitor API calls. Windows allows a program to escape the debugger; if that happens, or the malware detects the debugger, the analysis won't be correct. Data is not usually collected with a debugger. – Margaret Bloom Jun 13 '21 at 15:13
  • @MargaretBloom I haven't gone through it myself, just seen it referenced by seemingly reputable people as a recent, hard-to-identify package, but they may have been referring only to the fact that the command and control server can send subsets of the tool's full module set, causing the signature to vary. – J.Todd Jun 15 '21 at 00:53
  • @MargaretBloom My interest is not currently in practical application but in the research aspect of what is provably effective, in the sense that the attacker can't cat-and-mouse the defense mechanism, at least not by direct attack. My research involves reliably monitoring the behavioral results of instructions and applying machine learning to recognize key behaviors that no amount of obfuscation can hide. I think this video is at least somewhat similar to the approach I'm considering (also an incredibly interesting piece of research regardless): https://www.youtube.com/watch?v=0SvX6F80qg8 – J.Todd Jun 15 '21 at 00:56

2 Answers


Debuggers are fundamentally ahead in this game. A perfectly written debugger will always be able to simulate a runtime environment that even perfectly written malware could never detect. Real-life debuggers are complicated pieces of software that can have many vulnerabilities, which may allow particularly sophisticated malware to detect that it's running in a debugger, but in theory debuggers have the inherent field advantage.

However, one complication is that in the real world, poking holes in real software/hardware with bugs and writing malware to exploit them is a much easier task than debugging an unknown program and determining whether it contains malicious behaviour. So in practice, malware does have a lot of practical advantages.

Lie Ryan

If the question is whether the debugger could make things even worse (opening new attack vectors), then no, I don't think so. The classic debug process is based on inserting (overwriting) a special assembly instruction (`int 3` on x86 processors) into the code at the desired location, interrupting the debugged program, and letting the debugger take control of the process. Unless there is some serious vulnerability in the kernel's SIGTRAP handler, the process cannot elevate privileges this way and keeps running in its usual context. Having `int 3` not terminate the process (which is usually the default) may present some new opportunities for delicate side-channel attacks, but that's all. Because we are actually running the malware in our production environment, it cannot get much worse.
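
For concreteness, since the question singles out the Windows debugging API, here is a minimal sketch of that mechanism from the debugger's side: the debugger owns the event loop, an `int 3` hit in the debuggee surfaces as an ordinary breakpoint exception handled in the debugger's own process, and the debuggee keeps running in its own security context. The target path is a made-up placeholder and error handling is omitted.

```c
/* Minimal sketch of a classic debug loop over the Win32 debugging API.
   "C:\\target.exe" is a placeholder; error handling is omitted. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;

    /* Start the target with the OS-level debug channel attached. */
    if (!CreateProcessA("C:\\target.exe", NULL, NULL, NULL, FALSE,
                        DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
        return 1;

    for (;;) {
        DEBUG_EVENT ev;
        DWORD cont = DBG_CONTINUE;

        if (!WaitForDebugEvent(&ev, INFINITE))
            break;

        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT) {
            /* An int 3 in the debuggee arrives here as EXCEPTION_BREAKPOINT;
               it is handled in the debugger, not in the debuggee. */
            if (ev.u.Exception.ExceptionRecord.ExceptionCode == EXCEPTION_BREAKPOINT)
                printf("breakpoint at %p\n",
                       ev.u.Exception.ExceptionRecord.ExceptionAddress);
            else
                cont = DBG_EXCEPTION_NOT_HANDLED;  /* pass other exceptions back */
        } else if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT) {
            break;
        }

        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, cont);
    }

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```

Nothing in this loop gives the debuggee any privilege it didn't already have; the debugger only observes events and resumes the target.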

For experimenting with and calibrating ML algorithms, I'd suggest a honeypot system which mimics the production one but doesn't handle any critical information. It could be run in parallel with the production system.

Additional info: emulators and debuggers are different. Emulators run programs in an isolated environment, which may or may not be easily detected depending on the effort spent. Debuggers run code in the host environment. They can also be detected, using multiple (but different) techniques such as starting a (second) probe debugger, looking for code modification or debugger libraries, analyzing the effects of exceptions, detecting hardware breakpoints, or measuring time.
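
To make that last point concrete, here is a rough sketch, from the malware's point of view, of two of those techniques on Windows: inspecting the hardware breakpoint registers in the thread context and a crude timing check, alongside the trivial `IsDebuggerPresent` call. The loop size and timing threshold are arbitrary illustrations, not a complete or reliable anti-debug implementation.

```c
/* Sketch of common debugger-detection checks; thresholds are illustrative. */
#include <windows.h>
#include <intrin.h>
#include <stdio.h>

static int hw_breakpoints_set(void)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    if (!GetThreadContext(GetCurrentThread(), &ctx))
        return 0;
    /* Dr0-Dr3 hold hardware breakpoint addresses if a debugger set any. */
    return (ctx.Dr0 || ctx.Dr1 || ctx.Dr2 || ctx.Dr3);
}

static int timing_anomaly(void)
{
    /* A single-stepped or heavily instrumented run is far slower. */
    unsigned long long t0 = __rdtsc();
    volatile int x = 0;
    for (int i = 0; i < 1000; i++)
        x += i;
    return (__rdtsc() - t0) > 10000000ULL;  /* arbitrary threshold */
}

int main(void)
{
    printf("IsDebuggerPresent:    %d\n", IsDebuggerPresent());
    printf("hardware breakpoints: %d\n", hw_breakpoints_set());
    printf("timing anomaly:       %d\n", timing_anomaly());
    return 0;
}
```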

goteguru
  • "I think there is a fundamental problem with the "debugger approach": If we try to monitor the malware in the debugger -- which is certainly possible -- we are running the malicious code in the production environment using the host's libs, network and kernel. It's hardly a good idea." The point is, this is a last effort defense system. *The attacker has already gotten past every previous layer of defense and is executing code on the target host*. We're just making sure we identify the behavior signature so we can react rapidly. – J.Todd Jun 10 '21 at 23:01
  • "Understanding the malware's inner mechanics and capabilities might need several run attempts. How does the debugger (software or human) know what steps are potentially dangerous?" Obfuscated or polymorphed malware is comparable to a story (or lots of short stories) with the same start and ending, just different ways of getting there. It's my hypothesis that machine learning can be leveraged to train against register and memory state changes over time to recognize what's really happening. – J.Todd Jun 10 '21 at 23:03
  • Your answer could be improved by removing the opinions about why it shouldn't be done (which I believe are objectively wrong, and both of which I addressed) and keeping the last paragraph. I'd upvote the last paragraph; it's useful info. – J.Todd Jun 10 '21 at 23:05
  • "Emulators and debuggers are not the same scenario at all." and by this, I meant the same scenario as in "both fundamentally make it possible for the attacker to notice the defense measure". My point was that I'm aware of the evasive behavior possible against debuggers and emulators, and it doesn't apply in this case because XYZ. – J.Todd Jun 10 '21 at 23:09
  • @J.Todd I think it's important to see that debuggers and emulators operate differently. The methods in the paper you referenced won't work in the case of debuggers; other methods do. As a last-resort defense line the idea is interesting (it wasn't clear to me). Still, I think building a honeypot would be more effective (and safer). I'm not sure how you would like to utilize ML methods for this purpose, but I'm interested - I would gladly hear about it. You can find my email on my profile page. – goteguru Jun 11 '21 at 21:15
  • A debugger can allow a malware to elevate privilege. The `SeDebugPrivilege` is owned by the debugger (which is often run as full admin). If the debugger contains an exploit, the malware can make the debugger run arbitrary, privileged code. But this is dumb: a malware must work when not being debugged and should stop when it is, so relying on a debugger for LPE is just stupid. – Margaret Bloom Jun 13 '21 at 15:15
  • @MargaretBloom I'm not an MS expert (Linux is my choice), but to my understanding SeDebugPrivilege only grants the right to debug other processes. If the debugger is run as full admin *and* has a vulnerability, there is a trivial problem which the OP is already aware of. I still can't see how a *debug process* could be exploited using SeDebugPrivilege. Would you clarify? – goteguru Jun 14 '21 at 16:52
  • Well, [SeDebugPrivilege is a pretty strong privilege](https://devblogs.microsoft.com/oldnewthing/20080314-00/?p=23113). But I was wrong, a debugger doesn't necessarily need it. So you are right, you can run a debugger with the same privileges as any other user program (including the malware). – Margaret Bloom Jun 14 '21 at 18:48
  • Ok. Thank you for the clarification. – goteguru Jun 15 '21 at 21:29