
It just came to my mind that a few years ago many iOS applications were infected by XcodeGhost (notably WeChat). This made me think about a few possible scenarios:

Malicious code injected in object files

Compilers produce many temporary files, and those files are often not even scanned by AV software because doing so visibly slows down compilation. Even if the threat is later detected, by some heuristic, in the compiled executable, we will often dismiss it as a false positive ("hey, it's MY code").

A malicious application (probably a Trojan) is running with low privileges but it has (obviously?) access to user data. Such an application can EASILY inject malicious code into compiler-generated intermediate object files, and that code will become part of our compiled application. Moreover, the compiler may apply intra-module optimizations, mixing the malicious code with our own and making detection in the produced executable even harder.

Of course the attacker has to know the exact file format, target architecture and compiler version, but it's feasible to support a few architectures and the most popular compilers.

No one, I assume, will inspect the generated code... not even when it's easy to decompile, as with .NET and Java.

Malicious code injected in temporary files

Temporary files are often (at least in my experience) treated as part of the application and considered secure. Unless you're using a tool like VERACODE, where every external input is considered compromised, a malicious application can read and change them, altering the behavior of another application and, in some circumstances, even injecting code or exploiting another security issue.

It's true that such data is considered shared, but it's not uncommon (I won't mention in which applications...) to store a cache of executable code as a temporary file, which can be tampered with to force the host application to run malicious code. This opens up three possibilities: the host application can perform unwanted actions, share data which is supposed to be confidential, or execute something with elevated privileges.

A combination of this approach and the previous one: an attacker may simply change the makefile generated by an IDE as a temporary file to include its own library (or object file).

At least on Windows, temporary files are open to injection even by an unelevated application (because protection is per user).

Malicious application can read private data

It's surprising how much data is stored in our temporary files. Some of it should not be visible to an application with low privileges but, again, protection is per user. Even without injecting any code, a malicious application can read data it does not have the permissions to read (and I made only a very quick scan of the leftovers in my Windows temp folder... all the temporary files properly deleted after a few hundred milliseconds may contain even more sensitive data).

Finally, the question

What can we do (as developers and as users) to protect ourselves against these kinds of attacks?

The first attack might be mitigated (but not completely avoided) by using a build server, but a surprising number of applications are built on the development machine (even more often when using Docker). AVs might set up a honeypot to, at least, detect this attack.

The second attack is the easiest to avoid: we just need to consider temporary files as compromised, but the workaround might not be that easy to implement...

For the third one I can't think of a solution (besides stopping using MANY unsafe applications) other than running each application (or group of them) as a separate user. We'd need OS support to store temporary files isolated per application.

Ideas?

Adriano Repetti
    To some extent, this has happened: [According to this Wired article](https://www.wired.com/story/petya-plague-automatic-software-updates/) the Petya ransom-ware was spread by getting inside the developers' network and infecting a software update. And – while MeDoc _didn't_ code-sign their releases – it is believed the attackers were deep-enough in the system either to have signed things themselves, or "_even added their backdoor directly into the source code before it would be compiled into an executable program, signed and distributed_".... – TripeHound Nov 13 '18 at 14:30
    ... It also mentions infected developer software that "_inserted malicious code into hundreds of iPhone apps in the App Store that were likely installed on millions of devices despite Apple's strict codesigning implementation._". – TripeHound Nov 13 '18 at 14:30
  • @TripeHound thanks for the reference. I tried to search for some examples (I'm pretty sure that all these scenarios aren't new) but I didn't find anything. What you cited is exactly what I'm talking about. With the plethora of add-ins, plug-ins, extensions and developer _tools_ out there I'm sure it's extremely easy to exploit this vulnerability (even simply adding malicious code to source files generated from any DSL). – Adriano Repetti Nov 13 '18 at 14:46
  • It basically boils down to solving the halting problem – wireghoul Dec 07 '21 at 00:46

1 Answer


My answer may look ridiculous compared to your question, but you can simply use code signing to mitigate all of the scenarios you described.

As you mentioned .NET: you cannot "simply" inject code into a strong-named assembly without tampering with it.

Before loading anything, you verify its authenticity (against your digital certificate).

Is this the ultimate solution? Of course NOT, but it demands much more sophisticated malware than a simple code injector (whatever it injects, wherever it injects it).

Soufiane Tahiri
  • I'm not limited to .NET only, but even in that specific case, and only for the second scenario: you inject code **before** the executable is actually generated (in the temporary obj files), so you will actually sign the tampered executable. – Adriano Repetti Nov 13 '18 at 14:08
  • You're right. Sorry for answering without considering the whole picture. This made me think about the overall problem, and all I've come up with is making a custom watcher, which would probably be configurable to "monitor" some directories or file types and fire/log accesses or edits/tampering. At least you would be informed that your obj files, for example, were edited/replaced/tampered with... – Soufiane Tahiri Nov 16 '18 at 14:05
  • Don't worry, discussing it is, IMO, the very first thing in security. I'm sure some AVs (or separate tools) already have that kind of honeypot (by default they probably generate office documents, but I suppose they're configurable). It's cheap and it can probably be used both on dev and build machines. – Adriano Repetti Nov 16 '18 at 16:00