It just came to my mind that a few years ago many iOS applications were infected by XcodeGhost (notably WeChat). This made me think about a few possible scenarios:
Malicious code injected in object files
Compilers produce many temporary files, and those files are often not even scanned by AV software because scanning visibly slows down compilation. Even if the threat is detected later, by some heuristic, on the compiled executable, we'll often dismiss it as a false positive ("hey, it's MY code").
A malicious application (probably a Trojan) is running with low privileges, but it has (obviously?) access to user data. Such an application can EASILY inject malicious code into compiler-generated intermediate object files, and that code will become part of our compiled application. Moreover, the compiler may apply intra-module optimizations, mixing the malicious code with our own and making detection in the produced executable harder.
Of course the attacker has to know the exact file format, target architecture and compiler version, but it's feasible to support a few architectures and the most popular compilers.
No one, I assume, will inspect the generated code... not even when it's easy to decompile, as with .NET and Java.
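To illustrate how low the bar is, here is a minimal sketch (the path is hypothetical, and a real attack would additionally need to parse the object file format to splice in working code) showing that any process running as the same user can silently rewrite an intermediate object file between compilation and linking:

```python
# Sketch: a same-user process rewriting an intermediate object file
# between the compile and link steps. No elevation prompt appears.
from pathlib import Path

# Hypothetical build output path; adjust to any real project to test.
obj = Path.home() / "projects" / "app" / "build" / "main.o"

if obj.exists():
    data = bytearray(obj.read_bytes())
    # A real attack would locate a code section and inject a payload;
    # here we only demonstrate that the file is silently writable.
    obj.write_bytes(data)
    print(f"rewrote {obj} with no special privileges")
```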
Malicious code injected in temporary files
Temporary files are often (at least in my experience) treated as part of the application and considered secure. Unless you're using a tool like Veracode, where every external input is considered compromised, a malicious application can read and change them, altering the behavior of another application and, in some circumstances, even injecting code or exploiting another security issue.
It's true that such data is considered shared, but it's not uncommon (I won't mention in which applications...) to store a cache of executable code as a temporary file, which can be tampered with to force the host application to run malicious code. This opens up three possibilities: the host application can perform unwanted actions, leak data that is supposed to be confidential, or execute something with elevated privileges.
A combination of this approach and the previous one: an attacker may simply change the makefile generated by an IDE as a temporary file to include its own library (or object file); a sketch follows below.
At least on Windows, temporary files are thus open to injection even by an unelevated application (because protection is per user, not per application).
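The makefile variant mentioned above is almost trivial. A minimal sketch, where the makefile path, library path and flag are all hypothetical:

```python
# Sketch: a low-privilege process appends a linker flag to an
# IDE-generated makefile so that the next build pulls in an
# attacker-controlled library. Everything here is hypothetical.
from pathlib import Path

makefile = Path.home() / "projects" / "app" / "Makefile"  # hypothetical

if makefile.exists():
    with makefile.open("a") as f:
        f.write("\nLDFLAGS += -L/tmp/evil -lpayload\n")  # hypothetical library
```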
Malicious application can read private data
The amount of data stored in our temporary files is surprising; some of it should not be visible to an application with low privileges, but, again, protection is per user. Even without injecting any code, a malicious application can read data it shouldn't have permission to read (I did a very quick scan of the leftovers in my Windows temp folder... and the temporary files that are properly deleted after a few hundred milliseconds may contain even more sensitive data).
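You can reproduce that quick scan with nothing but the standard library; any unprivileged process of the same user can do this:

```python
# Sketch: an unprivileged process enumerating leftover temp files of
# the same user. No special rights are needed, protection is per user.
import os
import tempfile

tmp = tempfile.gettempdir()  # %TEMP% on Windows, /tmp on Linux

for entry in os.scandir(tmp):
    if entry.is_file():
        print(f"{entry.stat().st_size:>10} bytes  {entry.name}")
```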
Finally, the question
What can we do (as developers and as users) to protect ourselves against these kinds of attacks?
The first attack might be mitigated (but not completely avoided) by using a build server, but a surprising number of applications are built on the development machine (even more often when using Docker). AVs might set up a honeypot to at least detect this attack.
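One cheap, partial defense along these lines is to hash the object files right after compilation and verify them immediately before linking, so tampering in between is at least detected. A minimal sketch, assuming a hypothetical build layout and that the manifest itself is stored out of the attacker's reach (e.g. on the build server):

```python
# Sketch: detect tampering of object files between compile and link.
# BUILD_DIR and MANIFEST are hypothetical; on Windows the pattern
# would be *.obj instead of *.o.
import hashlib
import json
from pathlib import Path

BUILD_DIR = Path("build")          # hypothetical build output directory
MANIFEST = Path("objects.sha256")  # hypothetical manifest location

def snapshot() -> None:
    """Record the SHA-256 of every object file right after compilation."""
    hashes = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in BUILD_DIR.glob("**/*.o")
    }
    MANIFEST.write_text(json.dumps(hashes, indent=2))

def verify() -> None:
    """Abort the link step if any object file changed since the snapshot."""
    hashes = json.loads(MANIFEST.read_text())
    for name, digest in hashes.items():
        actual = hashlib.sha256(Path(name).read_bytes()).hexdigest()
        if actual != digest:
            raise SystemExit(f"object file tampered with: {name}")
```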
The second attack is the easiest to avoid: we just need to treat temporary files as compromised, but the workaround might not be that easy to implement...
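For instance, a cached blob of executable code could at least be authenticated before being trusted. A minimal sketch using an HMAC; the hard part this glosses over is key management, since the key itself must live somewhere the attacker can't read (e.g. in DPAPI-protected storage on Windows), otherwise the check adds nothing:

```python
# Sketch: treat a temp cache file as compromised until its HMAC checks out.
import hashlib
import hmac

SECRET_KEY = b"app-specific key from protected storage"  # placeholder

def write_cache(path: str, payload: bytes) -> None:
    # Prepend an authentication tag so tampering is detectable.
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    with open(path, "wb") as f:
        f.write(tag + payload)

def read_cache(path: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    tag, payload = blob[:32], blob[32:]  # SHA-256 tag is 32 bytes
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("temp cache was tampered with, discarding it")
    return payload
```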
For the third one I can't think of a solution, besides abandoning MANY unsafe applications or running each application (or group of them) as a separate user. We'd need OS support to store temporary files isolated per application.
Ideas?