3

I've recently started working with web applications, and the ones developed by our team seem to use a lot of external components for different minor functionality (e.g. a scrolling slider bar, a markdown editor ...)

The only "security" mechanism that we seem to rely on is that we only use components which are open source; however, we do not ever bother reading the source.

I'm less concerned about them introducing vulnerabilities (these will be picked up by our code scanning and VA/PT) or about patching (we have a solid pipeline in place for this), and more worried about them inserting some kind of backdoor / private data exfiltration vector.

Is this an uncommon attack vector? Is there a simple checklist of what/how to look for in code/behavior of external resources (kind of like the OWASP checklists for internal vulnerability assessments)?

Jedi
  • 3,906
  • 2
  • 24
  • 42
  • This might also be a topic for https://opensource.stackexchange.com – Philipp Aug 28 '16 at 16:46
  • fwiw, i've never heard of any such problems from an open-source project. – dandavis Aug 28 '16 at 22:00
  • @dandavis does that mean there are no problems, or that no one's looking? Searching for "open source backdoors" returns a huge number of interesting examples though... – Jedi Aug 30 '16 at 13:39
  • i wouldn't go so far as to say "no problems", but they aren't widespread from what i've seen, compared to closed source. There's been some random number generators and the linux kernal that have been "attacked", but it always get ironed out with OSS. – dandavis Aug 30 '16 at 21:29

3 Answers

1

There is a difference between server side and client side here. Since you're talking about scrollbars and markdown editors, I assume this is client side. I would be less concerned with client-side problems, since most modern browsers pick up on security issues fast. In addition, browsers try to limit security risks and sometimes act on behalf of the user (malware detection, invalid XHR requests).

Client Side

Backdoors are rather uncommon in these libraries, or in any piece of client-side software. Though they do exist, HTML5 and the modern web have driven browsers to implement a number of safeguards against malicious activity, for example: the same-origin policy, mixed-content blocking, popup blockers and iframe warnings, the removal of Flash, ActiveX and applets, and so on.

Server Side

Server side is another story. If you use third-party libraries (whether they are open source or not), you expose the environment in which the application runs. Commonly employed techniques such as jailing, audits, and permissions per application directory and per database (operation) minimize the chance of an attacker successfully corrupting or accessing application data and/or other environment data.
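The per-database permissions mentioned above amount to a least-privilege role for each application. A minimal sketch in PostgreSQL syntax, with hypothetical role, database, and table names:

```sql
-- Sketch: a least-privilege database role for one application.
-- Role, database, and table names are placeholders for illustration.
CREATE ROLE webapp LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO webapp;
GRANT SELECT, INSERT, UPDATE ON orders, customers TO webapp;
-- Deliberately no DELETE, no DDL, and no access to other databases:
-- a compromised library running as "webapp" is confined to these tables.
```

Even if a backdoored dependency gains code execution, it can only perform the operations the role was granted.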

To answer the question: it is a legitimate problem that can cause catastrophic failure. It is not easy, and sometimes impossible, to check every piece of code you use. Nowadays frameworks depend heavily on plugins, extensions and libraries. Open source does not equal fast security fixes, but we may assume that the more people use something, the faster bugs are reported and fixed. Put systems in place to limit the damage when things do go wrong: have a disaster plan, design for failure, use (audit) logs, have daemons crawl the access logs, and so on. When it comes to backdoors, people often seem to forget the outgoing firewall.
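That outgoing firewall can be a default-deny egress policy on the application host. A sketch in iptables syntax (the destination address is a placeholder): a backdoor that tries to phone home is blocked unless its destination was explicitly whitelisted.

```shell
# Sketch: default-deny egress policy (iptables; addresses are placeholders).
iptables -P OUTPUT DROP                                            # drop all outbound by default
iptables -A OUTPUT -o lo -j ACCEPT                                 # allow loopback
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to inbound connections
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 443 -j ACCEPT    # one known, needed API endpoint
```

Unexpected outbound connection attempts then show up in the firewall logs, which is often the first visible sign of an exfiltration attempt.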

Yorick de Wid
  • 3,346
  • 14
  • 22
  • 1
    " In addition to that it is the browsers responsibility to act when issues occur." - maybe you do not treat DOM XSS as an issue but these are hard to find even if non-malicious, there is nothing the browser can do and they could provide the attacker with full access to the DOM, i.e. stealing passwords, modifying submissions etc. – Steffen Ullrich Aug 28 '16 at 17:07
  • @SteffenUllrich They certainly are; think of Flash exploits that, as of this moment, would still work and have been used extensively to exploit the client side. Cookies and adware are also often initiated via malicious client-side code. – Yorick de Wid Aug 28 '16 at 17:11
  • But a maliciously introduced DOM XSS is nothing the browser can detect since it can easily work around the XSS detection heuristics. Thus I do not think that the argument " it is the browsers responsibility to act when issues occur" is valid in such a case. – Steffen Ullrich Aug 28 '16 at 17:16
  • @SteffenUllrich That's a rather specific case, but you are right. I wasn't happy with that sentence either :) – Yorick de Wid Aug 28 '16 at 17:25
1

When one packages client-side components and deploys them to a browser as part of the application, in the context of the domain from which the application is served, the code runs within the trust boundary that the browser assumes exists with that domain.

This means that third-party code can do anything first-party code can do. It has full DOM visibility, it can reach out to third-party servers, it can render ad and tracking tags in the browser, etc. This is significant exposure.

In terms of what can be done to defend against this exposure:

  1. Managing versions and dependencies and checking them against a vulnerability service like retire.js provides some protection against known vulnerabilities. In this vein, discourage copy-pasting of third-party code in favor of explicit dependencies. Perhaps this practice is already in use.

  2. Use of Content-Security-Policy headers allows for some whitelisting of which third parties may be loaded and communicated with:

    https://content-security-policy.com/

  3. Driving the application with Selenium, etc., in a sandbox/test environment behind a proxy presents an opportunity to discover new third-party communication when it is introduced.

    That said, more sophisticated attackers with code in the execution path try to detect when they are being run in a sandbox environment or discovery mode and will not trigger payload delivery.
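Point 2 above can look like the following response header (shown folded across lines for readability; a real header is sent as one line, and the CDN domain is a placeholder). Scripts may only be loaded from the listed origins, and `connect-src 'self'` blocks XHR/fetch exfiltration to other hosts:

```
Content-Security-Policy: default-src 'self';
    script-src 'self' https://trusted-cdn.example.com;
    connect-src 'self';
    frame-ancestors 'none'
```

Note that a malicious script served from a whitelisted origin is still trusted, so CSP narrows the exposure rather than eliminating it.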
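For point 3, the proxy's output can be checked automatically for hosts that were never seen before. A minimal sketch in Python; the log format (URL as the last whitespace-separated field) and the allowlist contents are assumptions to adapt to your proxy:

```python
# Sketch: flag request hosts in a proxy log that are not on an allowlist.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"app.example.com", "cdn.example.com"}  # hypothetical

def unexpected_hosts(log_lines):
    """Return the set of request hosts not on the allowlist."""
    seen = set()
    for line in log_lines:
        url = line.strip().split()[-1]   # assume the URL is the last field
        host = urlparse(url).hostname
        if host and host not in ALLOWED_HOSTS:
            seen.add(host)
    return seen

log = [
    "GET https://app.example.com/index.html",
    "GET https://cdn.example.com/slider.js",
    "POST https://tracker.evil.example/collect",
]
print(unexpected_hosts(log))  # → {'tracker.evil.example'}
```

Run against each test session's traffic, a non-empty result is a signal to investigate which dependency opened the new connection.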

Jonah Benton
  • 3,359
  • 12
  • 20
1

Open source is by itself neither a guarantee of quality nor an assurance of harmless code. I have no example of an open source application, framework or library that has contained a deliberate backdoor, but we must be aware that:

  • vulnerabilities can be hard to find and can impact the whole application. A good example is an implementation flaw in OpenSSL that could allow an attacker to steal any secrets, including private keys: ref. on wikipedia
  • confidential code, or code that is seldom reviewed by security experts (PHP utilities, for example), could contain a backdoor for a rather long time before it is discovered and made public.

Common practice is to rely only on well-known open source software, because the more widely it is used, the greater the chance that harmful code will be discovered.

To be fair, you should also note that commercial software almost always comes with a limitation of liability. That means you are no better protected with commercial software than with free (open source) software, and at least the latter gives you the ability to audit the code, or to pay an expert to do it for you.

But you should not rely on poorly reviewed software, even open source, for a highly sensitive application. As usual, the good old risk/cost trade-off must be considered.

Serge Ballesta
  • 25,636
  • 4
  • 42
  • 84