6

Is there some kind of automated scanning tool that detects threats in open-source Java libraries?

I think the OWASP Orizon project tried to build such a tool, but it seems to have been inactive for years now.

My goal is to find a metric that could act as a guide for the decision "is it safe to use library X in my project?"...

AviD
rdmueller
  • I think you are looking for 'static code scanners' and there are only one or two, but not much success. – schroeder Mar 06 '12 at 00:30
  • Yes. But not a scanner which finds bugs (like pmd). I need a static code scanner which detects (possible) security problems like timebombs, network or file access... – rdmueller Mar 06 '12 at 06:09
  • IMHO finding things like timebombs is outwith the scope of what most static analysis tools would be able to cover. There used to be a site at https://opensource.fortify.com/ which contained the results of reviews of open source projects by the Fortify tool, but it doesn't seem to be available any more and I'm not aware of any others. – Rory McCune Mar 09 '12 at 16:17
  • For Java and .Net, take a look at the [OWASP Dependency Check](https://www.owasp.org/index.php/OWASP_Dependency_Check) tool. – Xander Feb 06 '15 at 18:11
  • Closed-source software has the same problem. Not intentionally malicious, but there have been several commercially available routers, and at least one IP camera firmware, where backdoors have been found and are easily exploitable. This is a rather old question, but if you're worried about backdoors in OSS, you most certainly should be worried about closed-source software too, possibly even more. – Steve Sether Feb 06 '15 at 19:01

3 Answers

5

Malicious logic and backdoors. You're not likely to find an automated tool that detects things like backdoors, malicious logic, and timebombs. These are too hard to detect with current techniques: it is too easy to hide a backdoor in a way that current analysis -- static or dynamic -- is unlikely to find. Moreover, these kinds of backdoors are very rare -- probably not common enough to justify significant investment in building such a tool.

I think you should be worried more about vulnerabilities and bugs than about maliciously placed backdoors. They're a lot more common. And, if you are in a security-critical setting where you think there is a significant threat that a third party might try to deliberately insert a backdoor into a particular piece of code you are using, well, you shouldn't use that piece of code unless you trust the supplier.

Today, the most effective development-time defenses against malicious backdoors and timebombs are as follows:

  • Vet the developers. Choose developers who you trust. Normally, you'll want them to be your own employees. If you use external suppliers, you'll need to vet them carefully.

  • Mandatory code review. All code should be reviewed by a second person, other than the developer. The software development workflow and repository should be designed to track and enforce this policy. This provides two-person control: no one person can introduce code that isn't reviewed, and thus if everyone is taking the code review requirements seriously, this process should make it harder for a single individual to introduce a backdoor without being detected.

    See also How to review code for backdoors? for related discussion.

  • Secure software repository. Lock down the source code repository and build processes to ensure that no single insider can introduce malicious code into the binary.

However, these techniques remain limited in their effectiveness. I think you should also look at other defenses, such as risk transfer, isolation and sandboxing, and monitoring; I elaborate on this elsewhere on this site.

Bugs and vulnerabilities. I would suggest that, for most purposes, you should be more concerned about bugs and vulnerabilities (inadvertently introduced by well-intentioned but fallible developers). There are many commercial and open source tools for scanning source code to detect bugs and vulnerabilities. For commercial tools, see the trade press; check out, e.g., Fortify, IBM AppScan, Veracode, and their competitors. The commercial tools are generally better than the open source tools.

If you are using a third-party open source library, I would also suggest that you check the CVE vulnerability database for past and open reports of vulnerabilities. Look to see how many vulnerabilities have been reported, how rapidly they were reported, and whether the project has technical details on the nature of the vulnerability. This should give you an idea of the project's security stance.
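
For illustration, here is a minimal Java sketch of that kind of lookup against NVD's public CVE API. The endpoint and the `keywordSearch` parameter are an assumption added here (they follow the current NVD JSON API documentation and may change); in practice you would parse the JSON and look at the number, age, and severity of the matches rather than dump the raw response:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CveLookup {
    public static void main(String[] args) throws Exception {
        // Name of the library you are evaluating (placeholder default value).
        String keyword = args.length > 0 ? args[0] : "commons-collections";

        // NVD CVE keyword search (assumed endpoint; check the NVD documentation).
        URI uri = URI.create("https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch="
                + URLEncoder.encode(keyword, StandardCharsets.UTF_8));

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body lists matching CVE entries; how many there are, and how
        // the project handled them, is the signal you are after.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```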

If you want a more in-depth look, you could look at the Coverity Scan database to see if they have scanned the library. You could also look through the library's bug tracker to see how they have handled security issues in the past. You could check to see if they have a clearly indicated security bug reporting process or place to report security bugs. These will give you a sense of the maturity of the project's software development process and its attitude towards security.

You may also find the following industry white paper of interest: The Unfortunate Reality of Insecure Libraries.

Open source. Your question seems to suggest you may be thinking that backdoors and timebombs are a greater threat in open source code than in closed source code. While that could be true, I'm not aware of any evidence for that assertion. If you're worried about backdoors and timebombs, you should probably be worried about it in all the code you are using, open source or closed source.

D.W.
3

Another repository of security and defect information for open source that I've found is the one from Coverity; it covers a lot of the open source libraries used in the various Linux distributions, and quite a few libraries used in development.

A blacklist of "insecure" open source components would be greatly appreciated, but I was unable to find one.

landroni
0

You can restrict what Java software may do by creating a security policy with the Policy Tool and setting that as the policy for the Java virtual machine. This is the security mechanism used for Java applets.
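
As a minimal sketch (the jar path and the single granted permission below are placeholders), a policy file like the following denies file and network access to a library simply by not granting the corresponding permissions; note that the SecurityManager/policy mechanism has since been deprecated in recent Java releases:

```
// restrict.policy -- the untrusted jar gets almost nothing; any permission not
// listed here (FilePermission, SocketPermission, ...) is denied.
grant codeBase "file:/path/to/untrusted-library.jar" {
    permission java.util.PropertyPermission "user.timezone", "read";
};
```

You would then start the JVM with `-Djava.security.manager -Djava.security.policy==restrict.policy`; the double `==` makes this file the only policy in effect rather than an addition to the default one.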

You did say "open source." That implies that you have the Java code in hand. So you can go further by creating automated tests for the Java software using a framework such as JUnit, plus a considerable amount of coding by hand. You can get a certain amount of automated test generation out of tools like QuickCheck. Finally, you can use a tool like Cobertura to check that all lines and branches of the code are exercised by the tests.
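
As a sketch of what such a test might look like (JUnit 4 here, with `java.util.Base64` standing in for the third-party class you would actually be exercising, since any real library name would be an assumption), each test simply drives the library's public entry points; you then run the suite under the restrictive security policy and measure coverage with Cobertura:

```java
import static org.junit.Assert.assertArrayEquals;

import java.util.Base64;
import org.junit.Test;

public class LibraryBehaviourTest {

    // Base64 is only a stand-in: in a real audit you would call the entry
    // points of the open source library you are trying to vet.
    @Test
    public void encodeDecodeRoundTrip() {
        byte[] input = {0, 1, 2, 3, (byte) 0xFF};
        byte[] roundTripped = Base64.getDecoder().decode(Base64.getEncoder().encode(input));
        assertArrayEquals(input, roundTripped);
    }
}
```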

If that testing succeeds within your security policy, then you have verified that the software never strays from your restrictions. You may still not want to remove the restrictions of your security policy short of actually reading the code. For example, if you have not succeeded in testing 100% of the branches of the software (which is often hard to do), there is still the possibility that the software is detecting your security policy, JUnit, or Cobertura and restraining its behavior accordingly.

minopret
  • I am currently trying to understand your answer. Do people really use the policy file? Tests: I am looking not for bugs but for malicious code like timebombs or network connections... – rdmueller Mar 05 '12 at 20:34
  • Some context: The OWASP Orizon tool that you mentioned is a static analysis tool. The method that I have outlined is an example of the contrasting technique called dynamic analysis. It uses coverage analysis to find dark corners of the code such as timebombs. Last and perhaps best, there is "white hat" penetration testing, in which a clever security tester would interact with the software in an effort to demonstrate and thereby detect any case in which the software permits violating security. – minopret Mar 06 '12 at 04:55