
I am planning to purchase a security tool like Fortify, SonarQube, or Snyk.

How do you evaluate if the scanner really picks up static vulnerabilities and malware, as well as runtime attacks?

Is there a good Docker image sample that contains malware samples and vulnerabilities that I can use for benchmarking?

user12158726

3 Answers


The type of security tool you're testing will dictate what type of testing container you'll need. If you're testing malware detection, grab a container that contains the EICAR malware test file. If you want to test against an application that has intentional vulnerabilities, spin up a Damn Vulnerable Web App (DVWA) container.
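A minimal sketch of such a malware test image (the base image choice is arbitrary; the EICAR string itself is standardized and deliberately harmless):

FROM alpine
# The standard 68-byte EICAR test string: harmless by design, but any
# malware scanner worth buying should flag /eicar.com in this image
RUN printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H*' > /eicar.com

Build it, point the scanner at the resulting image, and a clean report tells you the malware detection isn't doing much.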

If you want to see if they'll detect active attacks, spin up the DVWA container and follow an online guide on how to exploit its many vulnerabilities.

Dan

There is no "easy" answer to this, as in "just follow these steps and you'll know if it works". Tools like the ones you mentioned will do their best to analyze an application and notify you of possible problems before they become genuine ones.

So in general, there are four kinds of outcomes possible when the software scans some code:

1. True Positive

This means that the scanner detected something as a problem that genuinely is a problem. A good example of this is outdated software with publicly known vulnerabilities.

Tools are pretty good at parsing version information in a dependency file and comparing that to a list of known vulnerable versions. So if whichever scanner you employ says version 3.2.17 of whatever.js is vulnerable to something, it's very likely to be a genuine problem and you should update.

2. True Negative

This means that the scanner identified some code as not containing any security-relevant issues, and this assessment is correct. In this case, the scanner just moves on and you never notice it happening.

3. False Positive

This is the first kind of error, and quite an annoying one. A false positive is an instance in which the scanner identifies something as a potential security vulnerability when it isn't one. The sensible thing to do is to manually verify that there is no vulnerability, then somehow tell the scanner that this particular finding is a false positive.

The unwise thing to do would be to "fix it at all costs", which in the best case wastes man-hours that could be spent productively, and at worst leads to actual security vulnerabilities.

4. False Negative

These are the really bad ones. Basically, the scanner identifies something as benign when it actually contains a security vulnerability. Here's an example:

/* A typical strcmp() implementation: compare byte by byte and
   return as soon as the end of s1 or a mismatch is found. */
int strcmp(const char * s1, const char * s2) {
    const unsigned char *p1 = (const unsigned char *) s1;
    const unsigned char *p2 = (const unsigned char *) s2;
    unsigned char c1, c2;
    do
    {
        c1 = (unsigned char) *p1++;
        c2 = (unsigned char) *p2++;
        if (c1 == '\0')
            return c1 - c2;
    } while (c1 == c2);   /* exits early at the first mismatching byte */
    return c1 - c2;
}

That's a pretty reasonable way of implementing a string comparison, right? You iterate through the strings as long as the characters are identical, stop when you reach a null byte, and in the end return the difference between the characters.

The problem arises when you use this to compare user-defined input to a secret. Notice how the loop ends early if c1 != c2? By carefully measuring execution time, an attacker can make educated guesses about the state of the secret. This is known as a side-channel attack (more specifically, a timing attack). It's very likely that a source code scanner will not find this and will consider it reasonable code.
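For contrast, here is a minimal sketch of a timing-safe comparison (the function name is illustrative; in practice you would use a vetted primitive such as OpenSSL's CRYPTO_memcmp):

#include <stddef.h>

/* Constant-time comparison: always walks the full length and folds any
   differences into an accumulator instead of returning early, so the
   running time no longer leaks where the first mismatch occurred. */
int secret_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}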

What does this mean? What should I do?

First, make sure you have realistic expectations of what you want a static analysis tool to do for you. Considering it another pair of eyes that may notice things your testers didn't see is a reasonable expectation. Believing it will catch anything and everything, and that "no errors found" means "no errors exist", is a less reasonable expectation.

Comparing static analysis tools to each other is a lot like comparing any kind of specialist software: someone will make a test and show the pros and cons. If in doubt, you can always hire a consultancy company to do the evaluation for you. I have done a few of those in my career so far, so it's not unheard of.


Note: Your question mentions SonarQube and Snyk, both of which only perform static analysis of code and "code-like things" such as infrastructure as code. Make sure you know what the solution you are evaluating is actually offering.

The answer for other endpoint security products doesn't change much though.

  • Yes, I actually considered Sonatype and Snyk as static code / open-source package analyzers, but both of them claimed they are good at dynamic analysis and malware catching as well. All the vendors are just not being straightforward and mix static, dynamic, and everything else in their sales presentations, which leads to confusion for management. – user12158726 Nov 12 '21 at 07:14
  • Can you show where Snyk claims to identify malware or perform dynamic analysis? –  Nov 12 '21 at 15:24

Before thinking about purchasing a security tool, you should define the problem you are trying to solve. Then you can define true-positive and true-negative test cases. These test cases can be evaluated against a selection of tools that advertise solving your defined problems.

Test cases for SAST - Static Application Security Testing - tools like SonarQube:

  • a function that passes user input directly into a SQL query, which should be detected by SAST tools like SonarQube (a sketch follows this list)
  • usage of deprecated algorithms like MD5
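As a concrete illustration of the first bullet, here is a small, deliberately vulnerable C test case (names are made up for this sketch); a SAST tool should flag the query construction as SQL injection:

#include <stdio.h>

/* BAD on purpose: attacker-controlled user_name is concatenated
   straight into the SQL statement - the classic injection pattern
   (CWE-89) that a SAST scanner must catch. */
void build_user_query(const char *user_name, char *query, size_t query_len)
{
    snprintf(query, query_len,
             "SELECT * FROM users WHERE name = '%s'", user_name);
}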

Test cases for DAST - Dynamic Application Security Testing - tools like OWASP ZAP:

  • a web application allows path traversal to access files in the root directory
  • HTTP headers are misconfigured without XSS protection
  • SQL injections are possible because of unprotected input fields
  • and so on; check out the OWASP ZAP alerts list

Test cases for container image scanning to find vulnerabilities in open-source components:

  • build a Debian/CentOS/Alpine image from a release with known vulnerabilities and check whether they are reported
  • include Java/Python/C++ libraries with known vulnerabilities in the image and check whether they are found (see the sketch after this list)
  • scan an up-to-date Debian image and check whether false positives show up
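A minimal sketch of such a benchmarking image (base image and pinned version are illustrative; urllib3 releases below 1.25.9 carry CVE-2020-26137, so a dependency scanner should report it):

FROM python:3.8-slim-buster
# Deliberately pin a library version with a published CVE so the
# scanner has a known-vulnerable component to find
RUN pip install urllib3==1.25.8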

How do you evaluate if the scanner really picks up static vulnerabilities and malware, as well as runtime attacks?

I am not aware of a scanner that is able to detect both. Most tools that I know solve specific problems only.

  • SonarQube and Checkmarx, for example, do SAST.
  • DependencyCheck, Sonatype Nexus IQ, JFrog Xray, and WhiteSource create a bill of materials of your dependencies and libraries to report open-source vulnerabilities.
  • Gauntlt and OWASP ZAP do DAST only.
  • Twistlock focuses primarily on runtime defense.
  • Check out devsecops-reference-architectures for more examples.

Is there a good Docker image sample that contains malware samples and vulnerabilities that I can use for benchmarking?

Just bundle some dependencies with known vulnerabilities into an image and scan it. In my experience, the hard part is sorting the fixable vulnerabilities from the unfixable ones: sometimes no up-to-date patch is available, and sometimes the library is not included in the OS package manager. The result is that even the latest container images include vulnerabilities which must be evaluated manually. So when evaluating a dependency scanning tool, keep your focus on the remediation part as well!

Also check out how the tool handles release versions and how it differentiates between container images that are deployed to production and images that are merely built. Tools often have very little support for these workflow topics, which forces additional tooling and scripting on your side - something you do not want.

aykes