There is no "easy" answer to this, as in "just follow these steps and you'll know if it works". Tools like the ones you mentioned will do their best to analyze an application and notify you of possible problems, before they become genuine problems.
In general, there are four possible kinds of outcomes when the software scans some code:
1. True Positive
This means that the application flagged something that is genuinely a problem. A good example is an outdated dependency with publicly known vulnerabilities.
Tools are pretty good at parsing version information in a dependency file and comparing it to a list of known vulnerable versions. So if whichever scanner you employ says that version 3.2.17 of whatever.js is vulnerable to something, it is very likely a genuine problem and you should update.
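To get a feel for why this class of finding is so reliable, here is a toy sketch of the matching logic. Everything in it (the package name, version numbers, and the advisory table) is invented for illustration; real scanners and advisory databases are far more elaborate:

    #include <stdio.h>
    #include <string.h>

    struct version { int major, minor, patch; };

    /* One advisory entry: a package name plus the first fixed version.
     * Anything older than fixed_in is treated as vulnerable. */
    struct advisory {
        const char *package;
        struct version fixed_in;
    };

    /* Hypothetical advisory data, invented for this example. */
    static const struct advisory advisories[] = {
        { "whatever.js", { 3, 2, 18 } },
    };

    static int parse_version(const char *s, struct version *v)
    {
        return sscanf(s, "%d.%d.%d", &v->major, &v->minor, &v->patch) == 3;
    }

    /* Like strcmp, but for versions: <0, 0 or >0. */
    static int version_cmp(const struct version *a, const struct version *b)
    {
        if (a->major != b->major) return a->major - b->major;
        if (a->minor != b->minor) return a->minor - b->minor;
        return a->patch - b->patch;
    }

    int main(void)
    {
        struct version v;
        if (!parse_version("3.2.17", &v))   /* version taken from a lockfile */
            return 1;

        for (size_t i = 0; i < sizeof advisories / sizeof advisories[0]; i++)
            if (strcmp(advisories[i].package, "whatever.js") == 0 &&
                version_cmp(&v, &advisories[i].fixed_in) < 0)
                printf("whatever.js 3.2.17 is vulnerable, update to %d.%d.%d or later\n",
                       advisories[i].fixed_in.major,
                       advisories[i].fixed_in.minor,
                       advisories[i].fixed_in.patch);
        return 0;
    }

The point is that the check is a mechanical lookup. There is almost no room for interpretation, which is why true positives of this kind are so trustworthy.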
2. True Negative
This means that the program identified some code as not containing any security-relevant issues, and that this is in fact true. In this case, the scanner just moves on and you never notice it happening.
3. False Positive
This is the first kind of error, and quite an annoying one. A false positive is an instance in which the scanner identifies something as a potential security vulnerability when it isn't one. The sensible thing to do is to manually verify that there is no vulnerability, and then tell the scanner that this particular finding is a false positive.
The less sensible thing to do is to "fix it at all costs", which in the best case wastes hours that could have been spent productively, and at worst introduces actual security vulnerabilities.
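How you record that decision depends on the tool. SonarQube, for instance, recognizes a // NOSONAR comment to suppress findings on a single line (it also offers finer-grained workflows in its UI). A contrived C example, assuming the scanner flagged the strcpy call:

    #include <string.h>

    void make_greeting(char *out /* must hold at least 6 bytes */)
    {
        /* A scanner may flag strcpy() into a fixed-size destination. Here
         * the source is a 5-character literal, so it cannot overflow a
         * 6-byte buffer; after manual verification, we suppress the finding. */
        strcpy(out, "hello"); // NOSONAR: verified false positive
    }

The important part is the manual verification that precedes the suppression. A NOSONAR comment without it doesn't fix anything, it just hides the warning.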
4. False Negative
These are the really bad ones. Basically, the scanner identifies something as benign when it actually contains a security vulnerability. Here's an example:
int strcmp(const char *s1, const char *s2) {
    const unsigned char *p1 = (const unsigned char *) s1;
    const unsigned char *p2 = (const unsigned char *) s2;
    unsigned char c1, c2;

    do {
        c1 = (unsigned char) *p1++;
        c2 = (unsigned char) *p2++;
        if (c1 == '\0')          /* end of s1 reached */
            return c1 - c2;
    } while (c1 == c2);          /* stop at the first mismatch */

    return c1 - c2;
}
That's a pretty reasonable way of implementing a string comparison, right? Iterate through the strings until you reach a null byte or until the two characters differ, and at the end, return the difference between the last pair of characters.
The problem arises when you use it to compare user-supplied input to a secret. Notice how the loop ends early as soon as c1 != c2? By carefully measuring execution time, an attacker can make educated guesses about the contents of the secret. This is known as a timing side-channel attack. A source code scanner will very likely not find this and will consider it perfectly reasonable code.
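The usual mitigation is a comparison whose running time does not depend on where the first mismatch occurs. A minimal sketch of the idea (the function name is mine; in production, prefer a vetted routine such as OpenSSL's CRYPTO_memcmp or libsodium's sodium_memcmp, since compilers can in principle optimize hand-rolled versions):

    #include <stddef.h>

    /* Compare two equal-length buffers in time independent of their contents.
     * Instead of returning at the first mismatch, OR together the XOR of
     * every byte pair, so the loop always runs to the end.
     * Returns 0 if and only if the buffers are equal. */
    int constant_time_memcmp(const unsigned char *a, const unsigned char *b,
                             size_t len)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff;
    }

Note that this assumes both inputs have the same, known length; leaking the length of the secret through the comparison is a separate problem.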
What does this mean? What should I do?
First, make sure you have realistic expectations of what a static analysis tool can do for you. Treating it as another pair of eyes that may notice things your testers didn't see is a reasonable expectation. Believing it will catch anything and everything, and that "no errors found" means "no errors exist", is not.
Comparing static analysis tools to one another is a lot like comparing any other kind of specialist software: someone will run a test and show the pros and cons. If in doubt, you can always hire a consultancy to do the evaluation for you. I have done a few of those in my career so far, so it's not unheard of.
Note: Your question mentions SonarQube and Snyk, both of which only perform static analysis of code and "code-like things" such as infrastructure as code. Make sure you know what the solution you are evaluating actually offers.
The answer doesn't change much for other endpoint security products, though.