What are the advantages of using automated tools, as opposed to manual review? What are the disadvantages?
This applies both to external blackbox vulnerability scanning, and to static code analysis.
From the original Area51 proposal
Some good answers here, but I think some points were missing:
Bottom line? They both have a place, and should both be used in the right context. For low-quality apps, start by fixing everything the autotool can find, and don't bother investing in a proper manual review just yet. Once you've raised the security level and gotten rid of the low-hanging fruit, go the distance and perform an in-depth manual review. And when you're doing manual testing, the first step is running the autotool and filtering its results; THEN the real testing begins.
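That "run the autotool, filter, then test" workflow can be sketched in a few lines. This is a minimal illustration only; the finding format, field names, and severity scale are assumptions for the example, not any particular scanner's output schema.

```python
# Hypothetical triage step: filter raw scanner findings before manual review.
# The finding dicts and field names below are illustrative assumptions.

def triage(findings, min_severity=2, known_false_positives=()):
    """Keep findings at or above min_severity, dropping known noise."""
    kept = []
    for f in findings:
        if f["id"] in known_false_positives:
            continue  # previously confirmed as a false positive
        if f["severity"] >= min_severity:
            kept.append(f)
    # Highest severity first, so manual review starts with the worst issues
    return sorted(kept, key=lambda f: f["severity"], reverse=True)

raw = [
    {"id": "XSS-17", "severity": 3, "url": "/search"},
    {"id": "INFO-2", "severity": 1, "url": "/about"},
    {"id": "SQLI-4", "severity": 4, "url": "/login"},
    {"id": "XSS-9",  "severity": 3, "url": "/profile"},
]

for finding in triage(raw, known_false_positives={"XSS-9"}):
    print(finding["id"], finding["url"])
# Prints SQLI-4 /login, then XSS-17 /search; the info-level noise and the
# known false positive never reach the human reviewer.
```

The point of the sketch is the ordering of effort: the cheap automated pass shrinks and prioritizes the list, and the expensive human time is spent only on what survives the filter.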
Automated pros:
Automated cons:
The manual approach basically flips the automated pros and cons into its cons and pros. However, the manual approach requires much deeper knowledge of the subject.
Semi-automation is the answer. Human intelligence piloting automated tools is the best bet for maximizing test coverage and depth; it isn't an either/or choice.
What works: Smart people driving the tools.
What fails: Everything else.
Automated tools can do some things better than a human, and vice versa.
Automated tools, for example, can try hundreds of different ways to find an XSS vulnerability, more than a human can remember. Every time someone finds a new way to do XSS, it is added to the tool, which will then test for it.
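The payload-brute-forcing described above can be sketched as follows. This is a simplified model, not a real scanner: `vulnerable_render` and `safe_render` stand in for HTTP requests to a target page, the payload list is tiny (a real tool ships hundreds of variants), and detecting verbatim reflection is only the crudest possible XSS check.

```python
# Minimal sketch of a scanner trying many XSS payloads against one input.
# The render functions simulate a target page; names are assumptions.

import html

PAYLOADS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "\"><svg onload=alert(1)>",
]

def vulnerable_render(user_input):
    # Simulated page that reflects input without escaping (the bug)
    return f"<p>You searched for: {user_input}</p>"

def safe_render(user_input):
    # Simulated page that HTML-escapes input before reflecting it
    return f"<p>You searched for: {html.escape(user_input)}</p>"

def scan(render):
    """Report payloads that come back verbatim, i.e. likely exploitable."""
    return [p for p in PAYLOADS if p in render(p)]

print(scan(vulnerable_render))  # all three payloads are reflected verbatim
print(scan(safe_render))        # escaping alters every payload: empty list
```

The tool's advantage is exactly this loop: it never forgets a payload, and adding a newly discovered technique is just appending to the list. Its weakness is everything outside the loop, which is the next point.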
On the other hand, these tools aren't smart and don't draw conclusions. A human might conclude that when you do X on page 1 and Y on page 2, the result on page Z will be different than expected.
Automated tools miss things (false negatives) and flag things incorrectly (false positives), so their output still requires manual review.
Therefore, all automated work creates more manual work.
You may also want to see my answer to this question on White-box vs. Black-box where I explain the best practices as dictated by the literature.
In a recent post, I read that Radware found that attacks lasting an hour or less are on the rise, and that more than half of the three biggest attacks fell into that category. The implications of these findings are clear. It's likely that very soon, even long attack campaigns will be built from short bursts of traffic, bursts that are difficult, if not impossible, for humans to mitigate effectively. It seems to me, then, that automated security is the future; however, a human element is still needed to set it up properly.