A) Is there an advantage/disadvantage of conducting reviews of
binaries over source-code?
Compilers often DO NOT emit code exactly as written in the source. For example, Return-Oriented Programming exploits the fact that a compiled binary contains many more usable RET
opcodes (including unintended byte sequences that decode as RET) than the programmer is aware of. Due to pipelining and other optimization tricks, compilers essentially rewrite your code and can introduce vulnerabilities of their own. This means that some constructs in the source code may not be expressed in the binary at all!
This is a class of error that is essentially impossible to catch through manual source review... and I suspect the risk is there for Java/C# JIT-compiled code as well, where it would be equally invisible to a static analysis tool.
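To make that concrete, here's a minimal Java sketch (the class and method names are mine, invented for illustration). Because the buffer is never read after the wipe, an optimizer is allowed to treat the fill as a dead store and drop it, so the source and the executed code disagree. Whether a given JIT actually does this is implementation-dependent; the C analogue (an optimized-away memset over a password buffer) is the textbook case, catalogued as CWE-14.

    import java.util.Arrays;

    public class ScrubExample {
        static String authenticate(char[] password) {
            try {
                // ... imagine the password being checked here ...
                return "ok";
            } finally {
                // The programmer's intent: wipe the secret from memory.
                // Because the array is never read again, an optimizing
                // compiler or JIT may treat this as a dead store and
                // remove it -- the wipe exists in the source but may
                // never appear in the executed code.
                Arrays.fill(password, '\0');
            }
        }

        public static void main(String[] args) {
            System.out.println(authenticate("hunter2".toCharArray()));
        }
    }

No amount of staring at the source will tell you whether that wipe survived; only inspecting the binary (or the JIT's output) can.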
Manual source code analysis helps because most common vulnerabilities can be caught by visual inspection. It has other benefits as well, most notably just reducing the number of production bugs in general (and thus cost). And it helps the social aspect too: source reviews encourage people to write as if someone else is watching. A major disadvantage is that if you're dealing with a dynamically typed language, such as Perl, Groovy, or Lisp and its derivatives... most data won't be examined until runtime, which means neither static analysis nor source code review is sufficient.
You can even fool static analysis tools by constructing what would otherwise be statically typed calls at runtime. Veracode doesn't like your Java construct? Rewrite it using reflection, and the Veracode finding disappears and no one is the wiser. As a security expert you also need to assume that the programmers working in your own company are a potential threat.
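Here's a minimal sketch of that trick (whether any particular tool, Veracode included, is actually fooled is my assumption; some engines do model common reflection patterns):

    public class ReflectionDodge {
        public static void main(String[] args) throws Exception {
            String cmd = "hostname";

            // Direct call: the dangerous sink (Runtime.exec) is plainly
            // visible in the call graph, so a taint-tracking scanner
            // can connect input to sink.
            Runtime.getRuntime().exec(cmd);

            // Reflective call: identical behavior, but the class and
            // method names are only string data until runtime, so a
            // purely static tool may no longer recognize the sink.
            Class<?> rt = Class.forName("java.lang.Runtime");
            Object runtime = rt.getMethod("getRuntime").invoke(null);
            rt.getMethod("exec", String.class).invoke(runtime, cmd);
        }
    }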
So in short, there's no escaping source code analysis, static analysis, and dynamic analysis in any given application if your hope is an acceptably secure code base.*
B) Which one provides more coverage/finds more vulnerabilities?
Source-code analysis depends on the reviewers having excellent security skills, and since they're human beings, you can't reasonably expect perfect performance. That's why static analysis tools come in handy. My preference is to run static analysis tools first and then do the secure code reviews after those discoveries are mitigated; for categorizing threats I prefer STRIDE, but there are other frameworks to consider. This lets humans concentrate on the less mundane security problems that are more logic-related. But a smart human will beat a dumb static analysis tool any day of the week.
Source-code analysis can help you find where a programmer left behind a back door... static analysis tools aren't equipped to handle that.
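For instance, consider something like this hypothetical snippet (all names invented): there's no tainted input reaching a dangerous sink, so a pattern-matching scanner has nothing to flag, yet a human reviewer reading the logic will spot it immediately.

    public class LoginService {
        // Hypothetical back door: no tainted data, no dangerous sink,
        // nothing for a pattern-based scanner to flag -- but a human
        // reading the logic sees the bypass at once.
        public boolean authenticate(String user, String password) {
            if ("maint_2013".equals(user)) {
                return true; // undocumented bypass left by a developer
            }
            return checkCredentials(user, password);
        }

        private boolean checkCredentials(String user, String password) {
            return false; // real credential check elided for the sketch
        }
    }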
So the answer to this question is... "it depends." What kinds of threats are you willing to leave unchecked?
*Unless "acceptably secure" is defined as leaving the windows open and the front door unlocked.