Malicious logic and backdoors. You're not likely to find an automated tool that detects things like backdoors, malicious logic, or time bombs. These are too hard to find with current techniques: it is too easy to hide a backdoor in a way that neither static nor dynamic analysis is likely to catch. Moreover, deliberately planted backdoors are rare -- probably not common enough to justify significant investment in building such a tool.
I think you should be more worried about vulnerabilities and bugs than about maliciously placed backdoors; they're a lot more common. And if you are in a security-critical setting where you think there is a significant threat that a third party might deliberately insert a backdoor into a particular piece of code, then you shouldn't use that code unless you trust the supplier.
Today, the most effective development-time defenses against malicious backdoors and time bombs are as follows:
Vet the developers. Choose developers who you trust. Normally, you'll want them to be your own employees. If you use external suppliers, you'll need to vet them carefully.
Mandatory code review. All code should be reviewed by a second person other than the developer, and the development workflow and repository should be set up to track and enforce this policy. This provides two-person control: no single person can introduce unreviewed code, so as long as reviewers take the requirement seriously, it becomes much harder for one individual to slip in a backdoor undetected. A sketch of one automated check for this appears after this list.
See also How to review code for backdoors? for related discussion.
Secure software repository. Lock down the source code repository and build processes to ensure that no single insider can introduce malicious code into the shipped binary. A sketch of one such integrity check also appears after this list.
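For example, if your code is hosted on GitHub, a CI job can refuse to merge a pull request until someone other than its author has approved it (GitHub's built-in branch protection can enforce the same rule without custom code). The following is a minimal sketch, not a complete solution: the repository name, pull request number, and GITHUB_TOKEN environment variable are placeholders you would wire into your own pipeline.

    #!/usr/bin/env python3
    """Refuse to merge a pull request that lacks an approval from someone other
    than its author (the two-person rule above). Sketch only: assumes GitHub,
    a GITHUB_TOKEN environment variable, and that this runs from CI."""

    import os
    import sys
    import requests

    API = "https://api.github.com"
    HEADERS = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    }

    def independently_approved(repo: str, pr_number: int) -> bool:
        """True if at least one APPROVED review came from a non-author."""
        pr = requests.get(f"{API}/repos/{repo}/pulls/{pr_number}",
                          headers=HEADERS, timeout=30)
        pr.raise_for_status()
        author = pr.json()["user"]["login"]

        reviews = requests.get(f"{API}/repos/{repo}/pulls/{pr_number}/reviews",
                               headers=HEADERS, timeout=30)
        reviews.raise_for_status()
        return any(r["state"] == "APPROVED" and r["user"]["login"] != author
                   for r in reviews.json())

    if __name__ == "__main__":
        repo, pr_number = sys.argv[1], int(sys.argv[2])   # e.g. "myorg/myrepo" 123
        if not independently_approved(repo, pr_number):
            print("No independent approval; refusing to merge.")
            sys.exit(1)
        print("Two-person review requirement satisfied.")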
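As one building block of a locked-down repository, you can require GPG-signed commits and verify them before accepting a push or cutting a release. Below is a minimal sketch; the revision range origin/main..HEAD and the enforcement point (server-side hook or CI job) are assumptions, and key distribution is out of scope.

    #!/usr/bin/env python3
    """Verify that every commit in a revision range carries a valid GPG signature.
    Sketch only: the default range origin/main..HEAD and the enforcement point
    (CI job or server-side hook) are assumptions to adapt."""

    import subprocess
    import sys

    def commits_in_range(rev_range: str) -> list[str]:
        """Commit hashes reachable in the given range, e.g. 'origin/main..HEAD'."""
        out = subprocess.run(["git", "rev-list", rev_range],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def is_signed(commit: str) -> bool:
        """True if git can verify a GPG signature on the commit."""
        return subprocess.run(["git", "verify-commit", commit],
                              capture_output=True).returncode == 0

    if __name__ == "__main__":
        rev_range = sys.argv[1] if len(sys.argv) > 1 else "origin/main..HEAD"
        unsigned = [c for c in commits_in_range(rev_range) if not is_signed(c)]
        if unsigned:
            print("Unsigned commits found; reject the push or merge:")
            for c in unsigned:
                print("  " + c)
            sys.exit(1)
        print("All commits in range are signed.")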
However, these techniques remain limited in their effectiveness. I think you should also look at other defenses, such as risk transfer, isolation and sandboxing, and monitoring; I elaborate on this elsewhere on this site.
Bugs and vulnerabilities. For most purposes, I would suggest you be more concerned about bugs and vulnerabilities inadvertently introduced by well-intentioned but fallible developers. There are many commercial and open source tools for scanning source code to detect them. For commercial tools, see the trade press; check out, e.g., Fortify, IBM AppScan, Veracode, and their competitors. The commercial tools are generally better than the open source ones.
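As one illustration on the open source side, a scanner such as Bandit (for Python code) can be wired into the build so that high-severity findings fail it. The sketch below is just an example of the pattern, not an endorsement of a particular tool; check the flags and JSON field names against the Bandit version you actually install.

    #!/usr/bin/env python3
    """Run an open-source scanner (Bandit) over a source tree and fail the build
    on high-severity findings. Sketch only: verify the CLI flags and JSON field
    names against your installed Bandit version."""

    import json
    import subprocess
    import sys

    def run_bandit(source_dir: str) -> list[dict]:
        """Invoke Bandit recursively and return its list of findings."""
        proc = subprocess.run(["bandit", "-r", source_dir, "-f", "json"],
                              capture_output=True, text=True)
        # Bandit exits non-zero when it reports issues; that is not an error here.
        return json.loads(proc.stdout).get("results", [])

    if __name__ == "__main__":
        findings = run_bandit(sys.argv[1] if len(sys.argv) > 1 else ".")
        for f in findings:
            print(f"{f['filename']}:{f['line_number']} "
                  f"[{f['issue_severity']}] {f['issue_text']}")
        sys.exit(1 if any(f["issue_severity"] == "HIGH" for f in findings) else 0)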
If you are using a third-party open source library, I would also suggest checking the CVE vulnerability database for past and open reports of vulnerabilities. Look at how many vulnerabilities have been reported, how rapidly they were reported, and whether the project published technical details on the nature of each one. This should give you an idea of the project's security posture.
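If you want to script that check, the NVD exposes a REST API you can query by library name. The sketch below assumes NVD API version 2.0; the endpoint, parameters, and response fields should be verified against the current NVD documentation, the keyword "libxml2" is just a placeholder, and real use should add an API key, error handling, and paging.

    #!/usr/bin/env python3
    """Look up past CVE reports mentioning a library by name, via the NVD REST
    API (version 2.0). Sketch only: verify the endpoint, parameters, and
    response fields against the current NVD documentation."""

    import sys
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def search_cves(keyword: str, limit: int = 20) -> list[dict]:
        """Up to `limit` CVE records whose descriptions mention the keyword."""
        resp = requests.get(NVD_URL,
                            params={"keywordSearch": keyword, "resultsPerPage": limit},
                            timeout=30)
        resp.raise_for_status()
        return resp.json().get("vulnerabilities", [])

    if __name__ == "__main__":
        keyword = sys.argv[1] if len(sys.argv) > 1 else "libxml2"   # placeholder library name
        for item in search_cves(keyword):
            cve = item["cve"]
            summary = next((d["value"] for d in cve.get("descriptions", [])
                            if d["lang"] == "en"), "(no description)")
            print(f"{cve['id']}  published {cve.get('published', '?')[:10]}")
            print("    " + summary[:120])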
If you want a more in-depth look, you could check whether the Coverity Scan service has scanned the library, and look through the library's bug tracker to see how security issues have been handled in the past. You could also check whether there is a clearly indicated process or contact point for reporting security bugs. These will give you a sense of the maturity of the project's development process and its attitude towards security.
You may also find the following industry white paper of interest: The Unfortunate Reality of Insecure Libraries.
Open source. Your question seems to suggest that you think backdoors and time bombs are a greater threat in open source code than in closed source code. While that could be true, I'm not aware of any evidence for it. If you're worried about backdoors and time bombs, you should probably be worried about them in all the code you use, open source or closed source.