Open source software is less confidential than closed source software, but that is not relevant when considering backdoors, which, unlike most vulnerabilities, are deliberate. In this answer, I will only address backdoors, and not the wider issue of vulnerabilities in general (only an insignificant fraction of vulnerabilities are deliberate). To combat backdoors, you need strict controls over the quality and integrity of the code. Such controls are not intrinsically weakened by making the code public.
Integrity: what they make is what you get
By integrity, I mean the assurance that the code you're running is the code that the developers wrote. This requires either that the distribution chain is traceable, or that the developers sign the code with a signature you recognize. Signing is commonly practiced when secure software is desired; closed source and open source are on an equal footing. Distribution works differently, with no clear advantage to either side: closed source tends to have direct distribution (you get it directly from the vendor), while open source software tends to be replicated many times, which both increases the opportunities for tampering and increases the opportunities to detect tampering.
There's a second part to integrity, which is within the developers' domain. With closed source software, you rely on the internal controls of the software vendor. This is usually not something you have direct visibility on; the only somewhat independent source of information is the vendor's security certifications, and they only provide so much assurance. On the open source side, most high-profile projects have public repositories nowadays, making each change to the source code publicly traceable. In theory, that is: the integrity of the source code is rarely checked, but there's some chance that a breach will be detected by happenstance. In practice, on both sides, the level of integrity controls is all over the map; open source vs. closed source is not a discriminating factor.
On backdoors introduced by the authors
Integrity tells you that only the documented developers produced the code. Quality tells you whether they introduced a backdoor (either willfully, or because they were subverted). With a closed source vendor, it is rare to have any quality assurance beyond the vendor's reputation, and possibly certifications (assuming you believe they indicate quality; certifications might involve a criminal background check on the developers, but they are unlikely to catch subtle backdoors). With open source code, in theory, the code is up for anyone to see. Again, in practice, no one looks at most of it; but since the developer's name is publicly associated with each piece of code, the author of a backdoor takes the risk of being exposed. That is, assuming the backdoor is identified as deliberate: most vulnerabilities are accidental, after all, so a backdoor that looks like an honest mistake may never be attributed to malice.
It's interesting to look at one famous backdoor, which was introduced by Ken Thompson in Unix. I urge you to read his Turing Award acceptance speech, “Reflections on Trusting Trust”, which revealed the backdoor. In a nutshell, Thompson modified the system compiler to add some code to the login program that would let him log into any account. Then he modified the compiler to insert the code for this compiler modification when compiling the compiler, and he recompiled the compiler. Finally, he reverted the compiler sources so that they no longer contained anything suspicious. From then on, the compiler binary would perpetuate the backdoor even though nothing was visible in the source code. This shows that it is not enough to trace the history of the source code: the history of the whole system must be traced, including the origin of every program and data file involved in building the system at any point.
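To make the mechanism concrete, here is a minimal sketch of the logic of such a compromised compiler. Everything in it is hypothetical and simplified (the file names, the emit helper, the injected strings); it illustrates the two self-propagating triggers Thompson described, not any real compiler.

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for code generation: just reports what would be emitted. */
static void emit(const char *code) {
    printf("  emitting: %s\n", code);
}

/* The compromised compiler's compile step. */
static void compile(const char *source_file) {
    printf("compiling %s\n", source_file);

    /* Trigger 1: when compiling the login program, insert a master password. */
    if (strcmp(source_file, "login.c") == 0)
        emit("if (strcmp(password, \"magic\") == 0) grant_access();");

    /* Trigger 2: when compiling the compiler itself, re-insert both triggers,
     * so the backdoor survives even after the compiler's source is cleaned up. */
    if (strcmp(source_file, "cc.c") == 0)
        emit("<the code implementing trigger 1 and trigger 2>");

    emit("<the honest translation of the rest of the source>");
}

int main(void) {
    compile("login.c");  /* silently gains the master password */
    compile("cc.c");     /* perpetuates the backdoor */
    compile("ls.c");     /* compiled normally */
    return 0;
}
```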
The Thompson backdoor relied on having control of the compiler. It is possible to inject a backdoor into an application in a similar way given control of the build or distribution system, of the operating system, or of the hardware. On the other hand, the author of an application cannot hide a backdoor so easily: there, the backdoor really has to be present in the source code. It is easier to detect a backdoor in source code than in an executable, if you know what you're looking for: many backdoors are obvious in source code if you think of looking in the right place, whereas testing for the presence of a backdoor in a binary-only executable requires more work with a debugger.
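As an illustration, here is a hypothetical example of the kind of backdoor that is glaring in source code: a hard-coded “maintenance” password. The function and string names are made up; in a stripped binary, the same backdoor is just one more string and one more branch.

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the legitimate credential check (not real hashing). */
static int verify_hash(const char *password, const char *stored_hash) {
    return strcmp(password, stored_hash) == 0;
}

/* The extra comparison is glaring in source code, but in a compiled binary
 * it is just another branch and another string in the data segment. */
static int check_password(const char *password, const char *stored_hash) {
    if (strcmp(password, "s3cr3t-maintenance") == 0)
        return 1;                                /* the backdoor */
    return verify_hash(password, stored_hash);   /* the legitimate path */
}

int main(void) {
    printf("%d\n", check_password("s3cr3t-maintenance", "hunter2"));  /* prints 1 */
    printf("%d\n", check_password("wrong", "hunter2"));               /* prints 0 */
    return 0;
}
```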
Backdoor in the algorithm
In rare cases, backdoors can be hidden in an algorithm itself. This comes up in cryptography: many algorithms involve “magic constants”, which can be chosen somewhat arbitrarily. Good cryptographic designs use “nothing-up-my-sleeve numbers”: for example, MD5 uses constants derived from values of the sine function. As a counterexample, the elliptic curves standardized by NIST in FIPS 186-3 involve constants which look random. However, it is possible that the constants were derived from a secret value, and knowing that value would make it easy to break cryptography that uses these algorithms. It is impossible to prove that NIST (or the NSA) did not in fact derive the constants from a secret value that they know.
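The point of a nothing-up-my-sleeve number is that anyone can recompute it. For MD5, RFC 1321 defines the 64 per-round constants as the integer part of abs(sin(i)) * 2^32 for i = 1 to 64 (i in radians), so a few lines suffice to check that the published table hides nothing:

```c
#include <inttypes.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Recompute MD5's round constants from their public definition:
     * T[i] = floor(abs(sin(i)) * 2^32), i in radians, i = 1..64. */
    for (int i = 1; i <= 64; i++) {
        uint32_t t = (uint32_t)(fabs(sin((double)i)) * 4294967296.0);
        printf("T[%2d] = 0x%08" PRIx32 "\n", i, t);
    }
    /* T[1] should print as 0xd76aa478, matching the table in RFC 1321. */
    return 0;
}
```

(Compile with -lm.) The NIST curve constants come with no such public recipe, which is exactly the concern.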
In this case, open-source vs. closed-source doesn't matter. Even if you are sure that the software implements the algorithm correctly, you need to trust the designer of the algorithm as well.
A look at consumer software
Some platforms have preferred ways to install software on top of the base operating system. On mobile platforms, software is typically distributed through a channel controlled by the platform vendor (App Store, Android Market, …). This channel ensures that what you get is what the platform vendor wants you to get. As such, it protects you against malicious third parties who might try to trick you into installing a trojaned version of a legitimate application. It does not protect against an application that is malicious in the first place, since platform vendors only make extremely cursory checks, if any, on the applications they distribute.
On the desktop side, commercial software is usually distributed directly to consumers or through generally reliable third parties (ISPs; in the old days, mail order or brick-and-mortar stores). Free software is another matter. Most Linux distributions include a large amount of software and sign their application packages. As in the mobile case, package makers do not make more than cursory checks (though they do reject dodgy-looking applications), so you cannot count on them detecting a backdoor introduced by the application author. But you can limit your trust to the application authors and the distribution maintainers, because the distribution infrastructure protects against tampering by third parties. If a backdoor is ever discovered, you can count on it being corrected quickly, and on receiving security upgrades as soon as they are available. Mac OS X has similar distribution channels (an official store and several free software distributors).
The Windows world is different: Windows users typically install a lot of third-party zero-cost software through channels with no controls, and these have no automatic upgrade mechanism. I can work comfortably on an Ubuntu PC with very little software that isn't provided by Ubuntu in a signed package; on Windows I need many third-party programs (a web browser, a word processor, many “productivity” applets, …). This, together with the fact that attackers target Windows more because most potential victims are running Windows, makes the risk of backdoors higher. Not directly because Windows is closed source, but because it lacks an application distribution infrastructure. Such infrastructures can work both in the free and open source world, and in the closed-source for-pay world.
General considerations
It's difficult to be sure about the history of backdoors or to get accurate statistics, because by definition the really successful ones are the ones we don't know about. However, the ability to discover failed attempts can be instructive. Regarding the Linux kernel, there was a famous attempt to inject a backdoor which was caught by an inconsistency in source control. You can watch the discovery unfold on the Linux kernel mailing list; there have been many write-ups, e.g. on Kerneltrap and SecurityFocus.
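As described in those write-ups, the injected change targeted the wait4 system call and hinged on a single = where == would be expected. Below is a small self-contained sketch of that logic (the struct and constants are simplified stand-ins so it compiles on its own; it is a paraphrase of the reported patch, not an exact quote):

```c
#include <errno.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel definitions involved. */
#define __WCLONE 0x80000000u
#define __WALL   0x40000000u
struct task { unsigned int uid; };
static struct task current_task = { .uid = 1000 };  /* an ordinary user */
#define current (&current_task)

int main(void) {
    unsigned int options = __WCLONE | __WALL;
    int retval = 0;

    /* The reported change: note the single '=' (assignment) where '=='
     * (comparison) would be expected.  The assignment yields 0, so the
     * condition is false and no error is returned, but the caller's uid
     * has quietly been set to 0 (root). */
    if ((options == (__WCLONE | __WALL)) && (current->uid = 0))
        retval = -EINVAL;

    printf("retval = %d, uid = %u\n", retval, current->uid);  /* retval = 0, uid = 0 */
    return 0;
}
```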
All in all, I don't see an intrinsic winner between open source and closed source software when it comes to backdoors. Open source software does have the potential to reach the same level of assurance as closed source software, while keeping the advantages brought by wide exposure. You can certainly have someone take responsibility for open source software, if you're willing to pay for it. So open source software is able to reach a higher assurance level, but this doesn't mean that it does in all cases.
There is one way in which open source wins: if you're concerned about a specific backdoor in a specific application (as opposed to the system as a whole). With open source code, you can look at the source and see if there is a master password, or whether random numbers are generated properly.