55

My understanding is that open source systems are commonly believed to be more secure than closed source systems.

Reasons for taking either approach, or a combination of them, include cultural norms, financial considerations, legal positioning, national security, etc. - all of which in some way relate to how a culture views the effect of having that system open or closed source.

One of the core concerns is security. A common position against open source systems is that attackers might exploit weaknesses in the system once they are known. A common position against closed source systems is that a lack of awareness is at best a weak security measure, commonly referred to as security through obscurity.

The question is: are open source systems, on average, better for security than closed source systems? If possible, please cite analyses from as many industries as possible, for example: software, military, financial markets, etc.


blunders
  • 5,052
  • 4
  • 28
  • 45
  • Before answering "how safe is this vs that", we need a system of measurement. How do you measure the number of vulnerabilities? This will be harder for closed-source, which I think is why people often **feel** safer with open source. – Nathan Long Feb 06 '12 at 14:59
  • When you show someone how a lock is made, it is only a matter of time before he picks it. –  Oct 30 '14 at 20:32

7 Answers

52

The notion that open source software is inherently more secure than closed source software -- or the opposite notion -- is nonsense. When people say something like that, it is often just FUD that does not meaningfully advance the discussion.

To reason about this, you must limit the discussion to a specific project: a piece of software which scratches a specific itch, is created by a specified team, and has a well-defined target audience. For such a specific case it may be possible to reason about whether open source or closed source will serve the project best.

The problem with pitting all "open source" against all "closed source" implementations is that one isn't just comparing licenses. In practice, open source is favored by most volunteer efforts, and closed source is most common in commercial efforts. So we are actually comparing:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.
  • etc.

Attempting to judge how all this works out for security across all software released as open or closed source simply breaks down. It becomes a statement of opinion, not fact.

  • 4
    I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project. – this.josh Jun 08 '11 at 21:12
  • 3
    True, but giving "access to the source code" is potentially extremely valuable. Having outsider eyes look over your code brings new perspectives that might be missing in the dev team. You could even do something like https://stripe.com/blog/capture-the-flag, with a real project, with prizes for the best bug found (obviously not releasing details until a fix is out!) – naught101 Jun 14 '12 at 00:56
  • 3
    Heartbleed is a good example of this. OpenSSL has been, well, open, for years. Still this huge security hole went undetected for ages. – Sameer Alibhai Jun 04 '14 at 19:53
  • 4
    @SameerAlibhai But it did get detected. With closed source software, we simply don't know if such bugs exist. It's much harder to test for them (though we can do some limited dynamic analysis). Such bugs could exist in popular closed source software with much higher frequency... or maybe not. We just don't know. – forest May 03 '16 at 06:36
  • 1
    Closed-source does nothing to alleviate end-users' concerns of there possibly being a backdoor, which is a valid security threat. – Geremia Sep 13 '16 at 00:02
  • 2
    The incentives have little to do with it. Saying that closed source is for-profit and open source is for fun is not only misleading, but downright incorrect. A fully open source company, Red Hat, is in the Fortune 500. Google works heavily on open source (e.g. Chromium and AOSP), and those are used by billions. – forest Dec 27 '17 at 07:56
  • @Geremia An infamous example of a backdoor placed by the vendor would be the [Sony rootkit](https://en.wikipedia.org/wiki/Sony_rootkit). So IMO, it really depends on your threat model. If your threat model includes protecting yourself against the vendor, then FOSS is a better choice. – Alex Vong Sep 11 '18 at 02:12
39

Maintained software is more secure than software which is not. The maintenance effort is, of course, relative to the complexity of said software and the number (and skill) of the people who are looking at it. The theory behind opensource systems being more secure is that there are "many eyes" looking at the source code. But this depends quite a lot on the popularity of the system.

For instance, in 2008 several buffer overflows were discovered in OpenSSL, some of which led to remote code execution. These bugs had been lying in the code for several years. So although OpenSSL was opensource and had a substantial user base (it is, after all, the main SSL library used for HTTPS websites), the number and skill of its source code auditors were not sufficient to overcome the inherent complexity of ASN.1 decoding (the part of OpenSSL where the bugs lurked) and of the OpenSSL source code (quite frankly, this is not the most readable C source code ever).
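To give a feel for the class of bug involved, here is a schematic sketch (this is not the actual OpenSSL ASN.1 code; the toy decoder, its names, and its input format are invented for illustration). The whole vulnerability class boils down to trusting a length field without checking it against the remaining input and the destination buffer:

#include <stdio.h>
#include <string.h>

/* Toy "tag-length-value" decoder, loosely in the spirit of ASN.1.
   Input layout: [tag][len][len bytes of value...]                       */
static int decode_value(const unsigned char *in, size_t in_len,
                        unsigned char *out, size_t out_len)
{
    if (in_len < 2)
        return -1;                 /* need at least tag + length byte    */

    size_t len = in[1];

    /* These two checks are exactly what the vulnerable code omits;
       without them, memcpy reads past the input and/or writes past
       the output buffer, a classic overflow.                            */
    if (len > in_len - 2)
        return -1;                 /* length exceeds remaining input     */
    if (len > out_len)
        return -1;                 /* length exceeds destination buffer  */

    memcpy(out, in + 2, len);
    return (int)len;
}

int main(void)
{
    const unsigned char good[] = { 0x04, 0x03, 'a', 'b', 'c' };
    const unsigned char evil[] = { 0x04, 0xff };   /* claims 255 bytes   */
    unsigned char buf[8];

    printf("%d\n", decode_value(good, sizeof good, buf, sizeof buf)); /* 3  */
    printf("%d\n", decode_value(evil, sizeof evil, buf, sizeof buf)); /* -1 */
    return 0;
}

Spotting a missing check like this in a real decoder means reading a lot of surrounding code very carefully, which is exactly how complexity defeats the "many eyes".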

Closed source systems have, on average, far fewer people doing QA. However, many closed source systems have paid developers and testers, who can commit to the job full time. This is not really inherent to the open/closed question; some companies employ people to develop opensource systems, and, conceivably, one could produce closed source software for free (this is relatively common in the case of "freeware" for Windows). However, there is still a strong correlation between having paid testers and being closed source (correlation does not imply causation, but that does not mean that correlations should be ignored either).

On the other hand, being closed source makes it easier to conceal security issues, which is bad, of course.

There are examples of both open and closed source systems with many, or very few, security issues. The opensource *BSD operating systems (FreeBSD, NetBSD and OpenBSD, and a few others) have a very good track record with regard to security. So does Solaris, even back when it was a closed source operating system. On the other hand, Windows has (had) a terrible reputation in that matter.

Summary: in my opinion, the "opensource implies security" idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source. However, you not only want a secure system, you also want a system that you positively know to be secure (not being burgled is important, but so is being able to sleep at night). For that role, opensource systems have a slight advantage: it is easier to be convinced that there is no deliberately concealed security hole when the system is opensource. But trust is a fleeting thing, as was demonstrated with the recent tragicomedy around the alleged backdoors in OpenBSD (as far as I know, it turned out to be a red herring, but, conceptually, I cannot be sure unless I check the code myself).

Thomas Pornin
  • 320,799
  • 57
  • 780
  • 949
  • 2
    Of course how important security is to the maintainer of the software is critical. It can be maintained for usability without being maintained for security. – this.josh Jun 08 '11 at 21:09
  • 1
    +1 for raising the issue of maintenance. Also the "enough eyeballs" theory (also known as Linus' law), depends greatly on having *trained* eyeballs - and when it comes to subtle security bugs, there are far fewer. – AviD Jun 09 '11 at 23:03
17

I think the easiest, simplest take on this is a software engineering one. The argument usually follows: open source software is more secure because you can see the source!

Do you have the software engineering knowledge to understand the kernel top down? Sure, you can look at a given driver, but do you have complete knowledge of what is going on to really say "ah yes, there must be a bug there"?

Here's an interesting example: not so long ago, a null pointer dereference bug appeared in one of the beta kernels. It was a fairly big thing, and it was discovered by the guy from grsecurity (the PaX patches).

It was introduced in a piece of code like this:

struct sock *sk = tun->sk;  /* 'tun' is dereferenced here */

if ( tun == NULL )
{
    /* error handling */
}

/* code continues using sk; with the NULL check above optimised out,
   tun (and therefore sk) can still be NULL here. Problem. */

and the tun == NULL check was optimised out by the compiler, rightly so - since dereferencing a null pointer is undefined, and tun has already been dereferenced to read tun->sk, the compiler is allowed to assume tun can never be null at the check. It therefore removes the check the developer expected to be there.
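For contrast, here is a minimal, self-contained sketch of the non-vulnerable ordering (the struct and function names are invented for illustration; this is not the actual kernel code): check the pointer before any dereference, and the compiler has no licence to remove the check.

#include <stdio.h>

struct tun_struct { int sk; };              /* stand-in for the real struct */

static int poll_example(struct tun_struct *tun)
{
    if (tun == NULL)                        /* check first...               */
        return -1;                          /* ...this branch cannot be
                                               optimised away, because no
                                               dereference precedes it      */

    return tun->sk;                         /* ...dereference only after
                                               the check                    */
}

int main(void)
{
    struct tun_struct t = { 42 };
    printf("%d\n", poll_example(&t));       /* prints 42 */
    printf("%d\n", poll_example(NULL));     /* prints -1, no crash */
    return 0;
}

As a belt-and-braces measure, a build can also pass GCC's -fno-delete-null-pointer-checks flag, which disables this particular optimisation; the mainline kernel adopted that flag after this class of bug surfaced.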

Ergo, vis a vis, concordantly, the source code for such a large project may well appear correct - but actually isn't.

The problem is the level of knowledge needed here. Not only do you need to be fairly conversant with (in this case) C, assembly, the particular kernel subsystem, and everything that goes along with developing kernels, but you also need to understand what your compiler is doing.

Don't get me wrong, I agree with Linus that with enough eyes, all bugs are shallow. The problem is the knowledge in the brain behind the eyes. If you're paying 30 whizz kids to develop your product but your open source project only has 5 people with real knowledge of the code-base, then the closed source version is likely to have fewer bugs, assuming relatively similar complexity.

Clearly, this also varies over time for any given project, as Thomas Pornin discusses.

Update: edited to remove references to gcc being wrong, as it wasn't.

  • 3
    +1, I've always proposed an amendment to Linus' Law: "Given enough *trained* eyeballs, most bugs are relatively shallow". – AviD Jun 09 '11 at 23:06
  • 1
    from isc.sans.edu/diary.html?storyid=6820 "In other words, the compiler will introduce the vulnerability to the binary code, which didn't exist in the source code." this is a blatantly absurd meaningless statement. **The source code is buggy, so it is vulnerable.** The way the compiler generates code determines which exploits are possible. – curiousguy Jun 27 '12 at 12:42
  • Ok fair enough, you're right, I was wrong - he's dereferencing `tun` when `tun` could be `NULL` - which is downright bad. Fair enough. I'll remove the reference to an offending gcc option, since that wasn't the issue. The rest of the example, as an illustrative point, stands just fine. –  Jun 27 '12 at 16:17
  • If you're staring at the code sample and wondering how it's a coding mistake, don't waste your time. The code sample is botched and doesn't reflect the actual code. My edit was rejected because "This edit deviates from the original intent of the post.". I guess the original intent is to confuse. – André Werlang Aug 12 '20 at 20:44
13

I think the premises that most use to differentiate between closed and open source are pretty well defined. Many of them are listed here, and both sides have their advocates. Unsurprisingly, the proponents of closed source are those who sell it. The proponents of open source have also made it a nice and tidy business (beyond a few who have taken it on as a religion).

The Pro Open Source movement speaks to the basics, and when it comes to security in general, these are the points that fit most into the discussion:

  1. The Customization premise
  2. The License Management premise
  3. The Open Format premise
  4. The Many Eyes premise
  5. The Quick Fix premise

So breaking this down by premise, I think the last two have been covered rather succinctly by others here, so I'll leave them alone.

  1. The Customization Premise
    As it applies to security, the Customization Premise gives companies that adopt the software the ability to build additional security controls onto an existing platform without having to secure a license or convince a vendor to fix something of theirs. It empowers organizations that need to, or that see a gap, to increase the overall security of a product. SELinux is a perfect example; you can thank the NSA for giving that back to the community.

  2. The License Management Premise
    Often it is brought up that if you use F/OSS technologies you don't need to manage technology licenses with third parties (or if you do, it is far less), and this can be true of entirely Open Source ecosystems. But many licenses (notably the GPL) impose requirements on distributors, and most real-world environments are heterogeneous mixes of closed and open source technologies. So while it does ultimately cut down on software spend, the availability of the source can lead some companies to violate OSS licenses by keeping source private when they have an obligation to release it. This can ultimately turn the license management premise into a liability (which is the closed source argument against licenses like the GPL).

  3. The Open Format Premise
    This is a big one, and one I tend to agree with, so I'll keep it short to keep from preaching. Thirty years from now I want to be able to open a file I wrote. If the file is "protected" using proprietary DRM controls and the software I need to access it is no longer sold, the difficulty of accessing my own content increases dramatically. If the format used to create my document is open, and available in an open source product from 30 years ago, I'm likely to be able to find it and legally be able to use it. Some companies are jumping on the "Open Formats" bandwagon without jumping on the Open Source bandwagon, so I think this argument is a pretty sound one.

There is a sixth premise that I didn't list, because it is not well discussed. I tend to get stuck on it (call it paranoia). I think this sixth premise is the feather in the cap of defense departments around the world. It was spelled out to the world when a portion of the Windows 2000 source was leaked.

The Closed Source Liability premise
If a company has been producing a closed source code library or API through multiple releases over the decades, small groups of individuals have had access to that source throughout its production. Some of these are third-party audit groups, and developers who have moved on to other companies or governments. If that code is kept sufficiently static to maintain compatibility (itself touted as a closed source benefit), some weaknesses can go unannounced for many years. Those who have access to that closed source have the freedom to run code analysis tools against it to study these weaknesses, and the bug repositories of those software development shops are full of "minor" bugs that could lead to exploits. All of this information is available to many internal individuals.

Attackers know this, and want this information for themselves. This puts a giant target on your company's internal infrastructure if you are one of these shops, and as it stands, your development processes become a security liability. If your company is large enough, and your codebase well distributed enough, you can even be a target for human infiltration efforts. At this point the Charlie Miller technique (bribe a developer with enough money and he'll write you an undetectable bug) becomes a distinct possibility.

That doesn't mean the same thing can't get into OSS products the same way. It just means you hold a set of data that, when released, can expose weaknesses in your install base. Keeping it private has created a coding debt against your customers' installed systems that you cannot pay back immediately.

nealmcb
  • 20,544
  • 6
  • 69
  • 116
Ori
  • 2,757
  • 1
  • 15
  • 29
  • 1
    +1 @Ori: Do you know of any OSS that had a backdoor that was found, and clearly designed to be one? Also, who is Charlie Miller - is there a Wikipedia page, or something of the like? – blunders Jun 12 '11 at 01:50
  • 1
    He's a "Security Researcher" who's famous for his Pwn2Own exploits. He mentions the human element of coding exploits in his Defcon 2010 talk, which is humorous enough to watch on it's own. http://www.youtube.com/watch?v=8AB3NcCkGNQ – Ori Jun 12 '11 at 04:24
  • +1 @Ori: Ah, thought maybe you meant a "Charlie Miller" that had been bought, then discovered. Exploiting people is nothing new, so it might be a stretch to call it the "Charlie Miller technique". Do you know of any OSS that had a backdoor that was found, and clearly designed to be one? – blunders Jun 12 '11 at 13:06
  • 1
    @blunders Nothing that has been identified and publicized as such. I could go digging for some specifics, but the problem with a well designed "bug" is that it shouldn't be easy to differentiate between an accident and a deliberate placement. – Ori Jun 12 '11 at 14:16
  • @Ori: Agree, and was just wondering if you knew of any already, no need to look for one. Thanks! – blunders Jun 12 '11 at 14:36
  • 3
    @blunders Allegations flew in this case, but seems dubious to me: [What is the potential impact of the alleged OpenBSD IPSEC attack? - IT Security](http://security.stackexchange.com/questions/1166/what-is-the-potential-impact-of-the-alleged-openbsd-ipsec-attack) – nealmcb Jun 12 '11 at 17:36
  • @blunders as @nealmcb mentioned, the OpenBSD IPSec alleged "attack", while dubious, was possible with no stretch of the imagination, and in fact believed to be true for a short time. Additionally, the original "rootkit" was in an opensource package, and a popular one at that (http://en.wikipedia.org/wiki/Rootkit#History). Thus, backdoors in OSS are a definite possibility. – AviD Jun 12 '11 at 18:46
  • @Ori, what you call "Charlie Miller technique" is much better recognized as the "[Kevin Mitnick](http://en.wikipedia.org/wiki/Kevin_Mitnick) technique". – AviD Jun 12 '11 at 18:49
  • 1
    @Ori, your 3rd point - the Open Format premise - while a good point, is not necessarily a benefit strictly of F/OSS software. Indeed, your last statement in that paragraph contradicts the rest: `"Some companies are jumping on the "Open Formats" band wagon without jumping on the Open Source bandwagon"`, which proves that it's irrelevant. (Admittedly, in some minds that's not the case, but that's not true.) – AviD Jun 12 '11 at 18:52
  • @AvID I've read the books Kevin's co-authored, researched his alleged actions, and hearing Charlie say (essentially) "Here's a pile of cash, code me a backdoor" was a first for me. I guess you could call it the cash hack, or something equivalent to make it party neutral. – Ori Jun 12 '11 at 21:22
  • 1
    @Avid the open formats battle is being fought by much the same folks, I wasn't trying to present the argument as my own, just that it is often presented as the open source argument. I completely agree it has been decoupled and adopted by closed and open source advocates alike. – Ori Jun 12 '11 at 21:24
3

You'll want to look at these papers:

The upshot is that open or closed is about equivalent, depending on how much testing gets done on them. And by "testing" I don't mean what your average corporate drone "tester" does, but rather in-the-field experience.

Bruce Ediger
  • 4,552
  • 2
  • 25
  • 26
0

Let's be honest here: when someone claims open source is safer than closed source, they're generalizing about what happens today in server/desktop operating systems, such as Linux (open source) versus Mac/Windows (proprietary, closed source).

Why is malware more likely to affect the latter and not the former? For several reasons, of which I think the most important is the first one (borrowed from this other answer to a question marked as a duplicate of this one):

  1. The software installed by the user of a Linux distribution (or other open source OS) is usually packaged by a centralized organization (e.g. in the case of Ubuntu, it's done by Canonical, and hosted by it), which hosts binaries compiled from sources curated and monitored by the open source community. That means that the likelihood of the user installing infected software, or of the open source community accepting malicious code changes, is much lower than in the case of Mac/Windows, where the user usually installs software from many different places on the web, or from many different vendors via app stores. There's also the risk that the organization's servers (e.g. Canonical's) get hacked, but this risk is minor because these organizations employ top-notch IT experts to run their servers.
  2. Linux (or other open source OSes) has far fewer users than Windows/Mac, so malware creators prefer not to target it (the benefit/cost ratio is lower in this case).
  3. Linux, being just a kernel, comes in many different distributions you can choose from, so malware creators would need to spend more effort making their malicious code compatible with many of them (so the benefit/cost ratio is lower in this case).
  4. Linux's (or other open source OSes') sources are open for everyone to see and modify. That means that when a security vulnerability is found, anyone can write a fix for it (there's no vendor lock-in; you're not tied to a specific organization that you need to wait for to develop a fix), so in theory security patches arrive sooner than in the proprietary-software case. (However, in practice there's usually no difference, because the companies behind proprietary platforms such as Windows and macOS are big corporations that happen to be competent enough.)
knocte
  • 161
  • 7
0

Jim Fruchterman's OpenSource.com article "Is your open source security software less secure?" gives a very good analogy for how open source, despite attackers knowing how it works, makes the software more secure for end-users:

Think of encryption as a locked combination safe for your data. You may be the only one who has the combination, or you may entrust it to select few close associates. The goal of a safe is to keep unauthorized people from gaining access to its content. They might be burglars attempting to steal valuable business information, employees trying to learn confidential salary information about their peers, or a fraudster who wants to gain confidential information in order to perpetrate a scam. In all cases, you want the safe to keep your stuff secure and keep out unauthorized people.

Now let's say I'm choosing a safe for my valuables. Do I choose Safe Number One that's advertised to have half-inch steel walls, an inch-thick door, six locking bolts, and is tested by an independent agency to confirm that the contents will survive for two hours in a fire? Or, do I choose for Safe Number Two, a safe the vendor simply says to trust, because the design details of the safe are a trade secret? It could be Safe Number Two is made of plywood and thin sheet metal. Or, it could be that it is stronger than Safe Number One, but the point is I have no idea.

Imagine you have the detailed plans and specifications of Safe Number One, sufficient to build an exact copy of that safe if you had the right materials and tools. Does that make Safe Number One less safe? No, it does not. The security of Safe Number One rests on two protections: the strength of the design and the difficulty of guessing my combination. Having the detailed plans helps me, or safe experts, determine how good the design is. It helps establish that the safe has no design flaws or a second "back door" combination other than my own chosen combination that opens the safe. Bear in mind that a good safe design allows the user to choose their own combination at random. Knowing the design should not at all help an attacker in guessing the random combination of a specific safe using that design.
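The analogy is essentially what cryptographers call Kerckhoffs's principle: the design may be public as long as the key (the combination) stays secret. As a purely illustrative sketch (a toy one-time-pad style XOR, not production cryptography; every name below is made up), note that publishing this code tells an attacker nothing useful without the key:

#include <stdio.h>
#include <stddef.h>

/* The "design" below is completely public: ciphertext = plaintext XOR key.
   Security rests entirely on the key being secret, random, used once,
   and as long as the message, not on hiding the algorithm. */
static void xor_with_key(unsigned char *buf, size_t len,
                         const unsigned char *key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i];
}

int main(void)
{
    unsigned char msg[] = "open design";          /* 11 bytes + NUL */
    /* In real use the key must come from a CSPRNG and never be reused;
       it is hard-coded here only to keep the sketch self-contained.   */
    const unsigned char key[sizeof msg] = { 0x3b, 0xd4, 0x91, 0x7f, 0x02,
                                            0xa6, 0xee, 0x45, 0x19, 0xc3,
                                            0x58, 0x00 };

    xor_with_key(msg, sizeof msg - 1, key);       /* encrypt */
    xor_with_key(msg, sizeof msg - 1, key);       /* decrypt (same operation) */
    printf("%s\n", msg);                          /* prints "open design" */
    return 0;
}

Just as the safe's blueprints do not reveal the combination, the cipher's source does not reveal the key.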

Geremia
  • 1,636
  • 3
  • 19
  • 33