13

I have been wondering about this for a while. I have Linux running on a PC at home. I had a jailbroken iPhone. They both have attributes that make them very attractive, and they are also FREE! But I haven't been able to find anything that discusses how they are kept secure from Trojan horses. I don't suppose I can run a virus scanner on the source files to see if they are safe. If I can, I don't know how to do that.

How confident should I be about security with Linux and jailbroken iPhones? What prevents intentional security holes caused by private or government hands?

I understand that there are risks even with privately developed operating systems, but in those cases, I have some sense of who has the responsibility for them. Not so for these two cases and for other open-sourced or crowd-sourced software.

Jim
  • 255
  • 1
  • 10
  • This wikipedia article is as good as anyplace to start. http://en.wikipedia.org/wiki/Open_source_software_security – Wayne In Yak Feb 01 '12 at 18:55
  • 1
    [This question](http://security.stackexchange.com/questions/4441/open-source-vs-closed-source-systems) might be of interest, too. On the security of closed-source systems, the unfortunately named [`_NSAKEY`](http://en.wikipedia.org/wiki/NSAKEY) might also be an interesting read - ultimately harmless (in fact beneficial), however, certainly created some controversy. –  Feb 01 '12 at 20:13
  • I wouldn't say iOS is "FREE!" in any practical way, it's tied to (pricey) hardware and the code is mostly proprietary. – Shadok Feb 02 '12 at 10:24

7 Answers

15

The theory is:

  • Closed-source software is mostly non-trojaned because the vendor of such software is legally responsible for the software contents, and easily tracked down, should hidden malicious code be revealed to be part of it (e.g. through reverse engineering).

  • Open-source software is mostly non-trojaned because it is very difficult to smuggle in extra code inconspicuously when the source code is in plain view of everybody. Presumably, open-source code is regularly read by other people, so backdoors would soon be detected.

The practice is:

  • Closed-source software is mostly non-trojaned because nobody bothered to plant a backdoor in it.
  • Open-source software is mostly non-trojaned because nobody bothered to plant a backdoor in it.

If I were to include a backdoor in an open-source operating system, I would try to put it in the random number generator. Writing a proper, secure RNG is hard, so chances are that some obscure alteration could go unnoticed but result in predictable output -- and thus a predictable outcome of key pair generation (e.g. for SSH keys), and things like that. And because writing a proper RNG is so hard, I could still plausibly claim mere incompetence, not malice, should the backdoor be discovered. Case in point: about a year ago, there were rumours of such a backdoor in RNG code in OpenBSD (apparently the rumours turned out to be unsubstantiated -- however, I confess that I did not check the code myself).
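As a hedged illustration of how little it takes (the function names and the time-based seed below are invented for this sketch, not taken from any real incident): collapsing the seed space to a few thousand values is enough to make every key derived from the RNG enumerable by anyone who knows about the flaw.

```python
import hashlib
import os
import time

def honest_seed() -> bytes:
    # Proper approach: take the seed from the operating system's entropy pool.
    return os.urandom(32)

def backdoored_seed() -> bytes:
    # Subtly weakened: only the low 16 bits of the current second feed the
    # seed, so there are just 65536 possible seeds to try.
    weak = int(time.time()) & 0xFFFF
    return hashlib.sha256(weak.to_bytes(2, "big")).digest()

def derive_key(seed: bytes) -> bytes:
    # Stand-in for key-pair generation driven by the RNG output.
    return hashlib.sha256(b"key-derivation" + seed).digest()

if __name__ == "__main__":
    victim_key = derive_key(backdoored_seed())
    # The "attacker" simply brute-forces the tiny seed space.
    for guess in range(0x10000):
        seed = hashlib.sha256(guess.to_bytes(2, "big")).digest()
        if derive_key(seed) == victim_key:
            print("recovered the key from seed guess", guess)
            break
```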

What must be remembered is that both detecting backdoors (in source code or compiled code) and planting backdoors without being detected are difficult arts. Professional spies abhor indiscretion; trojaning is usually seen as "too risky".

Tom Leek
  • 168,808
  • 28
  • 337
  • 475
  • 2
    I like your 'practice' comments - so true – Rory Alsop Feb 01 '12 at 21:00
  • Then of course there was the [mistake in Debian Linux](http://www.schneier.com/blog/archives/2008/05/random_number_b.html) a few years back, where it turned out someone had just *deleted* the PRNG-seeding code from OpenSSL, causing all the random numbers it used and needed to be, uh, **not random at all** ([see also](http://www.xkcd.com/424)) – BlueRaja - Danny Pflughoeft Feb 01 '12 at 23:44
14

Open source software is less confidential than closed source software, but that is not relevant when considering backdoors, as opposed to vulnerabilities in general, which are almost always accidental. In this answer, I will only address backdoors, and not the wider issue of vulnerabilities in general (only an insignificant fraction of vulnerabilities are deliberate). To combat backdoors, you need strict controls over the quality and integrity of the code. Such controls are not intrinsically weakened by making the code public.

Integrity: what they make is what you get

By integrity, I mean the assurance that the code you're running is the code that the developers wrote. This requires either that the distribution chain is traceable, or that the developers sign the code with a signature you recognize. Signing is commonly practiced when secure software is desired; closed source and open source are on an equal footing. Distribution works differently, with no clear advantage to either side: closed source tends to have direct distribution (you get it directly from the vendor), while open source software tends to be replicated many times, which both increases the opportunities for tampering and increases the opportunities to detect tampering.
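As a minimal sketch of what signing buys you, assuming the third-party `cryptography` package and made-up artifact contents: the check proves that the bytes you hold are the bytes the key holder signed, nothing more.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The developers sign the release artifact...
release = b"contents of the released tarball"
developer_key = Ed25519PrivateKey.generate()
signature = developer_key.sign(release)

# ...and publish the public key out of band (keyservers, a pre-installed
# distro keyring, a web page served over TLS, etc.).
public_key = developer_key.public_key()

# A user who fetched the artifact from an untrusted mirror checks it:
downloaded = b"contents of the released tarball"   # possibly tampered with
try:
    public_key.verify(signature, downloaded)
    print("signature OK: this is the code the developers signed")
except InvalidSignature:
    print("signature mismatch: the artifact was altered somewhere in transit")
```

Note that this only moves the trust to whoever holds the signing key and to how you obtained the public key; it says nothing about whether the signed code itself is honest, which is the subject of the next sections.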

There's a second part to integrity, which is within the developers' domain. With closed source software, you rely on the internal controls of the software vendor. This is usually not something you have direct visibility on; the only somewhat independent source of information is the vendor's security certifications, and they only provide so much assurance. On the open source side, most high-profile projects have public repositories nowadays, making each change to the source code publicly traceable. In theory, that is: the integrity of the source code is rarely checked, but there's some chance that a breach will be detected by happenstance. In practice, on both sides, the level of integrity controls is all over the map; open source vs. closed source is not a discriminating factor.

On backdoors introduced by the authors

Integrity tells you that only the documented developers produced the code. Quality tells you whether they introduced a backdoor (either willfully, or being subverted). With a closed source vendor, it is rare to have any quality assurance beyond the vendor's reputation, and possibly certifications (assuming you believe they indicate quality — in fact certifications might involve a criminal background check on the developers but would be unlikely to catch subtle backdoors). With open source code, in theory, the code is up for anyone to see. Again, in practice, no one looks at most of it; but as the developer's name is publicly associated with each piece of code, the author of a backdoor takes the risk of being exposed. Assuming that the backdoor is identified as such: most vulnerabilities are accidental, after all.

It's interesting to look at one famous backdoor, which was introduced by Ken Thompson in Unix. I urge you to read his Turing Award acceptance speech, “Reflections on Trusting Trust”, which revealed the backdoor. In a nutshell, Thompson modified the system compiler to add some code to the login program that would let him log into any account. Then he modified the compiler to insert the code for this compiler modification when compiling the compiler, and he recompiled the compiler. Finally, he reverted the compiler sources so that they no longer contained anything special. From then on, the compiler would perpetuate the backdoor even though nothing was visible in the source code. This shows that it is not enough to trace the history of the source code: the history of the whole system must be traced, including the origin of every program and data file involved in building the system at any point.
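A toy model of the trick, in Python with invented names (a real compiler obviously does far more than pass text through), just to show the two special cases the trojaned compiler recognises:

```python
# Toy model of the "Reflections on Trusting Trust" trick.

def trojaned_compile(source: str) -> str:
    # Case 1: compiling the login program -> insert a master password.
    if "def check_password" in source:
        source = source.replace(
            "return password == stored",
            "return password == stored or password == 'kt-master'",
        )
    # Case 2: compiling a clean compiler -> re-insert this very trick, so
    # the backdoor survives even after the compiler source is reverted.
    if "def honest_compile" in source:
        source = source.replace(
            "return source",
            "return trojaned_compile(source)",
        )
    return source

login_source = (
    "def check_password(password, stored):\n"
    "    return password == stored\n"
)
compiler_source = (
    "def honest_compile(source):\n"
    "    return source\n"
)
print(trojaned_compile(login_source))     # login now accepts 'kt-master'
print(trojaned_compile(compiler_source))  # "clean" compiler is re-infected
```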

The Thompson backdoor relied on having control of the compiler. It is possible to inject a backdoor into an application in a similar way given control of the build or distribution system, of the operating system, or of the hardware. On the other hand, the author of an application cannot hide a backdoor so easily: there, the backdoor really has to be present in the source code. It is easier to detect a backdoor in source code than in an executable, if you know what you're looking for: many backdoors are obvious in source code if you think of looking in the right place, whereas testing for the presence of a backdoor in a binary-only executable requires more work with a debugger.

Backdoor in the algorithm

In rare cases, backdoors can be hidden in an algorithm. This comes up in cryptography: many algorithms involve “magic constants”, which can be chosen somewhat arbitrarily. Good cryptographic designs use “nothing-up-my-sleeve numbers”: for example, MD5 uses constants derived from values of the sine function. As a counterexample, the elliptic curves standardized by NIST in FIPS 186-3 involve constants which look random. However, it is possible that the constants were derived from a secret value, and knowing that value would make it easy to break cryptography using these algorithms. It is impossible to prove that NIST (or the NSA) did not in fact derive the constants from a secret value that they know.
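To make the “nothing-up-my-sleeve” idea concrete: MD5's 64 round constants are defined in RFC 1321 as the integer part of 2³² × |sin(i)| for i = 1..64, so anyone can recompute them and confirm that no hidden structure was smuggled into them.

```python
import math

# Recompute MD5's round constants from the sine function (RFC 1321):
# K[i-1] = floor(2**32 * abs(sin(i))) for i = 1..64.
K = [int(abs(math.sin(i)) * 2**32) for i in range(1, 65)]

assert K[0] == 0xD76AA478   # first constant, as listed in the RFC
print([hex(k) for k in K[:4]])
```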

In this case, open-source vs. closed-source doesn't matter. Even if you are sure that the software implements the algorithm correctly, you need to trust the designer of the algorithm as well.

A look at consumer software

Some platforms have preferred ways to install software above the base operating system. On mobile platforms, software is typically distributed through a channel controlled by the platform vendor (App Store, Android Market, …). This channel ensures that what you get is what the platform vendor wants you to get. As such, it protects you against malicious third parties who might try to trick you into installing a trojaned version of a legitimate application. This does not protect against an application that is malicious in the first place, since platform vendors only make extremely cursory checks on the applications they distribute, if any.

On the desktop side, commercial software is usually distributed directly to consumers or through generally reliable third parties (ISPs, or in the old days mail order and brick-and-mortar stores). Free software is another matter. Most Linux distributions include a large amount of software and sign application packages. As in the mobile case, package makers do not make more than cursory checks (though they do reject dodgy-looking applications), so you cannot count on them detecting a backdoor introduced by the application author. But you can limit your trust to the application authors and the distribution maintainers, because the distribution infrastructure protects against tampering by third parties. If a backdoor is ever discovered, you can count on its being corrected fast, and on receiving security upgrades as soon as they are available. Mac OS X has similar distribution channels (an official store and several free software distributors).

The Windows world is different: Windows users typically install a lot of third-party zero-cost software through channels with no controls, and these have no automatic upgrade mechanism. I can work comfortably on an Ubuntu PC with very little software that isn't provided by Ubuntu in a signed package; on Windows I need many third-party programs (a web browser, a word processor, many “productivity” applets, …). This, together with the fact that attackers target Windows more because most potential victims are running Windows, makes the risk of backdoors higher. Not directly because Windows is closed source, but because it lacks an application distribution infrastructure. Such infrastructures can work both in the free and open source world, and in the closed-source for-pay world.

General considerations

It's difficult to be sure about the history of backdoors or to get accurate statistics, because by definition the really successful ones are the ones we don't know about. However, the ability to discover failed attempts can be instructive. Regarding the Linux kernel, there was a famous attempt to inject a backdoor which was caught by an inconsistency in source control. You can watch the discovery unfold on the Linux kernel mailing list; there have been many write-ups, e.g. on Kerneltrap and SecurityFocus.

All in all, I don't see an intrinsic winner between open source and closed source software when it comes to backdoors. Open source software does have the potential to reach the same level of assurance as closed source software, while keeping the advantages brought by wide exposure. You can certainly have someone take responsibility for open source software, if you're willing to pay for it. So open source software is able to reach a higher assurance level — but this doesn't mean that it does in all cases.

There is one way in which open source wins: if you're concerned about a specific backdoor in a specific application (as opposed to the system as a whole). With open source code, you can look at the source and see if there is a master password, or whether random numbers are generated properly.
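As a trivial illustration of that kind of targeted review (the patterns and the script below are a made-up starting point, not a real audit tool), you can mechanically flag places worth reading by hand:

```python
import re
import sys

# Naive patterns that often accompany hard-coded credentials or suspicious
# equality checks against string literals.
SUSPICIOUS = [
    re.compile(r"""password\s*==\s*["'][^"']+["']""", re.IGNORECASE),
    re.compile(r"""(secret|backdoor|master_?pass)\w*\s*=\s*["']""", re.IGNORECASE),
]

def scan(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if any(pattern.search(line) for pattern in SUSPICIOUS):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)
```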

Gilles 'SO- stop being evil'
  • 50,912
  • 13
  • 120
  • 179
  • Good comments. I suppose that it would easily be detected if the main repository for a signed open source distro were to be compromised, as the signature check would fail. And every checked out copy would probably have a full change history with intact signatures. So the people controlling the distro could recover easily enough. Why do we trust those people, though? Where can I get a better idea of how that process really works (with Linux, for example)? – Jim Feb 01 '12 at 23:36
  • 1
    @Jim There was one famous attempt to insert a root hole in the Linux kernel a few years ago. Many [Google hits for Linux+backdoor](http://www.google.com/search?q=linux+backdoor) discuss it, e.g. [SecurityFocus](http://www.securityfocus.com/news/7388), [discovery on lkml](http://lkml.indiana.edu/hypermail/linux/kernel/0311.0/0635.html), [Kerneltrap](http://kerneltrap.org/node/1584). The attempt was discovered by an inconsistency in source control. – Gilles 'SO- stop being evil' Feb 02 '12 at 00:17
  • @Jim Also, I assume you're familiar with Ken Thompson's famous Unix backdoor? If not, drop everything and read [Reflections on Trusting Trust](http://cm.bell-labs.com/who/ken/trust.html). Introducing the backdoor involved changing the code, recompiling the compiler and changing the code back. A fully auditable source control and build system would have caught that (or at least left the evidence visible). – Gilles 'SO- stop being evil' Feb 02 '12 at 00:42
  • Awesome answer. TL;DR version: "Who watches the watchers?" This is [not a new problem](https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F) – naught101 Sep 11 '13 at 06:45
  • Excellent examples too. That Linux example is heartening: someone was watching closely. I think that couldn't happen today, due to the distributed nature of git, but it makes you wonder. How hard would it be for some well-resourced government or corporate agency to hack the laptop of one of the linux core devs, and replace the git binaries with something that makes such a minor modification to a large commit, and hides it? Not sure how good the Linux kernel review processes are, but something that small can easily slip through one or two reviewers' hands... – naught101 Sep 11 '13 at 06:56
10

With Open-Source software anyone can see and analyse the code, so this model actually has a lot going for it. Many eyes etc...

The problem comes when you have a massive codebase, and not enough qualified, experienced eyes - things may slip through the net.

However, with closed-source code, you are essentially putting your trust in the developers — how do you know they have properly analysed the code? From experience, I have found a vast number of vendors who have either performed no code review, or have done it so badly that there are gaping holes in their applications; if you read any advisories you will see just how many arise from errors or omissions in coding practices!

I am not sure whether open- or closed-source wins on balance, but I'm a strong supporter of the idea that all software vendors should have an independent code review and penetration test (by approved penetration testers) of all code prior to release!

Rory Alsop
  • 61,367
  • 12
  • 115
  • 320
  • 1
    If developers were responsible before the Law, for all their bugs, then software would be bug-free -- and there would be much less of it, too. – Tom Leek Feb 01 '12 at 20:54
  • Indeed they are. That's why they warn you that the software is not warranted at all or in any way, and that if you choose to use it you are explicitly agreeing to use it at your own risk only. – yfeldblum Feb 01 '12 at 22:07
4

As to the security of open source systems, I prefer to consider analogous physical-world examples such as a door lock, rather than the many-eyes argument, since many eyes have missed vulnerabilities for years (e.g. BIND). The details of all front door locks are well known and understood, but that knowledge does little to lessen their security. The security of such a lock comes from its solid design and construction. It exposes screw heads to the interior of the house (if at all), it is constructed with sturdy metal, and so on and so forth.

The same is true of good open source code. A good construction of a good design provides solid security despite the publicly available blueprint. That brings us to the question of how we confirm the construction of the software. A popular Linux distro with a reassuring hash check is a great start, in my opinion.
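For the hash-check part, a minimal sketch; the file name and expected digest below are placeholders, and in practice the distribution publishes the expected value (often in a signed checksum file):

```python
import hashlib

EXPECTED_SHA256 = "placeholder-digest-published-by-the-distro"
IMAGE = "distro-installer.iso"   # the file you actually downloaded

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large ISO images don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(IMAGE)
print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: got {actual}")
```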

Jailbreaking is in a different category. To me, that is analogous to removing the faceplate of our house lock and modifying the lock system. I might do that myself, but I personally would be reluctant to let a complete stranger do it for me. By definition, a jailbreak exploits a security vulnerability to allow you to ‘root’ your system.

zedman9991
  • 3,377
  • 15
  • 22
3

As far as FOSS is concerned, it has long been the stance of a large number of members of the security community that public security is better. ("Public security is always more secure than proprietary security...For us, open source isn't just a business model; it's smart engineering practice." -- Schneier). The reasoning is that yes, the "bad guys" can see your code, but so can the good guys. Things you may have overlooked that the "bad guys" might find easily, the "good guys" will be able to point out for you, whereas if no one could see your code, they might be found too late.

That, combined with the fact that you can also develop scanners/tools to protect yourself much more easily, doesn't hurt at all.

doyler
  • 602
  • 4
  • 11
2

Many eyes can look at the code and spot security holes if the code is open to the public. This is how open source code is improved and approved, which is generally the point of open source software. Otherwise, you better hope your closed-source code has no security issues.

Bernard
  • 201
  • 2
  • 5
0

To detect vulnerabilities, you can do everything with open source that you can do with closed source (reverse engineering, fuzz tests, running it behind a proxy that tracks all traffic), plus you can read the source.
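For the fuzz-testing part, a minimal random-input loop; the `parse` target below is a stand-in for whatever component you want to exercise, open or closed source alike:

```python
import json
import os
import random

def parse(data: bytes) -> None:
    # Stand-in target: any routine that consumes untrusted input.
    json.loads(data.decode("utf-8", errors="replace"))

random.seed(0)
crashes = 0
for _ in range(10_000):
    blob = os.urandom(random.randint(1, 64))
    try:
        parse(blob)
    except ValueError:
        pass                              # expected rejection of malformed input
    except Exception as unexpected:       # anything else is a finding
        crashes += 1
        print("unexpected failure:", repr(blob), unexpected)
print("unexpected failures:", crashes)
```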

An advantage which has not been mentioned is that open-source projects tend to share much code with other open-source projects, which means that the same code is used in more software.

If it is used by more parties, it is more often tested, more often reviewed, and more often exposed to attempts to break it, so the chance of observing a weakness is higher.

Of course, this idea has its weaknesses. There are new libraries in FOSS too, and there are closed-source libraries that are sold and reused as well.

A big problem with closed source is pirated software. Users of pirated software can't complain about malfunctions, and those who break the copy protection might also insert backdoors. With FOSS, there is no need to acquire pirated software, so there is no interested third party intentionally placing backdoors into the code this way. But of course you can always pay for closed software obtained through regular channels, and then this does not affect you either.

On another aspect: the responsible-company argument for closed source doesn't hold:

The developers of FOSS have a name to lose too, and I guess every jurisdiction is similar in that you can't disclaim responsibility in the EULA or license for damages caused by bad intent. Well, you can write the disclaimer in the text, but it will be void.

user unknown
  • 484
  • 5
  • 11