29

I'm very interested in the OpenBSD OS, as it currently seems to me to be the option that takes security more seriously than its contemporaries. But as I was reading about it, a question occurred to me: even if OpenBSD is all it claims to be, how much does all that security and openness matter if I'm running the OS on a closed-source BIOS and proprietary hardware?

I am aware of OpenBIOS, coreboot and Libreboot, but I wonder why security-focused systems like OpenBSD don't make such a big deal about using open firmware. Don't you defeat the purpose of open security by using closed firmware in the first place?

muru
  • 364
  • 1
  • 3
  • 14
herzEGG
  • 399
  • 3
  • 5
  • 43
    You appear to be equating "open source" with secure. Closed source code can also be secure. Your question will also depend on what you want to secure against and what open source code gives you in your scenario that closed source does not. – schroeder Mar 11 '16 at 16:37
  • To that end, the seriousness of security to the OpenBSD development team does not rely on it being open source; rather, it's the fact that the team holds itself to a high standard. So, the seriousness of the security of the underlying software should be evaluated according to the same criteria. One place to start would be known vulnerabilities against the BIOS/BMC/etc. of certain vendors. – Jeff Meden Mar 11 '16 at 16:49
  • 2
    Yes, I do equate openness with more security. I'm not sure if this is a "right" stance in the tech security field or more of a "philosophical" one. I just figure that if anyone can look at your code, it's harder to get away with something malicious, like in Lenovo's Superfish case, and you don't have to take the promise of security at face value. So yes, OpenBSD development might indeed be of a very high standard, but isn't it the fact that it's open that guarantees it indeed is of a high standard? As "anyone" can look for themselves? – herzEGG Mar 11 '16 at 17:20
  • 11
    Just because a bunch of people CAN look at something doesn't guarantee they will; it also doesn't guarantee disclosure of any vulnerabilities they've found. So, open source doesn't guarantee more safety and security, it simply facilitates the process that COULD lead to more safety and security. – Brad Bouchard Mar 11 '16 at 18:13
  • 14
    @herzEGG Being secure by virtue of being open has been disproven many times, most recently by the OpenSSL project. – Andy Mar 12 '16 at 00:53
  • OpenBSD takes _one aspect_ of security more seriously than anyone else. – Michael Hampton Mar 12 '16 at 03:42
  • 1
    Free software means you can hire _anyone_ to audit it. – Damian Yerrick Mar 12 '16 at 16:27
  • I think this is actually a very good point. – Hack-R Mar 12 '16 at 21:15
  • You may find [this paper](http://blog.invisiblethings.org/2015/12/23/state_harmful.html) relevant or at least interesting. You also might want to consider Qubes OS if "taking security seriously" is your primary consideration. Disclaimer: I've never used it myself, I just find it conceptually interesting. – Harry Johnston Mar 12 '16 at 21:49
  • 3
    That's why we need open hardware. – ferit Mar 13 '16 at 08:03
  • OpenBSD rides on top of the same underlying proprietary firmware, etc., as all other competing OSs. **IF** that foundation has vulnerabilities, at least OpenBSD may be less likely to increase any inherent risks. – user2338816 Mar 13 '16 at 12:03
  • 1
    @schroeder Yes, "open source" should be equated (or at the very least, strongly positively correlated) with "secure", by way of Kerckhoffs's principle, which states that you must always assume that the adversary has full knowledge of the workings of the system. If the good guys don't have that same knowledge available to them, how can they trust it to be secure? – Mason Wheeler Mar 13 '16 at 17:13
  • @MasonWheeler you are assuming, of course, that an open source project is meant to be secure. GitHub is full of grossly insecure code.... As I say above, one needs to define what one hopes to secure with the ability to see the code. – schroeder Mar 13 '16 at 20:17
  • @schroeder Because someone who does have access to the full workings, and whom you trust to make such judgements, told you so. – user253751 Mar 13 '16 at 23:12
  • @Mason Your conclusion does not follow from the premise. Yes, Kerckhoffs's principle states that a system must be secure even if the inner workings are known to the adversary. But that does not imply the converse: `a -> b` does *not* imply `b -> a`. You might have that opinion, but don't misattribute such a sentiment to Kerckhoffs. – Voo Mar 14 '16 at 12:46
  • @Voo That's not the premise. According to Wikipedia: [Kerckhoffs' principle was reformulated (or perhaps independently formulated) by Claude Shannon as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them".](https://en.wikipedia.org/wiki/Kerckhoffs's_principle) If the assumption that "the enemy knows the system" must be taken as a given, even if it's not necessarily true, then it must also be taken as a given that *you are at a disadvantage if your friends do not also know the system.* – Mason Wheeler Mar 14 '16 at 13:37
  • @Mason Kerckhoffs is making a statement of the form `a->b`. What you are trying to argue, then, is that because `a->b`, `b->a` must also be true. And that just does not follow in any logical system that I know of. You may be of the opinion that the other statement is true nevertheless, but it does not follow from Kerckhoffs. – Voo Mar 14 '16 at 14:56
  • @Voo Again, you are misunderstanding the claim I am making. Unfortunately, I'm not sure how to explain it more clearly than what I already said. – Mason Wheeler Mar 14 '16 at 15:37

11 Answers

42

Historically, the open source movement is not about security but about freedom. Basically, Richard Stallman was very dismayed at not being able to fiddle with his printer because the driver source was unavailable.

OpenBSD's stance on being "secure" does not come from it being open source, but from an avowed goal and pledge to do things properly with regard to security (still historically, OpenBSD came into existence because some developers in NetBSD were much better at programming than at managing peaceful human-to-human relations).

The association between security and open source is more recent. In fact, right from the start, it was shown to be an incomplete concept (see Ken Thompson's famous Reflections on Trusting Trust). One element in the discussion is Linus's Law, which says:

given enough eyeballs, all bugs are shallow

The core idea is that, with sufficiently many reviewers, bugs will be found, and this extends to security-related bugs. This holds, however, only on the premise that there are reviewers. Open-source software makes external reviews easier, but that does not mean that external reviews actually happen. When was the last time you went through existing source code?

Case in point: OpenSSL. After yet another vulnerability was found in the code base, a fork called LibreSSL was made, and its developers started an explicit review effort that found several serious issues in the code base. These issues had been there for years, right in the middle of what can fairly be called one of the most crucial security-related libraries in the Linux ecosystem. So this was open source, and yet that was not sufficient (at all) to achieve proper vulnerability detection.

So of course being open source helps with security, but not as much as one might hope.

What open source really brings is a much increased risk for people who want to deliberately plant backdoors. It is hard to write code that looks innocuous to reviewers and still does bad things (there is a contest, the Underhanded C Contest, for exactly such code).
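To get a feel for what that contest rewards, here is a minimal C sketch (the struct, function, and flag values are all hypothetical) modeled on the well-known 2003 attempt to slip a backdoor into the Linux kernel's `wait4()`: a condition that reads like an error check is really an assignment that elevates the caller to root.

```c
#include <stdio.h>

/* Toy model of an "underhanded" backdoor. The check below reads as
 * "reject this invalid flag combination unless the caller is root",
 * but `current->uid = 0` is an assignment, not a comparison: it
 * silently sets uid to 0 (root) and evaluates to false, so the
 * branch is never taken and no error is ever reported. */
struct task { int uid; };

static int check_options(struct task *current, int options) {
    if ((options == 0x60) && (current->uid = 0))
        return -1;                     /* dead code: never reached */
    return 0;                          /* "everything is fine" */
}

int main(void) {
    struct task current = { .uid = 1000 };   /* unprivileged caller */
    check_options(&current, 0x60);           /* attacker passes the magic flags */
    printf("uid after check: %d\n", current.uid);   /* prints 0: root */
    return 0;
}
```

A reviewer skimming a large diff can easily read that `=` as `==`, which is exactly what makes such bugs convincing, deniable backdoors.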

Tom Leek
  • 168,808
  • 28
  • 337
  • 475
  • 2
    I appreciate the link to The Underhanded C contest! – Mark Buffalo Mar 11 '16 at 18:32
  • 1
    I think the freedom aspect and security are inter-related. It's true what you say about OpenSSL and the bugs not being discovered. But if OpenSSL were closed, the ability to fork it would be zero. Also, with closed source software you simply have to trust whoever wrote the code. With OSS, you can examine the code for how transparent it is. You don't necessarily have to find an actual exploit, just merely count "WTFs per minute". I don't think any of the major browsers chose to use OpenSSL, possibly for this very reason. – Steve Sether Mar 11 '16 at 20:26
  • 3
    Richard Stallman has little to do with open source. He promotes free software. These are different things. – Mateusz Piotrowski Mar 13 '16 at 11:31
  • It's worth noting in the context of OpenSSL that OpenSSL 1.1.0 apparently takes many of the same steps that the LibreSSL fork has taken, in doing serious code cleanup and actually removing obsolete/vulnerable ciphers. [At this time, it is available as an alpha 3 version.](https://openssl.org/news/newslog.html) – user Mar 13 '16 at 17:46
  • 1
    RMS used to be "against" security in many of the ways it's conventionally understood today, believing that every user of a multiuser machine should be an administrator: https://lists.debian.org/debian-devel/2002/09/msg01810.html – pjc50 Mar 14 '16 at 09:22
  • @SteveSether - "With closed source software you simply have to trust whomever wrote the code." That's not true - just because *you* can't see the source does not mean that an external review has not occurred. – James Snell Mar 14 '16 at 12:49
  • 1
    @JamesSnell In that rare case, the trust is merely placed on another 3rd party, the code reviewer. The code reviewer has an inherent conflict of interest here, since they're paid by the company who produced the source. Also, if a company gets a "bad review", they can just shelve it, and pay someone else for a better one. – Steve Sether Mar 14 '16 at 15:55
25

Open Source Does Not Unequivocally = More Secure/Safe

Anyone CAN look at open source software/hardware, but that doesn't guarantee that "anyone" WILL look at it; further, if they do look at it, it also doesn't mean that they will disclose any vulnerability they find. People assume too much about open source, and one of the fallacies they believe is that if a bunch of people CAN look at something, it's all of a sudden safer and more secure. This isn't unequivocally true. It's nice to be able to have a lot of eyes on the product, but the ethics and morals of those eyes are of as much concern to me as their technical prowess.

That being said, there are many benefits of open source if the concept behind it is implemented properly.

Also, closed source doesn't automatically = less secure/unsafe.

But to directly answer your question: no, you don't automatically defeat the purpose of using a security-focused OS like OpenBSD by running it on top of closed-source hardware, as the hardware itself could have very secure code/firmware behind it just as much as something open could.

Brad Bouchard
  • 628
  • 1
  • 5
  • 13
  • 6
    Also, color me stupid but I feel like it's common sense that it's harder to find security holes in software that you don't have the source code for than in one that you do... which is not meant to be used as an excuse in improper settings, but which is realistically true. – user541686 Mar 12 '16 at 05:31
  • 5
    There was OpenSSL, where probably lots of people looked at it, and then looked away in disgust, with very little actual reading and understanding the code done, until the shit hit the fan. – gnasher729 Mar 12 '16 at 13:08
  • 2
    This post completely avoids the issue of trust. Even if the firmware is secure, who can say it only does what it claims to? – Sqeaky Mar 12 '16 at 19:57
  • 1
    @Sqeaky: Even if it's open-source and secure, who's to say the hardware is doing what it's supposed to? – user541686 Mar 12 '16 at 23:50
  • 1
    At least you know it's not backdoored... I think you've misunderstood the OP's question. – ferit Mar 13 '16 at 08:05
  • @Saibot - false, it's possible to backdoor via hardware too. Unless you make everything yourself (and I do mean **everything**, including your own chip masks), you always have an avenue for somebody else to slip in a back door. Have fun~! – Clockwork-Muse Mar 13 '16 at 08:48
  • @Clockwork-Muse I mean, at least you know that the firmware is not backdoored. – ferit Mar 13 '16 at 09:26
  • @Saibot - depends, how did it get loaded onto the hardware? Did **you** load it yourself, while assembling the board from component parts? If the answer is "no", then it can still be backdoored (and sometimes even if you did...). Will it be the same binary? No, but you might not have any way to observe that, _and_ because you usually have to ask the firmware questions about itself/to perform update functions, it can lie. – Clockwork-Muse Mar 13 '16 at 12:18
  • @Clockwork-Muse I agree, but you don't get what I mean. Of course there are other security concerns about the hardware and the installation of the firmware, but if firmware gets open sourced, it's progress. – ferit Mar 13 '16 at 12:50
  • 1
    @Saibot - The fact that the firmware might be open source is immaterial to security concerns if you don't have a trusted setup. You wouldn't accept a Linux box from Joe Random claiming it's secure, would you? You'd have an easier time verifying the (non)security in that case, but it's the same problem. It's the problem mentioned in the Reflections On Trusting Trust link mentioned in another answer; you need to trust the entire chain, not just what you can observe. – Clockwork-Muse Mar 13 '16 at 13:05
  • @Clockwork-Muse If software or hardware is a black box, you have nothing to do but trust it. Otherwise, you have the opportunity to audit it. Why should I trust a company just because they say their product is secure? I should be able to test their claims. – ferit Mar 13 '16 at 13:16
  • The hardware is simply a separate thing to trust. Someone so concerned about trust that they are auditing everything likely has some mechanism in place to vet or create their own hardware. – Sqeaky Mar 15 '16 at 00:46
21

Leaving aside the "open source == secure" argument, you can also look at this question as "Why run a secure OS when the BIOS/firmware isn't guaranteed to be secure".

Why bother locking my front door when an attacker can just break the windows?

You will never make a completely secure system. What you can do is make sure you work on securing the parts that are easy for an attacker to exploit. It is a lot more work to create firmware exploits, and they are limited to targeting a certain model of hardware, whereas an OS bug is easier to exploit and affects a larger target base.

So yes, ideally you want both, but having just one isn't useless.

Grant
  • 1,056
  • 8
  • 15
  • Is it also worth mentioning that firmware exploits often require physical access, at which point your "secure" firmware can be replaced or tampered with? – James Snell Mar 14 '16 at 12:23
  • 1
    The threat models and avenues of attack are different. A better analogy would be "why bother locking my front door when I could get mugged on the way to the store?". Having insecure firmware won't make you less secure against, say, an ffmpeg bug any more than being vulnerable to being mugged makes it not worth it to lock your doors. They fix totally different problems. – forest Apr 05 '16 at 01:58
6

Open source (free/libre) software is not (primarily) about security. One of its more important aspects is trust: you can verify what's running, and it is much harder to hide something malicious. Some people also claim that more people will (or at least might) read the code, which means the chances are higher of vulnerabilities being found and fixed, resulting in higher code quality. This was already discussed in depth in Tom Leek's answer. I won't go deeper into this debatable topic here, as your question is not about why open source software is more secure, but about why to bother at all if the firmware is closed source.

Putting aside the fact that open source software is not necessarily secure either: doesn't running trusted code on untrusted firmware make the code execution untrusted? Sure! But the attack surface is potentially smaller. It is much harder to access the device's firmware interfaces than to access your computer's operating system and application software, which might even be providing services on the internet (and lots of other interfaces to complete strangers). There will never be full security, but you can try to minimize risk within a given budget.

With adequate effort, closed-source (UEFI/BIOS) firmware can be replaced with open source software: coreboot is a great example, implementing open firmware for some products. But the UEFI/BIOS is not the only firmware: BLOBs like Intel's Management Engine are sometimes still required, hardware devices like graphics and network cards have their own firmware, your hard disk has one, and there is even microcode loaded into the CPU. All of them have more or less arbitrary control over memory and/or storage. Finally, you might even distrust the CPU vendor, who might implement malicious circuits in plain hardware.
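To make one of those layers at least observable: on an x86 Linux system you can read out which vendor microcode revision your CPU is currently running, even though you cannot audit it. A minimal, Linux-specific sketch (it only parses the kernel's /proc/cpuinfo output):

```c
#include <stdio.h>
#include <string.h>

/* Print the CPU microcode revision reported by the kernel, as a
 * concrete reminder of vendor-supplied, closed code running below
 * the OS. Linux/x86 only; prints nothing if the field is absent. */
int main(void) {
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];
    if (!f) { perror("/proc/cpuinfo"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "microcode", 9) == 0) {  /* e.g. "microcode : 0xb4" */
            fputs(line, stdout);
            break;
        }
    }
    fclose(f);
    return 0;
}
```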

You have to stop at some point and simply trust the vendor, as costs increase heavily the deeper you descend the stack towards hardware. Do you have the capability to verify a complex CPU design and to manufacture the CPU on your own?

At the Chaos Communication Congress 2015 (32C3), there was a great talk, Towards (reasonably) trustworthy x86 laptops, which provides a summary of the topic.

Jens Erat
  • 23,446
  • 12
  • 72
  • 96
4

There is no such thing as full security, but one can make security harder to break. While it would be possible to compromise the system from inside the BIOS, UEFI, Intel SME, the BIOS of network or graphics cards, CPU microcode, a bad CPU design... this is considerably harder than using a bug in a user-space program or the OS kernel. Thus the OpenBSD developers care about the problems they can solve and whose solutions really help. This does not mean that they are not aware of the other problems.

Steffen Ullrich
  • 184,332
  • 29
  • 363
  • 424
  • I'm a layman to the degree that I must look up what the UEFI spec allows; but indeed, compromising a system through the mainboard firmware seems totally trivial (the UEFI specifies network access at boot time). That code would simply be part of the product, placed there by a government agency, nosy company or malicious employee (or all three) at production time. This has apparently been done by the NSA with Cisco products, keyword Jetplow. And the NSA is part of an elected government. – Peter - Reinstate Monica Mar 14 '16 at 07:48
1

Firmware is typically lumped in with the hardware, and in most situations you are forced to trust the hardware (for lack of a better alternative). So you end up trusting the firmware.

Not that this is a good thing - trust never is in InfoSec! But if you're trusting the hardware, you don't gain too much by not trusting the firmware. If you want to scare yourself on this subject, watch Ralf Weinmann talk about the baseband software that every phone has but no-one ever thinks about: https://www.youtube.com/watch?v=fQqv0v14KKY

Graham Hill
  • 15,394
  • 37
  • 62
1

There are always deeper levels to consider, and users have to choose where to stop.

  • Many chips have unflashable firmware / BIOS. Do you want that, even though you could never edit it?
  • What about the microcode of your processor? That can be replaced (and is).
  • What about the uneditable microcode of your processor / GPU / ...?

The only way to be "truly safe" would be to have the exact design of every chip in your machine, and some way to verify that the physical chips match that exact design, but no Intel / AMD would ever give you that; there is always some block you can't trust.

0

Please keep in mind that the complexity of BIOS software, and with it the BIOS's vulnerability to attack, is a recent development relative to the history of the field. Because of this, there are very few comprehensive threat assessments for BIOS and firmware software. To achieve a secure setup, you need both secure firmware, whether open or closed source, and a secure OS. Using a secure closed-source BIOS and a secure open-source OS is a perfectly reasonable option.

0

As many have said, open source does not equal security. Open source is tremendously transparent, which can aid in review, but it assumes the review gets done. It also assumes the review gets done with your interests in mind. Is it being reviewed to a level you trust?

In many government labs, open source code is actually distrusted. They trust commercial closed source code more. There are many reasons, but one which is especially relevant here is that commercial closed source code has a commercial corporation behind it. If they have particular security concerns they wish to address, it can be easier to work with a company to resolve the concerns than it would be to hire your own expert to do the review. On the other hand, there is no company producing an open source product, so it can be very difficult to convince the legion of reviewers looking at the open source code to look at your particular issues. They have found that backdoors intentionally written by adversaries to target them specifically, rather than any arbitrary user, are much easier to sneak into open source and are not often caught in review. A company has much more to lose by permitting backdoors that hurt its customers than a programmer writing open source in their free time does.

Cort Ammon
  • 9,206
  • 3
  • 25
  • 26
0

Even if you can't trust the firmware, being able to trust the OS (and applications) increases the overall trust that you can have in the system.

Also, the firmware is seldom (if ever) used beyond booting the system, and the chances of a vulnerability or backdoor therein successfully affecting the OS once it's up and running are next to nothing.

Micheal Johnson
  • 1,746
  • 1
  • 10
  • 14
  • Cf. Cisco's Jetplow. How can you say the chances are next to nothing if the attack vector is built in, demonstrated, and totally trivial? How would it help if "the firmware is seldom (if ever) used beyond booting" (potentially from a network location), even if it were true? – Peter - Reinstate Monica Mar 14 '16 at 07:53
  • I thought we were talking about desktop/laptop computers (and _maybe_ mobile devices) here, where the firmware is used to boot the OS and then not again. Because in such systems the firmware is used only to boot the OS, and the OS then reinitialises a lot of things, it is difficult for the firmware to plant anything malicious in the OS. – Micheal Johnson Mar 14 '16 at 13:23
  • Your "the firmware is used only to boot the OS" should read "the firmware is *supposedly* used only to boot the OS". – Peter - Reinstate Monica Mar 14 '16 at 15:15
  • No, it _is_ only used to boot the OS. The firmware can't do a thing once the OS has taken over the CPU and taken control of the hardware. – Micheal Johnson Mar 14 '16 at 15:50
  • I am positive the firmware can install all kinds of backdoors and "hardware abstractions" (read: key logger) during boot. The firmware has access to the hard disk and the network. Hell, it could install a new kernel from wherever. It's not *supposed to.* That much is correct. For anything else all bets are off. You own the UEFI, you own the machine. You own the machine, you own the users. – Peter - Reinstate Monica Mar 14 '16 at 16:15
  • Upon reading some of the other answers, we may actually mean two different things. You are right that the actual firmware doesn't do much after boot with modern OSs. It is therefore likely that there may not be many attacks using flaws in an uncompromised firmware once a machine is up and running. My point was that, given Snowden's leaks, it is actually rather likely that *many devices' firmware is intentionally compromised* (as in the Cisco example). I wouldn't be surprised if that affected typical business servers from major vendors as well. If I were China or Korea I'd do it. – Peter - Reinstate Monica Mar 14 '16 at 16:28
  • While there have been some proof-of-concept firmware backdoors/exploits (and potentially some "in the wild" too), most of the time these are detectable and/or avoidable with appropriate security measures. A high-security OS could check the checksums of system files against a trusted source that the firmware is unlikely/unable to compromise (e.g. a read-only removable disk) and then wipe the system's memory and reinitialise everything. Pretty difficult to get a keylogger in there. If the firmware's loading the kernel in a VM, the kernel can detect that based on various hardware attributes. – Micheal Johnson Mar 14 '16 at 20:54
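The check described in the last comment can be sketched quite simply. A hedged example (the file path and known-good digest below are hypothetical; in practice the digest would be read from read-only media the firmware cannot rewrite; uses OpenSSL, compile with `-lcrypto`):

```c
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Verify a boot-critical file against a known-good SHA-256 digest
 * stored out of the firmware's reach. Path and digest are made up. */
int main(void) {
    const char *path = "/boot/vmlinuz";
    const char *expected =
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);          /* hash the file in chunks */
    fclose(f);

    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256_Final(md, &ctx);

    char hex[2 * SHA256_DIGEST_LENGTH + 1];   /* digest as lowercase hex */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    puts(strcmp(hex, expected) == 0 ? "image matches known-good digest"
                                    : "MISMATCH: possible tampering");
    return 0;
}
```

As the thread notes, this only helps if both the comparison source and the code doing the comparison are beyond the firmware's reach.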
0

I'd remind you that you should not let the perfect be the enemy of the good.

Yes, OpenBSD is at risk because it runs on closed source firmware. There is a bit of a catch-22 here. The closed source hardware platforms make up the bulk of what is commercially available on the open market. So if you want people to use your software/be of service to anyone else, you will need to run on some of that firmware.

From a security point of view, they can fix many problems with software security by writing and running OpenBSD software. The exploits that exist in firmware are usually developed by a small and select group of people (most firmware attacks are much more specialized).

Given an open source operating system, open source hardware groups can start with a known software base. Without a large capital base, building both at the same time is prohibitive (most years, OpenBSD barely has the funding for the software that it writes).

Until a good and affordable open source hardware solution exists, complaining that people are not using one seems like spitting in the rain.

Walter
  • 232
  • 1
  • 5
  • Moreover, open source software is (partly) secure because a lot of people are able to review the code and edit it, if necessary. Even if an affordable open source hardware solution existed, many people would not be able to review the hardware, especially at the silicon level, because they lack the skills and/or equipment. As such, affordable open source hardware wouldn't be a solution unless affordable tools for hardware security/validation were also produced AND a massive education campaign began to teach people about hardware security. – A. Darwin Mar 18 '16 at 10:33