60

I wanted to buy a Purism Librem 13 because I care about my privacy and generally wanted a laptop to test Linux on.

However, I was advised against it because it uses Intel i5 processors, which contain binary blobs. From what I understand, binary blobs are pieces of code that cannot be read, so you are never really sure what they do. They could, for example, collect what you are doing on your computer and send this information somewhere.

Is this possible and if so how much of a risk is this? Wouldn't almost all recent computers be compromised? Is there any laptop that is completely open source with no binary blobs in it?

techraf
  • 9,141
  • 11
  • 44
  • 62
user113581
  • 521
  • 4
  • 4
  • 1
    @DmitryGrigoryev I'm not only talking about deliberate backdoors. There are many other ways to exploit code. – Little Code Jun 06 '16 at 10:27
  • 18
    Even if you can avoid these binary blob bits of software, how can you trust the hardware? Anything you can do in software, I can do in hardware. – user19474 Jun 06 '16 at 15:55
  • 3
    @user19474 I think the concern is less that Intel is behaving badly by spying on you, and more that Intel's binary blob software might be vulnerable to cracking. It would be much harder to covertly modify the hardware than install an infected blob. –  Jun 06 '16 at 17:05
  • 4
    @WilliamKappler: I don't think those are the only two possibilities. While it's almost certain there are vulnerabilities (there's actually been significant recent research on this) in SMM & ME blobs, ME's **intended purpose** is as a backdoor to let the legitimate user (and possibly DRM enforcement policies) get control below the OS kernel level. Its security model treats the OS as untrusted and the proprietary, secret blob as trusted. This is **inherently a backdoor**. The only question is who (else) has access. – R.. GitHub STOP HELPING ICE Jun 07 '16 at 19:40
  • 1
    @R.. And we know that the U.S government often mandates backdoors and backdoor access for U.S based companys, and it often also mandates that these orders cannot be disclosed (the orders are classified/secret) – Magisch Jun 08 '16 at 07:12
  • Related: [How to trust ICs?](https://security.stackexchange.com/q/48568/3272) – Tobias Kienzler Jun 09 '16 at 05:17

7 Answers

85

Summary:

There's probably some BS marketing going on, but on the whole they probably are making the most privacy-respecting laptop they can. Other answers mention other brands of privacy-focused laptops that avoid Intel chips in favour of 100% libre hardware, but you take a big performance and cost hit by doing so, because, well, Intel is the market leader for a reason.


Who do you trust?

Unless you go out into the woods with a hatchet and build yourself a life, you've got to place trust in somebody. Based on the descriptions on their page, the company Purism seems like a reasonably good group to place trust in (which you seem comfortable with if you're already considering buying their laptop). They clearly put a lot of scrutiny on their component suppliers with regards to privacy. Assuming of course that it's not all B***S***t marketing, which I have no way to know for sure. (See the "Additional Thought" at the bottom for more on this.)

Intel

As for binary blobs and Intel, this is actually a deeper question than you realize. "Binary Blobs" refers to software that's provided to you in binary (executable) form, but you have no access to the source code, or any good way to inspect it. Intel is a hardware manufacturer, so while they may have some binary blobs of software, what about hardware that's provided to you in chip form with no access to the designs? Do you think Intel allowed Purism to inspect the blueprints for all of the chips on an i5 board? Of course not, that's billions of dollars worth of intellectual property!

This debate around Intel, privacy and black-box hardware is currently a hot topic thanks to Intel's RdRand instruction - an assembly instruction for retrieving a random number from the hardware RNG built into Intel CPUs. I was recently at a crypto conference and overheard one of the designers of the RdRand hardware having a discussion with another attendee that went something like this:

Attendee: "RdRand is a blackbox chip. Will you release the source designs for security audit?"

Intel engineer: "No, of course not, that's protected intellectual property."

Attendee: "Intel is an American company; how do we know the American government hasn't forced you to include backdoors?"

Intel engineer: "Well, I guess I can't prove it to you, but I designed it and I can assure you that there aren't any."

Attendee: "Forgive me if that's not good enough. I'm going to continue doing random numbers in software."

Intel engineer: "Your loss, I guess."
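For what it's worth, the attendee's fallback is also roughly what production systems do: rather than trusting RdRand alone, its output gets mixed into an entropy pool with other sources (the Linux kernel takes this approach). Here's a minimal Python sketch of why mixing helps; the "hardware" source is simulated, not a real RdRand call:

```python
import os

def mix_entropy(hw_bytes, sw_bytes):
    """XOR two equal-length entropy sources.

    If either source is truly random and independent of the other, the
    result is too -- so a backdoored hardware RNG cannot weaken the pool,
    only fail to strengthen it."""
    assert len(hw_bytes) == len(sw_bytes)
    return bytes(a ^ b for a, b in zip(hw_bytes, sw_bytes))

# Worst case: pretend the "hardware" RNG is fully backdoored (all zeros).
hw = bytes(32)
sw = os.urandom(32)               # independent software-gathered entropy
assert mix_entropy(hw, sw) == sw  # output is as strong as the good source
```

So a compromised RdRand can't hurt you as long as you have at least one honest, independent source in the mix; what it can't do is save you if all your other sources are weak.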

So, should we trust them?

At the end of the day, unless you live in the woods, you have to trust somebody. I personally think Intel is very security-aware - having attended multiple crypto conferences with them - and while they are subject to American law (I'm not American, BTW), I think they are at least as trustworthy as any other closed-source hardware vendor. Ken Thompson's 1984 Turing Award lecture "Reflections on Trusting Trust" showed that trojans can be injected at the compiler level in a way that's almost impossible to detect, so even inspected open-source code is not guaranteed to be trojan-free.
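There is one known partial defence against the Thompson attack: David A. Wheeler's "diverse double compiling" (mentioned in the comments), where you build the suspect compiler's source with two unrelated bootstrap compilers, use each result to build the same source again, and compare the second-stage outputs. A toy Python model - the "compilers" here are just dictionaries, purely to illustrate why a self-replicating trojan shows up as a stage-2 mismatch:

```python
import hashlib

COMPILER_SRC = b"source code of the compiler under test (honest)"

def compile_with(compiler, source):
    """Toy model: a 'binary' is (digest, trojan flag). Output is deterministic
    in the source, except a trojaned compiler silently re-inserts its payload
    whenever it recognises the compiler's own source (Thompson's trick)."""
    digest = hashlib.sha256(source).hexdigest()
    trojaned = compiler["trojan"] and source == COMPILER_SRC
    return {"digest": digest, "trojan": trojaned}

trusted_a = {"trojan": False}  # e.g. an old, independently built compiler
trusted_b = {"trojan": True}   # secretly carries the self-replicating hack

# Stage 1: build the honest compiler source with each bootstrap compiler.
stage1_a = compile_with(trusted_a, COMPILER_SRC)
stage1_b = compile_with(trusted_b, COMPILER_SRC)
# Stage 2: use each stage-1 result to build the same source again, and compare.
stage2_a = compile_with(stage1_a, COMPILER_SRC)
stage2_b = compile_with(stage1_b, COMPILER_SRC)

print(stage2_a == stage2_b)  # False: the trojaned bootstrap betrays itself
```

In the real technique the comparison is bit-for-bit on actual binaries, and it only tells you the two bootstrap compilers don't share the *same* trojan, but it's far better than nothing.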

At the end of the day, you've got to trust the people, not the code. Do you trust Intel? Do you trust Purism to scrutinize their suppliers as well as they are able to? Nothing will ever be 100% provably secure, but Purism's products are certainly better than your standard laptop from Best Buy.

Additional thought:

The Purism Librem product pages say:

Welcome to the beautiful laptop that was designed chip-by-chip, line-by-line, to respect your rights to privacy, security, and freedom.

"Line-by-line" .... right, sure. The Linux kernel itself is about 16 million lines. The Librem ships with either their Debian-based PureOS or Qubes OS, both of which will contain tens of millions more lines, plus bootloaders and firmware, plus all the apps in the Debian repositories. You want me to believe that Purism's 6 developers have personally inspected every single line of code for insidious, hard-to-catch backdoors? And have also inspected all the compilers used, looking for self-replicating trojans? Please. Over-zealous marketing.

That said, if you take Ken Thompson's "Trust the authors, not the code" philosophy, and we decide that we trust the Debian devs, and the Intel engineers (I have decided to trust them out of convenience), and we trust Purism to apply the "Trust the people, not the code" philosophy appropriately, then we're probably OK.

Mike Ounsworth
  • 57,707
  • 21
  • 150
  • 207
  • 12
    FWIW, GCC 3.0.4 [has been checked](http://www.dwheeler.com/trusting-trust/) to definitely **not** be infected via the Ken-Thompson hack. – ulidtko Jun 06 '16 at 16:39
  • 2
    Out of curiosity, do you have anything to add about AMD? I wouldn't call using AMD "a big performance and cost hit", in fact quite likely the opposite on cost, but then I have no idea if they are any better than Intel in regards to the question at hand. –  Jun 06 '16 at 17:00
  • 7
    @WilliamKappler I'm assuming none of Intel / AMD / nVidia / ATI / other big-name chip manufacturers are "100% libre" in releasing their source / designs. So that comment was comparing "the big guys" against true 100% libre groups like the [Novena Project](https://en.wikipedia.org/wiki/Novena_(computing_platform)) which offers "a 1.2 GHz Freescale quad-core ARM architecture computer closely coupled with a Xilinx FPGA" ... hardly a competitor for the i5 / i7 line in terms of performance. – Mike Ounsworth Jun 06 '16 at 17:37
  • @ulidtko That compiler is [over 14 years old](https://gcc.gnu.org/releases.html). I *hope* that no one is using it anymore. – jpmc26 Jun 07 '16 at 00:58
  • 2
    @ulidtko the final form of ken thompson's hack is a binary hack, it doesn't present in the source code at all. there could definitely be compiled copies of gcc 3.0.4 out there that are currently infected – Steve Cox Jun 07 '16 at 19:22
  • 3
    @SteveCox If you bootstrap GCC from other free C implementations (TCC, Clang, and a [C interpreter](http://stackoverflow.com/q/584714/2738262)), then repeat David A. Wheeler's "diverse double compiling" technique that ulidtko mentioned (which compares the second stage bootstrap binaries), then it's vanishingly likely that the same binary hack will persist in all three. Though modern GCC needs a C++ compiler, you can build a clean old g++ that way and then use that to build a clean modern g++. – Damian Yerrick Jun 07 '16 at 19:30
  • @DamianYerrick if you write your own compiler in a hex editor you trust (maybe an eeprom writer you built from discrete gates) then you can use that to compile gcc without the hack. There are lots of ways to mitigate the risk, but none of those methods proves the binary wasn't infected – Steve Cox Jun 07 '16 at 19:38
  • This answer does not seem to answer the OP's questions, and basically says "well, it doesn't matter what you do, because you have to trust Intel anyway". The Librem laptops are basically sleek modern laptops with a killswitch, with no emphasis on auditability or privacy except the hardware switch and non-Windows preinstall. They also provide basically zero information on what their "privacy" and "free" features entail. They even downright lie about having no binary blobs in their [FAQ](https://puri.sm/faq/). – Willem Jun 08 '16 at 09:32
  • @SteveCox you didn't pay attention. The work I linked above examines exactly the binary hack a.k.a. *the* Ken-Thompson hack, and demonstrates that GCC *binaries* [of certain version] from gcc.gnu.org are not infected. Of course there are 3rd-party builds which could be infected — but these weren't checked, for obvious reasons. – ulidtko Jun 08 '16 at 12:18
  • Regarding the "additional thought", you might want to mention that besides assuming that they checked those millions of lines, they would have to be 100% efficient at detecting any possible kind of backdoor. – Martin Argerami Jun 08 '16 at 16:30
  • @MartinArgerami Is that not implied by `"inspected every single line of code for insidious, hard-to-catch backdoors?"` – Mike Ounsworth Jun 08 '16 at 17:12
  • If this is able to connect to the Internet on its own, and even update itself, even with the PC being off, then why doesn't the router register the IP address that this would presumably need to use? The router is external to the system, so it should see the connection, and if the firewall in the router is blocking said port, how could it ever work? Wouldn't that imply that routers have also conspired to keep this invisible and working somehow? – code_dredd Jun 11 '16 at 10:08
  • @ray Good point. Also the network card has to be in collaboration - can't send stuff if you don't have an IP address. This laptop has a non-Intel network card and a hardware killswitch for the wifi. – Mike Ounsworth Jun 11 '16 at 15:03
  • Yes, but it seems unlikely (to me) that this level of coordination actually exists, even across hardware from different vendors (e.g. non-Intel). – code_dredd Jun 11 '16 at 21:06
26

Yes, binary blobs are a security risk, as is any other proprietary software that you cannot audit. I wouldn't call all systems using proprietary software "compromised", but you can only trust such systems as much as you trust the people selling them.

Regarding that Purism thing, I wouldn't trust them more than I would any other laptop. Their FAQ states:

Purism provides the source code to ALL the software from the bootloader, kernel, operating system, and software, and does not include any binary blobs in any of them. People can safely verify every single line of code.

Yet it does seem to use binary blobs, contrary to the statement above. The libreboot project, which specifically targets hardware that can be used without binary blobs, writes:

Will the Purism Librem laptops be supported? #librem

Probably not. There are several privacy, security and freedom issues with these laptops, due to the Intel chipsets that they use. See #intel. There are signed proprietary blobs which cannot be replaced (e.g. Intel Management Engine and CPU microcode updates). It uses the proprietary Intel FSP blob for the entire hardware initialization, which Intel won't provide the source code for. The Video BIOS (initialization firmware for the graphics hardware) is also proprietary. The libreboot project recommends avoiding this hardware entirely.

I would clearly expect more transparency from a project that asks for extra money to protect my privacy.

Dmitry Grigoryev
  • 10,072
  • 1
  • 26
  • 56
16

Do binary blobs pose a potential security threat?

In short: yes.

Binary blobs are by definition not auditable (barring extended reverse-engineering). You don't know exactly what they do, and whether they have backdoors.

One particular binary blob I'd like to highlight is the one in the Intel Management Engine (and its AMD equivalent, the Platform Security Processor). It is a blob that runs on a processor connected directly to your CPU and main memory; it has full access to your OS and hardware, and it stays on when your computer is turned off but still receiving power.

It is a remote backdoor, as its intended use is allowing BIOS management, microcode updates, etc. over the internet, by directly accessing your network hardware and communicating with Intel servers over channels encrypted with keys only Intel (presumably) has. It cannot be disabled in most newer CPUs.
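Intel's AMT (the remote-management feature built on the ME) is documented to listen on a handful of fixed TCP ports, 16992-16995 among them. You can at least check for a provisioned AMT interface from a *second* machine on the network - a host firewall on the target can't hide it, because the ME answers before the OS ever sees the packets. A sketch (`probe_ports` is my own helper name; the port list covers only the documented AMT web/redirection ports):

```python
import socket

AMT_PORTS = (16992, 16993, 16994, 16995)  # documented Intel AMT TCP ports

def probe_ports(host, ports=AMT_PORTS, timeout=1.0):
    """Return the subset of `ports` on `host` that accept a TCP connection.

    Run from a different machine: an open AMT port suggests the Management
    Engine is provisioned and answering below the target's OS."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports
```

Note this only detects a *provisioned* AMT; an unprovisioned or deliberately stealthy ME would not show up this way.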

You can say: "Well, if I can't trust my hardware developer I have no physical security and I'm screwed anyway", but why go for hardware you know has backdoors that can be used by Intel and the American government, and which will potentially one day be exploited by others?

Are all recent computers compromised by this?

Yes. Every i3, i5 and i7 processor is. Every modern Intel processor. Every modern AMD processor. A back-of-the-envelope calculation tells you that's at least 300 million active computers (according to Wikipedia there are about 433 million computers on the internet in the developed world).

I don't know about you, but I find this deeply disturbing.

The first person to find an exploit in the Intel Management Engine will make on the order of tens or hundreds of millions selling the exploit, or will be able to compromise tens or hundreds of millions of computers in a day.

So is the Librem a good choice for privacy and security?

No. It comes with the Intel ME and some other not-so-nice stuff (the BIOS is a blob, microcode updates come as signed blobs (so you can't use your own), and the video BIOS is a blob). It does have that nice hardware kill-switch, but that can be approximated with an external network dongle and a piece of tape over the camera.
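As an aside, you can see the signed microcode blob at work on any Linux machine: the kernel reports the loaded revision per logical CPU in `/proc/cpuinfo`. A small sketch that parses the relevant field (the `microcode` field itself is standard on x86 Linux; the parsing helper is mine):

```python
def microcode_revisions(cpuinfo_text):
    """Return the microcode revision reported for each logical CPU
    in the text of /proc/cpuinfo."""
    revisions = []
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            revisions.append(line.split(":", 1)[1].strip())
    return revisions

# On a real Linux machine you would feed it the actual file:
#     with open("/proc/cpuinfo") as f:
#         print(microcode_revisions(f.read()))
sample = "processor\t: 0\nmicrocode\t: 0xb4\nprocessor\t: 1\nmicrocode\t: 0xb4\n"
print(microcode_revisions(sample))  # ['0xb4', '0xb4']
```

The revision changing after a BIOS or OS update is exactly a signed blob being swapped out underneath you.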

The OS looks OK, though I don't see what makes it better than, say, Kubuntu or some other user-friendly Ubuntu/Debian/Arch distro, which is pretty easy to install on any laptop (though hardware support can still vary a little; searching for "best laptops for Linux" turns up very accessible guides on this).

A Reddit user posted a full breakdown of the Librem laptop, which I thought gave a very clear overview.

Do any laptops come without binary blobs?

Yes: any of the FSF-certified laptops. The Libreboot T400 has the best specs of the three. There are very few FSF-certified laptops; the full list is here.

Another place to look could be the list of supported Libreboot hardware, but then you'd have to flash your own bootloader and deal with the other firmware on your laptop yourself.

I personally would recommend you consider the Libreboot X200 or the Libreboot T400 if you're going for a privacy-conscious laptop. They're sold by The Ministry of Freedom Ltd., which is run by Leah Rowe, Libreboot's main contributor.

These two laptops run Libreboot, which does not use any binary blobs (Libreboot is a deblobbed distribution of Coreboot). In fact, these laptops feature no binary blobs whatsoever, and come with instructions for building and flashing your own firmware.

In addition to coming with the full sources of all firmware and having hardware kill-switches for networking and the camera/mic, they feature a fully disabled Intel ME (Management Engine).

You will be getting a lot less performance (an Intel Core 2 Duo, upgradeable to a Core 2 Quad) for your money, but these are among the only options available that tick all the privacy and free-as-in-freedom boxes.

Willem
  • 269
  • 1
  • 3
  • 4
    Is there any evidence that the IME can update itself, especially when "turned off"? Seems quite unlikely. – Coxy Jun 06 '16 at 12:41
  • 4
    While this answer provides useful information, and does answer the question, it reads like a giant ad for the libreboot project. I'd give you an upvote if you re-word it to sound less like a Libreboot salesperson; possibly by providing an answer to the other parts of the question `"Is this possible and if so how much of a risk is this? Wouldn't almost all recent computers be compromised?"` – Mike Ounsworth Jun 06 '16 at 13:47
  • 6
    @Coxy, in response to this [I asked exactly that question on Skeptics.SE](http://skeptics.stackexchange.com/questions/34205/do-modern-computers-include-components-that-can-connect-to-the-internet-when-the), followed by doing some research. It seems it can. – Caesar Jun 07 '16 at 01:47
  • 2
    I edited the answer to answer more of the questions asked, and trimmed the Libreboot section. As this was my first answer on Security, would you have any more remarks? – Willem Jun 08 '16 at 09:08
  • @MikeOunsworth How's this? It did turn into a pretty huge answer now. – Willem Jun 08 '16 at 12:24
  • 1
    @Coxy I added your link to the answer, hope you don't mind. I thought crediting you in the answer would hurt readability, and it's already too long as-is. – Willem Jun 08 '16 at 12:24
  • @Willem Yeah cool, +1 – Mike Ounsworth Jun 08 '16 at 14:00
  • Speaking about turned off PC and the Intel Management Engine. Can't we make sure that firewall/router in the network is blocking all packets to/from the turned off machine? There are multiple issues with this of course but the idea of blocking traffic can be interesting. – Vladislavs Dovgalecs Jun 08 '16 at 17:56
  • 1
    @xeon Yes, that would be possible, but isn't that great of a solution unless you're always carrying your own router which connects you to other access points. Firewall rules etc. on your OS won't work as the ME accesses your networking hardware directly. You also can't read the outgoing traffic because it's encrypted with Intel-owned keys. – Willem Jun 09 '16 at 14:39
  • @Willem I actually think this is rather simple to implement if one has access and control of a software firewall. A heartbeat service can run on the machine and when it goes down (shutdown happened?), a new rule is added to the firewall - block all traffic to/from that machine. Sure, nothing can be done when the machine is powered on... – Vladislavs Dovgalecs Jun 09 '16 at 16:29
  • 1
    If this is able to connect to the Internet on its own, and even update itself, even with the PC being off, then why doesn't the router register the IP address that this would presumably need to use? The router is external to the system, so it should see the connection, and if the firewall in the router is blocking said port, how could it ever work? Wouldn't that imply that routers have also conspired to keep this invisible and working somehow? – code_dredd Jun 11 '16 at 10:07
  • @xeon What would stop Intel ME from auto-updating while the computer is turned on? There's also no reason why the ME would have to be limited to one specific MAC address or port; it might as well use a port you normally use with your MAC address (and then instruct your network card to redirect certain packets to the ME). – Willem Jun 12 '16 at 12:09
  • @Willem Some draconian firewalls are set to explicitly allow traffic from certain IPs only. DHCP server can also be set up to monitor what MACs are actually allowed on the network. I am just wondering how all this networking can happen when the machine is turned down and I have the control of the networking devices e.g. router/firewall, dhcp server, dns, proxy etc. – Vladislavs Dovgalecs Jun 14 '16 at 00:33
  • @xeon A trick the ME could use in theory is cloning the MAC address used normally by the OS; it can see and manipulate all internet traffic so there's no reason it couldn't do this. Even if you allow only traffic from a specific MAC and IP, and then only allow encrypted connections readable only by the router, the ME could in theory read the encryption keys from your memory when the computer is turned on; it's ring -3 (deeper than SMM, ring -2) rootkit so it can basically do anything you can think of. – Willem Jun 15 '16 at 10:36
8

Binary blobs are code that you have to send to a device to make it work, but that you can't inspect or modify. They are more of a threat to your legal freedoms than your privacy. In the case of Intel chips, you can't write a free-software BIOS, because it must include the non-free binary blobs.

Binary blobs let chips outsource storage of code to software. The alternative would be to put that (still non-free) code on, say, built-in flash memory. Of course, an expensive CPU die is a horrible place to put flash memory, so if you can leave it out and just have the blobs sent when the chip needs them, you save some cash.

Note that a chip that did spring for the embedded storage could still do the exact same potentially-invasive things the chip with the blobs does. The issue is more a legal one, in that you're being forced to touch the blobs to use the chip, and that contaminates free software with the blobs' non-free licenses.
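On Linux you can get a concrete feel for how many such blobs your own system ships: the kernel uploads them from `/lib/firmware` when devices initialise. A sketch that inventories and hashes them, so a silently changed blob at least shows up as a digest change between package updates (the directory layout is standard; the helper name is mine):

```python
import hashlib
import os

def inventory_blobs(firmware_dir="/lib/firmware"):
    """Map each firmware blob (path relative to firmware_dir)
    to its SHA-256 digest."""
    inventory = {}
    for root, _dirs, files in os.walk(firmware_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # e.g. a dangling symlink
            inventory[os.path.relpath(path, firmware_dir)] = digest
    return inventory
```

Save the output once, diff it after each update, and you'll at least know *when* a blob changed, even if you can never know *what* changed inside it.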

Reid Rankin
  • 1,062
  • 5
  • 10
  • 1
    They *could* be privacy threats, because you can't inspect what they do. That is, they *probably* don't send your information to the NSA, but how would you know? – user253751 Jun 06 '16 at 20:29
  • 1
    Every modern CPU uses a form of microcode. Whether it is stored in the chip or delivered by software at each power on (via binary blobs) makes no difference to whether it can be malicious. Binary blobs represent no extra threat. – Reid Rankin Jun 06 '16 at 20:41
  • To be truly safe, you need to use an open hardware processor. (Of course, those are much less powerful than ordinary CPUs.) – Reid Rankin Jun 06 '16 at 20:50
  • But then you have to trust your FPGA vendor (or silicon foundry). Better to make the whole thing out of relays you wound yourself with copper you dug out of the ground. (Although realistically, it's probably okay to assume metal sheets and wires you buy aren't bugged). – user253751 Jun 06 '16 at 21:14
  • 1
    While I'd argue that a meaningfully malicious FPGA would be really hard-or impossible-to produce, that doesn't change my point that using binary blobs is no more dangerous than trusting your CPU vendor in the first place. – Reid Rankin Jun 06 '16 at 22:53
  • Is a flash die on a multi-chip package also "a horrible place to put flash memory"? – Damian Yerrick Jun 07 '16 at 19:34
  • Nope. (Thermal considerations notwithstanding). There will be an even greater threat though: malware in binary blobs on a chip with no storage cannot save state across power cycles. Built-in flash would make the blobs go away, but increase the capabilities of any malware. – Reid Rankin Jun 07 '16 at 20:06
3

I had just happened to watch 32c3 conference talks, and while I am far from being into hardware security at least two of those talks touch upon the topic of chipset binary blobs. Both might add to an answer.

This one by Joanna Rutkowska of the Qubes OS project discusses in some details Intel ME and SMM technologies and security concerns. She also discusses possible ways to access and analyze what the various chipsets are actually doing.

https://www.youtube.com/watch?v=H6bJ5b8Dgoc

The other is actually on hardware and chip design, by AMD chip designer David Kaplan, and he does touch upon signing code, and in the Q&A session towards the end mentions a couple of points of broader interest that would be relevant. For example, he mentions the fact that a public key is designed into the CPU to allow for signature checks. The whole session is interesting; Q&A starts around the last quarter; programmable pieces of on-CPU memory are discussed at 46:42, microcode updates are mentioned at 48:38.

https://www.youtube.com/watch?v=eDmv0sDB1Ak

raddaqii
  • 131
  • 4
1

Let's say the i5 contained some spy routines "collecting info" on you; it would still need to get the data out for it to pose any threat. The only practical way to get the data out would be over IP, which is easily detectable, analysable and blockable. From a more paranoid view, there might be RF signals leaking from the system, forged by the blob or not, which could be picked up by an antenna, but that is more of a James Bond-level than a Facebook-level privacy concern.
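To make the "easily detectable" claim concrete: if you capture traffic at the router (tcpdump on a mirror port, say - the capture itself is outside the scope of this snippet) and log when the machine is powered off, any packet sourced from it during an off interval is a red flag. A toy sketch of just the correlation step, with made-up example data:

```python
def traffic_while_off(observations, off_intervals):
    """Flag (timestamp, src) packet observations that fall inside any
    interval during which the machine was supposedly powered off."""
    return [
        (ts, src)
        for ts, src in observations
        if any(start <= ts <= end for start, end in off_intervals)
    ]

# Router log of packets from the suspect host; machine off from t=100 to t=200.
packets = [(50, "192.168.1.10"), (150, "192.168.1.10"), (250, "192.168.1.10")]
print(traffic_while_off(packets, [(100, 200)]))  # [(150, '192.168.1.10')]
```

The catch, as other answers note, is that the monitoring has to happen on a box *other* than the suspect machine, since firmware below the OS can hide traffic from the host itself.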

multia
  • 19
  • 2
  • 6
    Well yeah, it's over TCP/IP, port 16992 to be precise. It's not easily analysable though (SSL with keys known only to Intel), nor easily blockable (software firewalls on target computer won't help). Do you somehow imply that since the port is known there's no threat in such communications? – Dmitry Grigoryev Jun 07 '16 at 15:16
  • I agree that you can't read the SSL contents without the keys. By analysable I meant seeing _if_ it gets an IP, _if_ it makes a connection and _to what_. It needs to do this if it wants to get out data (on its own initiative) onto the internet. If we're worried about it waiting quietly for some magic packet similar to WOL to open the backdoor, that would technically be possible, but for that I think the threat has to come from within the same network. – multia Jun 10 '16 at 08:57
1

Since I can't comment yet, here's another answer on the subject:

Lights-out management with Intel Active Management Technology on Intel vPro processors, which is

capable of remotely controlling and modifying virtually all aspects of the system, including the ability to download and update software and firmware regardless of the computer's power state.

Do consumer computers include components that can connect to the internet when the computer is apparently turned off?