40

We know that Intel processors have one (the ME), so it is definitely possible.

In general, how could you even trust a piece of hardware like a CPU or network card, even if the manufacturer says there is no backdoor? For software it is easy, since you can access the source code, verify it, and compile it yourself. But for hardware?

kelalaka
  • 5,409
  • 4
  • 24
  • 47
MasterYi
  • 403
  • 3
  • 4
  • 31
    As for verifying software being easy just because you have access to the source code, the [Ken Thompson Hack](https://wiki.c2.com/?TheKenThompsonHack) points out that even that has worrisome corner cases. Especially in languages like C++ where it is unreasonable to start developing a compiler from scratch. – Cort Ammon Nov 26 '20 at 06:37
  • 2
    "We know that Intel processors have one" Bold assertion without actual proofs, lovely. – vaultah Nov 26 '20 at 15:21
  • 16
    @vaultah is the linked intel management engine not a backdoor? – theonlygusti Nov 26 '20 at 15:37
  • 9
    The bad news: You can't. The good news: Hardware is pretty much never the weakest link in a system. Other bad news: Humans set a really low bar for "weakest link". Also: https://xkcd.com/538/ – Extrarius Nov 26 '20 at 17:09
  • 1
    @theonlygusti maybe we'll get a definitive answer someday. – vaultah Nov 26 '20 at 18:02
  • 4
    For software it is easy? Yeah, in theory. In practice though, nobody is going to read all the code before they compile it, every single time. Instead, we all download and install packages and trust the supply chain. – reed Nov 27 '20 at 00:52
  • 1
    @theonlygusti: it's definitely not a backdoor, since its credentials can be configured by the user. Unless you also consider the OS's login prompt a "backdoor". – Martin Argerami Nov 27 '20 at 15:31
  • Real men have fabs! – fraxinus Nov 27 '20 at 15:49

5 Answers

40

The short answer is, you can't. The longer answer: there are a few things that can be done to increase your trust in hardware, though they also just shift the root of trust elsewhere.

A first interesting question you pose is the software/hardware distinction. To avoid getting into the discussion about the possibly blurred boundary between the two, I'll take "hardware" to mean non-reconfigurable logic implemented in some physical device (i.e. I'll exclude firmware such as the Intel ME or microcode).

Backdoors can be inserted into a physical device in a number of stages: from conceptual architecture, through logic design, up to fabrication. To ensure no backdoors are inserted, you would need to validate the whole process from the end product to the beginning.

The good news is that the initial stages are very similar to software; in fact, logic is usually designed using hardware description languages (HDLs). These designs can be audited the same way software can. The step from here to fabrication involves multiple conversions, e.g. to lithography masks using synthesis software, in a similar way to how software is compiled by a compiler. These tools, too, can be audited just like a compiler. (As a tangent, the bootstrapping problem, where you consider the possibility that the compiler compiling your compiler is itself untrustworthy, is a really interesting one.)
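
To make the "audit it like a compiler" analogy concrete, here is a minimal sketch of the comparison step behind "diverse double-compiling" style checks: produce the same artifact (a netlist, or a compiler binary) with two independently sourced toolchains from identical inputs, and compare the outputs bit for bit. The file names are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifacts: the same netlist (or compiler binary) produced
# from identical inputs by two independently sourced toolchains.
a = sha256_of("netlist_toolchain_a.bin")
b = sha256_of("netlist_toolchain_b.bin")

# If both toolchains are deterministic and honest, the outputs match;
# a mismatch means at least one of them altered the output.
print("match" if a == b else "MISMATCH: audit both toolchains")
```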

So this leaves the last step: fabrication. Validation at this stage is usually done both by inspecting the fabrication process, and by randomly sampling devices from the same production batch (produced using the same lithography masks). For instance, the masks used can be compared to a validated trustworthy copy to ensure that no backdoors are inserted at this stage. Similarly, randomly sampled devices can be delayered and inspected under an electron microscope.
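
The sampling step itself is simple; what matters is that the sample is drawn randomly, so nobody can predict which devices will be destroyed and inspected. A toy sketch (the serial numbers are made up):

```python
import random

# Hypothetical batch: serial numbers of devices produced from one mask set.
batch = [f"SN{n:06d}" for n in range(10_000)]

# Draw a uniform random sample for delayering and electron-microscope
# inspection. The draw must be random so that an attacker cannot
# predict which devices will be destroyed and inspected.
sample = random.sample(batch, k=20)
print(sample)
```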

However, as a consumer, these steps are usually not available to you. For most chip producers, this whole process involves a lot of closely kept trade secrets and isn't publicly documented. This is why there is a movement towards creating open-source hardware toolchains and HDL implementations of common logic modules and systems, though there are a number of problems here too.

Finally, as @knallfrosch correctly points out in the comments, backdoors may also be inserted after production: at a distributor, while the product is being shipped to a customer, or in place (cf. the evil maid attack). Examples of such practices by the NSA came to light through the Snowden affair. Tampering at this stage may range from hardware implants added to the device to edits to the circuit on the silicon die, e.g. using a Focused Ion Beam (FIB). Mitigations at this stage usually rely on such tampering leaving externally visible traces, which may be additionally enforced using e.g. tamper-evident packaging (something everyday users can do is the glitter-and-nail-polish technique). Furthermore, minute device-specific imperfections that are a side product of fabrication may be used to create so-called Physically Unclonable Functions (PUFs), which can be designed such that tampering will almost certainly alter or destroy the PUF and therefore be detectable.
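
To illustrate the PUF idea, here is a toy sketch of enrollment and verification by Hamming distance. Real schemes use error correction and helper data; `read_puf_response` is a hypothetical stand-in for the actual device interface.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def read_puf_response(challenge: bytes) -> bytes:
    """Hypothetical device interface returning the PUF's raw response.

    A real PUF is slightly noisy, so responses vary by a few bits
    between readings of the same device.
    """
    raise NotImplementedError("device-specific")

CHALLENGE = b"\x01\x02\x03\x04"
THRESHOLD = 10  # maximum tolerated noise, in bits (device-dependent)

def enroll() -> bytes:
    # Record the device's response once, in a trusted environment.
    return read_puf_response(CHALLENGE)

def verify(enrolled: bytes) -> bool:
    # Invasive tampering (delayering, FIB edits) disturbs the physical
    # structure the PUF depends on, pushing the distance past the
    # noise threshold and thus revealing the tampering.
    return hamming_distance(read_puf_response(CHALLENGE), enrolled) <= THRESHOLD
```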

plonk
  • 633
  • 4
  • 13
  • 1
    This is actually one of the areas where I would expect the EU to regulate heavily when they get around to looking at it. – Thorbjørn Ravn Andersen Nov 26 '20 at 09:22
  • 2
    @ThorbjørnRavnAndersen how so? What kind of regulation do you expect? – plonk Nov 26 '20 at 09:29
  • 4
    No undocumented functionality. Perhaps official specifications too. Forensic teams analyze samples. Heavy, heavy fining of those who try to circumvent it. GDPR shows the EU has a hammer large enough. – Thorbjørn Ravn Andersen Nov 26 '20 at 09:59
  • 4
    Backdoors can also be implemented after manufacturing. "The NSA routinely receives – or intercepts – routers, servers and other computer network devices being exported from the US before they are delivered to the international customers." https://www.theguardian.com/books/2014/may/12/glenn-greenwald-nsa-tampers-us-internet-routers-snowden – knallfrosch Nov 26 '20 at 12:33
  • 14
    "No undocumented functionality" implies "bug-free". Letting the legal system decide what's a bug and what's a backdoor is opening a whole new can of worms. – gronostaj Nov 26 '20 at 12:55
  • @knallfrosch This is a very good point! I'll update my answer to add that possibility. – plonk Nov 26 '20 at 14:15
  • I’m a digital design engineer working on the cellular transceiver for a very popular smartphone brand. It would be exceedingly difficult for me or a colleague to insert a meaningful backdoor. It would have to be a collaborative effort and you’d have to do it at a point where you can actually interpret (to receive commands) and alter the received data. Which is pretty much only done in software (though with the help of specialized DSP CPU cores). – Michael Nov 27 '20 at 07:47
  • @Michael It actually doesn't have to be that complicated. I remember one PoC where a capacitor was charged slightly when a rare instruction was executed. If you execute this rare instruction many times in a loop, the capacitor charges to a logic 1 level, which disables the check that prevents you from running kernel-mode instructions in user mode. They did this at the mask level, no source code. – user253751 Nov 27 '20 at 13:13
  • 1
    @user253751, but doesn't that imply that there needs to be some connection between that instruction, the cap, and the circuit that implements the kernel-mode check? So you'd at least need to know where all that is when fixing the mask, and possibly arrange that connection beforehand – ilkkachu Nov 27 '20 at 13:27
  • @ilkkachu Yes, they had to somehow find out where that stuff is, but I guess they could do it in a simulator or something – user253751 Nov 27 '20 at 13:28
  • @ThorbjørnRavnAndersen how does one remove undocumented functionality from software? – Tim Nov 27 '20 at 14:02
  • @Tim You don't. You scrutinize samples deeply using forensic techniques and make it very illegal and very fineable to do it. – Thorbjørn Ravn Andersen Nov 27 '20 at 14:04
  • @ThorbjørnRavnAndersen surely you’ve now just crippled society by making software development 10x more expensive, if every vendor has to forensically examine software. – Tim Nov 27 '20 at 14:07
  • @ThorbjørnRavnAndersen in fact, now I think about it, I’m not sure it’s even possible. If I document my software as never entering an infinite loop, I don’t think there’s a way for you to prove it doesn’t, is there? – Tim Nov 27 '20 at 14:08
  • @plonk No, so you need trusted factories and trusted software vendors. – Thorbjørn Ravn Andersen Nov 27 '20 at 14:56
  • @ThorbjørnRavnAndersen Can you produce a simple real-life tool such as a hammer and accompanying documentation in such a way that the hammer is without undocumented functionality? – Hagen von Eitzen Nov 27 '20 at 15:39
  • @HagenvonEitzen This is very much missing the point. GDPR shows that the EU has a hammer big enough to scare everybody - including US companies who usually don't care much about this - into compliance. The same could happen with computers in general. "If we discover a backdoor, any backdoor, in your product, we will fine you so much that you won't like it regardless of how big you are. Try us." – Thorbjørn Ravn Andersen Nov 29 '20 at 12:45
16

plonk's answer already outlines the technical options for increasing trust in your hardware, such as device inspection or open-source hardware.

However, at the end of the day, the thing is:

You need to trust someone.

Fundamentally, this is the same for any type of hardware (or service) you use: How do you know that the manufacturer or the installer of your door lock did not retain a key? How do you know that your physician does not secretly share your confidential health information with a shady company to get kickbacks?

The answer is: You cannot know for sure - but there are strong social and legal incentives for them to play by the rules.


So in the end, you can only try to choose trustworthy vendors. That's more of a social than a technical problem; some things you can look for are:

  • What are the vendor's incentives? Do they seem to want to build a sustainable business?
  • What laws is the vendor bound by? What do they have to lose if they break them?
  • What type of scrutiny are they subjected to? How likely is it for any backdoors to be found?

The last point specifically is where things like open-source hardware help:

Even if you cannot verify the hardware, someone else (or multiple someones) may, and the risk of being found out will help to keep a vendor honest.

sleske
  • 1,622
  • 12
  • 22
  • 7
    The level of "You need to trust someone" might vary though; if you are in a sensitive governmental agency, you might choose to use your hardware disconnected from the network in an RF-isolated room – Rsf Nov 26 '20 at 09:07
  • 1
    @Rsf: Yes, of course. But the hardware must still communicate _somehow_, and could leak data that way. So you still need to trust the people who develop the software, and the hardware - they could for example encode secret data in small dots in a printout. – sleske Nov 26 '20 at 14:40
  • 2
    In fact, [they *do* encode secret data in small dots in a printout](https://en.wikipedia.org/wiki/Machine_Identification_Code) – user253751 Nov 27 '20 at 13:13
  • " legal incentives for them to play by the rules." - well, does your CPU has a backdoor mandated by the government of the manufacturer? – Thorbjørn Ravn Andersen Nov 27 '20 at 14:05
6

It doesn't matter that much

Hardware backdoors are expensive. Very expensive. You need to influence complex supply chains and the people responsible for them. Each time a backdoor is used, it risks being discovered and published, becoming useless over time and damaging the costly reputations of people and companies. Each time, you have to weigh the benefit to the attacker against the cost of the attack.

Hardware is usually the most secure part of a system. The security track record of the software you typically have on your computer is frankly terrible. Even if you are an above-average user, chances are good that you'll have at least one critical vulnerability in your computer's software within a year.

Also, there is no way to have absolute trust in anything. As the old saying goes: the only secure computer is the one that is turned off! And disconnected from any power sources.

But that doesn't really answer your question: how can you trust it (somewhat)? There are actually a number of practical things that you can do as an interested user or developer.


Detecting hardware backdoors

In most realistic attack scenarios, a hardware backdoor will have to escalate to an intrusion at the OS level or communicate over a distance (network, radio, ...). You can put systems in place to detect both.

Intrusion detection systems can help you detect backdoors. This can be done both within the potentially compromised system and at the network level. Obviously you can only trust the former so far (read on for mitigations to that problem).
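
As a sketch of what the network-level variant can look like, assuming the third-party scapy package and a separate monitoring host that sees the suspect machine's traffic (e.g. via a mirror port): flag any packet to a destination outside an allowlist.

```python
# Requires the third-party `scapy` package and capture privileges.
# Run this on a separate monitoring machine (e.g. on a mirror port),
# not on the potentially compromised host itself.
from scapy.all import IP, sniff

ALLOWED_DSTS = {"192.0.2.10", "192.0.2.53"}  # hypothetical expected peers

def check(pkt):
    # Report any IP packet whose destination is not on the allowlist.
    if IP in pkt and pkt[IP].dst not in ALLOWED_DSTS:
        print(f"unexpected destination: {pkt[IP].src} -> {pkt[IP].dst}")

sniff(filter="ip", prn=check, store=False)
```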

Anti-virus and rootkit detection might offer some baseline protection if they're heuristic-based, but well-known products would most likely be evaded by an attacker capable of planting a hardware backdoor.

An SDR receiver lets you monitor the airwaves for anomalies if you have reason to suspect them. Easier said than done, but within the realm of possibility in a controlled environment.

[Image: an RTL-SDR receiver, which costs around $25]
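
As a rough illustration, here is a toy sketch using the third-party pyrtlsdr package: record a baseline of received power on one frequency, then flag readings well above it. The frequency and threshold are arbitrary placeholders.

```python
# Requires the third-party `pyrtlsdr` package and an RTL-SDR dongle.
import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.4e6
sdr.center_freq = 433.92e6  # hypothetical band to watch
sdr.gain = "auto"

def mean_power_db() -> float:
    # Average power of one block of IQ samples, in dB.
    samples = sdr.read_samples(256 * 1024)
    return 10 * np.log10(np.mean(np.abs(samples) ** 2))

# Establish a quiet baseline, then flag readings well above it.
baseline = np.median([mean_power_db() for _ in range(20)])
while True:
    p = mean_power_db()
    if p > baseline + 10:  # crude threshold, in dB
        print(f"power anomaly: {p:.1f} dB (baseline {baseline:.1f} dB)")
```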

It's worth noting that a sophisticated attacker will likely keep the detectable time window very small, so detection alone might not help you a lot.


Mitigating attacks

Using open-source hardware is the easiest thing you can do. There are a number of vendors selling completely open systems, including the BIOS and peripheral firmware, or shipping with Intel's Management Engine disabled, for example.

A well configured hardware firewall can prevent undesirable communication with the outside world.

Of course, the firewall itself can be compromised. Stacking several firewalls/IDS with different hardware, software and OS makes things more expensive for your attacker (and you!)

If your attacker is very sophisticated they might decide to build a mesh network to bypass your firewall entirely, so it might be a good idea to consider what other things your computer can communicate with on the same network.

Unplugging the network might not be a bad idea, actually. While you are at it, do the same to any plugged-in USB cables that might have embedded radio transmitters, including your keyboard. Or maybe just put your computer into a large safe; that could be safer and almost as convenient.

If you really have enough time, money, and willpower to do the whole computer-inside-a-safe thing, you're a real pain for your attacker. They will avoid the network altogether and transmit data from your computer using sounds you can't hear, by heating up your CPU, or by turning your motherboard wires into a GSM antenna.

Perhaps you should consider using a typewriter instead? :)

loopbackbee
  • 5,308
  • 2
  • 21
  • 22
3

I've got a computer sitting on my bookshelf that has no possibility of having a backdoor.

The reason for this is quite simple. No remotely plausible backdoor design could know how I wired up the IO ports. My assembly code drives all IO manually including the IO clock, which is just another data pin as far as the CPU is concerned.
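
For readers unfamiliar with the technique: driving IO "manually" (bit-banging) means toggling ordinary output pins in software instead of using a dedicated peripheral. Here is a toy Python sketch of the idea; the `set_pin` helper and pin numbers are hypothetical stand-ins (the original is assembly against custom wiring).

```python
DATA_PIN, CLOCK_PIN = 4, 5  # hypothetical pin assignments

def set_pin(pin: int, level: int) -> None:
    """Hypothetical stand-in for writing a GPIO output register."""
    raise NotImplementedError("hardware-specific")

def send_byte(value: int) -> None:
    # Shift the byte out MSB first. The clock is just another output
    # pin that we toggle ourselves, so no built-in peripheral (and no
    # backdoor expecting one) is involved in the transfer.
    for bit in range(7, -1, -1):
        set_pin(DATA_PIN, (value >> bit) & 1)
        set_pin(CLOCK_PIN, 1)  # receiver samples on the rising edge
        set_pin(CLOCK_PIN, 0)
```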

In addition, had there been a backdoor looking for an SD card, I would have noticed because I debugged my SD assembly with LEDs on the SD pins.

Having a machine that has no backdoor can be used to verify that another machine doesn't have one. I could bootstrap a compiler using this machine as the origin, use that to cross-check something with an Ethernet port, and so on, laddering up until I could check current gigabit Ethernet.

In the case of the Intel ME, it was quite defeatable for the simple reason that it couldn't work if you added a PCI card with an Ethernet controller it didn't have a driver for (which was pretty much any non-Intel card).

But not-trusting is stupidly expensive, and you probably can't afford it. Intel burned a lot of trust with the ME (there should have been a simple on/off switch in the BIOS settings that really worked), and it's going to be hard to win back.

Joshua
  • 1,090
  • 7
  • 11
  • But where did you write the assembly code? :) (nice answer!) – loopbackbee Nov 27 '20 at 02:50
  • 3
    @goncalopp: If a backdoor can determine that I'm writing an OS for a processor it's never seen before, and outsmart me to fit a backdoor on my custom OS into its space constraint (imposed by the hardware architecture--the instruction space for the OS was strictly limited and I filled it to within 5 machine words verified by hexdump), then the backdoor is AI-complete and deserves to win. – Joshua Nov 27 '20 at 03:25
  • So that solves it for you. Does this scale? – Thorbjørn Ravn Andersen Nov 27 '20 at 15:00
  • @ThorbjørnRavnAndersen: Only if you're willing to assume an entire line is clean when a single instance of that line is checked. – Joshua Nov 27 '20 at 16:11
  • 2
    @joshua random samples and the power of statistics. – Thorbjørn Ravn Andersen Nov 27 '20 at 16:19
  • @Joshua I see how you can be certain your bookshelf computer has no backdoor _that communicates_, and I understand how you could trust that computer to create a brand-new OS and processor architecture without inserting a backdoor, but I don't see how you can trust it to verify an existing OS and processor architecture. Why couldn't there be a backdoor waiting to feed you false information about that? – ash Nov 28 '20 at 19:57
  • @ash: Because once having a computer with no working backdoor, we can grab the bus of a suspect computer and check. An effective backdoor has to be using bus cycles to do anything. It will take several ladder-ups to reach modern hardware, which is why I said it's too expensive. – Joshua Nov 28 '20 at 21:07
  • @Joshua: I don't understand why you think that secret wiring of ports protects you from back doors. A back door can disclose your password or any other information including port wiring via many ways, e.g. by specific time delays in the outbound network traffic, or by some effects in video card or in sound card. – mentallurg Nov 29 '20 at 20:40
  • @mentallurg: Play the game from the backdoor author's perspective. You probably tapped into the standard IO devices. I simply didn't use them and added all the IO devices I'm actually using to the IO pins however I felt like. You haven't seen this OS, so your booby-trap-the-OS routine simply never fires. – Joshua Nov 29 '20 at 21:03
  • @Joshua: I don't know where YOU are tapped :) If you use the network, your network card DOES send data to the network. Even if there are no deviations in the data itself, the timing CAN reveal information. The OS does *not* control what the hardware (CPU, network card) is *actually* doing. I don't know why you don't understand this :) – mentallurg Nov 29 '20 at 21:10
  • @mentallurg: I tied the wires directly to IO pins on the CPU with a soldering iron. – Joshua Nov 29 '20 at 23:24
  • @Joshua: What does it have to do with protection from back doors? Is your computer connected to a network or not? If connected, your CPU may generate traffic that your OS does not control. You cannot prevent it. The only way is to disconnect your computer from the network. – mentallurg Nov 30 '20 at 00:01
-1

In general, how could you even trust a piece of hardware like a CPU or network card [...]?

A backdoor is pointless unless it produces some effect noticeable from outside the computer. For example (and most likely), it communicates information about the system over an internet connection. So one pragmatic check is to use another, separate piece of hardware to monitor the inputs and outputs of the system: for example, check that all the packets leaving the original hardware are as expected, and that there are no extra packets.
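
As a sketch of that check, assuming the third-party dpkt package and a capture file recorded on the separate monitoring machine (e.g. with tcpdump): list every destination that is not on the expected list.

```python
# Requires the third-party `dpkt` package. Offline variant of the check:
# capture on the separate monitoring machine, then summarize the traffic.
import socket

import dpkt

EXPECTED = {"192.0.2.10", "192.0.2.53"}  # hypothetical known-good peers

unexpected = set()
with open("capture.pcap", "rb") as f:  # hypothetical capture file
    for ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if isinstance(eth.data, dpkt.ip.IP):
            dst = socket.inet_ntoa(eth.data.dst)
            if dst not in EXPECTED:
                unexpected.add(dst)

print("unexpected destinations:", sorted(unexpected))
```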

Now the concern is that the second computer is also compromised to hide this information. But you can increase your confidence by making it as different as possible from the original hardware: different age, different manufacturers of the components like the processor, created and purchased in a different country. The level of conspiracy needed for a backdoor coordinated between these products is quite a bit higher.

One wrinkle is that if you are not running open source software, then you may not even be able to know/check what the expected packets would be (for example DRM with some encryption key you can't access).

The other concern is that the system is communicating in a way you aren't monitoring. For example, you don't think it has a wifi connection but it does; or it uses high-pitched sound waves that you can't hear; or it waits for a USB stick to be plugged in and hides information on it; or something else. You'd want to make a list of such possibilities and check for them.

I'm not saying this is easy or accessible to the average person, but the question reads as more theoretical to me.

usul
  • 657
  • 4
  • 6
  • 1
    A back door does not need to be active to be there. It can be passive, and only triggered in the specific case where it's needed, just like a real back door doesn't always need to be opened. – plonk Nov 27 '20 at 23:23