
SoCs have begun integrating a hardware root of trust to mitigate attacks on Secure Boot; examples include Google's OpenTitan and Intel PFR. What threats do discrete "Secure Enclave"-style root-of-trust solutions address? What are their benefits over Secure Boot from ROM?

  • This looks like a homework question. Is it? – schroeder Jun 20 '20 at 20:31
  • I googled "root of trust vs Secure Boot from ROM" and this was one of the top hits: https://uefi.org/sites/default/files/resources/UEFI%20RoT%20white%20paper_Final%208%208%2016%20(003).pdf – schroeder Jun 20 '20 at 20:38
  • Thanks for the comments. It's not a homework question. Of course I've read this already. IMHO it's a bit "hand-wavy", making broad statements like: "There's little doubt that a hardware-based root of trust combined with the chain of trust process used in UEFI Secure Boot (or a similar approach) is the best way to ensure system security". – Indranil Banerjee Jun 20 '20 at 21:38
  • "Of course I read X..." --- there is no possible way for us to know what you have read and what you haven't. It would be helpful if you included what you ***do*** understand about what you've asked, because as it stands, the question reads like a question someone else asked you and you have no foundational knowledge (like it's a homework question). It also didn't help when one of your links was unrelated to the subject. – schroeder Jun 21 '20 at 07:24

1 Answer


TL;DR: It lets manufacturers implement Secure Boot in a system where only the code that absolutely must be read-only is non-patchable, and it protects them from their own mistakes: instead of attempting to implement the basis of the root of trust themselves, they get silicon created and reviewed by experts in a transparent way.


The core concept of any Secure Boot system is that the system can only be started if the firmware verifies that the next step in the boot process (typically a bootloader) is trusted (typically, this means "signed using a specific key, or a key issued by a trusted authority"). Each subsequent step then verifies the one that comes after it. However, this presents a complication: where do you store the firmware? If you put it in writable flash memory, an attacker can overwrite the firmware with one that does not verify the rest of the chain, and bypass Secure Boot entirely. If you store it in read-only memory, then there's no way to update the firmware without tearing apart the hardware, and firmware can have bugs or need changes just like any other software.
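The chain-of-trust idea can be sketched in a few lines. This is a toy model, not a real implementation: each stage here pins a SHA-256 digest of the image it will launch, standing in for the signature check a real system would perform, and all stage names and images are invented for illustration.

```python
import hashlib

def digest(image: bytes) -> str:
    """Measure a boot image (stand-in for a real signature check)."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical boot images.
bootloader = b"bootloader v1"
kernel = b"kernel v5.4"

# Each stage records the expected measurement of the stage it launches.
rom_stage = {"name": "ROM", "next_image": bootloader, "expected": digest(bootloader)}
bl_stage = {"name": "bootloader", "next_image": kernel, "expected": digest(kernel)}

def boot(chain):
    """Walk the chain; refuse to proceed if any stage fails verification."""
    for stage in chain:
        if digest(stage["next_image"]) != stage["expected"]:
            return f"{stage['name']}: verification failed, halting"
    return "boot complete"

print(boot([rom_stage, bl_stage]))  # boot complete

# An attacker overwriting the bootloader in writable flash breaks the chain:
rom_stage["next_image"] = b"evil bootloader"
print(boot([rom_stage, bl_stage]))  # ROM: verification failed, halting
```

The sketch also shows why the first link matters: if the ROM stage itself were replaceable, the attacker would simply swap in a `boot` that skips the comparison.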

The stuff you linked is an attempt to create a standard way to store a public key and a module which verifies that the device's firmware is signed by the holder of the corresponding private key (or by a key chaining to it), and which mediates access to that firmware. This allows you to create a device-agnostic chip (or chip module) that supports the fundamental baseline requirement of Secure Boot with the absolute minimum required functionality. This provides a minimal attack surface against the part of the system that can't be updated, reducing the risk of it containing a vulnerability.
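To make "absolute minimum required functionality" concrete, here is a sketch of what that immutable module's one job looks like. Assumptions are labeled: a real RoT pins a public key and verifies an asymmetric signature, but to keep this example stdlib-only, HMAC with a fixed key stands in for that check, and all names (`ROT_KEY`, `rot_release_from_reset`) are invented for illustration.

```python
import hmac
import hashlib

# Stand-in for the public key pinned in silicon (a real RoT would hold an
# asymmetric public key, not a shared secret).
ROT_KEY = b"burned-into-silicon"

def vendor_sign(firmware: bytes) -> bytes:
    """What the vendor's signing step would produce (HMAC as a stand-in)."""
    return hmac.new(ROT_KEY, firmware, hashlib.sha256).digest()

def rot_release_from_reset(firmware: bytes, signature: bytes) -> bool:
    """The immutable module's only job: verify the firmware's signature
    and gate execution on the result. Everything past this stays updatable."""
    return hmac.compare_digest(vendor_sign(firmware), signature)

fw = b"firmware v2.1"
sig = vendor_sign(fw)
print(rot_release_from_reset(fw, sig))          # True: genuine firmware boots
print(rot_release_from_reset(b"tampered", sig)) # False: modified firmware rejected
```

The point of the design is visible in how little code sits inside the function: the smaller the unpatchable part, the smaller the attack surface that can never be fixed in the field.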

It also means manufacturers, developers, and users can see what the root of trust is doing and how it works. Secure Boot has a major problem that gets at the very basis of what "security" means: preventing unauthorized access or operation. But, authorized by whom? For some systems like PCs, it's the owner; if I want to use Secure Boot on my laptop, I can control what signing keys are permitted. For others (phones, consoles, etc.) the manufacturer asserts that the owner does not actually get to decide what constitutes authorized use, and thus historically has tried to make the whole root of trust as opaque and user-inaccessible as possible, and limited outside review. Sometimes this backfires horribly, as with the PS3 and fail0verflow, where Sony's poorly-implemented trust root relied on security through obscurity and was easily broken once people figured out how it worked. If a system like the ones above had been available back then, Sony and other companies could have had a root of trust developed and reviewed by people who actually knew what they were doing. The resulting product would have been both more secure (for the company, in that case, rather than for the users) and easier to update to fix the various other firmware bugs.

CBHacking