
The paper *Trustworthy Execution on Mobile Devices: What Security Properties Can My Mobile Platform Give Me?* describes five desired security features for mobile devices:

  1. Isolated Execution
  2. Secure Storage
  3. Remote Attestation
  4. Secure Provisioning
  5. Trusted Path

If I understand correctly, one cannot design a system that leverages TrustZone's bare capabilities alone to provide these features.

My question is:
What extra features are required (for a system that uses TrustZone) to provide these security features?


My current understanding is as follows (Beware: There are many mistakes here, as @Gilles pointed out):

  • Secure Boot is a must.
    I.e., upon boot, the first code to run on the device must be trusted (e.g., burned into a ROM chip by the manufacturer), and then each piece of code is verified before it is run, at least until control is transferred to normal world.
  • Combining Secure Boot and TrustZone is sufficient to provide Isolated Execution and Trusted Path.
  • Isolated Execution and Trusted Path can be leveraged to provide a Secure Storage that is based on a secret known only to the user, e.g.:
    • A trusted app in secure world prompts the user for a secret password.
    • The app derives a symmetric key from the password, and removes the password from memory.
    • The app uses the key to encrypt some secret data (or other keys) and stores it on disk.
    • The app removes the key (that was derived from the password) from memory.
    • Later, when the user wishes to access their secret data, the trusted app prompts the user for the password, and then uses it to derive the key and decrypt the secret data.
  • In order to provide Remote Attestation, as well as Secure Storage that is based on sealing data to this specific device, a feature that allows only trusted* secure world code to access a device-unique key is required, e.g., Samsung Knox's Device-Unique Hardware Key (DUHK). (This is also discussed in this answer.)
    * 'trusted' might mean that this code runs after a Secure Boot. E.g., the Knox whitepaper explains that in Samsung Knox devices the DUHK becomes inaccessible if "the device has ever been booted into an unapproved state".
  • Finally, Remote Attestation can be leveraged to provide Secure Provisioning, e.g.:
    • A trusted app in secure world generates an asymmetric key pair.
    • The app creates a remote attestation that includes a signature of the public key.
    • The app sends the attested signature alongside the public key to the remote machine that holds the data that is to be provisioned only to a trusted app in secure world.
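The password-based secure-storage flow sketched in the bullets above can be illustrated as follows. This is a toy sketch, not the API of any real TEE; the iteration count is an illustrative assumption, and the actual encryption step is only indicated in a comment because Python's standard library has no AEAD cipher:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # Stretch the user's password into a 256-bit symmetric key.
    # 200_000 PBKDF2 iterations is an illustrative cost parameter.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# On first use: prompt for the password, derive the key.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# ... a real trusted app would now encrypt the secret data under `key`
# with an AEAD cipher such as AES-GCM, store the ciphertext and `salt`
# on disk, then erase the password and `key` from memory.

# On later access: re-prompt for the password; the same salt yields
# the same key, which decrypts the stored data.
assert derive_key("correct horse battery staple", salt) == key
```

Note that the salt can be stored in the clear alongside the ciphertext; only the password must remain secret.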
Oren Milman

1 Answer


Strictly speaking, you can't do anything with TrustZone itself. TrustZone itself is an isolation feature of the CPU core. It adds another level of isolation between processes: in addition to user/kernel or user/kernel/hypervisor, there are normal-user, normal-kernel, normal-hypervisor, secure-user and secure-kernel. But since normal-kernel mode (or normal-hypervisor mode, if a hypervisor is in use) can access the whole RAM, there's no place to put code and data out of reach of the normal world. To do something useful with TrustZone, you need not only a core with TZ (which is the case for all Cortex-A CPUs) but also a TrustZone-aware memory controller on the CPU (which some manufacturers omit). A TZ-aware memory controller can firewall memory regions, blocking a region from being accessed while the CPU is in normal mode. In addition, the DMA controller should also be TrustZone-aware, otherwise the normal world may be able to circumvent memory protections by leveraging a peripheral (e.g. set up DMA between memory used by the secure world and the flash controller, then read the content back from flash).
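As a rough illustration, the check a TZ-aware memory controller performs looks like this. This is a toy model: the region layout is made up, and the `secure` flag stands in for the NS bit carried on the bus:

```python
# Hypothetical secure-world memory layout (start, end) pairs.
SECURE_REGIONS = [(0x1000_0000, 0x1010_0000)]

def access_allowed(addr: int, secure: bool) -> bool:
    # An access to a secure region is permitted only when the bus
    # master asserts the secure signal; everything else is open.
    in_secure_region = any(lo <= addr < hi for lo, hi in SECURE_REGIONS)
    return secure or not in_secure_region

assert access_allowed(0x1000_0000, secure=True)       # secure world: OK
assert not access_allowed(0x1000_0000, secure=False)  # normal world: blocked
assert access_allowed(0x2000_0000, secure=False)      # ordinary RAM: OK
```

The point of the DMA caveat in the paragraph above is that this check must be applied to *every* bus master, not just the CPU: a DMA engine that doesn't carry the secure/non-secure distinction bypasses the firewall entirely.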

If your CPU includes a TZ-aware memory controller, it supports isolated execution. However, this alone is of limited use because without other features, the isolated execution environment only runs whatever code was there when the isolation was set up, and can't keep anything past a reboot.

Secure boot, where each stage of the boot process verifies the signature of the next stage before allowing it to run in the secure world, allows whoever controls the ROM to control what runs in the secure world. All Cortex-A processors boot from ROM (as in, actual read-only memory, not flash or EEPROM).
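The chain-of-trust structure can be sketched as below. This toy model pins the hash of each next stage; real secure boot verifies an RSA or ECDSA signature against a public key (whose hash is what's actually baked into ROM or fuses), but the verify-before-run chaining is the same:

```python
import hashlib

# Hypothetical boot stage images.
stage2 = b"second-stage bootloader image"
stage3 = b"secure-world OS image"

# Baked into each stage at build time (analogous to the ROM holding
# the hash of the vendor's signing key).
EXPECTED = {
    "stage2": hashlib.sha256(stage2).hexdigest(),
    "stage3": hashlib.sha256(stage3).hexdigest(),
}

def verify_next_stage(name: str, image: bytes) -> bool:
    # Each stage refuses to transfer control to an image that doesn't
    # match what it was built to expect.
    return hashlib.sha256(image).hexdigest() == EXPECTED[name]

assert verify_next_stage("stage2", stage2)
assert not verify_next_stage("stage3", b"tampered image")
```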

This is still of limited use because you have a secure environment… to do what? If the only privilege that the secure environment has is access to a particular region in RAM, that doesn't do anything useful.

CPUs that leverage TrustZone additionally have a secret key that's accessible only to the secure world. See Does the ARM TrustZone technology support sealing a private key under a code hash? and Secure keys in hardware. With secure boot (for code integrity) plus isolated execution plus one bootstrapping privilege (access to a secret key), the software running in the secure world can implement features such as remote attestation and a limited form of secure storage.
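The "one bootstrapping privilege" pattern typically means a single device-unique root secret from which purpose-specific keys are derived. A sketch (the secret value and the purpose labels are placeholders; real implementations use a proper KDF such as HKDF, which this HMAC call approximates):

```python
import hashlib
import hmac

# Placeholder for the device-unique secret; on real hardware this is
# readable only by secure-world code (or only by a key-derivation engine).
DEVICE_SECRET = bytes.fromhex("00" * 32)

def derive_subkey(purpose: bytes) -> bytes:
    # HKDF-style domain separation: one root secret, many
    # purpose-bound keys that cannot be computed from each other.
    return hmac.new(DEVICE_SECRET, purpose, hashlib.sha256).digest()

storage_key = derive_subkey(b"secure-storage-v1")
attestation_key = derive_subkey(b"attestation-v1")
assert storage_key != attestation_key
```

Deriving per-purpose keys rather than using the root secret directly means a flaw in one feature doesn't expose the keys of another.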

The secure world can store data persistently by encrypting and authenticating it and letting the normal world take care of storing the encrypted blobs. This guarantees the confidentiality and authenticity of the data, but not its availability or freshness: the normal world can still delete the data, and can roll it back to an older version. To prevent rollback, the secure world needs at least a small amount of rollback-protected storage. In practice, on mobile devices, this is done with the cooperation of the flash controller. The flash controller dedicates a partition to the secure world, known as the replay-protected memory block (RPMB), and uses the RPMB protocol to communicate with the secure world on the main CPU, using a shared key that is established when the device is manufactured. Secure storage doesn't need to involve the device user at all: the secure world already has a secret key, so it doesn't need to derive one from a password, which would be weaker (typical passwords, even with strengthening applied, are weaker than a run-of-the-mill 128-bit secret key).
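A sketch of the seal/unseal scheme with rollback detection. For brevity this only authenticates (a real implementation would also encrypt the data), and the expected counter is passed in as a parameter where a real secure world would read it from RPMB:

```python
import hashlib
import hmac

# Placeholder storage key, assumed derived from the device secret.
STORAGE_KEY = b"\x01" * 32

def seal(counter: int, data: bytes) -> dict:
    # Bind the blob to a monotonic counter before handing it to the
    # normal world for storage; the MAC covers counter and data.
    body = counter.to_bytes(8, "big") + data
    tag = hmac.new(STORAGE_KEY, body, hashlib.sha256).hexdigest()
    return {"counter": counter, "data": data.hex(), "tag": tag}

def unseal(blob: dict, expected_counter: int) -> bytes:
    body = blob["counter"].to_bytes(8, "big") + bytes.fromhex(blob["data"])
    tag = hmac.new(STORAGE_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, blob["tag"]):
        raise ValueError("blob has been tampered with")
    if blob["counter"] != expected_counter:
        # An old-but-validly-MAC'd blob: rollback attempt.
        raise ValueError("rollback detected")
    return bytes.fromhex(blob["data"])

blob = seal(7, b"secret")
assert unseal(blob, 7) == b"secret"
```

Without the counter check, the normal world could silently substitute any previously stored blob, since every old blob carries a valid MAC.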

“Trusted path” in this context refers to an isolated channel between the CPU and a peripheral. A trusted path needs more than TrustZone support inside the CPU: the peripheral must also support a secure mode in which it obeys only the secure world, and the bus(es) in between must isolate the communication. The most widely deployed kind of trusted path on mobile devices is with the display subsystem, to apply DRM to video rendering. Providing a trusted path for user input and display is technically difficult and not widely deployed.

Remote attestation is not needed for secure provisioning: a server can send data encrypted with a key that only a specific device can decrypt, it doesn't need to receive an attestation from the device. It might want to receive an attestation, either because that's how it discovers the device's public key or because it wants additional content in the attestation (e.g. knowing what software versions it's running).

The big thing you left out is: who controls all of these features? Ultimate control belongs to whoever decides who signs the code that the ROM is willing to load. That's usually the chip manufacturer, who typically allows only code from the device manufacturer. On mobile phones and tablets, the end user typically has no access to the secure world.

Gilles 'SO- stop being evil'
  • So interesting! To get all this working, how many different keys are baked into a device and where do they reside? I counted a total of 3 keys. 2 secret symmetric keys: (1) _TrustZone/secure world key_, (2) _RPMB key_ and 1 public key: (3) _secure boot key_ (located in ROM). There is no way for any of these keys to be changed, correct? – el_tigro Feb 10 '19 at 03:12
  • Sounds like it's impossible to establish a Trusted Execution Environment if either the memory controller or the DMA controller is not TrustZone-aware. Does [StrongBox](https://proandroiddev.com/android-keystore-what-is-the-difference-between-strongbox-and-hardware-backed-keys-4c276ea78fd0) ([more info](https://developer.android.com/training/articles/keystore#HardwareSecurityModule)) solve this problem by having the secure world run on a Secure Element (e.g Titan M chip)? – el_tigro Feb 10 '19 at 03:48
  • Thanks a lot for the elaborate answer! With regard to "Secure storage doesn't need to involve the device user at all": Isn't some secret known only to the user required to protect the data in case the device is stolen? – Oren Milman Feb 10 '19 at 06:29
  • @catanman Typically there are two things in write-once memory: a 128-bit or 256-bit secret from which all secret keys used by the secure world are derived, and a 128-bit or 256-bit hash of a public key or certificate such that the first stage of secure boot must be signed by a key with this hash. – Gilles 'SO- stop being evil' Feb 10 '19 at 11:15
  • @catanman A secure element is a physically separate execution environment (as in a different logical chip; it can be in the same piece of silicon). It has its own CPU, RAM and flash, so logical isolation isn't a problem. This makes it more secure for what it does. The advantage of TZ is that it uses the main CPU, so it gets more performance, faster communication with the normal world, and the ability to run larger amounts of code (e.g. fingerprint processing, facial recognition, …). – Gilles 'SO- stop being evil' Feb 10 '19 at 11:19
  • @OrenMilman If there's a secret known to the user, it's used to unlock the storage encryption key, never _as_ the storage encryption key. And most mobile devices don't have a trusted path to user input, and even if they do phone manufacturers usually want users to be able to use their device without entering a long password (non-reproducible input such as fingerprints and low-entropy inputs such as PIN or unlock pattern don't help). – Gilles 'SO- stop being evil' Feb 10 '19 at 11:22
  • @Gilles so the RPMB key that is established when the device is manufactured is derived from the root key that is stored in write-once memory (and is also established when the device is manufactured)? – el_tigro Feb 10 '19 at 18:53
  • 1
    @catanman Yes. The secure world derives a key and the flash controller stores it. – Gilles 'SO- stop being evil' Feb 10 '19 at 21:25
  • @Gilles Regarding RPMB, if I understand your answers above as well as the answers you provided in the links you shared: The device manufacturer's (e.g. Samsung) firmware uses the secure world to obtain a RPMB key which is derived from the root key (aka HUK - Hardware Unique Key). The HUK itself is built into the SoC (e.g. Snapdragon) and is stored in write-once only... – el_tigro Feb 12 '19 at 17:09
  • ...The device manufacturer's firmware then stores the derived RPMB key on the eMMC flash controller (e.g. Micron) in one-time programmable memory (i.e. the RPMB can't be changed). This process only happens once at the device manufacturer's factory. Is this correct? (I also got some information from [here](https://events.linuxfoundation.org/wp-content/uploads/2017/12/Implement-Android-Tamper-Resistant-Secure-Storage-Bing-Zhu_and-Secure-it-in-Virtualization-Bing-Zhu-Intel-Corporation.pdf)) – el_tigro Feb 12 '19 at 17:09
  • 1
@catanman That's correct, yes, although the details may vary slightly depending on the manufacturer. – Gilles 'SO- stop being evil' Feb 12 '19 at 18:08