
I was in a workshop about privacy recently, and at some point a passionate debate started about Intel's Software Guard Extensions (SGX). Although I have a security background (a Master's in Information Security), I find it very difficult to understand exactly how SGX works. But I understand that it is positioned as an alternative to homomorphic encryption, since it can process data securely far faster than homomorphic encryption does.

At that workshop some people made the argument that there are no guarantees about privacy when it comes to SGX and that, by using SGX, you basically have to consider Intel a trusted third party.

My question is: What are the concerns, or drawbacks, regarding Intel's SGX when it comes to privacy?

Aventinus
  • I wouldn't necessarily say SGX is an alternative to homomorphic encryption. SGX requires that you trust Intel and is ultimately a hardware solution to a problem. Homomorphic encryption requires that you trust math, and is a software solution. – mikeazo Sep 27 '17 at 14:46
  • Here is a good resource: https://www.youtube.com/watch?v=0ZVFy4Qsryc – mikeazo Oct 06 '17 at 17:13
  • When using an Intel processor, you _always_ have to trust them, regardless of whether or not you are making use of SGX. – forest Dec 19 '17 at 04:42

1 Answer


Intel SGX does not really replace homomorphic encryption. It is designed to protect against compromise of one of the communicating computers by verifying that the other computer runs the correct, unmodified software, and by ensuring that any data the software saves can only be read by that unmodified software. You have to trust Intel to achieve this. This can be used, for example, to make sure that self-destructing messages are really deleted by the other party in the communication. Signal wants to use it to confirm that they don't keep user metadata and contact lists.
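To make the "verified software" idea concrete, here is a toy sketch of the attestation principle. The names are hypothetical and this is not the real SGX API: actual SGX remote attestation uses CPU-signed quotes checked against Intel's attestation service, not a plain hash comparison. The point is only that a secret is released exclusively to code whose measurement matches a known-good value.

```python
import hashlib
import hmac
from typing import Optional

# Known-good "measurement" of the unmodified software (hypothetical value).
EXPECTED_MEASUREMENT = hashlib.sha256(b"unmodified-enclave-binary").hexdigest()

def verify_and_release_secret(reported_binary: bytes, secret: bytes) -> Optional[bytes]:
    """Hand over the secret only if the code matches the expected measurement."""
    measurement = hashlib.sha256(reported_binary).hexdigest()
    if hmac.compare_digest(measurement, EXPECTED_MEASUREMENT):
        return secret   # verified, unmodified software gets the data
    return None         # modified software gets nothing
```

A modified binary produces a different measurement, so the verifier refuses to release the secret; this is the property that would let a sender trust that the receiving app really deletes a message.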

On the other hand, to protect the data itself, you can still add your own encryption as an inner layer, whether in transit or at rest.
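A minimal sketch of such an inner layer, using a one-time pad from the standard library purely for illustration (a real deployment would use an authenticated cipher such as AES-GCM):

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # One-time-pad XOR; encryption and decryption are the same operation.
    # Illustration only: use an authenticated cipher (e.g. AES-GCM) in practice.
    return bytes(k ^ b for k, b in zip(key, data))

message = b"sensitive user data"
key = secrets.token_bytes(len(message))  # stays with the data owner, never shared
ciphertext = xor_cipher(key, message)    # only this ever leaves your machine
plaintext = xor_cipher(key, ciphertext)  # recover the message locally
```

Because the key never leaves your control, neither SGX nor Intel ever sees the plaintext, regardless of how much you trust the hardware.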

As for privacy and trust in Intel, this is a moot point considering that the Intel Management Engine is effectively a black-box backdoor with full access to your computer, one that cannot be fully removed or disabled.

Peter Harmann
  • `This can be used to for example make sure, that self-destructing messages are really deleted by the other party in communication` If the data exists outside SGX then there is no way that SGX can ensure that it is deleted. – AndrolGenhald Apr 20 '18 at 15:17
  • @AndrolGenhald You obviously misunderstand how SGX over the network works. SGX adds a layer of encryption that can only be decrypted by an app running verified code. Therefore, it is possible to make self-destructing messages: the app just has to delete its own encryption key (separate from SGX), and the key cannot be extracted or retained, as the app is protected and verified by SGX. Yes, technically you can still store the ciphertext, but you can never get the key again. – Peter Harmann Apr 20 '18 at 16:27
  • I assumed we were talking about messages shown to the user, was that an incorrect assumption? If the plaintext ever leaves SGX then you can't guarantee it won't be copied. – AndrolGenhald Apr 20 '18 at 19:03
  • @AndrolGenhald No, I was talking about IM apps, such as Signal, that allow self-destructing messages. But Signal is open-source, so you could easily make a version that stores the messages despite their being supposed to be deleted, and the other user would not know. Of course, screenshots are still possible, but those would not really be trusted by anyone, as it would be very easy to fabricate them in MS Paint or Photoshop. Also, you could add countermeasures, and SGX would verify they are in place, making it even harder. – Peter Harmann Apr 21 '18 at 10:40
  • In that case your statement still sounds very misleading to me. The Signal protocol offers message repudiation, but that's very different from ensuring deletion, and that feature is unrelated to SGX anyway. – AndrolGenhald Apr 21 '18 at 17:09
  • I am sorry for the confusion; that was an example of what it could be used for. Signal is not currently doing so, and there are no plans for it as of now. – Peter Harmann Apr 22 '18 at 01:20
  • The IME is not a backdoor and is far less scary than many other components of a modern x86 chipset. It's only at risk of remote exploitation when you are using AMT (a remote management technology), which is only the case for certain enterprise servers, not consumer devices. Additionally, you can only communicate with it over privileged IO ports (e.g. using the HECI virtual device). And you _can_ disable it, either by killing the first 4 KiB block or by setting the HAP bit (or that other similar one whose name escapes me). – forest Dec 10 '18 at 11:06
  • @forest I was unable to find the article now, but I remember reading about a vulnerability in the IME that was in a part of the code that could not be disabled even by HAP. So there is that. Reading my answer after this amount of time, though, I guess the IME comment may not have been relevant here. – Peter Harmann Dec 10 '18 at 15:44
  • @PeterHarmann The vulnerability you're talking about requires physical access, or otherwise the ability to overwrite the ME region in the BIOS, in which case HAP is naturally irrelevant. It isn't a vulnerability in the ME's code, but a failure of the CPU to correctly verify the ME region on boot. – forest Dec 11 '18 at 03:09
  • @forest That is not the vulnerability I had in mind. I think this is the one: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5712 – Peter Harmann Dec 11 '18 at 14:58
  • @PeterHarmann Oh, that one only applies to enterprise server systems with provisioned AMT. And it most certainly can be killed by HAP, since the vulnerability is in the AMT modules. – forest Dec 11 '18 at 15:05