
The Telegram FAQ states that they are unable to comply with court orders from any single country because they "split the encryption keys and spread them over multiple jurisdictions".

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data.
Thanks to this structure, we can ensure that no single government or block of like-minded countries can intrude on people's privacy and freedom of expression. Telegram can be forced to give up data only if an issue is grave and universal enough to pass the scrutiny of several different legal systems around the world.

As this all concerns non-e2ee chats, how is this realistically possible? All messages in a "cloud chat" are stored on the server indefinitely. My device(s) need(s) to download these messages somehow, so why can't a single government just order Telegram to download all my messages in my name?

Is this just a legal issue or is there a technical possibility that I'm not seeing here?

deiShie0

2 Answers


Key splitting (best case scenario)

The ideal cryptographic algorithm for this purpose is Shamir's Secret Sharing, as it provides information-theoretic security, meaning it is mathematically impossible to recover the actual key without having the threshold number k of shares specified by the person splitting the key.

We don't know how many shares are needed, or whether Shamir's scheme is used at all, but to analyze the security academically we should assume good faith and suppose that it is.
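As a sketch of how the scheme works (this is a toy implementation over an illustrative prime field, not anything Telegram has published): the secret becomes the constant term of a random degree-(k−1) polynomial, n points on the polynomial are handed out as shares, and any k shares recover the secret via Lagrange interpolation.

```python
import random

PRIME = 2**127 - 1  # toy prime field; must be larger than the secret

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With only k−1 shares, every candidate secret remains equally likely, which is exactly what makes the scheme information-theoretically secure.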

But even with Shamir's Secret Sharing, this only applies to the offline copy of the key. There is no precedent that this argument would hold in court, and since Telegram hasn't pointed to, e.g., third-party precedent, we can only take their word for it. It is very unlikely to hold, because the Shamir shares are not what the courts would go after. This is because there is another copy.

Database encryption design (best case scenario)

The reason there is another copy is that the key is actually used for something, namely database encryption. Every time you send a cloud message to a Telegram server, that message needs to be committed to the server's database.

The most secure way to handle database encryption is to encrypt the data first and commit the ciphertext to the database as a binary blob. The server then decrypts the data right before it is sent to the recipient (or back to the user who, e.g., has added a new device that is now requesting the message history). Again, we don't know if this is the case, as it might be inefficient for searches etc., but since it's security we're trying to assess, we assume good faith: that this is done even if it requires more computational effort for database management, searches, and so on.

The "commit encrypted blobs" approach is essentially on-the-fly encryption of data going in and out of the database, i.e., the server constantly encrypts and decrypts data. Every encryption and decryption operation requires the CPU performing it to have access to the key.
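A minimal sketch of that design, with a SHA-256 counter-mode keystream standing in for whatever real cipher (e.g. AES-GCM) a production server would use — every name here is illustrative, nothing below is Telegram's actual code. The point is that `DB_KEY` must live in RAM for the server to function at all:

```python
import hashlib
import secrets

DB_KEY = secrets.token_bytes(32)  # the database encryption key, resident in RAM

def _keystream(key, nonce, length):
    # SHA-256 in counter mode as a stand-in PRF; a real server would use AES
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_blob(plaintext):
    """Encrypt a message right before it is committed to the database."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(DB_KEY, nonce, len(plaintext))))
    return nonce + ct  # stored as one opaque binary blob

def decrypt_blob(blob):
    """Decrypt a blob read from the database, just before sending it out."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, _keystream(DB_KEY, nonce, len(ct))))
```

Note that both functions dereference `DB_KEY` on every call: the key is not an offline artifact but a hot value the server touches for every message.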

Location of the key in memory

Running programs store their data in the server's random-access memory (RAM). The CPU can also keep frequently used data in its cache to make it more readily accessible.

In servers, the RAM does not necessarily have to sit on the same motherboard, as RDMA allows access to the RAM of nearby servers over an interface such as InfiniBand.

RAM encryption

Modern servers can also encrypt RAM with Intel TME or AMD SME, so again, assuming in good faith that such encryption is used, one can't just pull the RAM sticks out of Telegram's server and obtain the database encryption key with a cold boot attack.

But neither RAM encryption nor displacement to another motherboard via RDMA makes the key unavailable to the operating system or the Telegram server application.

Summary of best case scenario

So hypothetically, assume:

  • Telegram's servers have been booted only once, so the considerable trouble of reassembling the key from its shares has never had to be repeated
  • the Telegram keys were split into shares with Shamir's scheme and carried by Telegram staff to multiple countries before anyone could get their hands on them
  • Telegram servers encrypt every message before committing the ciphertext as a binary blob to the database
  • Telegram servers decrypt every ciphertext read from the database just before sending it to the user under MTProto client-server encryption

The security assessment of the best case scenario

Threat 1: Subpoenas

According to this tweet Telegram has servers in "[...] London for European users, Singapore for Asian, San Francisco for American."

So three servers. As per the above, the entire database encryption key for all European users must, by definition, exist in the RAM of the London server. Telegram can therefore access any user's data, either via the interface they use to moderate reported chats, or by adding a single line of code such as print(server_private_key) to the server and then using a secondary program to decrypt the database entries. This is so simple to do that failure to do it would, with overwhelming probability, be considered obstruction of justice. Every major Internet company in the world encrypts its databases and is obliged to comply with court orders. Telegram's claim that it is unable to do the same thing, despite using exactly the same database encryption technology and having the key physically present in the server rack, is pure fabrication.

Threat 2: Telegram staff

When the Telegram server receives a client-server encrypted (cloud) message from a contact, it first authenticates and decrypts the MTProto client-server ciphertext to recover the plaintext message.

Once the plaintext message is available, Telegram staff can do whatever they want with it. They can make as many copies as they want, analyze it to build a profile of the user in secret, etc. The only thing stopping them is an internal policy not to do so. The public privacy policy does not necessarily reflect the truth, and it isn't written in stone: Telegram might in the future serve, e.g., targeted ads based on that data. So this boils down to whether you trust Telegram to do the right thing, always, indefinitely.

Threat 3: Hackers

Telegram's servers contain billions of messages from its 400 million users. This makes them a very tempting target for both organized crime and nation-state actors. How would Telegram's key-splitting system protect its users in case of a breach? How would a breach work?

It all begins with a vulnerability: an unpatched bug that endangers the security of the server. The majority of vulnerabilities are known and fixed, but even for an up-to-date system it is extremely likely there exists at least one vulnerability that the attacker knows about and that the developers of the software running on the server (OS, server application, etc.) do not. This is called a zero-day.

The vulnerability is exploited with a piece of code called the exploit, which opens up the route to deploy what is called the payload. The payload is what does the dirty work of the attacker. (An exploit against a zero-day vulnerability is called a zero-day exploit.)

To give an analogy: the vulnerability is weak skin, the exploit is the stinger of a wasp, and the payload is the venom delivered via the stinger.

In order to wreak havoc, the zero-day exploit must also be able to perform privilege escalation. This requires a vulnerability in the server's operating system.

When such a zero-day exploit succeeds, it allows the attacker to run any command or program with kernel-level privileges, which means the CPU runs in what is called kernel mode. To quote Wikipedia on CPU modes:

In kernel mode, the CPU may perform any operation allowed by its architecture; any instruction may be executed, any I/O operation initiated, any area of memory accessed.

When the CPU runs in kernel mode, it can access any area of memory. This by definition includes the memory of the Telegram server program doing the back-and-forth database encryption and decryption. The attacker's payload program, running in kernel mode, can scan RAM for Telegram's database encryption key and send it to the attacker.

Since any I/O operation is also possible in kernel mode, the attacker can use the payload program to dump the entire Telegram message database, and then decrypt the messages with the stolen key either on their end or at the server's end.
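As an illustration of how little work "scanning RAM for the key" involves: key-recovery tools commonly just look for high-entropy regions in a memory dump, because a uniformly random 32-byte key stands out against ordinary program data. A toy sketch (not any real implant, and real tools like aeskeyfind additionally match key-schedule structure):

```python
import math
from collections import Counter

def shannon_entropy(window):
    # bits of entropy per byte across the window
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def find_key_candidates(memory_dump, keylen=32, threshold=4.0):
    """Return offsets of high-entropy regions that may hold key material."""
    hits = []
    for offset in range(0, len(memory_dump) - keylen + 1, keylen):
        if shannon_entropy(memory_dump[offset:offset + keylen]) > threshold:
            hits.append(offset)
    return hits
```

Mostly-zero or text-filled memory scores low, while key material scores near the maximum, so the haystack shrinks to a handful of candidates that can be tested against a known ciphertext.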

Can such an attack be detected?

It depends on the quality of the exploit. For example, to quote WikiLeaks on the CIA's Grasshopper framework:

The requirement list of the Automated Implant Branch (AIB) for Grasshopper puts special attention on PSP avoidance, so that any Personal Security Products like 'MS Security Essentials', 'Rising', 'Symantec Endpoint' or 'Kaspersky IS' on target machines do not detect Grasshopper elements.

So it's very likely that a nation-state actor who hacks Telegram's servers won't be detected. The payload is also typically a form of rootkit, which "often masks its existence or the existence of other software". Thus, the rootkit can change the system so that nobody, not even the root user, can detect its presence.

But isn't such an attack loud, given the amount of material that needs to be exfiltrated, and detectable from other systems in the server infrastructure?

It is entirely possible. But dumping the database isn't the only way to access client-server encrypted cloud messages. If, instead of the database encryption key, the attacker compromises the private key the server uses for MTProto client-server encryption (either a persistent Diffie-Hellman private value or an RSA signing key), the attacker can perform completely invisible MITM attacks against the client-server protocol. This only requires lifting an extremely small amount of data from the server, can be done in milliseconds, and still allows a mass attack against Telegram users from the backbone of the internet. There is no way for the client to detect this attack, and no way for the server to detect that incoming connections come not from the phone but from the MITM attacker.
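A toy sketch of why the leak of a persistent Diffie-Hellman private value is so catastrophic (group parameters here are illustrative, not MTProto's actual ones): an attacker who holds the server's private value can recompute every session key purely from public values captured on the wire, with no further access to the server.

```python
import secrets

# toy DH group (illustrative only; real protocols use vetted 2048-bit+ groups)
P = 2**127 - 1
G = 3

server_priv = secrets.randbelow(P - 2) + 1   # the server's persistent private value
server_pub = pow(G, server_priv, P)

# an honest client performs its half of the exchange:
client_priv = secrets.randbelow(P - 2) + 1
client_pub = pow(G, client_priv, P)          # this value travels over the wire
session_key = pow(server_pub, client_priv, P)

# an attacker who exfiltrated server_priv and merely captured client_pub
# recomputes the identical session key offline:
stolen_key = pow(client_pub, server_priv, P)
```

The exfiltrated secret is a few dozen bytes, which is why this theft is fast and quiet compared to dumping a multi-terabyte database.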

Can the problem be fixed?

Yes. End-to-end encryption exists to eliminate this problem entirely. Telegram also supports end-to-end encrypted 1:1 chats on mobile so the developers clearly recognize the benefits, but alas end-to-end encryption is

  • not enabled by default
  • not available for group chats
  • not available for Windows/Linux desktop clients

There is a lot of misinformation going around claiming these are not doable, but there are counter-examples such as Signal and Wire.

Telegram staff have argued for why they do not make every chat E2EE, which I have refuted here.

maqp

The thing that's being split is the decryption key, which gets synced from multiple servers to make sure your device can actually read the message. For any government to read a message, even if they seize some server in, say, Germany, they'd have to seize two or three more to get all the relevant key parts (hell, maybe five or six, we don't know how many parts the keys are split into). This is understandably unlikely, since getting local law enforcement to coordinate on an international scale to seize the decryption keys for evidence on one person is, well, impractical. It would be easier to just frame the guy and be done with it.

Even if they do coordinate and start trying to seize the servers: in the time it takes, the data could be easily moved, wiped or encrypted with a different key. In fact, there's no telling which server stores the relevant key parts even. So even if Germany and France and Belgium cooperate, they're likely to get left with nothing.