
The premise of end-to-end encryption (E2EE) is that your end devices are secure and trustworthy, but the network and server need not be trusted. You've read all the code in the client, or someone you trust has done so for you. But after loading the code onto your phone -- installing the Keybase app -- and starting a chat with your friend, you still need to verify that the server sent you the right encryption key. Since you can't host your own server, it has to be Keybase, Inc.'s server that sends you your friend's encryption key.

It's pretty standard for E2EE software that the (client) code is open, the server sends you the encryption key, and you can check the encryption keys out of band. In Signal you check the safety number, in Wire you check the device fingerprints, and in Telegram you check the encryption key. If they match, you know (based on the open client code and the cryptography it uses) that there is no (wo)man in the middle.

How does this work with Keybase? Their documentation explains parts of it:

Alice and Bob share a symmetric encryption key, which they pass through the server by encrypting it to each of their devices' public encryption keys.

Okay, so either we verify the symmetric key (identical on both phones), or we verify the public key (I should be able to display mine on my phone, and my friend will call up what their phone thinks my public key is, and that should match).
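Out-of-band verification usually boils down to comparing a short digest of the key material on both devices. A minimal sketch of the idea (this is not Keybase's actual format; the grouping into blocks just mimics how Signal presents safety numbers):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Take the first 32 hex characters and group them for readability.
    short = digest[:32]
    return " ".join(short[i:i + 4] for i in range(0, len(short), 4))

# Both parties compute this locally and compare the result out of band
# (e.g. by reading it aloud over a phone call). Key bytes are made up here.
alice_view_of_bob = fingerprint(b"\x04" + b"\x11" * 64)
bob_own_key = fingerprint(b"\x04" + b"\x11" * 64)
assert alice_view_of_bob == bob_own_key  # matching digests: no key was swapped
```

The point of the question is exactly that the Keybase app exposes no equivalent of this comparison.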

The weird thing is that there is no button to do either of these. The documentation doesn't mention one, and I can't find anything when looking around the app.

Further on, it explains the public keys are put into your signature chain:

All Keybase devices publish a crypto_box public key and a crypto_sign public key when they're first provisioned. Those keys live in the user's signature chain [...] A chat symmetric encryption key is 32 random bytes. It's shared by everyone in the chat, [...] When a new device needs this shared key, another device that has it will encrypt it with the new device's public crypto_box key and then upload it to the Keybase server.

So if I'm reading this right, when my friend opens a new chat with me, their client generates the shared secret, takes the crypto_box public key from my signature chain, encrypts the shared secret with that public key, and uploads the result to Keybase's server so that I can download and decrypt it, thereby establishing the shared secret and starting the chat.
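A signature chain is essentially a hash chain: each link commits to the hash of the previous one, so a verifying client can detect tampering or rollback, provided it obtained an authentic chain head in the first place. A rough illustration of the well-formedness check (hypothetical structure and field names, not Keybase's wire format):

```python
import hashlib
import json

def link_hash(link: dict) -> str:
    # Canonical serialization so both sides hash identical bytes.
    return hashlib.sha256(json.dumps(link, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list) -> bool:
    """Check that each link commits to the hash of its predecessor."""
    prev = None
    for link in chain:
        if link["prev"] != prev:
            return False  # broken, reordered, or truncated chain
        prev = link_hash(link)
    return True

# A toy two-link chain publishing device keys.
first = {"prev": None, "device_key": "pk_device_1"}
second = {"prev": link_hash(first), "device_key": "pk_device_2"}
assert verify_chain([first, second])
```

Note that this only proves internal consistency: a malicious server can serve a different, equally well-formed chain, which is exactly the concern raised next.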

... but where does that signature chain come from? This has to be fetched from Keybase servers, so since there is no way to display it, I have to trust the server to send me the right key. How does that make sense when they claim "all messages are secure, end-to-end encrypted"? The client also occasionally and briefly displays a banner above the chat saying "end-to-end encrypted".

There is documentation about how to do the verification step by step, but aside from the fact that the instructions are broken (there is no sigs field in root_desc, the result of root.json?hash=X), this is not something I can do on my phone. A malicious server could return the correct answers to my command line client while telling the mobile app that the signature chain contains an entirely different key (namely the one the malicious actor uses to perform the MitM).

When talking about it to others, they mention this blockchain thing as the reason why it's secure without needing to verify anything yourself, but I can't figure out how it works.

  1. Do Keybase chats require any kind of key to be manually verified for end-to-end encryption, or does the blockchain data (which the app could verify silently under the hood) somehow prevent us from having to compare fingerprints/keys/sigchains?
  2. If we do need to compare fingerprints/keys/sigchains, how can that be done in the Keybase app?

Update: A friend linked this GitHub issue, which basically says that the whole blockchain verification isn't yet done by the app. Setting aside whether (something like) SPV is a good solution, it is currently not implemented. That rules out the first option if I'm not mistaken.

Since the app also seems not to allow the second option (comparing keys), I guess this means that there is no verifiable E2EE currently implemented in Keybase? An attack is hard to pull off: even a malicious server (an ideal position) would have to MitM from the start, so it has to have prior knowledge of whom to target. But we would still be trusting the server to be honest at least some of the time. Or am I still missing some way that makes this true end-to-end encryption?

Update 2: It was mentioned on Hacker News that the app should check third-party proofs by itself. This is not exactly what end-to-end encryption means, since it still relies on third parties, but nevertheless, having to own two or more companies' servers before being able to MitM someone's keys (which are additionally TOFU'd) should give quite some confidence.
However, when checking in Wireshark whether it actually does this (ask the Twitter API for the proof string and verify the signature with the public key it received from Keybase), Keybase on my phone did not contact Twitter at all. (It did, however, proudly proclaim that the new chat was end-to-end encrypted.)

For those wondering whether the Wireshark test controls for various things:

  • The username with whom a chat was started was never typed into the test system (the one that started the chat) prior to the packet capture.
  • The two accounts never had any sort of contact before, on any device. They also don't follow each other. Until the chat was established, the other party did not even know my username, so they could not have triggered anything either.
  • The packet capture started before the username was typed into the search field on the test device and ended only after Keybase completely established the chat and claimed it was end to end encrypted.
  • Twitter seems to host all its services on its own IP range, and none of the IPs the test system talked to during chat establishment fell inside that range. If this assumption is false and the Keybase client uses endpoints outside of 104.244.40.0/21, please comment. I checked all IPs and all are implausible to be a front for Twitter (details on hacker news), but I would be interested to know if Twitter publishes tweets on Amazon Web Services' Simple Storage Service or something.
  • It is deemed implausible for the mobile Keybase client to simply have downloaded all signature chains from all users that exist on Keybase and to have checked all their proofs prior to starting the packet capture. This is the only way I can think of how the third party hosted proof could have been verified prior to the packet capture.
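The IP-range check in the fourth bullet is easy to reproduce with Python's standard library. The /21 is the one cited above; the sample addresses here are made up:

```python
import ipaddress

# Twitter's observed address range, as cited above.
twitter_range = ipaddress.ip_network("104.244.40.0/21")

def in_twitter_range(ip: str) -> bool:
    """Return True if the address falls inside the Twitter /21."""
    return ipaddress.ip_address(ip) in twitter_range

# Each address observed during chat establishment would be checked like this:
assert in_twitter_range("104.244.42.1")    # inside the /21
assert not in_twitter_range("52.84.10.5")  # e.g. a CDN address, outside it
```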
Luc
  • It's surprising that it's this difficult to access the remote party's public keys through Keybase. With EncryptedSend (www.encryptedsend.com) the remote party's public key is readily accessible. – mti2935 Nov 30 '19 at 11:31
  • The answer, as far as I understand, is that the app should check third party proofs by itself as you mentioned, and that is what allows you to trust the keys that are provided. What may be happening however is that it only verifies it once or periodically - if it already checked a proof on your device it will not check it again on each new chat session. – CristianTM Dec 10 '19 at 19:45
  • @CristianTM I made sure to never have looked up the username with whom I tested on my phone before, it definitely didn't check before claiming the chat is secure. Do note that this is documented in the hacker news thread. – Luc Dec 10 '19 at 19:50
  • You have not accessed even the profile of the user? If it is checked at least once, it won't check again. One more thing: you are not tracking the user, are you? When you track, I guess you start trusting that key without further checking. I guess that IF it does not verify the proofs at least once, it would be a bug/error rather than a design choice. The whole point of Keybase is that clients verify the proofs OR you explicitly start trusting someone by following them (as in PGP). – CristianTM Dec 11 '19 at 11:26
  • 1
    @CristianTM Did you see the edit in the post? I think that should clarify your questions. Let me know if not. – Luc Dec 11 '19 at 13:20
  • 1
    Looks really strange. I think its a case to take for the devs or look on the source code. But really an expectation of security seems to be broken here if they rely on server-side checking of proofs or something like this. – CristianTM Dec 11 '19 at 18:35
  • *On behalf of [ylk](https://security.stackexchange.com/users/233283) who can't comment due to silly stackexchange restrictions:* "Take the example of what happens when I "follow" Alex. My client downloads both of our signature chains from the server, and runs them through cryptographic verification, checking that our hash chains are well-formed and signed. It furthermore checks new data against cached data and complains if the server has "rolled back" either chain." https://keybase.io/docs/server_security – Luc Apr 27 '20 at 11:51

1 Answer


Keybase's chat in the app was never verifiably end-to-end encrypted.

The first time you talk to someone, it downloads their encryption key from the Keybase server and you have to trust the server to send you the right key. It could also send you a fake one such that you encrypt your messages using a malicious key instead. There is no way to verify this in the app.

In Keybase's case, to be fair, the attack is hard to pull off due to the checks they made available, but it is not end-to-end encryption unless you broaden the definition to include trusting the server for the key exchange (combined with a bit of wishful thinking that even a minority of people ever checked the Merkle root for inclusion of their key, despite the instructions on how to check being broken, as noted in the question).

If you want to verify that the Keybase server is not performing any kind of attack, you have to use the command line; it is not possible in the official application. The "proofs" (Twitter proof, website proof, etc.) did not seem to work reliably (as described in the question), but even if they did, Keybase also labeled chats as E2EE without requiring any third-party proof to be in your account. And I don't think my mom would add proofs if not given a reason why (and perhaps not even then, which is okay, but then she shouldn't be led to believe it's more secure than it is).

The mechanism Keybase applied is commonly called TOFU: trust on first use. It downloads the keys once, and as long as they don't change, it assumes everything is fine.
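TOFU is straightforward to express in code: pin the first key you see, and only complain if it later changes. A minimal sketch (the class and method names are mine, not Keybase's):

```python
class TofuStore:
    """Trust-on-first-use key pinning: accept the first key, flag changes."""

    def __init__(self):
        self._pins = {}  # username -> pinned key bytes

    def check(self, username: str, key: bytes) -> str:
        pinned = self._pins.get(username)
        if pinned is None:
            self._pins[username] = key  # first use: trust silently
            return "trusted (first use)"
        if pinned == key:
            return "ok"
        return "WARNING: key changed"  # possible MitM, or a legitimate new device

store = TofuStore()
assert store.check("alice", b"key1") == "trusted (first use)"
assert store.check("alice", b"key1") == "ok"
assert store.check("alice", b"key2") == "WARNING: key changed"
```

The weakness is exactly the one described above: if the server lies on first contact, the malicious key is what gets pinned.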

This is what Keybase said about TOFU last year in a blog post:

it's just server-trust [...] (ewww!)
[...] these apps don't have to work this way

I agree. Good thing they paid over $100,000 to NCC Group's security and cryptography experts for an audit which... never mentioned this issue in its report? (I also emailed one of the auditors with a link to this question, but never got so much as an acknowledgement of receipt.) The auditors did find other issues:

[The two] bugs [found by NCC] were only exploitable if our servers were acting maliciously. We can assure you they weren't, but you have no reason to believe us. That's the whole point!

Except that we do have to believe them, because we can't just check for ourselves.

Perhaps I am not giving Keybase enough credit, though. The blog post I quoted from rightly accuses other apps of just accepting and using new (potentially malicious) keys without requiring input from the user. WhatsApp is a prime example: the best you can do is turn on warnings, but the WhatsApp client will still happily participate in a switcharoo where the attacker sends you a new encryption key right as you send a message, allowing them to learn at least one plaintext even if you are super careful and only send messages after seeing that the key didn't change. By default, no new-key notices are shown at all, so it's merely opportunistically encrypted. Many tech people I know seem to trust it because it uses the Signal protocol, so it must be good. I expect the Keybase app handles key changes properly. While it might be good to check that there aren't any flaws in that mechanism, this is the kind of thing I'd expect an audit to reveal, so Keybase probably does that fine.

But this specific question is moot now that Keybase is being shut down by Zoom. I was kind of waiting either for Keybase to fix this issue (so I could post a happy answer), for an independent person to re-do my analysis and answer with their conclusion, or for Keybase to acknowledge that they've had the whole security community fooled. A kind stranger called Jared opened a bounty on this question and I posted it to Hacker News to attract an answer, hoping there existed a single person who had attempted the key verification as well. But none of that will happen now, so let's look at the broader perspective, the reason behind this question.

Users should have demanded followable instructions. We should have questioned Keybase, now Zoom, and anyone who makes a strong security claim. They claim it? You should want to see steps you can follow to verify it. Since you put your trust in the published code, those instructions should not involve any command line coding work, and definitely not have gaps like verifying keys on your laptop and hoping that the server sent the same keys to your phone.

To give one good example, and I hate to say it, Telegram's calling feature nailed this. They show four emojis on the screen for key verification. Heck, it's fun: in calls with my girlfriend I sometimes find us doing the verification randomly, comparing emoji library differences or making fun of one of the icons. I haven't looked into its security, maybe you'd need five emojis to get above 128 bits of entropy or whatever, but from a UX perspective, this is a great example. Matrix does something similar but not (yet) for calls.
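Those emojis are a "short authentication string" (SAS): both ends derive a few symbols from the call's shared secret and compare them verbally. A toy version with a made-up eight-symbol alphabet (real apps use a set of hundreds, which is why the entropy math below comes out far too low here):

```python
import hashlib
import math

# Tiny toy alphabet; a real SAS scheme uses a much larger symbol set.
EMOJI = ["🐱", "🚀", "🍕", "🎲", "🌵", "🔑", "🐙", "🎩"]

def short_auth_string(shared_key: bytes, count: int = 4) -> list:
    """Map the first bytes of a key digest onto human-comparable symbols."""
    digest = hashlib.sha256(shared_key).digest()
    return [EMOJI[b % len(EMOJI)] for b in digest[:count]]

# Both call participants derive this locally and read it aloud to compare.
sas = short_auth_string(b"example shared key")
assert len(sas) == 4

# Four symbols from a set of 8 give only 4 * log2(8) = 12 bits:
# fine for a toy, nowhere near enough for a real deployment.
assert 4 * math.log2(len(EMOJI)) == 12.0
```

The full 128 bits isn't necessarily the target either: an active MitM has to make the strings match during the live call, so well-designed SAS schemes get away with far less entropy than an offline-attack setting would require.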

The tech communities on Hacker News, Reddit, and PrivacyTools ("Keybase [provides] E2EE."), and even the auditors, overlooked this purely practical issue (and I do trust their expertise in general; no audit is ever really complete in a project as complex as Keybase's or Zoom's). People were hostile when I asked questions, assuming I was a noob who couldn't read the docs and figure out how the verification worked, when they had never actually tried it themselves. Be more curious. Key verification should not be hard in something called "end-to-end encryption", because the end users have to be able to verify it. While my mom may have to trust someone to have checked the code, I (or a project like F-Droid) can compile the binary independently and give it to her. The encryption key, however, is something she needs to be able to check if she wishes to do so. We know that even the NSA has trouble breaking good cryptography, and kudos to Keybase for building good cryptography, but attackers have no trouble working around it by swapping out server responses.

Luc