
From time to time, questions come up on this board concerning web applications that use client-side (or 'in-browser') cryptography, where these applications claim to be designed in such a way that their operators have 'zero-access' to users' private information. See links to several related questions below. A common example of such an application is Protonmail, which aims to provide end-to-end encrypted email. Protonmail claims that 'your data is encrypted in a way that makes it inaccessible to us', and that 'data is encrypted on the client side using an encryption key that we do not have access to'.

In discussions around this subject, the 'browser crypto chicken-and-egg problem' frequently comes up. The term was coined in 2011 by security researcher Thomas Ptacek. In essence, the problem is: if you can't trust the server with your secrets, then how can you trust the server to serve secure crypto code? Using Protonmail as an example, one could argue that a rogue Protonmail server admin (or an attacker who has gained access to Protonmail's servers) could alter the client-side JavaScript code served by Protonmail's server, such that the code captures the user's private keys or plaintext information and sends these secrets back to the server (or somewhere else).

The question is: can the 'browser crypto chicken-and-egg problem' be solved using the following method?

  1. The web application is designed as a single-page web application. A static web page is served at the beginning of the user's session, and this static page remains loaded in the user's browser throughout the session. Like Protonmail, all cryptography is done in-browser: the user's plaintext secrets and private encryption key never leave the browser, and only ciphertext is sent to the server. However, unlike Protonmail (where the server dynamically generates a new page after each action by the user), the user's requests are sent from the static page to the server by way of client-side AJAX/XHR calls, and the static page is updated with the server's responses to these calls.

  2. All supporting files depended upon by the static page (e.g. JavaScript files, CSS files) are referenced by the static page using subresource integrity (SRI); see the markup sketch after this list.

  3. The user's private encryption key (or the password from which the private key is derived) is stored by the user. The user enters his private key (or password) via an interface on the static page, which in turn passes the key to the client-side script running in the browser. All in-browser cryptography is handled by the browser's native Web Crypto API (see the sketch following this list).

  4. To mitigate XSS attacks, all external content is sanitized by the client-side script before being written to the static page; all external content is written to page elements through the elements' .innerText property (as opposed to .innerHTML), and a strict content security policy (CSP) is applied, prohibiting inline scripts.

  5. A trusted reviewer (TR) reviews the static page and all supporting files. TR determines that the client-side code is 'as advertised': at no point does the client-side code send the user's secrets back to the server (or anywhere else), at no point does the static page request a new page from the server, and all of the above have been implemented correctly. TR then signs the static page with his private signing key and makes the signature public.

  6. The user points his browser to the static page. He then uses his browser's 'save page as' feature to save the static page (which is currently loaded in his browser) to his system. Using TR's public key, he verifies TR's signature on the saved page. If the signature verifies, the user proceeds to use the service by way of the static page already loaded in his browser. (A sketch of this check follows the summary below.)
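For concreteness, here is a minimal sketch of what the static page's markup for steps 2 and 4 might look like. The file names and integrity hash values below are hypothetical placeholders, not any real application's code:

```html
<!-- Hypothetical skeleton of the static page. Each supporting file is
     pinned to a known hash via SRI (step 2), so the server cannot swap
     it out without either breaking the integrity check or changing this
     root document, which would break TR's signature. -->
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="app.css"
        integrity="sha384-[base64 hash of app.css]" crossorigin="anonymous">
</head>
<body>
  <div id="inbox"></div>
  <!-- All logic lives in an external, SRI-pinned file; a strict CSP
       (step 4) prohibits inline scripts entirely. -->
  <script src="app.js"
          integrity="sha384-[base64 hash of app.js]" crossorigin="anonymous"></script>
</body>
</html>
```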
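And a corresponding sketch of the in-browser crypto flow of steps 1, 3, and 4, using the Web Crypto API. The endpoint URL, function names, and algorithm choices (PBKDF2, AES-GCM) are my own illustrative assumptions, not Protonmail's actual design:

```javascript
// Hypothetical sketch: derive a key from the user's password (step 3),
// encrypt locally, and send only ciphertext to the server (step 1).
async function deriveKey(password, salt) {
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveKey']);
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 310000, hash: 'SHA-256' },
    material, { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
}

async function sendMessage(key, plaintext) {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, key, new TextEncoder().encode(plaintext));
  // Only the ciphertext and IV leave the browser (via fetch/XHR);
  // the key and the plaintext never do.
  await fetch('/api/send', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ iv: [...iv], ct: [...new Uint8Array(ciphertext)] }),
  });
}

// Step 4: external content is written with .innerText, never .innerHTML,
// so the browser renders it as text and cannot interpret it as markup.
function render(el, externalContent) {
  el.innerText = externalContent;
}
```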

To summarize: the static page, which has been reviewed and signed by TR, remains loaded in the user's browser throughout the user's session, and at no point is it replaced by a new page from the server. The user verifies the integrity of the static page cryptographically at the beginning of his session (in a manner similar to the way the integrity of downloadable files is often verified) by checking TR's signature on the page using TR's public key. [It would be nice if browsers (or perhaps a browser extension) had a built-in method for performing this function, but until that day comes, the procedure of step 6 above will suffice.] The use of subresource integrity (SRI) in step 2 ensures that supporting files cannot be modified by the attacker, as doing so would either break the SRI check or necessitate a change in the root document, which would cause the signature verification in step 6 to fail.
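Until browsers offer such a built-in mechanism, the check in step 6 could itself be scripted against the saved file. Here is a minimal sketch using the Web Crypto API; the choice of ECDSA P-256 as TR's signing scheme and JWK as the key format are assumptions for illustration:

```javascript
// Hypothetical sketch of the step-6 check: verify TR's signature over
// the bytes of the saved page. trJwk is TR's public key, obtained and
// verified out of band; pageBytes and sigBytes are ArrayBuffers.
async function verifyPage(pageBytes, sigBytes, trJwk) {
  const trKey = await crypto.subtle.importKey(
    'jwk', trJwk, { name: 'ECDSA', namedCurve: 'P-256' }, false, ['verify']);
  const ok = await crypto.subtle.verify(
    { name: 'ECDSA', hash: 'SHA-256' }, trKey, sigBytes, pageBytes);
  if (!ok) throw new Error('TR signature invalid - do not use this page.');
  return true;
}
```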

For the sake of this question, assume TR is competent to perform the task at hand, and that the user has a reliable method (e.g. through a trusted third party or some out-of-band method) of verifying that TR's public key is true and correct. Also, for the sake of this question, please set aside attack vectors outside the protocol itself, such as browser vulnerabilities, a compromise of TR's or the user's device, a compromise of TR's or the user's private key, etc.

Given the above, can you think of some way that a rogue server admin (or a hacker who has gained access to the server) could steal the user's secrets, and if so, how?

Related:


Edit 2/27/2021

I've developed a small browser extension for Firefox, incorporating many of the ideas in this question and the answers and comments below, aimed at solving this problem. See my answer below for more info.

mti2935
  • I suppose browser extensions would interfere with this procedure, so you'd have to disable all the browser extensions that modify the page each time you load it – nobody Sep 17 '20 at 18:02
  • @nobody, Good point. I agree, all bets are off if browser extensions are in play. But, if you take browser extensions out of the equation, do you see a vulnerability in this method whereby the server operator could steal the user's secrets? – mti2935 Sep 17 '20 at 18:10
  • I don't think all those parts will work from file:/// URLs. If you want to run locally, consider inlining all the code instead of using SRI, which will simplify distribution and remove net access as a requirement. For my [nadachat.com project](https://nadachat.com), I let people [download the source](https://github.com/rndme/nadachat) so they need not trust my server. Your plan is good, but the part I'm fuzzy on is the TR: who, when, how? – dandavis Sep 17 '20 at 18:10
  • @dandavis, I just checked out nadachat. Very cool. I agree that downloading the document and running it locally eliminates the need to re-verify each time. Good suggestion. Did you have to include CORS headers in your server responses to your XHR requests (to relax same-origin policy) in order to get this to work? Regarding the TR: the TR could be yourself, or anyone you trust to review the code (I can think of a few good candidates on this board) - the point being that the code is reviewed and signed, which enables the user to confirm that the code has not changed since it was reviewed. – mti2935 Sep 17 '20 at 18:24
  • No CORS needed; all the URLs are absolute+SRI or page-relative, for flexibility. Also, I just checked, and file:/// is evidently now considered secure (crypto.subtle used to work only from https). – dandavis Sep 17 '20 at 18:29
  • I have little to no experience with web development, so I can't say much. The main issue here seems to be that when implementing something so complicated, it would be easy to make some subtle mistake which the reviewer might miss. By the way, if you only insert content into innerText, wouldn't that prevent you from loading images? – nobody Sep 17 '20 at 21:08
  • @nobody Thanks for your comments. I was hoping you would reply to this, as it relates to your recent question about a Signal web client. Yes, it all hinges on the TR. But, this is no different than any downloadable file on the web whose integrity is verified using a hash posted by a trusted party or digital signature made by a trusted signer. The point being that a change to the static page by a rogue admin or attacker would be detected, because it would break the signature. Re images - perhaps it could be opened to images as well, but I haven't thought about the implications of that. – mti2935 Sep 17 '20 at 21:19
  • Wait. If you can't trust the application's server, why should you trust the "trusted reviewer's" server that gives you the signatures etc.? In the end you always need to trust someone or something, unless you write your own application and give it to the other party in person. Anyway, what you are proposing is just a downloadable application that runs in the browser. You could do the same in, say, python or java or any other language and run it directly on your OS. – reed Sep 18 '20 at 12:55
  • What I don't know is if such an application could still be called a "web application" or not, but I guess not. You are just writing an application that needs to be run in a browser. And as I said, at that point you might as well think of writing it in another language (Java, Python, etc.) – reed Sep 18 '20 at 12:57
  • @reed, thanks for your comments. Yes, it always comes down to who you trust, and how you go about verifying that the public key you have for that person is true and correct. As I pointed out above, this is no different than any downloadable file on the web whose integrity is verified cryptographically. Maybe 'web application' is the wrong term for this, but the advantage of running in the browser is that browsers are ubiquitous on almost every end-user device, whereas far fewer users have Python interpreters or JVMs installed on their systems. – mti2935 Sep 18 '20 at 13:12

4 Answers


First of all, as I mentioned in a comment under the question, this method will not work if the user has any browser extension running that modifies the page source, since the signature would no longer remain valid. The same applies to antiviruses that intercept and inject scripts into web pages. Browser extensions can easily be disabled, but disabling the antivirus may not be possible in some cases.

Secondly, although this procedure can work on a laptop/desktop, making it work on smartphones will be much more cumbersome, perhaps almost impossible. As far as I know, there is no easy way to save web pages as HTML in browsers on iOS.

Finally, to answer the question asked: there does seem to be a way for the server to load a malicious version of the page. The HTTP Refresh header, which is unofficial but appears to be supported by many browsers, could potentially allow the server to redirect the user to a malicious page. By serving the original page with a refresh time of, say, 5 minutes, the server could be reasonably sure the refresh occurs after the user has verified the integrity of the page, and then hope the user does not notice the redirect. Since this is sent as a header, it will not affect the integrity of the original page, and the signature will remain valid.
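For illustration, the attack needs nothing more than an extra response header like the following (the URL is hypothetical); the signed page body itself arrives byte-for-byte intact:

```
HTTP/1.1 200 OK
Content-Type: text/html
Refresh: 300; url=https://attacker.example/lookalike.html
```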

nobody
  • Thanks again for your response. I realize that the solution may be limited to certain environments (e.g. desktops/laptops, no antivirus, no extensions), but the refresh header is an interesting attack vector that I hadn't thought of. I suppose what is needed is a way to verify the integrity of the headers, as well as the content. My hope is that this is possible, so that we can finally use the browser for serious client-side crypto applications, like the one that you describe in https://security.stackexchange.com/questions/238011/why-is-there-no-web-client-for-signal – mti2935 Sep 18 '20 at 13:01
  • @mti2935 I am afraid that for *practical* in-browser crypto we'll probably have to wait for browsers to implement a mechanism to verify the integrity of the pages they receive. Trying to make the given proposal work will probably only result in lots of people using the web apps without verifying integrity. Any sane person who needs serious crypto (and understands the security implications of web apps) will eventually resort to using the regular apps. – nobody Sep 18 '20 at 14:21
  • But if a user has to download the "page", that is, the application file, why bother serving it as a web page? I would prefer to click on a link to download it, or even just download it from GitHub or an external repo. Then you can just check the integrity of the HTML file (no plugins will interfere, no headers, etc.) – reed Sep 18 '20 at 14:59
  • @reed Exactly the point I was trying to make in my last comment. Anyone who cares about verifying the integrity will prefer to simply download the application in whatever form and check the integrity once. Those who don't bother to verify the integrity will continue using the web app with a *false sense of security*, which is more dangerous than no security at all. – nobody Sep 18 '20 at 15:07
  • Sadly, I agree with 'I am afraid that for practical in-browser crypto we'll probably have to wait for browsers to implement a mechanism to verify the integrity of the pages they receive' - although it's worth noting that a number of in-browser crypto services (e.g. Protonmail) appear to be gaining traction, even with people who take security seriously. What are your thoughts on whether a mechanism to verify the integrity of received pages (and possibly headers) can be implemented as a browser extension? (Of course, this introduces another trust requirement...) – mti2935 Sep 18 '20 at 15:38
  • @reed and nobody, downloading the page, storing it, and running it locally is an interesting idea - and yes, this eliminates the need to re-verify its integrity each time. The one complication that I see is that this would prevent CSP from being applied (because no headers are served when opening the page locally), which could possibly open the door to XSS attacks. Also, I believe the server would then have to include CORS headers in the XHR responses to relax same-origin policy, as the XHR requests from the locally hosted page to the https server would be cross-origin. – mti2935 Sep 18 '20 at 16:27
  • @mti2935 CSP can be applied using a `<meta http-equiv="Content-Security-Policy">` tag – nobody Sep 18 '20 at 19:02
  • @nobody I never knew that it was possible to implement CSP using a meta tag. I've accepted this answer, because you've included several useful ideas (and important caveats to consider) in the answer and the ensuing comments. Thanks for responding. – mti2935 Sep 19 '20 at 23:48

It is possible to do this via browser extensions, and it is not necessary for the user to download the signed web application and run it locally.

You already have a trusted reviewer/trusted third party (TTP) who does a code review and signs the web app along with all (transitive) subresources. That party can also publish a browser extension that does the verification on the fly. The hashes that are checked against might be built into the extension or downloaded regularly from a server of the TTP.

There are a lot of hooks provided to such browser extensions to control the execution of requests. The trouble is that browsers start to execute code as soon as it arrives, while a request is not yet complete. That means the web app must be written in such a way that execution starts only after the request is received and validated. The validation can be done through this (if it were implemented anywhere; I haven't tested it) or by reading the page content in the onCompleted callback.

The trouble is that this might be insufficient. If the web app is misbehaving, it might not wait until the request is fully loaded before sending data off somewhere else. A CSP enforced through the extension might be the way to go there. Additionally, the extension might block (through the webRequest API) any network requests from the web app until the page is verified, as sketched below. If the verification fails, the extension can close the tab and show a notification saying that the web app source changed in an unexpected way, which might mean that the operators tried to sneak something in, or simply that some other extension manipulated the source.
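A rough sketch of what such an extension's background script might look like, using Firefox's blocking webRequest API. The app URL, the pinned hash, and the overall flow are hypothetical assumptions, and this glosses over the timing problems described above:

```javascript
// Hypothetical sketch: hold back all network traffic from the web app's
// tab until the loaded page has been hashed and checked against a value
// pinned by the TR.
const EXPECTED_HASH = '9f86d081...'; // published by the TR (placeholder)
const verifiedTabs = new Set();

async function sha256Hex(text) {
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(text));
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, '0')).join('');
}

// Cancel any request an unverified page tries to make.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.type !== 'main_frame' && !verifiedTabs.has(details.tabId)) {
      return { cancel: true };
    }
  },
  { urls: ['https://app.example/*'] },
  ['blocking']
);

// Once the main document has loaded, read its source and verify it.
browser.webRequest.onCompleted.addListener(
  async (details) => {
    const [source] = await browser.tabs.executeScript(
      details.tabId, { code: 'document.documentElement.outerHTML' });
    if (await sha256Hex(source) === EXPECTED_HASH) {
      verifiedTabs.add(details.tabId);
    } else {
      browser.tabs.remove(details.tabId); // close the tab and warn the user
    }
  },
  { urls: ['https://app.example/*'], types: ['main_frame'] }
);
```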

Sadly, it doesn't seem to be possible to look at the actual received response through the extension API, which means the web app might be sneaky, running some JavaScript and immediately removing the evidence that any JavaScript was executed. I'm not sure there is a way to disable JavaScript for the short window before the page is validated.

If this is solvable, then you've added three more trusted third parties, because you now trust Google, Apple, and Mozilla as the operators of their respective web extension stores.

If it works, this has great user experience, because new versions of the web app can be reviewed by the TR and the respective hashes added to the extension to be verified.

Artjom B.

Based on the ideas in this question and its responses, I've developed a small browser extension for Firefox (called Page Integrity) that verifies the integrity of web pages containing browser-crypto code.

[Screenshot: Page Integrity showing integrity information for an example page at https://www.pageintegrity.net/signedpage.html. The SHA-256 checksum hash of the page source is shown. The page is also signed with three signatures; the signers' public signing keys for these signatures are shown as well.]

Clicking the button in the browser toolbar shows integrity information about the page loaded in the browser's active tab, including the SHA-256 checksum hash of the page source. Additionally, if the page source is signed with one or more digital signatures, Page Integrity will verify the signatures and show the signers' public signing keys.

For more information, see https://www.pageintegrity.net/.

mti2935

I read this post about Signal claiming they can use SGX so that A can remotely attest that B runs trusted code: https://signal.org/blog/secure-value-recovery/

I don't know whether they found a new theoretical concept, or whether this is just a stack of protections with the hope that they don't all fail. The technical details are not trivial.

Do you think it could be a solution?

Sibwara
  • If you are interested, you should see [Matt Green's thoughts on SVR](https://blog.cryptographyengineering.com/2020/07/10/a-few-thoughts-about-signals-secure-value-recovery/). The *"So how has SGX done so far?"* part is... scary. In any case, I don't think this will solve the browser crypto problem. – nobody Feb 20 '21 at 10:45