
Imagine that you have a web application that encrypts the user's data, such as a note or spreadsheet, on both the server and client.

The normal process for a user using this web application is something like this:

  1. The user logs in to the application with a login/password hash stored on the server, like a normal web application.
  2. The user enters an additional secret key that is used to encrypt the client-side data. The web application uses a client-side encryption library such as SJCL.

In this example let's just focus on the client side.

The situation is this: the server has been compromised and an attacker has access to the server-side keys. The attacker does not have the client-side keys, as these are never stored on the server.

The attacker now modifies the JavaScript so that, when the user enters the client-side key in the web application, the key is sent to the attacker's server. At that point the attacker has won.

I understand the usual assumption is that once the server is taken over, you've lost, but I would like to know if my thoughts below allow for a secure client-side solution.


The situation

The HTML is assumed to contain some JavaScript inside script tags, and a lot more JavaScript is loaded via external files that reside on the server. It's the JavaScript that runs the web application that is the problem: we have to assume that the attacker may have modified any of it, whether inline or external.

Possible solution?

I want to be able to generate a hash of all of the JavaScript loaded from my server. The hash will act as a fingerprint for the client-side code, and the user will be wary if the hash changes.

These are the two ways I have thought about so far:

  1. Take a hash of all files loaded by the client. This means requesting all of the included files again.

  2. Take a hash of all of the Javascript code in memory. (Can this even be done?)
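Option 1 could be sketched roughly as below. This is a browser-only illustration using `fetch()` and the Web Crypto API; it assumes the verifying function itself can be trusted, which is exactly the crux of the problem.

```javascript
// Hex-encode an ArrayBuffer digest for display/comparison.
function toHex(buffer) {
  return [...new Uint8Array(buffer)]
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

// Sketch of option 1: re-fetch every <script src> on the page and hash the
// concatenated sources. Note this re-downloads the files, and says nothing
// about inline scripts or code already running in memory.
async function fingerprintScripts() {
  const urls = [...document.querySelectorAll('script[src]')].map(s => s.src);
  const sources = await Promise.all(urls.map(u => fetch(u).then(r => r.text())));
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(sources.join('\n')));
  return toHex(digest); // the fingerprint the user would compare by eye
}
```

Of course, a compromised server can serve clean files to this re-fetch while the poisoned copies keep running, so this only raises the bar slightly.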

The common problem with both options is that whatever function actually does the hashing needs to be small enough that a concerned user can verify it's safe to use within a few seconds.

I am thinking that this hashing function loads into the browser like normal, and the user can type the function name in the console without the parentheses to see its code, and then again with () to run it.

Then the hash should be good enough to prove that the web application is in a state that the user has inspected in the past.

This could even become a plugin at some point, although I am determined to see if a native solution is possible.


Essentially what I am asking is, what methods exist that allow us to prove the integrity of the client's state?

Joseph
    [Do not use JS for crypto.](http://www.matasano.com/articles/javascript-cryptography/) – Tobi Nary Mar 21 '16 at 12:01
    No methods at all. Once you part with your code, you're at the hacker's mercy. – Deer Hunter Mar 21 '16 at 12:03
  • I'm guessing the only way to get client side encryption is to make a plugin, would you agree? This would prevent the client from having any control no matter what. – Joseph Mar 21 '16 at 12:03
    Not a duplicate but have a look at [Verify CDN javascript's integrity](http://security.stackexchange.com/questions/74424/verify-cdn-javascripts-integrity). – Steffen Ullrich Mar 21 '16 at 12:13
  • So where is the fingerprint posted? Why couldn't an attacker change that? Are you suggesting that the user remembers the hash they saw last visit and takes some action (not sure what) every time they visit the site and the hash has changed? – Neil Smithline Mar 21 '16 at 20:17
  • Could a solution be to pull the identical JavaScript from 2 or 3 different sources/servers coupled with Sub resource integrity? Assuming each remote source/server had different sign in credentials it would be pretty difficult for a hacker to hack all separate sources. A self checking system for each downloaded source would need to be implemented - each js source checks the integrity of the others before proceeding..... I haven't implemented anything like this but I've often thought that just trusting one source is dangerous. – user2677034 Jun 10 '19 at 02:30
  • See https://security.stackexchange.com/questions/238441/solution-to-the-browser-crypto-chicken-and-egg-problem for a few ideas around solutions to this problem. – mti2935 Jan 08 '21 at 18:00
  • are there any actual applications that have this behaviour, client side encryption based on a secret by the user ? – gaurav5430 Apr 21 '21 at 08:25

5 Answers


You can't be sure it hasn't been tampered with. An attacker is running code on your system - given sufficient effort, they can manipulate anything that happens within the browser context that you're running in (so, a plugin doesn't suffer the same way - it's in a different context).

Not all of the points in the Matasano link from @SmokeDispenser still hold, although the basic principle stands. Efforts such as the WebCrypto API are trying to address some of the problems, but are not yet mature - and even if they were, it wouldn't be possible to determine with certainty that the code was not doing something malicious alongside the expected behaviour.

Matthew
  • "*code was not doing something malicious at the same time as performing the expected behavior*" Cloud viruses are around the corner – Xenos Feb 08 '17 at 11:07

A web-page with JavaScript in it is essentially a small application that runs in a sandbox on your computer. Each time you visit the page you download the latest version of the application and run it. (Obligatory XKCD comic)

This means that if an attacker has control of your server and can supply poisoned code, then your problems are very similar to if your user has downloaded a spyware-ridden version of your software from a dodgy download site. Any protections you insert into your application can just be removed or bypassed by the attacker.

The only way you can keep a web application secure against an attacker who controls the server is if some part of your web app is stored on the user's computer. For example, this could be a downloaded file, or a data: URL bookmark. This piece of code would be loaded first, and could then contain enough logic to check the integrity of all the additional resources before execution - e.g. via subresource integrity or, in older browsers, by verifying the hash before passing the code to eval().

(I wrote a small sha256 implementation to play with this idea of bootstrapping from a data: URL, and even a module loader based on it for fun, but obviously wouldn't recommend actually using this in production.)

In short: if you want your users to just type in a URL and load your site, then this is entirely dependent on the security of the server. Even monitoring your own site might not help you if the attacker is targeting only particular users.

cloudfeet

If I've understood you right, you want to ensure that the code being supplied by the server matches some notion of recognized-as-good on the client. But for browsers, the only place which can supply content to the browser is the server - so your means of validation are delivered from the same source and via the same channel as the content you want to validate (as Matthew has said).

There is some scope to exploit this to your advantage if you can separate the times at which the two parts are delivered to the client (i.e. using different cache times, and having each half validate the other). But it's going to be far from foolproof.

JavaScript provides adequate reflection to make the validation straightforward (yes, you can read what's in JavaScript's memory). The problem is differentiating between the code which came as part of the page / was loaded by the page and what is built into the browser. The latter will vary by make and version. And as long as your code is calling out to browser-supplied code (e.g. to write stuff on screen) you need to be able to validate the browser code too. This is a problem, since it's simple to replace any JavaScript function (including the built-in ones) with something else:

_orig_write = document.write;
document.write = function (str) {
    send_data_to_evil_site(str);
    _orig_write.call(document, str); // preserve the original this binding
};

You can't rely on detection:

if ('function write() { [native code] }' != document.write.toString()) {
     alert("maybe the toString was changed too?");
}

You might want to have a look at transferring your JavaScript in signed jar files. While originally intended for giving JavaScript access outside its sandbox, the mechanism built into the browser for validating the content should be more robust than a homegrown solution - but then again do remember that this code can potentially have impact outside the sandbox (which might be a turn-off for any security-conscious customers).

symcbean

Validating the client-side code makes sense even if your server-side code was not compromised. If an attacker is able to modify code or inject new code, he can easily capture credentials or modify the markup of the page for phishing, and this is sufficiently severe to worry about.

About the solutions proposed so far:

  • Subresource integrity - only validates the integrity of third-party code, and only at load time. An attacker can inject inline code or poison existing code. Thus, SRI is not effective against this particular class of attacks; it is meant to detect when your CDN is compromised.
  • WebCrypto - it's nice to have standard crypto in the browser, but like any other native function available, it can be poisoned.
  • Other solutions proposed rely on their code being executed before a potential adversary's. The problem is that this is very hard to assure. That's why standards like CSP are carried in HTTP headers, which the browser by definition enforces first, before loading any JS. (BTW, CSP does not work against code poisoning either.)

There's no bulletproof solution. What you can do is raise the bar as much as you can, to mitigate most attacks and deter others.

I'm surprised no one has suggested JavaScript obfuscation. If the obfuscation is resilient enough, and even polymorphic, it can generate outputs that are infeasible to understand and sufficiently diverse. You can rotate protected versions periodically to achieve this. With that you eliminate automated poisoning targets, as the names, shapes, and even layout of the code keep changing. I'm assuming the attacker is remote to the browser (hence the need to automate the attack). There are also solutions today that produce self-defending code, which makes the code resistant to tampering and poisoning and increasingly complex to defeat.

To deal with modifications to the DOM specifically, you need something slightly different that is able to detect these modifications and remove them.

Carl Rck
    How can obfuscation help the OP? If the server is compromised, it may still deliver obfuscated, tampered code. – Tomas Langkaas Feb 08 '17 at 13:17
  • The OP stated: "Now the attacker needs to modify the Javascript to read the client side key when the user enters it in the web application (client side)." The code protection (and obfuscation) would be useful to make the analysis and manipulation of that code hard to accomplish. – Carl Rck Feb 10 '17 at 15:36
  • If the attacker fully controls the server, then the client-side has to verify everything that comes from the server. Similar to Subresource Integrity, but in this case for its own server. The enduser could be given a token (you can call it client key) that is able to verify the integrity of the code that is being executed. For instance he could insert that key, and the JS would compute a number of different hashes of the code to verify itself and then warn the user if something is off. The code protection would be useful here also. – Carl Rck Feb 10 '17 at 15:38
  • If the attacker replaces this code with something completely new, the verification mechanism would no longer be there, and that itself would be a warning for the user. Of course this isn’t perfect, but it raises the bar significantly more, when compared with other options mentioned. – Carl Rck Feb 10 '17 at 15:38
  • The OP threat model: "We have to assume that the attacker has modified any Javascript". Tampered client-side verification function (unobfuscated version): `function verifyClientCode (secretKey, code) {sendToServer(secretKey); return true;}`. Still, I don't see how obfuscation raises the bar for an attacker. It could indeed do the opposite; make it even harder to identify tampered code. Unless client side verification code involves code that does not originate from the server, the original problem will persist. – Tomas Langkaas Feb 10 '17 at 15:54

The OP asks if it is possible to prove that client side JavaScript is secure, in the case that the server has been compromised. As noted by others, as long as the server provides the client with the JavaScript code, it may be tampered with, including the code that is meant to verify that the code is secure.

This is already noted by the OP, who suggests that client-side code inspection could be used to verify it:

I am thinking that this hashing function loads into the browser like normal, and the user can type the function name from the console without the () so they can see the code, and then type again with () to run the code.

If the hashing function is provided by the server, this is again easily circumvented; try inspecting the harmful function below in the console:

function harmful(){
  /*evil code*/
}

harmful.toString = function(){
  return 'function harmful(){/*I am harmless*/}';
};

The main point is that it is not possible to verify the security of client-side code in the event of server compromise, as long as all client-side code is provided by the server. And JavaScript is flexible enough that harmful code can disguise itself as harmless upon inspection in the console.

  • While relevant your answer does not address the question asked and I believe it would be better suited as a comment. No downvote from me though – Purefan Feb 08 '17 at 12:24
    @Purefan, elaborated the answer, thought it was important to respond directly to the proposed solution by the OP, thanks for the feedback. – Tomas Langkaas Feb 08 '17 at 13:00