
Threat model:

A malicious user gains physical access to browser cookies (e.g., a third-party repair technician copies the cookies to their own device). Assume the legitimate user did not clear cookies beforehand.

Possible mitigation:

Pre-authorize user device:

  • Get the user's browser signature via JS and save it in a DB for later use (`dbCopy`). This is done by an "admin" account physically present at the device before a "regular" user can use it. (A client-side sketch of the signature collection follows.)
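
For illustration, a minimal sketch of what that signature collection might look like in the browser, assuming the signature is built from common `navigator`/`screen` properties (the question does not specify the exact inputs; `collectBrowserSignature` is a hypothetical helper):

```typescript
// Hypothetical client-side fingerprint collection. The question does not
// specify which properties make up the signature; these are common examples.
async function collectBrowserSignature(): Promise<string> {
  const parts = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
  ];
  // Hash the concatenated properties with the Web Crypto API.
  const data = new TextEncoder().encode(parts.join('|'));
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```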

Now, in every HTTP request from the user, the web app will (a server-side sketch follows the list):

  1. Get browser sig via JS (`currentRequestSig`)
  2. Set cookie `hash = hash('sha256', currentRequestSig + randomToken)` in the user's browser
  3. Store a copy of the hash and `currentRequestSig` in the app's DB
  4. Get the cookie hash
  5. Make sure the `currentRequestSig` associated with the cookie hash is the same as `dbCopy`
  6. Make sure the hash has not been used before
  7. Mark the hash as used in the DB
  8. Delete the hash cookie
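
Putting steps 2-7 together, a minimal server-side sketch in TypeScript/Node, using in-memory `Map`s as stand-ins for the app's DB (all names here are illustrative, not from the question):

```typescript
import { createHash, randomBytes } from 'crypto';

// In-memory stand-ins for the app's DB tables (illustrative only).
const hashRecords = new Map<string, { sig: string; used: boolean }>();
const preauthorizedSigs = new Map<string, string>(); // userId -> dbCopy

// Steps 2-3: derive the one-time cookie hash and store a copy server-side.
function issueCookieHash(currentRequestSig: string): string {
  const randomToken = randomBytes(16).toString('hex');
  const hash = createHash('sha256')
    .update(currentRequestSig + randomToken)
    .digest('hex');
  hashRecords.set(hash, { sig: currentRequestSig, used: false });
  return hash; // sent to the browser as the cookie value
}

// Steps 4-7: verify the cookie hash against the pre-authorized signature.
function verifyCookieHash(userId: string, cookieHash: string): boolean {
  const record = hashRecords.get(cookieHash);   // step 4
  const dbCopy = preauthorizedSigs.get(userId);
  if (!record || !dbCopy) return false;
  if (record.sig !== dbCopy) return false;      // step 5: signature mismatch
  if (record.used) return false;                // step 6: replay of a used hash
  record.used = true;                           // step 7: mark as used
  return true;                                  // step 8: caller deletes the cookie
}
```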

✓ Browser is verified at this point; the user can proceed with using the app

Threats mitigated

  • If a malicious user copies the cookie hash to his own device, it will not work because of the browser sig mismatch. This assumes the browser sig is hard to replicate.

  • If the malicious user disables JS, effectively skipping steps 1-3 (an HTTP request therefore begins at step 4 with a copied cookie), the copied cookie will still not work because it has already been used (step 6 will invalidate it).

My question is: is this a viable solution to mitigate physical cookie theft, or is it overkill and am I missing something?

  • You are trying to defend against an attacker that already has user or even admin privileges. That is going to be tricky if there is no hardware security approach involved, but that would be infeasible in a web context. Here, a fault is that the JS can't be trusted by the server. Attacker could manipulate browser to create a predictable signature, then terminate session to force new pre-authorization? Even better, currentRequestSig can be obtained remotely by *any* website that the browser visits… – amon Jan 19 '21 at 07:13
  • The Brave web browser is developing a feature to randomise the browser’s fingerprint. Some privacy extensions to other browsers do similar things. Do you want to break your website for privacy-aware users? – Mike Scott Jan 19 '21 at 07:23
  • @amon This assumes that pre-authorization code was done by an "admin" account so it can't be done by the malicious user. Not sure why `currentRequestSig` being obtained by any website will affect this though? – IMB Jan 19 '21 at 08:00
  • @MikeScott This app is actually an internal app, not made for general public use. – IMB Jan 19 '21 at 08:03
  • I think part of the problem in this discussion is that it's not entirely clear what messages are sent when in your protocol. E.g. the `hash` is generated in the browser, but also stored in the server's DB, and can only be used once. Drawing a diagram would clarify this for you. – amon Jan 19 '21 at 09:26

1 Answer


This is almost useless for preventing cookie theft. You are relying on a non-secret property of the browser (every site you visit knows your browser signature) to attempt to provide security. This will just not work. All this will do is force the person copying the cookies to carry out one more step¹: grab a copy of the browser fingerprint as well. Remember that client-side JavaScript can not only be disabled, it can also be modified. The attacker can simply replace the `currentRequestSig` function with one that always returns the victim's browser signature.
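
To make that concrete, a sketch of such an override, assuming the signature is assembled from standard `navigator`/`screen` properties as in the hypothetical collector above (run in the attacker's own browser before the app's JS executes):

```typescript
// Every property the fingerprinting code reads can simply be redefined.
// Defining the property on the instance shadows the prototype getter.
Object.defineProperty(navigator, 'userAgent', {
  get: () => 'Mozilla/5.0 (victim user agent string, copied earlier)',
});
Object.defineProperty(screen, 'width', { get: () => 1920 });
// ...and so on for every input to currentRequestSig, so the device check
// sees exactly the victim's signature.
```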


¹ This assumes there is some rate limiting implemented server-side that prevents simply brute-forcing the browser signature.

  • Isn't the scenario you describe mitigated by step 6 above? – IMB Jan 19 '21 at 08:05
  • @IMB No, it isn't. Once the attacker has the browser fingerprint, they can create a new hash by appending a new random token and hashing it. – nobody Jan 19 '21 at 08:11
  • What if the random token in step 2 is a random secret provided in the server side that the attacker can't just generate on his own? – IMB Jan 19 '21 at 08:18
  • 1
    @IMB What stops the attacker from requesting the random token from the server? – nobody Jan 19 '21 at 08:19
  • Well this assumes the only way to get a new hash is to go through all of steps 1-8 (which happen in a single request). I guess I am trying to get my head around how an attacker can request a new hash without going through steps 1-8 on an unauthorized device, i.e. his own device? – IMB Jan 19 '21 at 08:27
  • 1
    @IMB The only thing different about the unauthorized device is that it has a different browser fingerprint. Since the attacker has obtained a copy of the authorized fingerprint, they can present the server with the stolen fingerprint instead of their own, and the server won't be able to tell the difference between the devices. – nobody Jan 19 '21 at 08:36
  • Yeah I get that part. But I don't quite get how an attacker can bypass step 6 because as soon as you get a new hash, it's already marked as used? – IMB Jan 19 '21 at 08:40
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/118630/discussion-between-nobody-and-imb). – nobody Jan 19 '21 at 08:46