
I have a feature that would require the user to be able to provide the URL of a custom script; the URL would be stored in a cookie and the script incorporated into subsequent responses.

This, of course, immediately raises concerns about cross-site scripting attacks. Over HTTP, however, no third party can manipulate the cookies* without performing a man-in-the-middle attack, at which point it is easier to inject scripts into the page directly, so this feature does not further compromise the integrity of the site.

* assuming all ports of all subdomains of the originating domain are trusted, since cookies do not fall under the normal same-origin regulation

With HTTPS, however, one would expect that, given appropriate encryption and certificate validation, cookies, like any other part of the communication, can be trusted to come from the user, whether set indirectly through the server or directly through a dedicated user-agent interface, eliminating the risk of a man in the middle.

It turns out this is not the case: browsers can be instructed not to send cookies that were set over secure channels on insecure channels (using the Secure attribute), but they readily send cookies obtained over insecure channels on secure channels.
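
To make the asymmetry concrete, here is a minimal sketch of the two Set-Cookie headers involved (the `script_url` cookie name and the URLs are invented for the example):

```typescript
// Illustrative only: "script_url" and the URLs are made-up examples.
// A cookie set over HTTPS can be marked Secure, so it is never sent in clear text:
const secureSetCookie =
  "script_url=" + encodeURIComponent("https://example.org/custom.js") +
  "; Secure; Path=/";

// But a plain-HTTP response (e.g. from a man in the middle) can set a cookie
// with the same name and no Secure attribute, and the browser will happily
// include that cookie on subsequent HTTPS requests as well.
const forgedSetCookie =
  "script_url=" + encodeURIComponent("http://attacker.example/evil.js") +
  "; Path=/";

console.log("Set-Cookie: " + secureSetCookie);
console.log("Set-Cookie: " + forgedSetCookie);
```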

So, for example, an attacker, diverting traffic around a wireless access point, could lure a victim into requesting a page in the given domain through HTTP (by clicking a link, displaying an embedded resource like an image, or redirecting to it), respond with a forged cookie, and have the victim use the forged cookie while communicating with the server using HTTPS (injecting a malicious script, fixating her session etc.).

Considering all these, I have several questions:

  1. Is the above understanding correct, and should a server communicating with the client through HTTPS consider all cookies to be possibly compromised, only using them in a way which prevents compromising the security of the site (allowing only a selection of trusted predefined scripts, regenerating the session id, etc.)?

  2. Is there any way, within the available framework of HTTP, to ensure the integrity of cookies over a secure connection that I haven’t thought of? (Perhaps using cryptography?)

  3. Is there any standardization effort ongoing to solve this problem that I just haven’t heard of? If not, how come browser vendors, who go out of their way to apply the same-origin policy wherever they can and alert users to the risks of mixed content, do not address this weakness in the protocol?

Joó Ádám
  • Note: the same vulnerability is present when using the double submission technique to prevent CSRF attacks, see details in this answer: http://security.stackexchange.com/questions/59470/double-submit-cookies-vulnerabilities/61039#61039 – Joó Ádám Sep 18 '15 at 21:03
  • I think that any solution to the problem as stated will lend itself to social engineering attacks. An attacker says "here, use my JS code, it does all these wonderful things" but it has a trojan inside it that does something really bad. The script won't have any SOP restrictions. This seems very bad. – Neil Smithline Sep 18 '15 at 22:46
  • @NeilSmithline the vector you describe is the same as asking the user to run an arbitrary native executable. There’s not much one can do about it. Therefore I don’t really worry about that, but I do worry about the possibility that someone can install a script without the user’s knowledge. – Joó Ádám Sep 18 '15 at 23:34
  • You get pop-ups when you download and install a binary. You'll get no dialogs for downloading and "installing" a script. I'm concerned this functionality is risky. – Neil Smithline Sep 18 '15 at 23:44
  • The form through which the user would install the script (by storing its URL in the cookie) could well describe the risks of running untrusted scripts. The specific use case is not the point of the question; it is the integrity of cookies in general. – Joó Ádám Sep 18 '15 at 23:52
  • The HttpOnly flag is designed to help prevent scripts from interacting with a cookie. – k1DBLITZ Sep 23 '15 at 17:32
  • @JoóÁdám: Do you know of the white paper published earlier this month regarding this? Zheng et al., 2015-09, USENIX 24, [*Cookies Lack Integrity: Real-World Implications*](https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/zheng). See also: [Vulnerability Note VU#804060](https://www.kb.cert.org/vuls/id/804060) – StackzOfZtuff Sep 25 '15 at 18:08
  • @StackzOfZtuff, no, I haven’t seen that, thanks for the link! Hopefully this will bring some attention to the problem. – Joó Ádám Sep 25 '15 at 18:52
  • @JoóÁdám: The second link there cites HSTS, which is the generally accepted solution. I think that's all you're going to get in bringing attention to the problem - it's already well known within the security community that cookies occupy a shared origin, and it can't be straight-up fixed because it would break many things. – SilverlightFox Sep 26 '15 at 07:58
  • @SilverlightFox, one could well come up with a backwards-compatible solution. If one doesn’t want to mess with the Cookie header (because it can break things), then one can add an additional header, say, Reliable-Cookies, which lists the names of cookies set with strict same-origin rules. Do you see any potential downside to that? – Joó Ádám Sep 26 '15 at 10:16
  • @JoóÁdám: Reminds me of the `Set-Cookie2` header. That added port and comment functionality. It never caught on so it was obsoleted. – SilverlightFox Sep 26 '15 at 10:35
  • Well, the past two decades made it clear that there is no point in standardizing features without existing implementations in browsers… – Joó Ádám Sep 26 '15 at 10:47
  • And it took only two months: https://groups.google.com/a/chromium.org/forum/m/#!topic/security-dev/2PK3q_VE1rg You’re welcome :) – Joó Ádám Nov 28 '15 at 22:37

2 Answers


Generally speaking, a server shall consider every incoming data element as potentially hostile. There is a definition issue here when you talk about "site integrity": this can be about the integrity of the site as the user sees it, or the integrity of the site as other users see it. If all the server does is send the cookie back to the client who sent it, as a script to execute, then the problem is simpler.


As I understand it, you fear the following scenario:

  • The site allows a user U to upload some JavaScript, which your server will "store" and serve back to the same user U. This uploading goes through a properly secured process (HTTPS, user authentication...). The storage is not actually done on the server, but as a cookie value in the user's own browser.

  • By exploiting the user's gullibility, or hijacking the user's network access, the attacker succeeds in pushing some hostile Javascript H into the user's browser, attached (as a cookie value) to your server name.

  • When the user U next connects to your server, the hostile Javascript H is sent to your server (as a cookie) and the server sends it back to the client, to be executed.

Described that way, your problem really is one of recognizing "official" cookies that have been pushed to your server through the normal process (with HTTPS and user authentication). Cryptography can help, with a MAC: you want to tag a cookie value, such that only your server can compute a proper tag and verify it. That way, your server would reject cookie values that have not been previously recorded through the secured registration process. In the scenario above, the hostile Javascript H value would lack a proper MAC value (because the attacker does not know the MAC key used by the server), so the server would refuse to send back the cookie contents as a script to execute.
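
A minimal sketch of such a MAC tag, using Node's built-in crypto module (the key handling and the `value.mac` cookie encoding are assumptions made for illustration, not a prescribed format):

```typescript
// Tag cookie values with an HMAC so the server only accepts values it issued itself.
import { createHmac, timingSafeEqual } from "crypto";

// Server-side secret; in practice this would come from configuration, not source code.
const MAC_KEY = process.env.COOKIE_MAC_KEY ?? "change-me";

// Produce "value.mac" to be stored in the cookie.
function tagValue(value: string): string {
  const mac = createHmac("sha256", MAC_KEY).update(value).digest("hex");
  return `${value}.${mac}`;
}

// Return the original value if the tag verifies, or null for anything the server never issued.
function verifyTaggedValue(tagged: string): string | null {
  const i = tagged.lastIndexOf(".");
  if (i < 0) return null;
  const value = tagged.slice(0, i);
  const mac = Buffer.from(tagged.slice(i + 1), "hex");
  const expected = createHmac("sha256", MAC_KEY).update(value).digest();
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) return null;
  return value;
}
```

Note, as the comments below point out, that such a tag only proves the value went through the server's registration process; unless a user identifier is also bound into the tag, an attacker could register a value of his own and inject that validly tagged cookie into the victim's browser.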

In all generality, what you are trying to do is to offload server state onto clients. This can be done with:

  • a MAC to preserve integrity (against clients who modify their cookies, willingly or not);
  • encryption, if the state contents must not be shown to the user (this does not apply to your specific case, as you describe it).

See this question for some details.

Take care that the size of an individual cookie, and the number of cookies recorded by a browser for a given site, are both subject to browser-specific limitations.

Tom Leek
  • You got it right, but to further clarify the objective: I would like to provide a way for users to customize the site without storing state on the server. I thought of a MAC, but without authenticating the user and storing the identifier in the tag, I cannot see how to prevent an attacker from requesting a valid cookie for himself and injecting that into the victim’s cookie store. – Joó Ádám Sep 18 '15 at 20:35
  • Tom, thank you for the answer, I will mark yours as accepted (see my comment on SilverlightFox’s answer). – Joó Ádám Sep 24 '15 at 20:02

Yes, your understanding is correct.

Short answer - set an HSTS policy (HTTP Strict Transport Security). Once this is set for a browser instance, any plain HTTP connection made by that browser will automatically be upgraded to HTTPS.

Note that this only takes effect after the first visit, when the HSTS HTTP header is received over HTTPS. To guard against attacks that set the cookie before a given browser's first visit, you can apply to have your domain included in the HSTS preload list. This list is included in builds by major browser vendors and doesn't require a first visit to a domain to enable HSTS for any preloaded domain.

Using HSTS preload means that every subdomain on your site will need to use HTTPS - this is good practice anyway, because, as you noted, the Same Origin Policy for cookies is not very strict when it comes to domains.

The above approach will prevent a Man-In-The-Middle from injecting cookies into your site by intercepting any plain HTTP request and redirecting it to your domain, and will prevent subdomain attacks.
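
For reference, a minimal sketch of emitting the policy header from a Node HTTPS server (the certificate paths and the one-year max-age are placeholders; the header only takes effect when delivered over HTTPS):

```typescript
// Serve over HTTPS and send the HSTS policy on every response.
import * as https from "https";
import { readFileSync } from "fs";

const server = https.createServer(
  { key: readFileSync("server.key"), cert: readFileSync("server.crt") },
  (req, res) => {
    // One year, applied to all subdomains, and eligible for the preload list.
    res.setHeader(
      "Strict-Transport-Security",
      "max-age=31536000; includeSubDomains; preload"
    );
    res.end("hello over HTTPS\n");
  }
);

server.listen(443);
```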

The flow with HSTS

Without preload

First HTTP visit --> Server redirects to HTTPS --> HSTS Set
--> Cookie containing JavaScript set

Next HTTP Visit | browser upgrades to HTTPS --> Safe JavaScript in cookie runs

With HSTS preload

First HTTP visit | browser upgrades to HTTPS --> ...

Any attacker redirecting the user by a MITM attack

Without HSTS

User requests a plain HTTP site --> HTTP 3xx redirect to your site --> Evil cookie set over plain HTTP
User requests your site over HTTP --> Server redirects to HTTPS --> Malicious JavaScript is run

With HSTS

User requests a plain HTTP site --> HTTP 3xx redirect to your site | browser upgrades connection to HTTPS --> Attack thwarted as attacker cannot MITM the HTTPS connection

Even with Secure cookies, the server cannot query whether the Secure flag has been set - all the server gets is the name/value pair. Therefore, there is no way to know whether the cookie has been poisoned. HSTS is an effective solution because it cuts off the insecure, plain HTTP channel.
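
A small sketch of the server-side view (illustrative Node code): the incoming Cookie header carries nothing but bare name/value pairs, with no attributes and no record of which channel each cookie was set over:

```typescript
// The Cookie request header looks like:
//   Cookie: script_url=https%3A%2F%2Fexample.org%2Fcustom.js; theme=dark
// Nothing here says whether a cookie was set with Secure, or over HTTP or HTTPS.
import * as http from "http";

http.createServer((req, res) => {
  const raw = req.headers.cookie ?? "";
  const cookies: Record<string, string> = {};
  for (const pair of raw.split(";")) {
    const idx = pair.indexOf("=");
    if (idx > 0) {
      cookies[pair.slice(0, idx).trim()] = decodeURIComponent(pair.slice(idx + 1).trim());
    }
  }
  res.end(JSON.stringify(cookies) + "\n");
}).listen(8080);
```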

Note that the question says that a URL is stored in the cookie, not JavaScript directly - however, including JS code from a third-party domain has exactly the same effect: the code runs in the origin of the requesting domain.

If you need plain HTTP support

The reason you need HSTS is that the Same Origin Policy for cookies is more lax than the one applied to the DOM and to AJAX requests:

cookies have scoping mechanisms that are broader and essentially incompatible with same-origin policy rules (e.g., as noted, no ability to restrict cookies to a specific host or protocol) - sometimes undoing some content security compartmentalization mechanisms that would otherwise be possible under DOM rules.

Therefore the only way to protect the cookie containing your JS code securely is to have plain HTTP support only on a completely different domain, e.g. https://example.org with HTTPS and HSTS for your secure content and the cookie, and http://example.com for your non-secure content.

An alternative to cookies

If you cannot do the above, then why not use HTML5 Session Storage instead of cookies, and only set or read the value over HTTPS? This method will allow you to store the JavaScript URL in the browser and run it as needed, without the possibility of a MITM intercepting and modifying the value over plain HTTP:

separation means that a value saved to LocalStorage on http://htmlui.com cannot be accessed by pages served from https://htmlui.com (and vice versa). [*]
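
A minimal browser-side sketch of that approach (the storage key and helper names are invented for the example): the value never leaves the browser, and the http:// and https:// origins get separate storage, as the quote notes:

```typescript
// Store and read the script URL only on pages served over HTTPS.
const SCRIPT_URL_KEY = "customScriptUrl";

function saveScriptUrl(url: string): void {
  if (location.protocol !== "https:") return; // never touch it on plain HTTP
  sessionStorage.setItem(SCRIPT_URL_KEY, url);
}

function loadCustomScript(): void {
  if (location.protocol !== "https:") return;
  const url = sessionStorage.getItem(SCRIPT_URL_KEY);
  if (!url) return;
  const script = document.createElement("script");
  script.src = url; // the user-supplied script runs in this page's origin
  document.head.appendChild(script);
}
```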

SilverlightFox
  • HSTS, of course, is only a solution if there is no requirement to support plain HTTP. – Joó Ádám Sep 22 '15 at 11:37
  • Protecting against injected JavaScript is not possible at all with plain HTTP. – SilverlightFox Sep 22 '15 at 11:39
  • Of course. If someone decides to use the http:// version of a site for whatever reason, then he takes that risk. The problem with the current state of cookies is that we cannot guarantee the security of a user even if he visits through https:// exclusively, because the mere fact that http:// is available opens the possibility to inject a cookie. It sucks. – Joó Ádám Sep 22 '15 at 12:09
  • If you need to support both HTTPS and plain HTTP, then I would recommend a completely different domain for your plain HTTP content. e.g. `http://example.com` for plain and then `https://example.org` with HSTS for secure. Otherwise you are open to MITM cookie poisoning attacks, and if you're messing around with JavaScript URLs in cookies then you have no option but to support HSTS on an HTTPS only domain. – SilverlightFox Sep 22 '15 at 12:48
  • I will probably drop this requirement, because it’s too risky and not a priority, but yeah, a different TLD for HTTP and HSTS on the secure domain would be a solution. – Joó Ádám Sep 22 '15 at 19:27
  • Need any further help or info? – SilverlightFox Sep 24 '15 at 07:43
  • No, thank you for the answer, both yours and Tom’s would be a good solution if not for my specific requirements (no authentication, plain HTTP supported), but I can use neither of them. I will mark Tom’s answer as accepted simply because he was the first, and I have to choose one. But again, thanks! – Joó Ádám Sep 24 '15 at 20:00
  • Answer updated with an alternative method for you. – SilverlightFox Sep 26 '15 at 07:53
  • I thought of it, but session storage is a *relatively* new feature, and the reasoning behind the functionality would be to provide a user with the ability to apply fixes even on a legacy browser. Hence cookies and server-side processing. But now that you bring it up, I may give it a second thought, as additional functionality for capable browsers… – Joó Ádám Sep 26 '15 at 10:28