46

The same-origin policy (SOP) makes browsers block scripts from one origin from interfering with another, unless explicitly told otherwise. But cross-site POSTs are still allowed, creating the vector for CSRF attacks. The usual defense is anti-forgery tokens.
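To make the vector concrete, here is a minimal sketch of the kind of page an attacker could host on another origin (the target site, path, and field names are all hypothetical). The browser sends the POST with the victim's cookies attached, and the SOP does nothing to stop it:

```typescript
// Hypothetical attacker page script: builds a form targeting another
// origin and auto-submits it. The browser attaches the victim's cookies
// for bank.example to the request. Site, path, and fields are made up.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://bank.example/transfer"; // cross-site target

for (const [name, value] of Object.entries({ to: "attacker", amount: "1000" })) {
  const input = document.createElement("input");
  input.type = "hidden";
  input.name = name;
  input.value = value;
  form.appendChild(input);
}

document.body.appendChild(form);
form.submit(); // sent cross-site, no preflight, cookies included
```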

But why don't browser developers just follow the SOP's philosophy when dealing with POSTs?

Andrada2
  • Just realized SOP doesn't block XSS (CSP does). It just blocks scripts from one origin from accessing data from another. But the question still stands. – Andrada2 Apr 18 '18 at 11:44
  • Why should browsers block only cross-site POST? CSRF is also possible with GET. Also, don't correct yourself only in a comment but fix the question by editing it. – Steffen Ullrich Apr 18 '18 at 11:50
  • @Steffen Ullrich thanks for the tip, I just edited my question. As for why GETs aren't blocked, that's another question whose answer I would love to read. – Andrada2 Apr 18 '18 at 12:07
  • What makes you think browsers don't do this? If you send a POST XHR to a different domain in Firefox, it will first send an OPTIONS request to the server, and if it doesn't like the CORS headers it gets on the response, it will refuse to send the POST. (I haven't tested this behavior in other browsers, but I'm very familiar with it in Firefox because I've spent the last few weeks building a server that does different things on different ports on the same domain, and apparently that's enough for Firefox to count it as "cross-origin.") (A sketch of this preflight behavior follows after these comments.) – Mason Wheeler Apr 19 '18 at 03:00
  • The current cross-origin blocking is already excessively strict. Any request without implicitly added cookies (or implicitly added HTTP auth headers) should be fine cross-origin. – CodesInChaos Apr 19 '18 at 18:19
  • @CodesInChaos or implicitly added source IP address? – curiousguy Jun 11 '18 at 06:25
  • @curiousguy An IP address is not sufficient authentication, regardless of cross-origin restrictions in a browser. For example different user accounts on the same machine usually share an IP address even if they don't trust each other. That's also why a server which only checks if the IP is localhost to allow privileged operations suffers from a local privilege escalation vulnerability. – CodesInChaos Jun 11 '18 at 16:39
  • @CodesInChaos Source IP addresses are very often used as a filter, often with an explicit security goal. Connections coming from an internal network are more trusted. Connections to a local IP address, not routable from the outside, are more trusted. – curiousguy Jun 11 '18 at 19:17
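A minimal sketch of the preflight behavior described in Mason Wheeler's comment above (the URL is hypothetical): because the JSON content type makes this a non-"simple" CORS request, the browser first sends an OPTIONS request and only sends the POST if the response carries acceptable Access-Control-* headers.

```typescript
// Non-simple cross-origin request: the "application/json" content type
// forces a preflight. The browser sends OPTIONS /v1/items with
// Access-Control-Request-Method: POST first, and aborts if the CORS
// headers in the reply don't allow it. The URL is made up.
const response = await fetch("https://api.example/v1/items", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "widget" }),
});
console.log(response.status);
```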

3 Answers

63

In theory your suggestion is perfectly reasonable. If browsers blocked all cross-origin POST requests by default, and it required a CORS policy to unlock them, a lot of the CSRF vulnerabilities out there would magically disappear. As a developer, you would only need to make sure not to change server state on GET requests. No tokens would be needed. That would be nice.
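For comparison, this is roughly what the server-side opt-in already looks like today for preflighted requests, and presumably what "unlocking" cross-origin POST would look like in that hypothetical stricter world. A sketch using Node's built-in http module; the allowed origin and port are made up:

```typescript
import { createServer } from "node:http";

// Answer the CORS preflight with explicit Access-Control-Allow-* headers
// so the browser agrees to send the actual POST. Only the listed origin
// is unlocked; everything else stays blocked. Origin/port are made up.
createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://trusted.example");
  if (req.method === "OPTIONS") {
    res.setHeader("Access-Control-Allow-Methods", "POST");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type");
    res.writeHead(204);
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok");
}).listen(8080);
```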

But that is not how the internet was built back in the day, and there is no way to change it now. There are many legitimate uses of cross-origin POST requests. If browsers suddenly changed the rules mid-game and forbade this, sites relying on the old rules would stop working. Breaking existing sites like that is something we try to avoid to the largest extent possible. Unfortunately we have to live with our past.

So is there any way we could tweak the system to work better without breaking anything? One way would be to introduce a new HTTP verb, let's call it PEST, that works just like POST except that all PEST requests are preflighted and subject to CORS policies. That is just a silly suggestion I made up, but it shows how we could evolve the standards without breaking them.
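For illustration only (PEST is this answer's invention and nothing standard defines it; the URL is made up): because PEST is not one of the CORS "simple" methods, even today's browsers would preflight a request like this before sending it.

```typescript
// A made-up verb is automatically non-simple, so the browser sends an
// OPTIONS preflight and only issues the PEST request if the server's
// CORS headers explicitly allow the method. URL is hypothetical.
const res = await fetch("https://api.example/transfer", {
  method: "PEST",
  body: new URLSearchParams({ to: "alice", amount: "10" }),
});
```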

Anders
  • Is there any work in progress with the intention to implement this "PEST" idea? – Andrada2 Apr 18 '18 at 16:31
  • @Andrada: you could already use a PEST method within XHR if you allow it on the server. But don't expect this method to be a possible action for an HTML form, because if that were the case then trivial use of PEST might be classified as a *simple request* in the CORS specification (i.e. could be achieved without script) and would therefore not be subject to preflight requests. IMHO the better way to deal with this is the one I've outlined in my answer: don't include authentication information in cross-site requests, no matter if POST, GET or whatever method is used. – Steffen Ullrich Apr 18 '18 at 16:38
  • @Andrada No, the PEST method was something I just dreamed up. My purpose was to highlight how to retrofit existing systems without breaking them. I don't think there is any serious work on this, but as Steffen says, nothing stops you from using arbitrary HTTP verbs. I'm not sure I would recommend it though, since it is... well... non-standard. – Anders Apr 18 '18 at 17:38
  • “how *the internet* was built back in the day” do you maybe mean the web? – Andrea Lazzarotto Apr 18 '18 at 20:31
  • Counter-example: Flash was heavily used back in the day, has security issues, is blocked in modern browsers by default, and this causes many websites to stop working. I'm not sure I see any particular problem with this. Note that "by default" means a user can still enable it if they choose to, either on a site-by-site basis (because they trust the site) or globally (because they aren't concerned with this security issue). – Jon Bentley Apr 18 '18 at 23:25
  • Standards and implementations have evolved since the first web browsers (e.g. CSS box model) and browsers came with solutions: doctypes, quirks mode, "use strict" for JavaScript... Theoretically there could be an opt-in option for webmasters. – pyb Apr 18 '18 at 23:32
  • @JonBentley I would argue blocking Flash has a *much* lower impact than blocking cross-site POST. With Flash you're still on the same site, but cross-domain errors are basically impossible to log, and it isn't obvious what the client should be doing. Should it warn the end user? Then what? The originating site doesn't get feedback that a cross-site POST was denied on the client side. If the client reports back to the originating site, then that's a completely new browser behavior. It's a logistical nightmare. – Nelson Apr 19 '18 at 01:46
  • `But that is not how the internet was built back in the day, and there is no way to change it now.` ...and that's one of the main reasons we don't have IPv6 yet :( – xDaizu Apr 19 '18 at 06:35
  • @xDaizu it's getting there, at this rate we'll have full adoption by the turn of the millennium! :D – James T Apr 19 '18 at 07:04
  • Same-site cookies are also a new feature that could help with this. – Flimm Apr 19 '18 at 07:09
  • @JamesTrotter I studied IT in college around 15 years ago. I was told "Go and study IPv6 - it's going to be HUGE soon and you'll need to know it". I have yet to configure a single IPv6 address in a production environment! (NB: I'm not in networking, so I do appreciate they exist in the ether!) – Dan Apr 19 '18 at 10:04
  • Note that unlike POST requests, GET requests have a limit (rarely encountered) in how much data you can send... – Jared Smith Apr 19 '18 at 14:58
  • @xDaizu A lot of mobile network operators and startup ISPs use IPv6. This is solely because they couldn't get an IPv4 address. For backwards compatibility, they use some horrendous form of cross-protocol NAT that somehow manages to work. – user253751 Apr 20 '18 at 05:52
24

The problem is not the request method: CSRF could also be done with a GET request. The problem is instead that authentication information like (session) cookies or the Authorization header is automatically included with the cross-site request, thus making CSRF possible. Therefore the mitigation would not be to prohibit such methods in cross-site requests, but instead to stop sending this authentication information with them.
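From the browser side, the fetch API already exposes this choice per request: a page decides whether credentials accompany a cross-origin call. A minimal sketch (the URL is hypothetical):

```typescript
// With credentials: "omit", cookies and HTTP auth headers are NOT
// attached, so even if the request reaches the victim server it cannot
// ride an existing session. "include" restores the dangerous behavior
// (and also requires Access-Control-Allow-Credentials on the server).
await fetch("https://other.example/api/action", {
  method: "POST",
  credentials: "omit",
  body: new URLSearchParams({ action: "demo" }),
});
```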

With cookies there is a proposal for a samesite flag which would make sure that the cookie is not sent with cross-site requests. Unfortunately the flag is currently only available in Chrome, but it will become available in Firefox with v60 in May 2018. It would have been much better if this restriction were enabled by default and had to be explicitly relaxed to be less secure (like in CORS), instead of being insecure by default. But this would mean a serious change to the current behavior and would probably break many existing applications.
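Setting the flag is a one-line change on the Set-Cookie header. A sketch using Node's built-in http module (the cookie name and value are placeholders):

```typescript
import { createServer } from "node:http";

// With SameSite=Strict the browser refuses to attach this cookie to
// cross-site requests, so a forged POST from another origin arrives
// without the session cookie and the CSRF attempt fails.
createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    "session=abc123; SameSite=Strict; Secure; HttpOnly; Path=/"
  );
  res.end("logged in");
}).listen(8443);
```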

Steffen Ullrich
  • I don't think the way we handle authenticated sessions is the root problem. Of course, those are the most wanted targets, but I think that the mere fact that one can make a POST from any origin is a vulnerability by itself. – Andrada2 Apr 18 '18 at 16:41
  • @Andrada: You are specifically asking about cross-site POST as a way to perform CSRF attacks, and in this context the relevant part is the authentication. To cite from [OWASP](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)): *"Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they're __currently authenticated__"*. – Steffen Ullrich Apr 18 '18 at 17:00
  • Making this default would also fix a huge portion of tracking/adtech abuse. – R.. GitHub STOP HELPING ICE Apr 20 '18 at 01:50
  • @Andrada2 why do you think it's a vulnerability by itself? – Michael Gummelt Feb 07 '20 at 20:19
9

I partly disagree with Anders on

But that is not how the internet was built back in the day, and there is no way to change it now.

The developers of major browsers have quite a lot of power to change the Internet and to guide web developers in the direction they want. Obsoleting cross-site POST data would be possible if it were seen as a major threat. There are examples of such progress on other things, although it is neither sudden nor fast:

  • Flash. While it was formerly seen as the future of the web, major browsers have announced they will drop support for it, and web developers are adjusting.

  • HTTPS has slowly been forced by the browsers, with small steps toward warning that plain HTTP is insecure. We may eventually see a world where plain HTTP is slowly suffocated to death.

I'd like to see this develop toward prioritizing security over compatibility more widely. Naturally, such a big change would not be something to do overnight, but by first offering alternatives and then discouraging the old behavior. The path to achieve this could look like this:

  1. Introducing a Same-Origin Policy header for POST requests that allows explicit consent (see the sketch after this list).
  2. Starting to show a warning about a possible security problem on cross-site POSTs made without that consent.
  3. Sites still needing this functionality slowly start to adapt, to get rid of the warning.
  4. After a long transitional period, the enforcement could be made stricter.
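Purely as an illustration of step 1 (no such header is standardized; the name Accept-Cross-Site-Post is invented for this sketch), the explicit consent could be a response header that a compliant browser would check before delivering cross-site POSTs to the site:

```typescript
import { createServer } from "node:http";

// Hypothetical opt-in from step 1: the server declares which foreign
// origins may POST to it. A browser implementing the proposal would
// warn on (and eventually block) cross-site POSTs from other origins.
// The header name and origin are invented for illustration.
createServer((req, res) => {
  res.setHeader("Accept-Cross-Site-Post", "https://partner.example");
  res.end("ok");
}).listen(8080);
```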

Discouraging POST over plain HTTP is quite close to discouraging cross-site POST; both go against the standards. This is simply a conscious loss of backward compatibility in exchange for increased security.

Esa Jokinen
  • I'd counter that Flash was never a feature of web browsers, and I'd love to know if it was ever actually considered "standard". Many have suggested it was Apple not providing such an interface on iOS that killed Flash. I tend to agree on the point of HTTPS, only in that Mixed Content was only proposed in 2014 and was (relatively) rapidly adopted by browsers as a warning and then as a breaking change sometime later. But at that time, only a minority of sites on the internet were using HTTPS at all, and those that were would invest to make changes for security. – nbering Apr 18 '18 at 20:40
  • @nbering Flash was never a W3C standard. Most browsers treated it like a plugin/extension as opposed to a feature. This is why it had to be installed separately. Chrome baked Flash in for a while, but this was mostly so they could improve its security and ensure it would be updated regularly. Otherwise browsers never supported Flash directly. But they did support the ability to allow such technologies to work. And when browsers decided to opt for better security and lock down the platform, developers did have to adjust by re-coding their applets to HTML5 technologies or risk becoming obsolete. – Bacon Brad Apr 18 '18 at 21:55
  • Do note that HTTPS *is* HTTP wrapped inside a TLS tunnel, so the HTTP standard is still around, and wrapping a protocol in another is a relatively trivial change to make, but making a backwards-incompatible change to a protocol is not. – Randall Apr 19 '18 at 00:44
  • I'm aware of all this, and the examples are not there because of their technical similarities, but to embody the influence of major web browsers. I added some clarification on the means and timeline needed to achieve something like this: I'm not suggesting that any browser should immediately stop complying with the HTTP standards. – Esa Jokinen Apr 19 '18 at 09:22
  • @Randall "_HTTPS is HTTP wrapped inside a TLS tunnel_" + a specific URL scheme + "secure" cookies ... + all the specific requirements re: HTTPS. It isn't just invisible layering. (IPsec is invisible layering) – curiousguy Jun 12 '18 at 05:17