28

I've always read: Put validations in the backend. Frontend validations are for UX, not security. This is because bad actors can trick frontend validation. But I'm having a hard time wrapping my head around how a bad actor could trick it.

I never thought about it much; I just thought this meant someone could bypass the validations by making a request with something like Postman. But then I learned that with a same-origin policy that's not possible.* So how are these bad actors making same-origin requests?

The only other idea I can think of is bad actors can go into the code (ex: on DevTools) and edit the request there and make an edited request from the same site. Is that what they do?

What does tricking frontend validations look like in practice? How are they making a request that gets around CORS?

*Update: I was wrong here about what SOP is. It doesn't stop requests from other origins, e.g. from Postman. Many answers below clarify what SOP is for.

  • 26
    What happens if I turn off JS in my browser...? – Luke Jan 30 '21 at 14:22
  • 26
    Why do you think the same-origin policy has anything to do with Postman? – chrylis -cautiouslyoptimistic- Jan 30 '21 at 21:55
  • @chrylis-cautiouslyoptimistic- I thought when a server has a same-origin policy, it means that if the request isn't coming from the same origin, the server will stop the request from proceeding. Since it's from Postman and not coming from the same origin, I thought that meant the server would block it. – Dashiell Rose Bark-Huss Jan 30 '21 at 22:54
  • 19
    @DashiellRoseBark-Huss No; the server tells web browsers to block it. Everything coming from computers not under your control is potentially evil; don't trust it. – wizzwizz4 Jan 30 '21 at 23:21
  • 2
    @wizzwizz4 ok yeah I'm learning from this thread that I have CORS/ SOP all wrong. thanks – Dashiell Rose Bark-Huss Jan 31 '21 at 02:54
  • There are exceptions to this rule. For example, if the user is a student taking a test on your computer, at your proctoring site, monitored by your invigilators, then it might be possible to rely on frontend validation, but even then I cannot think of any good business reason to do so. – emory Jan 31 '21 at 15:47
  • To summarize: SOP is there for the benefit/security of the user, and is enforced by the user-agent (browser). It prevents scripts making requests / sending data to unexpected sites. Naturally the site has some interest in the security of the user, which is why it plays along, but it's nothing to do with validating what the user sends you. – Steve Jessop Jan 31 '21 at 16:08
  • SOP and CORS = the browser deciding that communication is acceptable. No browser, thus no SOP or CORS. – jmoreno Jan 31 '21 at 19:51
  • Front-end validation isn't some inaccessible mystery black-box. Your question is akin to how do humans use words to trick other word-speaking humans? – MonkeyZeus Feb 01 '21 at 18:52
  • I'm reminded of a company that tried to block access by non-customers to their camera's firmware by sticking `if(Password == 'supersecretpassword') location.href='https://www.example.com/supersecretdownloadurl/';` in their html. Perhaps they didn't like the idea of a researcher finding and presenting the vulnerabilities in their camera software to a hacker convention...because that's what happened. – Brian Feb 01 '21 at 22:41

7 Answers

98

I think you are very confused about what both CORS and SOP do... neither is relevant to these attacks at all.

There are lots of ways to bypass client-side validation. HTTP is just a stream of bytes, and in HTTP 1.x they're even human-readable text (at least for the headers). This makes it trivial to forge or manipulate requests. Here's a subset of ways to do it, grouped by rough categories:

Bypass validation in the browser

  • Browse to your site and input the invalid values. Use the browser dev tools to remove the validation events or manipulate their execution to pass validation anyhow. Submit the form.
  • Use the browser dev console to send requests from the site as though through the validated form, but with unvalidated inputs (just directly invoke the function that makes the request); see the sketch after this list.
  • Use the browser dev tools to "edit and re-send" a request, and before re-sending, change the valid values in the body to invalid ones.
  • For GET requests: just type any URL with invalid parameters into the location bar.
  • For POST requests that use non-samesite cookies for authentication: create a web page that POSTs to your server with the expected fields (including any CSRF-protection token) but with invalid values, load it in a browser, and submit.
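
To make the dev-console approach mentioned above concrete, here is a minimal sketch (the /api/signup endpoint and the field names are hypothetical placeholders, not anything from a real site): run it in the console while the site is open, and none of the page's validation code is ever consulted.

```javascript
// Hypothetical endpoint and field names -- run from the dev console of the
// already-open site. The browser attaches the existing session cookies, and
// the page's validation functions never run.
fetch('/api/signup', {
  method: 'POST',
  credentials: 'include',            // reuse the cookies the browser already holds
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    email: 'not-an-email',           // would have failed the frontend regex
    age: -5                          // "must be positive" only existed in the page's JS
  })
}).then(r => console.log(r.status)); // the server answers whether it validated or not
```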

Bypass validation using non-browser tools

  • Set the browser to run through an intercepting proxy (like most in the security industry, I usually use Burp Suite, but you can use others like Fiddler too). Capture the outbound request, tamper with the fields to make them invalid, and send it on its way.
  • Use an intercepting proxy again, but this time replay a previous request with modified, invalid values (in Burp, this is exactly what the Repeater tool is for).
  • Right-click a request in the browser's dev tools' network history, select "Copy as cURL", paste the resulting curl command into a command line, edit the validated fields to make them invalid, hit Enter to send.

Crafting malicious requests from scratch

  • Using Burp Repeater, specify the protocol, domain, and port for your site. Add the necessary headers, including any cookies or other headers needed for authorization. Add the desired parameters, with invalid values. Click "Send".
  • Using curl, send a request to your site with the required headers and whatever body, including invalid values, you want.
  • Using ncat, open a connection to your site, using TLS, on port 443. Type out the HTTP top line, headers, and body (after all, it's just text, although it'll get encrypted before sending). Send the end-of-file input if needed (usually the server will just respond immediately though).
  • Write a little script/program in any language with a TCP or HTTP client library (from JS running on Node to a full-blown compiled golang binary) that creates a request with all the required headers and invalid fields, and sends it to your server. Run the script/program. A sketch of such a script follows below.
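
As a rough sketch of that last option (the hostname, path, cookie value, and field names are all invented), a few lines of Node are enough to assemble the entire request by hand, headers and all:

```javascript
// Hypothetical target, path, session token, and fields: every byte of this
// request, including the Origin header and the cookie, is chosen by the sender.
const https = require('https');

const body = JSON.stringify({
  username: 'x'.repeat(10000),       // far longer than the form ever allowed
  age: 'not a number'
});

const req = https.request({
  hostname: 'victim.example',
  port: 443,
  path: '/api/profile',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
    'Cookie': 'session=TOKEN_COPIED_FROM_A_REAL_BROWSER_SESSION',
    'Origin': 'https://victim.example' // claim whatever origin you like
  }
}, res => res.pipe(process.stdout));   // print whatever the server says back

req.end(body);
```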

SOP only applies when the request is sent using a browser AND the request originates from a web page hosted at a different origin (combination of domain, protocol, and port) than the request's destination. Even then, SOP primarily protects against the originating page seeing the response; it doesn't prevent attacks from occurring. If you're the attacker trying to get past client-side validation, then you can just send the request from the origin you're attacking, and SOP is entirely irrelevant. Or just send the request from something that isn't a browser (like a proxy, or curl, or a custom script); none of those even have SOP in the first place.

CORS is a way to poke holes in SOP (CORS doesn't add any security; it's a way to partially relax the security feature of SOP), so it doesn't even matter unless SOP is relevant. However, in many cases you can make a cross-origin request with invalid parameters (as in the case where I create my own attack page and point the browser at it, then use it to submit an invalid request to your site) because for most requests, SOP only restricts whether you can see the response - you can send the request cross-origin even if the server doesn't allow CORS at all - and often, seeing the response isn't needed.

Pulling the authorization tokens (cookies, header values, whatever) out of the browser after authentication is easy (just examine the network traffic in the dev tools, or use a proxy). Remember, for validation to even be in question, the attacker has to be able to use your site via a browser, which presumably means they can authenticate. Or just submit an authentication request using curl or whatever, scrape the returned token out of the response, and use it in the malicious invalid requests; no browser needed at all!
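
A sketch of that last point, using Node's built-in fetch rather than curl (the login endpoint, the token field, and the comment endpoint are invented; your API will differ, but the shape of the attack is the same):

```javascript
// Hypothetical endpoints and field names; uses the global fetch in Node 18+.
// Log in outside the browser, scrape the token out of the response, then reuse
// it for a request the frontend validation would never have produced.
(async () => {
  const login = await fetch('https://victim.example/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: 'attacker', pass: 'hunter2' })
  });
  const { token } = await login.json();   // the "returned token" mentioned above

  await fetch('https://victim.example/api/comments', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({ text: '<script>alert(1)</script>' }) // the frontend filter never ran
  });
})();
```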

There's nothing the browser can do (in terms of sending requests to the server) that I can't do from a shell prompt with some common, open-source utilities.

EDIT: As @chrylis-cautiouslyoptimistic points out, this includes spoofing the Origin header in the request. Although the Origin header is not normally configurable within the browser - it is set automatically by the browser when certain types of requests are made - it's possible to edit it after the browser sends the request (by intercepting or replaying it), or to simply forge it in the first place. Any protective measure based on the presence, absence, or value of this header will only be effective in the case that the attacker is indirectly submitting the request through an unwitting victim's browser (as in the case of CSRF), typically because the attacker can't authenticate to the site or wants to attack through another user's account. For any other case, the attacker can simply choose what value, if any, to give as the Origin.

EDIT 2: As @Tim mentioned, things like SOP and anti-CSRF measures are all intended to protect the user of a website from an attack by somebody else on the internet (either another user of the same site, or an outsider who wants to exploit your access to the site). They don't provide any protection at all when the authorized user is the attacker and the target is the site itself or all of its users, through something like a stored XSS, SQL injection, or malicious file upload (the types of things people sometimes try to prevent with client-side validation).

CBHacking
  • 40,303
  • 3
  • 74
  • 98
  • 33
    It may be relevant to OP's understanding that there's really no such thing as "sending from an origin"; there's only "including certain HTTP headers in the request". – chrylis -cautiouslyoptimistic- Jan 30 '21 at 21:56
  • Nice answer, well written. I might add the mention of techniques like using headless browsers too, as part of the toolset: https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Headless_mode – Ed Daniel Jan 31 '21 at 02:26
  • I presume any in-browser hacking using devtools can also be automated with the appropriate browser plugin, like GreaseMonkey – Jonathan Jan 31 '21 at 12:00
  • 2
    ncat can do encryption for you? Neat! – John Dvorak Jan 31 '21 at 14:16
  • Edited in how the Origin header works and its limitations, thanks! I haven't personally used headless browsers to do attacks like this but I expect it can work fine. GreaseMonkey and similar also work, as do custom browser extensions that inject scripts or modify requests (or make their own; they're allowed to bypass many restrictions depending on their permissions). And yep, ncat (part of the nmap suite) is excellent; far more powerful than the legacy `nc`/`netcat` [including TLS and DTLS support](https://nmap.org/ncat/guide/ncat-protocols.html). – CBHacking Jan 31 '21 at 15:05
  • For folks looking for a better netcat-esque tool, [`socat`](http://www.dest-unreach.org/socat/doc/socat.html) also bears mention. – Charles Duffy Jan 31 '21 at 19:09
  • 9
    It might be helpful to add that SOP and CSRF tokens are tools meant to protect *legitimate users* from *third-party attackers*. They do nothing when the user is the attacker. – Tim Jan 31 '21 at 23:27
  • It might help to note that SOP only prevents seeing the results of GET requests (they are still sent), but it does block other methods, which is another reason to avoid modifying state with GET requests. – Jan Hudec Feb 01 '21 at 09:27
  • @JanHudec SOP, in the absence of CORS, also allows cross-origin POST and HEAD requests (so long as they are "simple" requests without custom headers; same for GET requests), though the response is still not visible to the client. HEAD is as idempotent as GET (in theory), but POST is frequently state-changing. The test for whether a cross-origin request is "simple" (possible to make without CORS) is whether it would be possible to submit it via an HTML form's submit button. Scripts could simply create and submit such a form, so blocking such requests from XHR/Fetch would be pointless. – CBHacking Feb 01 '21 at 15:36
  • And if all else fails, just write your own "browser" which can put together exactly those malicious messages you want. – vsz Feb 02 '21 at 06:31
  • @CBHacking, hm, I guess I forgot the exact rules. And, hm, you can have form posting absolutely anywhere with no SOP at all, making an XSRF token absolutely essential with authenticated forms (while SPAs these days generally use a custom token for authentication, and that is protected by SOP). – Jan Hudec Feb 02 '21 at 06:57
  • @JohnDvorak if ncat can't, there's also `openssl s_client` – user253751 Feb 02 '21 at 08:59
23

Maybe a very short answer will help as well.

I never thought about it much, I just thought this meant someone could bypass the validations by making a request on something like Postman. But then I learned that with a same origin policy that's not possible.

The same-origin policy is something that browsers voluntarily implement to protect their users. It does not affect Postman because Postman does not implement this policy. Therefore, your original thought is correct.

Vincent
  • 329
  • 1
  • 5
10

Neither Postman nor the same-origin policy is an obstacle here. To understand why, I need to explain why, as a developer, you virtually never trust the client/front end.

Front and back end trust

If someone controls a computer, they control what it sends the server. That's literal: every last byte of it, every last header or request, every last POST field in a form or GET parameter in a URI, every web socket and connection, every last timing (within broad timing limits that aren't an issue here).

They can make that computer send the back end literally anything whatsoever they want, on demand. Any GET. Any POST. Any header field. Anything. They can include any header. Any origin. Any cookie information they choose and know. Literally anything. Common exceptions for most use cases are perhaps physical "black box" encryption cards/keys, and the client's IP address, both of which are trickier barriers if checked during a session - and even the IP can usually be spoofed in various ways, especially if they don't care about a reply.

The upshot is that from a security perspective you can't trust anything a client sends. You can raise the bar quite a lot, enough for most everyday uses: secure transport (TLS/HTTPS) to make it extremely difficult to modify, intercept, or change the traffic if someone controls an intermediate computer it's routed through; a well-implemented OS and browser that stop scripts or malware outside that specific web page from interfering locally; certificate checking at one or both ends; secured networks that authenticate what may attach.

But every last one of those is raising a bar, not an absolute defence. Every one of those has been broken before, gets broken now at times, and will be broken in future. None is guaranteed bug- and loophole-free either. None can defend against a user, malware, or rootkitted remote access at the client end that deliberately or ignorantly subverts the defences on the client PC, because such a user can typically change or bypass anything the OS or browser are programmed to do. None is a true, perfect defence.

So if you have any software or web-based system with a back end and a front end, it's a golden rule that you don't trust the data the client provides. You recheck it when it's received at the back end. If the request is to access or send a web page or file: is that okay, should that session be allowed access to that file, is the filename a valid one? If the request is some data or a form to process: are all the fields reasonable and containing valid data, and is that session allowed to make those changes?

You don't trust a thing that isn't under your own secured known control.

The server you'll trust, by and large (you manage the OS and security, or have trusted partners who do). But the client and the wider network get no trust at all. And even for the server, you have security checks on it, be it malware and behaviour detection, access controls, or network scanning software, because you could be wrong about that, too.

So you validate at the client browser/app (front-end) for convenience of the client, because most clients are honest and many mistakes can quickly be detected in the browser or app.

But you validate at the server (back-end) to actually do your real checking if the request or data is valid and should be processed or rejected.

That said, your answer is...

You asked how it's done. The software side can be done many ways - malware, a deliberate user act, a misconfigured client system/software, an intercepting computer/proxy.

But however it's done, this is the basic process that exploits these issues within a client and makes any client packet (including origin and referrer fields) fundamentally impossible to trust. It excludes external matters such as certificate misuse, which are outside the scope of the OP's question.

  1. Study what a genuine reply/packet/request/post looks like in the app.
  2. Modify the packet using built-in browser tools, browser extensions, transparent proxies/proxy apps, or create a hand-crafted request based on it, with the different headers needed.
    (After all, the back end doesn't actually know what the "real" values should be, or what the "real" origin or referrer is; it only knows what the packet *says* they are. Which takes between 15 seconds and 2 minutes to modify to anything on earth I want it to be, or well under a millisecond if it's done by software.)
  3. Modify or fake anything else needed, or craft custom versions of any packets, and send those instead (either prepared in advance or modified at the time)
  4. Done.
Stilez
  • 1,664
  • 8
  • 13
  • 7
    I think your statement "But every last one of those is raising a bar, not an absolute defence." is muddying the waters. HTTPS and all those other things you mention in that paragraph are there to protect the client from hostile third parties, not the server from hostile clients. They don't raise the bar at all. And is malware even relevant with a hostile client? – Richard Tingle Jan 30 '21 at 16:49
  • They raise the bar against your app - front *and* back end combined - not doing as it should. They will reduce the attack surface, and the risk and ease of innocuous or deliberate misuse/subversion, and thereby reassure and add security, but they aren't perfect. Every security measure you are likely to use has been breached elsewhere at some time or another. That's the meaning of the sentence. It's making a general point that security is about raising the bar, but there will virtually always be attack surface left; there isn't perfection. – Stilez Jan 30 '21 at 16:57
  • 2
    "Hostile client due to malware" is extremely common. The client here, is the system (browser, JS, app, system, whatever) that contacts the back end. Its not the human user of the device. A browser on a compromised platform will potentially be a hostile client, in the sense that the client itself is seeking to subvert your security ornplatform. The * user * at the desk might be innocent, but the user isn't the client. When a banking trojan watches for the user to login to their bank, and subverts it to manipulate requests or get data/credentials, the browser is a hostile client for a developer – Stilez Jan 30 '21 at 17:07
  • Isn't that mixing your attackers, though? The first step in a cybersecurity analysis is to determine who the attacker is and what they want. There is [Malware doing things the end user is allowed to do but doesn't want to, e.g. transfer all their money to someone else] which client side validation doesn't even *try* to deal with. And [Hostile user trying to do unauthorised things, e.g. withdraw more money than they have] which the OP thinks client side validation might help with (but of course doesn't) – Richard Tingle Jan 31 '21 at 16:11
  • 1
    The question just isn't that nuanced. The point to make is that the "client" here is implied by the question - why do you verify back end not front end? In the context of the question, the client is the browser or app that receives user input/responses and sends it to the server - why doesn't the client verify it instead? Answer: because the client (from the server's perspective) can't be trusted not to have been interfered with - innocently, by the user, by 3rd-party malware, by someone on the wire - it doesn't matter... (cont.) – Stilez Jan 31 '21 at 19:31
  • 1
    ...They are **all** reasons why, as a dev, you don't trust the client to check data validity or extend more trust than you can help. And that's the point one needs to make, and show how broadly it can need to be considered. – Stilez Jan 31 '21 at 19:31
9
  1. Your backend is accessible via the network. That means I don't need to use your frontend. I can find out what endpoints it uses and what format the requests take, and use my own tools to send requests that your frontend would never allow. You must never assume that a request hitting your backend actually originated from your own application. It could be literally anything.

  2. Your frontend runs in my browser. You don't own my browser, I do. You have a text field with a maximum length? I can edit that in the inspector. You have a 500-line isValid() function that you call on submit? I can open the console and remove the handler, or do isValid = function() { return true } (see the sketch after this list). Or alter your entire JavaScript and visit your site through a transparent proxy that replaces your version of the code with my version (but passes other requests straight through). Or one of a million other things. You can't trust any computing device that isn't under your direct control.

  3. Same-origin policy protects me from a site A that wants to try to access my data on site B by using my browser (which is authenticated to site B) to do the work. It doesn't protect you from anything. I can violate same-origin policy all I want. Third-party code shouldn't be able to violate it without my permission.
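
To make point 2 concrete, here is a sketch of what that console session can look like (the element IDs and the isValid name are hypothetical placeholders for whatever your page actually uses):

```javascript
// Run in the dev console on the page itself. The "rules" only exist in my
// copy of your code, so I can simply delete them.
document.querySelector('#email').removeAttribute('maxlength');         // undo the HTML length limit
document.querySelector('#signup-form').setAttribute('novalidate', ''); // skip built-in form validation
window.isValid = () => true;                                           // the 500-line validator now always passes
```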

hobbs
  • 471
  • 3
  • 7
6

How do hackers trick frontend validation?

They don't. They simply don't do it.

The question is based on a simple flaw of thought. For attackers there is no frontend validation as they simply do not use the frontend your normal users use (unless they want to for some reason). One core idea of backend and frontend is to separate both. In most scenarios that separation means both parts can be replaced transparently. So any other piece of software can in the end send the requests your frontend sends and in doing so also ignore all the validations your frontend would do.

Frank Hopkins
  • 637
  • 3
  • 6
4

Same-origin policies are needed in the browser because the browser has a state (sessions) and is able to access or manipulate private data automatically (without having to log in again, or accept anything, even in the background). So if you are logged in on Facebook the browser is able to read your messages, but if you visit the website malicious-example.com, the browser should not let that website access your data on Facebook.

Outside of a browser there is usually no need to have a same-origin policy, because there is no state. If you open up your terminal and type curl https://www.facebook.com, you are just going to get Facebook's default home page, like it's seen by everybody else. To see your private data, profile, messages, etc. you would have to add some options to the curl command, and add your valid cookies, tokens, etc.

Browsers talk to servers by making HTTP requests. You don't need a browser to make HTTP requests, and you can make them directly with lots of other software (for example curl), which can be much more flexible and efficient for their needs. So it doesn't matter if you check a parameter in the browser, because an attacker will send that parameter directly using other software, making the appropriate HTTP request directly to your server. The attacker might still need to execute or check out your front-end code in order to get all the necessary parameters (captcha codes, stuff loaded by AJAX, etc.), but after that, once all the parameters are known, they will just make a direct request to your server. So you can't expect valid parameters on your server, you always need to validate them, and you should never rely on client-side validation.
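
For example, here is a rough sketch of that last step (the URL, the form field names, and the token pattern are all made up, and cookie/session handling is left out): fetch the page once to harvest whatever hidden parameters the frontend would have supplied, then post straight to the server with values no frontend check would have allowed.

```javascript
// Hypothetical URL and field names; cookie/session handling omitted.
// Step 1: read the page to collect the parameters the frontend would send.
// Step 2: send the request directly, skipping every client-side check.
(async () => {
  const html = await (await fetch('https://shop.example/comment/new')).text();
  const token = html.match(/name="csrf_token" value="([^"]+)"/)?.[1];

  await fetch('https://shop.example/comment', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      csrf_token: token ?? '',
      text: 'x'.repeat(100000)         // far past any maxlength the form enforced
    })
  });
})();
```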

reed
  • 15,398
  • 6
  • 43
  • 64
2

Anything you set up your frontend to send ultimately comes from the client PC, so if you want to try circumventing your own frontend validations, just do the following:

  1. Go to the dev tools (F12 in most modern browsers)
  2. Navigate to the network tab
  3. Press F5 to update the page, or click the button you want to emulate
  4. Select the request you want to emulate (filter by doc or xhr if the list is long)
  5. Right-click the item
  6. Select Copy in the context menu
  7. Select an appropriate option in the list (e.g. curl (bash))
  8. Paste it in your terminal
  9. Press Enter

Which results in something like:

```bash
curl 'https://security.stackexchange.com/posts/244030/ivc/7855?_=1612165453217' \
  -H 'Connection: keep-alive' \
  -H 'Accept: */*' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36' \
  -H 'X-Requested-With: XMLHttpRequest' \
  -H 'Sec-Fetch-Site: same-origin' \
  -H 'Sec-Fetch-Mode: cors' \
  -H 'Sec-Fetch-Dest: empty' \
  -H 'Referer: https://security.stackexchange.com/questions/244030/how-do-hackers-trick-frontend-validation' \
  -H 'Accept-Language: da-DK,da;q=0.9,en-US;q=0.8,en;q=0.7' \
  -H 'Cookie: __qca=***censored***; _ga=***censored***; __utma=***censored***; __utmz=***censored***(referral)|utmcmd=referral|utmcct=/questions/244030/how-do-hackers-trick-frontend-validation; prov=***censored***; _gid=***censored***; acct=t=***censored***; _gat=1' \
  --compressed
```

Basically, anything you code in the frontend is being executed in an environment you ought to consider unsafe. Therefore, you should always validate on the backend.
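
As a small illustration of what that means in practice, here is a minimal sketch of a backend check in plain Node (the endpoint and the email field are made up; a real application would use a proper framework and stricter rules):

```javascript
// Hypothetical endpoint and field name: the server re-checks the input itself
// and rejects anything invalid, no matter what the frontend did or didn't do.
const http = require('http');

http.createServer((req, res) => {
  let raw = '';
  req.on('data', chunk => { raw += chunk; });
  req.on('end', () => {
    let data = null;
    try { data = JSON.parse(raw); } catch (e) { /* not JSON at all */ }

    const email = data && typeof data.email === 'string' ? data.email : '';
    if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
      res.writeHead(400, { 'Content-Type': 'text/plain' });
      res.end('invalid email');        // rejected here, whatever the client claimed
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('ok');
  });
}).listen(8080);
```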

Locks keep honest people honest

In my opinion, the frontend validation is there to help the user. Informing the user that an email address is not in the correct format allows them to correct it without having to wait for the form submission and the response. It will also in some cases prevent people from "just" writing (e.g.) SQL injection attacks from your form, but the dev tools can also be used to turn off browser validation pretty easily.

Trying to break your own application is actually a valuable lesson for a developer.

[Screenshot: dev tools]

JoSSte
  • 123
  • 6