6

I am trying to better understand the impact and implications of a web app where data tampering is possible.

By data tampering, I mean using a tool such as Burp Suite or Tamper Data to intercept a request, modify the data, and submit it successfully.

I understand that this allows an attacker to evade client-side validations. For example, if the client does not allow certain symbols, e.g. (!#[]), an attacker can input acceptable details which the client will validate, then intercept the request and modify the data to include those symbols. But I'm thinking there is more to this vulnerability than just the evasion of client-side validation.
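
To illustrate what I mean (the endpoint and field names here are made up), the request doesn't even need to pass through the browser or a proxy; any HTTP client can construct it directly, with whatever values the attacker wants:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Field values the page's client-side JavaScript would have refused
payload = {"username": "alice![]#", "age": "-5"}

# Hypothetical endpoint; the point is that the request body is
# entirely under the sender's control, regardless of any JS checks.
req = Request(
    "https://example.com/profile/update",
    data=urlencode(payload).encode(),
    method="POST",
)
print(req.data.decode())  # username=alice%21%5B%5D%23&age=-5
```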

I am also thinking it perhaps opens the door to dictionary attacks or brute-force logins against user accounts, since intercepted requests can be modified with Burp Suite to test username and password combinations.

Would appreciate any insight regarding the implications of a Data Tampering vulnerability.

blahdiblah
Krellex
  • 54
    All client-side validations are for user convenience, not for security. – schroeder Apr 05 '22 at 13:12
  • 1
    You have 2 different issues here that you need to separate: 1) the impact on the client and user, and 2) the impact on the service. But a properly designed service does not depend on client-side controls. So the impact should be on the user-side alone. The impact to the user will depend *entirely* on the app, which only you know. So, given all this, the question is too broad and undefined to answer. – schroeder Apr 05 '22 at 13:15
  • 5
    Anything your web app can do, can be done with curl. Thinking about intercept and modify is missing the point; arbitrary data can be sent to the server. – prosfilaes Apr 07 '22 at 14:29
  • 2
    Even Javascript code, *anything* you send to a client should be considered at best a polite request. There's nothing stopping a client from not executing, selectively executing, or executing modified code. If you need to make sure some data is valid, you need to do it yourself on a computer you trust (your server). – Different55 Apr 07 '22 at 15:13
  • @schroeder That's what we hope :-) – gnasher729 Apr 08 '22 at 15:08

4 Answers

45

This "Data Tamper Vulnerability" is not a vulnerability. It's like "Door without lock vulnerability."

Client-side validation is not validation. It is a convenience tool: it's better to let the user know instantly that they cannot have # in a username than to wait for the form to be submitted, have the server reject the username and send back an error message stating that the username was not accepted, AND force them to fill out the entire form again.

If your threat model does not include "user submitting data without validation," you are doing it wrong. When an attacker sees your JavaScript stripping # from a field, one of the first things they will try is to send # in that field, and your server must deal with it.

Do proper validation on the server. Never trust any data from the client: form fields, URL, GET parameters, cookies, JWT, filenames, everything coming from the client is untrusted until validated on the server.

If the client is sending malicious input and the server is not validating, several bad things can happen:

  • SQL injection
  • Remote code execution
  • Cross-site scripting
  • Server-side request forgery
  • Remote file inclusion

... to name a few.
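
As a sketch of what "proper validation on the server" can look like (the field rules below are illustrative, not a standard; pick a whitelist that matches your own domain):

```python
import re

# Whitelist what is valid rather than blacklisting what looks dangerous.
# Hypothetical rule: letters, digits and underscore only, 3-30 chars.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,30}")

def validate_username(value: str) -> bool:
    """Server-side check, applied regardless of what the client claims."""
    return USERNAME_RE.fullmatch(value) is not None

validate_username("alice_42")   # accepted
validate_username("alice![]#")  # rejected
```

Note that validation does not replace escaping or parameterized queries at the point of use; the comments below discuss that distinction.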

kenlukas
ThoriumBR
  • So, given all this, the "impact" should only be unexpected, but valid, data saved in the user account. – schroeder Apr 05 '22 at 13:18
  • So there is nothing malicious an attacker could do if data can be modified before it reaches a server, if it's just input? I was also thinking that if data tampering is possible on the site and the web app allows image uploads, an attacker can upload a file `image.php.png`, intercept the request, change it to `image.php`, and upload a PHP file from which they can establish a reverse connection, compromising the entire web app. I think this is possible if it allows file upload, but can nothing malicious occur from tampering with text input? – Krellex Apr 05 '22 at 14:01
  • 7
    @Krellex Yes that is possible, but only if no validation is done on the server. This is exactly why we say "never trust user input/client-side validation, always validate data server side before using or storing it" – nobody Apr 05 '22 at 17:56
  • @nobody Thanks for the reply! Would encrypting the input data also be a good way to prevent something like this? Could this also prevent brute-force attacks? – Krellex Apr 05 '22 at 18:06
  • 13
    @Krellex No. The only way to stop it is server side validation. Brute force attacks can be mitigated by server side rate limiting. – nobody Apr 05 '22 at 18:28
  • 3
    @Krellex Client-side validation can be bypassed using just the tools that pop up if you hit the F12 key while using Chrome. A user who knows what they're doing is fully able to tamper with their own data as they see fit. Nothing you do on the client side can change that, because users can modify or bypass any of the client-side code too. They can change the data before it's encrypted. – Robyn Apr 06 '22 at 04:05
  • 3
    I'd argue that SQL Injection, Remote Code Execution, and Cross Site Scripting should not be prevented via validation, but by escaping. In that case the server should accept all inputs without performing special validation, but escape them when passing to a dangerous function. – BoppreH Apr 06 '22 at 11:56
  • 8
    "Validation" has a broad sense here: it means making sure the input is valid for the expected purpose. Taking the input as-is and trusting that it is valid is wrong. – ThoriumBR Apr 06 '22 at 12:08
  • @BoppreH agree. More generally, the front-end should rather _parse_ the user input into a format where illegal input aren't representable in the first place. (E.g. sanitize the username, but never send it to the server as a string but instead treat it as base36 and then send that number to the server in binary encoding.) – leftaroundabout Apr 06 '22 at 15:33
  • 7
    @leftaroundabout That's a tedious solution. Working, but convoluted and difficult. Simply checking that it doesn't contain illegal characters on the server side is much simpler. It can be done in 2 lines of code and will be obvious to whoever works on the code in the future. As an added bonus, it's what everyone does, so your future code maintainers will EXPECT this. The base36 option will just be confusing when they see it the first time. Also, it's a lot easier to debug your application when the data is in plaintext, rather than obfuscated like that. – Vilx- Apr 06 '22 at 20:09
  • 1
    @Vilx- with well-designed strongly typed frameworks, this is _easier_ than anything based on manual checking. You just need to invoke a parser-combinator that can be defined and tested to death in a separate library, and the typechecker will automatically put any checks in place whereever they're necessary. The maintainers don't need to _expect_ anything, they can just ask the compiler what's the type at this or that spot, and when they make a change that would invalidate the protocol they'd get a clear compile-time error. Much easier to debug that than scrambling through plaintext logs. – leftaroundabout Apr 06 '22 at 20:23
  • 1
    @leftaroundabout While base36 might prevent some attacks like SQLi, it won't prevent others (like XSS) where the data has to be converted back to plain text before being used. And personally, I'd find debugging much easier in an environment where I can use a debugger to inspect variables without having to manually convert the data to and from base36 just to make sense of it. – nobody Apr 07 '22 at 05:14
  • 1
    Adopt a Zero Trust policy. Do not trust anything outside of the environment YOU control. – Davidw Apr 07 '22 at 05:16
  • @leftaroundabout Yes, [parse don't validate](http://langsec.org), but why do you say that the *frontend* should parse the input? It's the backend that will be attacked. (I mean, it should be both, everytime you exchange data with another system, there's some protocol and you need to parse its messages into the appropriate data structures) – Bergi Apr 07 '22 at 19:32
  • @Bergi exactly, it's the backend that will be attacked, and that's why all the pesky plaintext parsing with its many intricate and bug-prone special cases should happen in the frontend where you don't care if things go wrong. The backend will instead deal with a simple, regular, compact, ideally _formally verifiable_ binary input format where literally every possible bit pattern is valid. XSS (e.g.) will be impossible, because the binary format does not even contain an encoding for `<`. – leftaroundabout Apr 07 '22 at 21:11
  • 1
    @leftaroundabout It's unlikely that you can design a simple protocol where every bit pattern is a valid input in your business domain. The special cases still need to be accounted for by the backend. And no, like @ nobody wrote above, a binary format will not prevent XSS, as soon as you decode the data to display it you still have the same problem. – Bergi Apr 07 '22 at 21:38
  • 2
  • My approach server-side: validate the input. Have a whitelist for each valid field, with valid ranges. Nobody is 300 years old. Nobody lives on a street with a 4k name. Nobody has `~` in their phone number, or `\` in their email. On the client, escape the output. – ThoriumBR Apr 07 '22 at 21:49
  • @Bergi well, it depends of course on the business domain – sure, some applications are too dynamic for this approach. But an awful lot of real-world cases _can_ perfectly well be covered with only finite algebraic data type communication, and for those it is possible to automatically derive a provably-correct binary encoding. — Regarding XSS: as I said, you'd use a format that _doesn't contain_ an encoding of something like a script. The base36 user names are an extreme example, but the approach also extends to much more interesting data. – leftaroundabout Apr 07 '22 at 21:55
  • 1
    @leftaroundabout I wouldn't call a derived protocol "simple" - and a practical implementation also won't exhaust the bit pattern space. But yes, ADTs are a good start. But I still don't get your point about XSS. Did you mean to specify the ADT `User { name: base36 }`, where usernames in your data model cannot contain brackets, or did you mean that the protocol derivation would encode `User { name: string }` with base36? I'd say that the binary encoding of strings does not matter - the XSS problem is caused by doing `el.innerHTML = decode(userData).name`, regardless of what `decode` does! – Bergi Apr 07 '22 at 22:53
  • @Bergi this is getting off-topic here, but I mean `name: base36`, or indeed `name: word256`, with `decode` being _encoding_ of the numerical data to base36. The data model preferably does not use any strings at all, but rather types like "static HTML" or "annotated plaintext" or "verbatim source code". – leftaroundabout Apr 08 '22 at 09:17
  • “**Never trust any data from the client**” — this needs stressing, highlighting, emboldening, and showing in massive 72-point type! – gidds Apr 08 '22 at 18:05
18

Proper web site/app security MUST assume that the client may sometimes actually be a custom-made malicious tool, designed and built from the ground up by an attacker for the express purpose of defeating your security. If your server-side security cannot protect against such a tool, then you don't actually have security at all.

If you do have actual server-side security, then client-side data tampering simply is not an issue. Anything that client-side data tampering could do, a hypothetical custom attacker's tool could also do, so if your server is secure against a custom malicious tool then it is secure against client-side data tampering.

Douglas
7

I understand that this will allow an attacker to evade Client-side validations.

You're thinking of the roles backwards. If you're trying to make software running on an end user's computer enforce some rule of your own against what they want to do, you are the attacker and they are the defender. Don't be in that role.

Where you are the defender is on your own server, processing the data the user sent you via the software you provided to them to help them submit it in a manner most useful to them. That's where you have both the technical capability and the standing to do so. Get it right and you don't have to worry about trying to make the user's own computer police them.

6

Other answers have corrected your understanding of client-side validation.

I want to add this rather direct answer to be very clear.

Every attack is indeed possible, unless your own server deliberately prevents it.

While we can nitpick (not literally every attack, and not all attacks are prevented by the server alone), the broad picture this paints is both true and a useful way to think about it.

That means, for example...

  • Brute-force attacks are easy ... unless your server limits the number of attempts allowed in some way.
  • User accounts and passwords can be stolen or intercepted if the client's machine or network is weak on security. Since you don't control that, you have only limited ways to prevent or mitigate it.
  • User forms, text, web requests, returned values, and local storage/cookies can be faked or targeted maliciously. Again, this is not in your control; you can only do the things you can do. You don't control clients. You do control your own systems. And you can hope the clients have some decent security.
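
For the brute-force point, here is a toy in-memory sketch of server-side attempt limiting (the limits are arbitrary; a real deployment would use shared storage such as Redis, plus lockouts or CAPTCHAs):

```python
import time
from collections import defaultdict
from typing import Optional

MAX_ATTEMPTS = 5        # illustrative limits, not a recommendation
WINDOW_SECONDS = 60

_attempts = defaultdict(list)  # client id -> timestamps of recent attempts

def allow_login_attempt(client_id: str, now: Optional[float] = None) -> bool:
    """Return False once a client has made too many recent attempts."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _attempts[client_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[client_id] = recent
        return False
    recent.append(now)
    _attempts[client_id] = recent
    return True
```
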
Stilez
  • "*Every attack is indeed possible, unless your own server deliberately prevents it*.". I could not have stated better. – ThoriumBR Apr 07 '22 at 21:50