10

A reasonably white-hatted hacker has demonstrated the ability to insert text of his own choosing into the communication between a Java applet and a web-based server. This was not a simple MITM attack; he used a tool like "JavaSnoop" to tap into the communications within the Java applet's class structure.

I believe that there is ultimately no defense against this kind of attack (and here I would welcome being contradicted): he could in principle substitute an entire client of his own choosing, and of course the server has to be armored against all manner of malicious inputs.

The canonical example of this would be cheating in an online game by altering the client or the communications between client and server.

Is there anything better than security by obscurity to make this process as difficult as possible?

ddyer
  • Hi ddyer, I'm not sure where the "command injection" is coming in here? It seems like you're basically asking about a malicious user replacing your thick client, in effect allowing him to avoid any client-side enforcement you implemented on the client. Could you have just misunderstood what ["command injection"](https://www.owasp.org/index.php/Command_Injection) is? – AviD Aug 14 '13 at 09:40
  • "Command Injection" is adding his own items to the communications stream from client to server. – ddyer Aug 14 '13 at 15:48
  • No, that's not what "Command Injection" means. Perhaps you can call that "parameter tampering", in a generic form... "Command injection" is when you get the server to run your own shell commands for you, on the server. Perhaps this confusion is why the question hasn't had more activity... – AviD Aug 14 '13 at 16:07
  • Not forms, discrete commands in a continuous TCP stream – ddyer Aug 14 '13 at 17:34
  • I meant as a generic term, not specifically web forms. Sending "commands" over TCP is not what command injection means, those are parameters. What the server decides to do with it, is a different question. *Does* the server simply take these parameters, and execute it on the shell as a command line? – AviD Aug 14 '13 at 18:46
  • Absolutely not. The server interprets the command stream as a private stream of requests, and as noted in the original post, it's very careful about invalid requests. This is about the client attempting to inject extra requests. Imagine in a shooter game the client adding extra bullets. – ddyer Aug 14 '13 at 19:41
  • Ah, so that would be something more along the lines of a replay attack? As in, resending the same request multiple times? In any event it is definitely not "command injection"... – AviD Aug 14 '13 at 20:16
  • This is not IT Security, but what about code obfuscation? There are some "nice" tools out there that will change identifiers to keywords in Java and so on. It is a lot harder to cheat if you don't understand the code you're cheating with. Of course this will not stop a dedicated attacker, but maybe it will alter the cost benefit considerations of a cheater. Also be sure to use TLS for your connection to the server (to prevent custom proxies) and check the certificate fingerprint so the user cannot install his own root certificate. – Perseids Aug 14 '13 at 20:40

7 Answers

10

JavaSnoop is a tool for exploiting vulnerabilities such as CWE-602: Client-Side Enforcement of Server-Side Security. Even thick clients cannot be trusted, and if a distributed system exposes sensitive functions to thick clients, then that is a vulnerability. There is no point in defending against "JavaSnoop", "Firebug", or "Burp" specifically; these are just tools. The system as a whole needs to be taken into consideration, and a server must be vetted for remotely exploitable vulnerabilities that are accessible through any means.

The process for enumerating these types of vulnerabilities is no different than for any other vulnerability. An assessment team or penetration tester will look at exposed functionality and test it for common vulnerabilities such as SQL injection or trust-boundary issues.

rook
  • I think I adequately indicated awareness of all that in the original post. I'm interested in detecting such exploits and making them harder to do and less likely to be successful and/or undetected. – ddyer Aug 10 '13 at 01:57
  • @ddyer Making a vulnerability like this harder to exploit is entirely incorrect. Often when vulnerabilities like this are found dev teams freak out, because it requires a complete re-write. There is no difference between this type of mistake, and a SQL Injection vulnerability, they should be treated in the same regard and the process for enumerating such flaws is identical. – rook Aug 10 '13 at 02:02
  • We're not on the same page. Consider, for example, a client that is a game, where the exploit could be used to cheat. – ddyer Aug 10 '13 at 02:09
  • @ddyer Yeah, you can cheat in **every online game**, so what? Wall hacks, lag hacks, item duplication. This is a perpetually unsolved problem. Perhaps you are looking for a license for "punkbuster" or some other vaporware security system. – rook Aug 10 '13 at 02:13
  • the "so what" is, as an alternative to finding a different line of work, to make cheating more trouble than it is worth. – ddyer Aug 10 '13 at 02:15
  • @ddyer you should probably re-post, with an emphasis on making it difficult to cheat in online games. Related: http://security.stackexchange.com/questions/31303/preventing-artificial-latency-or-lag-hacking-in-multiplayer-games – rook Aug 10 '13 at 04:17
5

Let's get more specific about your example, and say this is an online poker game. The server contains data that represents the center of the table (including the pot, the face-down deck and the "community" of cards), but the client software controls their "corner" of the table (the player's stash, their hand, and their decisions).

The assumption is that the client software is the same software released by the game's author, with no modifications of any kind, and so the client software has been made responsible for accurately tracking their bankroll and their hand; the server "deals" cards to the client, and much as the actual casino dealer would, "forgets" (or never knows) what card was dealt.

This is not a safe assumption; someone who can manipulate the client program, or even just the messages being sent to and from it, can choose their hand by modifying the messages about the cards the server has sent, and can similarly multiply their actual winnings (or even ignore losses by turning "you lose" into "you win $1000").

The solution is not to let the client software have anything approaching this level of control. The model to follow is that of a "dumb terminal"; treat the client software as nothing more than a really long cable connecting their keyboard and monitor to the server computer. The client knows nothing but what it's told by the server, and does nothing but relay the user's input to the server and vice versa. It has no "business logic" of its own, it just displays the game to the user.

Given such a model, manipulating communications does the attacker no good; the communications from the server and the numbers and cards on the screen can be changed to the attacker's heart's content, but any action based on the client's incorrect data is brought back to reality with a thud by the server. The client can't say "I raise 50 grand"; the server will simply reply "you only have $20 in your stack; try again". The client can't say "I'm Bob and I call"; the server, seeing that the request came over a secure session belonging to Bill, will say "No, you're Bill, sit down and shut up until Bob's actually taken his turn". Even replay attacks, where one client can listen to the secure conversation between another client and the server and repeat the communication to perform any command contained in it, are very easily detected and ignored. Given enough of these harebrained communications, the server may eventually say "You're wasting my time; goodbye" and kick the client out of the game.
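To make the model concrete, here is a minimal sketch of a server-authoritative "raise" check along the lines described above. The class and method names (`PokerServer`, `handleRaise`, etc.) are hypothetical, invented purely for illustration:

```java
// Hypothetical sketch: the server, not the client, owns the stacks and the
// turn order. A tampered client can *ask* for anything, but the server only
// honors requests that are legal given its own state.
class PokerServer {
    private final java.util.Map<String, Integer> stacks = new java.util.HashMap<>();
    private String playerToAct;

    PokerServer(String firstToAct) {
        this.playerToAct = firstToAct;
    }

    void seat(String player, int stack) {
        stacks.put(player, stack);
    }

    // authenticatedPlayer comes from the secure session, never from the
    // message body, so "I'm Bob and I call" sent over Bill's session fails here.
    boolean handleRaise(String authenticatedPlayer, int amount) {
        if (!authenticatedPlayer.equals(playerToAct)) {
            return false; // "sit down and shut up until it's actually your turn"
        }
        Integer stack = stacks.get(authenticatedPlayer);
        if (stack == null || amount <= 0 || amount > stack) {
            return false; // "you only have $20 in your stack; try again"
        }
        stacks.put(authenticatedPlayer, stack - amount);
        return true;
    }

    int stackOf(String player) {
        return stacks.getOrDefault(player, 0);
    }
}
```

Nothing the client renders on screen matters; the only state that counts is the map held by the server.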

The downside, as was mentioned, is latency. In its ideal form, this strategy works for a poker game, where everyone acts in turn and so there's a lot of waiting anyway, and it's trivial for the server to keep track of everything going on at once. It doesn't work so well for an FPS or RTS, where interaction between all players must be real-time or darn close, and there's a lot of calculation of projectiles and bodies moving, flying, colliding, etc. It causes problems when latency is more than a few milliseconds (regardless of data rate); if everyone's got a 150ms ping to a game server, then everyone's seeing where everyone else was 300ms ago (at least), and if someone pulls the trigger when an opponent's head is in the crosshairs, the server thinks they're shooting at where the person actually was up to half a second ago and says "you missed". That requires "lag leading" by the players, shooting in front of their targets by a distance based on their combined latency, even when the physics of the game dictate that bullet travel is instantaneous.

To compensate for this, the server necessarily gives up some control, and lets the clients say "I shot Bob in the head" when the player pulls the trigger while Bob's head is showing in their crosshairs on their screen. But, a player with a game mod that can strategically "ignore" incoming data about other players' positions can manipulate this amount of client trust to perform the "freeze frame" hack; turn off incoming datagrams, and everyone else freezes in place, allowing the attacker a nice easy headshot. If the server believes the client's claim, because nobody else claimed they shot that guy first, the other guy's dead even if his own client shows him safely out of the line of fire.

For this kind of thing, there really is no best answer; anywhere you place the control over making game-changing "referee"-type decisions, players will accuse others of cheating because they emptied a clip at the guy at point-blank range and the server says they hit air, or because the server said "Bill's dead, Bob shot him" two full seconds after Bill thought he'd cleared Bob's line of fire behind an obstacle.

KeithS
3

As far as I can see, you're highlighting a perennial issue in online gaming: trusting the client. To brutally summarise years of research and experience:

  • You, the server's operator, cannot control the user, their computer, their network or what they choose to do to your client software.
  • Thus, you cannot hide your communications, or the client's responses to input.
  • Therefore, you must assume that at some point, someone will work out how to spoof user input somehow.
  • Therefore, you must validate everything on the server.

When I say validate everything, I mean everything. You gave the example of a shooter game, where a malicious user might add more bullets. The server should know what gun the player has, how fast it can fire, how many rounds it can fire before reloading, etc. Given that, and the time at which the user pressed the fire button, the server can tell how many bullets there should be. The client doesn't control the bullets; it tells the server that the player is firing, and the server spawns bullets. The client is free to simulate bullets on the assumption that the server will do this, but when it comes down to it, in any disagreement it is the server's version of things that stands.

The general lesson is that the client never determines the outcomes of anything; it merely informs the server of the user's actions, and displays the results sent back by the server. If there is any decision that affects how things turn out, the server makes that decision. This comes right down to the basics - the client doesn't say "gun is firing", it says "trigger is pulled". Your client might have a delay when reloading, but a malicious client can ignore that and send the "firing" message immediately. Your server should not allow this.

  • User makes an input
  • Client sends the input to the server
  • Client displays the assumed outcome of the action

Meanwhile, the message reaches the server...

  • Server validates that input - check limits on how many times and how frequently this action can be performed, and whether it can be performed in the current situation.
  • Server triggers the action
  • Server determines the outcome of the action
  • Server informs the client of the outcome

When the message gets back to the client...

  • Client discards whatever assumed outcomes it was working with
  • Client displays the server's outcome
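The loop above can be sketched in code. Here is a hypothetical server-side handler for the earlier shooter example: the client reports only "trigger is pulled", and the server applies the limits it knows (fire rate, magazine size) before deciding the outcome. All names here are invented for illustration:

```java
// Hypothetical sketch: the server validates each "trigger pulled" input
// against its own knowledge of the gun before spawning any bullet.
class GunServer {
    private final long minMillisBetweenShots; // fire-rate limit the server enforces
    private int roundsLeft;                   // ammo tracked by the server, not the client
    private long lastShotAt = -1_000_000;     // far in the past, so the first shot is allowed

    GunServer(long minMillisBetweenShots, int magazineSize) {
        this.minMillisBetweenShots = minMillisBetweenShots;
        this.roundsLeft = magazineSize;
    }

    // Returns the server-decided outcome; the client only reported an input.
    String onTriggerPulled(long nowMillis) {
        if (roundsLeft <= 0) {
            return "click";    // empty magazine: no bullet is spawned
        }
        if (nowMillis - lastShotAt < minMillisBetweenShots) {
            return "rejected"; // faster than this gun can fire: likely a tampered client
        }
        lastShotAt = nowMillis;
        roundsLeft--;
        return "bang";         // the server, and only the server, spawns the bullet
    }
}
```

A malicious client that skips its reload delay and sends "firing" messages back-to-back simply sees its extra shots rejected; repeated rejections are exactly the signal the validation stage uses to spot cheaters.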

This is why you see lag causing weird effects - for example, when you appear to teleport around, it's because the client received the "walk forward" command, and displayed the assumed outcome of you moving forward a bit. The server didn't get the command, so it tells the client "no, you're over here", and because The Server Is Always Right, the client has to obey, and you find yourself suddenly somewhere else. Generally, though, the server and client will agree, so this assumed-outcome simulation by the client lets the game run smoothly; for example you might be running the game at 80 frames per second when the updates from the server are only coming in 30 times per second. For those couple of frames in between updates, the client's assumptions are close enough that the player won't notice the difference.

The validation stage in that process is where you catch cheaters. Anyone repeatedly submitting commands that shouldn't be allowed is probably trying to cheat. It's possible that they have a really terrible connection and the server is simply receiving things out of order, or with bits missing, but there are ways of determining this (TCP does this out of the box, but games often use UDP because TCP can be slower, so you may have to implement this stuff yourself).

anaximander
2

The proper solution is going to be application-dependent: how you prevent misuse depends on what misuse means in your context.

But in the general sense, the solution to client-side misbehavior is server-side validation. This can mean many things, so let me give some examples:

  • With web apps, validation using JavaScript is not considered a true security measure; any security checks done client-side must be repeated on the server once the request is submitted.

  • Again with web apps, you don't assume that your input will be produced by a well-behaved browser. For example, a browser won't send a URL containing /../, but you filter for it anyway.

  • With online FPS games such as Team Fortress et al., the server carefully watches metrics such as how fast the player moves, how high they jump, how carefully they aim, and so forth, to pick out behavior that would not be produced by a properly-behaving client.
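The second bullet can be illustrated with a small sketch (the `PathValidator` name is hypothetical): the server rejects traversal sequences such as `/../` even though a well-behaved browser would never send them.

```java
// Hypothetical sketch: never assume the input came from a well-behaved client.
// The server filters path traversal regardless of what a browser "would" send.
class PathValidator {
    static boolean isSafe(String requestPath) {
        if (requestPath == null || !requestPath.startsWith("/")) {
            return false; // well-formed requests always start at the web root
        }
        // Undo the percent-encoding of '.' that a hand-crafted request might use.
        String decoded = requestPath.replace("%2e", ".").replace("%2E", ".");
        for (String segment : decoded.split("/")) {
            if (segment.equals("..")) {
                return false; // traversal segment: reject the whole request
            }
        }
        return true;
    }
}
```

This is deliberately simplified (a production filter would do full URL decoding, normalization, and canonical-path comparison); the point is only that the check lives on the server, not in the client.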

It is absolutely, completely, and otherwise entirely impossible to ensure that the client on the other end of the line is your software, properly configured, unmodified, and free from other forms of tampering.

You absolutely must validate, server-side, everything that is important. You can be safe; that's not in question. You can prevent many kinds of cheating. But you don't do so by ensuring that the client is unmodified; instead, you have to protect the server.

tylerl
1

JavaSnoop can be viewed as a "debugger for applets". It allows a user to manipulate the (Java) code which runs on his machine, inside his browser. The possibility of doing that has always been known; JavaSnoop is just a tool which makes it a bit easier.

The generic issue is what @Rook points out: code which runs on the attacker's machine cannot be trusted. That's his machine, he can make it do what he wants. Therefore, applications which rely on code running on the user's system to enforce security properties against the user cannot be ultimately robust. At best, code obfuscation can slow down the attacker a bit (by, say, a few hours or days), and there can be some mitigations which will try to prune out some low-power attackers.

In the context of online games, where client-enforced rules are very common (because server-side enforcement would lead to intolerable latency, or for other similar technical reasons), evicting 90% of the cheaters is already a net gain. Game vendors and operators are accustomed to the idea that they will never eradicate cheating; they can only hope to keep it down to tolerable levels. The same cannot be said of every other context.


As explained above, the whole point is that the code runs on the attacker's machine (i.e. that of the user, seen as the potential attacker). This points to a "simple" solution: let the machine not be the attacker's machine anymore. This is what game consoles do: the console runs an operating system which is signed by the console vendor, and will refuse to upgrade to or boot an OS version which is not signed. The OS will also refuse to run unsigned applications. Though a game console is really a computer, the ability to run user-provided code is locked down.

Of course, any single security hole in the OS or in a "trusted application" (a game) can be abused to bypass these protections. This has been done several times. For the PS3 console, there was the "PS Jailbreak", a USB dongle which exploits a bug in how the OS handles information from USB devices. Then there was a fake OS which had been signed, because Sony completely failed to use ECDSA signatures correctly (they reused the supposedly random per-signature nonce) and revealed their private key in the process. Note, though, that Sony, the console vendor, still has a powerful lever to set things "right" (from their point of view), and they used it: they can force firmware updates, lest all the "online" goodies become inaccessible (and these include the simple reading of recent Blu-ray discs...).

Another example is how an iPhone or iPad will execute only duly approved applications (approved by Apple, of course), unless the device is jailbroken. Apple and jailbreakers are locked in an endless race, Apple producing new versions of devices and accompanying OS at breakneck speed, while reverse-engineers are hard at work finding holes to abuse. Sometimes the OS is broken the day it gets out; sometimes it holds the line for several months.

Apart from software exploits, hardware attacks can be used. To counter these, the hardware must be tamper-resistant. The generic name for this kind of device is Trusted Platform Module. A TPM can be used as extra protection for the user, to block some kinds of illicit entries (e.g. virus); but it can also be used to protect against the user, locking him out of his own machine.

None of this, however, is really applicable to Java applets. A Java applet is, by definition, abstracted away from the actual hardware. It runs in a virtual machine implemented by the Java plugin -- a plugin of which there exists an open-source version that is easy to modify to include JavaSnoop-like debugging abilities.

The real "solution" to this problem is a no-solution: simply don't do this. Don't design your application so that code on the client's computer must be trusted. It will simply not work in the long run; unless you are in a context which is trivial enough (e.g. online games) that occasional "cheating" is not a big issue, and mitigation measures can be sufficient, from an economic point of view. If you must have trusted client-side code, then be prepared to have to produce your own tamper-resistant hardware.

Thomas Pornin
  • Thanks, you've added a lengthy exposition of what I took as a given, and omitted in the interests of keeping the posting brief. The ideas in the "mitigation" link you provided are the kind of thing I was looking for. – ddyer Aug 14 '13 at 19:50
1

I think that the other answers have covered the ultimate principle that it's impossible to fully trust a client application. That said, is there anything you can do to raise the bar a bit against trivial hacks?

I'd say that there are a couple of approaches you can take to this. For the traffic stream itself, a common approach to reverse engineering is to use a proxy (e.g. Burp) to intercept traffic. One way to make this harder is to only allow one specific certificate to be used, as opposed to trusting any certificate with the correct CN issued by a "trusted" CA. This is commonly known as certificate pinning.
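At its core, pinning boils down to comparing the presented certificate's fingerprint against a value shipped inside the client. Here's a hypothetical sketch of just that comparison; a real client would perform it inside a custom `X509TrustManager` during the TLS handshake, and the `CertPinner` name is invented for illustration:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the check at the heart of certificate pinning:
// instead of trusting any CA-signed certificate with the right CN, the client
// accepts only the certificate whose SHA-256 fingerprint matches a pin
// compiled into the application.
class CertPinner {
    // Hex-encoded SHA-256 over the certificate's DER encoding.
    static String fingerprint(byte[] derEncodedCert) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : sha256.digest(derEncodedCert)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is mandatory in every JRE
        }
    }

    static boolean pinMatches(String expectedPin, byte[] presentedCert) {
        return expectedPin.equalsIgnoreCase(fingerprint(presentedCert));
    }
}
```

With the pin checked against one exact certificate, a proxy like Burp can no longer insert itself simply by getting its own root CA installed on the user's machine; of course, an attacker who modifies the client itself can still strip the check out, which is the larger problem discussed next.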

A larger problem is decompilation of the client application, which allows the attacker to see and modify its behaviour. As @ThomasPornin says, obfuscation isn't a complete protection by any means, but it could weed out some attackers. Software like ProGuard could be used.

Beyond the basics, I'd say look at your architecture: is it possible to move more logic to the server side to reduce the potential risks of client-side attacks? Another (imperfect) option would be to have a client written in a language more friendly to obfuscation and anti-decompilation techniques.

Rory McCune
1

Here's an example of the kind of suggestion I was hoping to elicit.

A: Use a covert channel to signal from client to server that it has been hacked. For example, in a shooter, impose an "unwritten law" that every tenth movement message will be to the left. If the client's enforcement of this law is sufficiently natural and distributed through the code, any of the user's hacks that attempt to change, add, or remove movements will probably violate the law. The server learns it is dealing with a rogue and takes appropriate action (after a random delay).
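For illustration only, here is a minimal sketch of the server's side of that "unwritten law" (the names are invented, and, as the comments point out, this is obscurity rather than security):

```java
// Hypothetical sketch: the genuine client always sends LEFT as every tenth
// movement; the server counts messages and flags a stream that breaks the
// pattern. A hack that inserts, drops, or rewrites movements shifts the
// count and trips the check.
class CovertChannelCheck {
    private int count = 0;
    private boolean flagged = false;

    void onMovement(String direction) {
        count++;
        if (count % 10 == 0 && !"LEFT".equals(direction)) {
            flagged = true; // rogue client detected: act after a random delay
        }
    }

    boolean isFlagged() {
        return flagged;
    }
}
```
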

The problem with this kind of security hack is that it violates good program design, making it harder to keep the intended client running correctly.

ddyer
  • This is a poor idea, at best it is "security by obscurity". The problem with that is that the malicious user can eventually (probably not too difficult) discover your covert channel, either by analyzing the traffic in different situations, or by decompiling your client. – AviD Aug 17 '13 at 21:44
  • then how would you recommend detecting that a client is attempting to tamper? I'm soliciting better ideas. – ddyer Aug 18 '13 at 02:41
  • You can't, in any meaningful way. You have no real control over the client. As the other answers already point out, you can only control the server, and what the server does. Shift your thinking and mitigations to the server side - it shouldn't matter to the server who or what the client is. – AviD Aug 18 '13 at 07:13
  • I'm not in the "nothing is better than something" camp. Short of completely abandoning smart clients, something is better than nothing. – ddyer Aug 18 '13 at 19:38
  • Sure, do what you can on the client side, including obfuscation, code signing, protocol checking, and so on. Just realize what benefits it can give you, and what it can't - and what you still need to implement on the server-side, and what you just need to chalk up to "acceptable losses" from advanced attackers... – AviD Aug 18 '13 at 20:33