10

These are the objectives I have in mind:

  1. Make the app hard to crack, as the binary will hold some secret tokens.
  2. If it still can be cracked, is there any way the app can tell someone, or detect on its own, that it has been cracked (like checking against some checksum or certificate, or the OS doing it for the app) and take some action? The app binary can get infected, or bad code can get injected while the program is in memory, so suggested methods should be able to deal with both these cases.

I'm using these platforms:

  • iPhone [OS: iOS, Language: Objective-C]
  • Android [OS: Android, Language: Java]

Please don't factor in costs and development times, as this is a critical app for its audience. It will be great if I can get the questions answered for both the platforms. Thanks for your time and attention already.

Edit 1: Are these ProtectionClasses undocumented? I don't see any documentation under the Apple docs. Does anyone have any idea whether these are of any use?

Edit 2: codesign for Mac OS X came with a kill option for signing binaries such that whenever a binary doesn't match its checksum in the certificate, the OS kills it. Detailed here [Ctrl-F 'option flags']. Is there anything like this for iOS?

Edit 3: I'm thinking of using a combination of these ideas. So in case someone comes looking here, this might help. One-way function & Oblivious transfer.

kumar
  • If this app is critical to its audience, and they're willing to invest a lot of money and time, chances are your adversaries will be willing to spend much on breaking it too. Anything that runs on a platform outside your control can be analysed and modified in such a way that the app itself won't be able to detect it. There might be ways to make it harder to do so, but it's only a question of resources/effort. – Yoav Aner Feb 01 '12 at 22:12
  • What you mention is a real concern, and hence the madness. Still, I wanted to find out on my own if there is any way to help prevent it. I got to know iOS has some feature to let an app check against its certificates and destroy itself if it detects infection. Researching that one. – kumar Feb 01 '12 at 22:19

4 Answers

8

Basically, objective '1' will not work; reverse engineering just works too well. It will be especially easy with the Android version, because Java is really simple to decompile (on Android a specific bytecode format is used, because the runtime is Dalvik rather than a standard JVM, but the principle remains). However, it will not be that hard with the iPhone version either (Objective-C compiled to ARM code is not the hardest thing ever to disassemble and figure out).

Objective '2' will not work either. First of all, reverse engineering and data extraction are passive only, so the application is unaltered -- therefore, there is nothing the application can do to detect that such reverse engineering took place. Also, it is easy to modify some application code to bypass any internal test; that's the number 1 cracking technique employed to work around copy protection of games since the early 1980s (at least).

The best you can hope for is to split users into two categories: those who have jailbroken their phone, and those who have not. On a non-jailbroken iPhone, only apps dutifully approved by Apple can be installed and executed, so this prevents installation of a modified, non-official app which would use your "secret tokens" in a way you do not want. But jailbreaking is exactly what people do to install unapproved apps, and it seems to be relatively easy.
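
On the Android side, the analogous split is rooted vs. non-rooted devices. A common (and easily bypassed) heuristic is to look for the `su` binary in well-known locations. The sketch below is illustrative only -- the class name and path list are assumptions, and a determined attacker on a rooted device can hide these paths or patch the check itself:

```java
import java.io.File;

// Heuristic root check (sketch only): looks for common locations of
// the `su` binary. Treat a "false" result as weak evidence at best --
// a rooted device can hide these paths or hook the check itself.
public class RootCheck {
    private static final String[] SU_PATHS = {
        "/system/bin/su", "/system/xbin/su", "/sbin/su",
        "/system/sd/xbin/su", "/data/local/bin/su"
    };

    // Separated out so the path list can be swapped for testing.
    static boolean anyExists(String[] paths) {
        for (String path : paths) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static boolean looksRooted() {
        return anyExists(SU_PATHS);
    }
}
```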

You can also try to make life harder for attackers by renewing your secret tokens very often, and constantly publishing new versions of your app. That's what game console vendors do, with constant new "firmware versions" that users must download to be able to play with the newer games, view the newer movies, or connect to a gamer network. This annoys users, too, so that's not exactly cheap to do.
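
The token-renewal idea can be sketched as deriving each token from a shared secret and the current time window (the same idea as TOTP), so an extracted token stops working when the window rolls over. The 30-minute window and class name here are illustrative assumptions, not part of the original question:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of short-lived tokens: derive the current token from a
// shared secret and the current 30-minute time bucket via
// HMAC-SHA256. Tokens from the same bucket match; tokens from
// different buckets do not.
public class RollingToken {
    public static byte[] tokenFor(byte[] secret, long epochSeconds) throws Exception {
        long window = epochSeconds / 1800;  // 30-minute buckets
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(Long.toString(window).getBytes());
    }
}
```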

Things would be much easier for you if you could arrange for the "secret tokens" to be user-specific -- this way, if a token value is unduly extracted, you can still blacklist it on your server (I assume that your app uses the tokens to connect to a server somewhere).
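
A server-side blacklist for leaked user-specific tokens could be as simple as the following sketch (class and method names are purely illustrative, not from any framework):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Server-side sketch: because tokens are user-specific, a leaked token
// can be revoked individually without affecting other users.
public class TokenRegistry {
    private final Set<String> revoked = ConcurrentHashMap.newKeySet();

    public void revoke(String token) {
        revoked.add(token);
    }

    public boolean isValid(String token) {
        return token != null && !revoked.contains(token);
    }
}
```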

Note that keeping data secret in a deployed application, or, failing that, reliably detecting fraudulent usage of said data, is the Holy Grail of all game editors and movie publishers around the World. Last I heard, the big players like Sony and Warner have not found it yet, and that's not for lack of trying.

Tom Leek
  • We are actively considering keeping out jailbroken phones. Also, the tokens are designed to expire every half an hour or less, but isn't it possible for the code loaded from the binary to be infected itself. If an app can grant itself the OS-level privilege, ain't that a real scenario? The tokens are user-specific as well, but the server wouldn't know if incoming requests are impersonation or from the real client app, unless a user actively detects and reports any violation. – kumar Feb 01 '12 at 23:05
  • @kumar: Reversers can and will get around any such protections. Best to leave that problem alone and move everything of value to the server-side. – atdre Feb 02 '12 at 02:48
  • @kumar - You might not be able to stop people from using your application on a phone they have jailbroken or rooted. – Ramhound Feb 02 '12 at 14:43
  • @Ramhound Yes, it indeed is mighty tough limiting installation strictly to unrooted devices, but so far that seems like our second best bet. – kumar Feb 02 '12 at 15:09
  • The "splitting users into two categories" part of this answer is poor advice. I do not suggest counting on non-jailbroken devices. People will have jailbroken devices, and you can't stop them if they are savvy enough to prevent MDM, EMM, MAM, or MCM policies by reverse engineering those apps before installing them, potentially patching the binaries or patching the runtime with method swizzling or the like. – atdre Feb 17 '13 at 10:29
5

How to harden an iPhone/Android app so it's tough to reverse-engineer it?

Preventing the reverse engineering of software is a difficult problem.

Preventing the reverse engineering of application software on a general computing platform with weak hardware assistance (PC, iPhone, or Android phone) for a significant period of time (months) is an exceptionally difficult problem.

Generally, instead of trying to protect the entire application, a common alternative is to protect only the critical data that the application uses.

[1] Make the app hard to crack, as the binary will hold some secret tokens.

Your thought here exposes the 'protect the data' concept. What you really want the application to protect is a few pieces of critical data: the secret tokens.

How do you protect data when the application cannot protect itself?

You pass the responsibility to the operating system. There are two types of protection that operating systems can offer: access control and cryptography. Most of the protections we use when we allow or deny reading data or performing a specific action are access controls.

An access control takes as input an action and an identifier of who wants to do the action. Its output is either approve or deny for permission to perform the action. An access control uses a set of rules or a table to determine whether an identifier should be allowed to perform an action.

For example: Alice has a game FunGame on a computer. The computer's access control has a rule that only Alice may run FunGame. Bob sits down at the computer and attempts to run FunGame. The computer's access control takes as input 'run FunGame' and 'Bob'. Based on the rule 'only Alice may run FunGame' the access control's output is deny.
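
The Alice/Bob example above can be expressed as a tiny rules table. This sketch is purely illustrative (one allowed identifier per action, names invented here):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of the access-control model described above:
// a table mapping each action to the one identifier allowed to
// perform it, consulted with an (identifier, action) pair.
public class AccessControl {
    private final Map<String, String> rules = new HashMap<>();

    // e.g. addRule("run FunGame", "Alice")
    public void addRule(String action, String allowedIdentifier) {
        rules.put(action, allowedIdentifier);
    }

    public boolean isAllowed(String identifier, String action) {
        return identifier.equals(rules.get(action));
    }
}
```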

Access control provides a direct and general way to make decisions about allow or deny, while cryptography provides a more indirect method of allow or deny. With encryption and decryption we can transform data so that the data makes sense or makes no sense.

Encryption and decryption require at least one cryptographic key. The value of the key becomes an access control with the two rules 'allow anyone with the key to take comprehensible data and make it incomprehensible' and 'allow anyone with the key to take incomprehensible data and make it comprehensible'. Making data incomprehensible only protects the data from being understood. An individual may still be able to read the data, but it will not be in a form they comprehend.

Notice that the key, instead of the identifier of an actor, becomes the critical component. Anyone who has the key, no matter how they obtain it, can access the data. This makes storing the key a problem. If you store the key on the same system as the data, you are providing anyone with the means to access your data. This is why passphrases require memorization. Memorizing a passphrase stores the key (passphrase) in your head instead of on the system.

So, to protect your tokens you can use the operating system's access controls, or you can encrypt the tokens, or both. If someone has an operating system that does not enforce its own access controls (jailbroken iPhone or rooted Android phone), then you cannot rely on the operating system's access controls. However, as long as the key used to encrypt your tokens does not reside on the same system as the encrypted tokens, your tokens will still be protected even if the operating system is compromised.
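
A minimal sketch of that idea, encrypting tokens with AES-GCM where the `SecretKey` is assumed to be obtained from your server at runtime rather than stored on the device (how the key is fetched and protected in transit is exactly the key-management problem mentioned below):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Token encryption sketch: AES-GCM with a random 12-byte IV prepended
// to the ciphertext. The key is assumed to live off-device.
public class TokenVault {
    public static byte[] encrypt(SecretKey key, byte[] token) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(token);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, Arrays.copyOfRange(blob, 0, 12)));
        return c.doFinal(Arrays.copyOfRange(blob, 12, blob.length));
    }
}
```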

However keeping the key off the system in question is another difficult problem called key management.

[2] If it still can be cracked, is there any way the app can tell someone or its own self that it has been cracked(like checking against some checksum or certificate or the OS doing it for the app) and take some action?

The app binary can get infected or bad code can get injected while the program is in memory as well, so suggested methods should be able to deal with both these cases.

The concept of examining a piece of data to see if it is unaltered since it was last inspected is called integrity checking. An efficient way of evaluating the state of a chunk of data is called cryptographic hashing. To perform integrity checking you must first calculate the cryptographic hash of a piece of data and then protect the resulting hash value. At a later time, recalculate the cryptographic hash on the same piece of data. Compare the recalculated hash value to the protected hash value. If the two values are identical then the data is unaltered. If the two hash values differ then the data has been altered.

You may treat the application binary in storage as a piece of data and perform integrity checking on it. You may also treat the application binary in memory as a piece of data and perform an integrity check on it.
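
As a sketch of the hash-compare step using SHA-256 (where and how the reference hash is protected is the hard part -- an attacker who can patch the binary can usually patch the stored hash or the comparison too):

```java
import java.security.MessageDigest;

// Integrity check sketch: hash a blob of data (e.g. the app binary
// read from storage) and compare against a previously protected
// reference hash.
public class IntegrityCheck {
    public static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    public static boolean unaltered(byte[] data, byte[] protectedHash) throws Exception {
        // Constant-time comparison of the two digests.
        return MessageDigest.isEqual(sha256(data), protectedHash);
    }
}
```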

The concept of examining a running program to see if it is processing as designed is called attestation. Checking all the memory used by the running code is typically infeasible so attestation usually checks something a little simpler such as the values of certain data sets or the state of the running program and its transition between states. Attestation is difficult to achieve so it is not widely used in commercial software.

this.josh
4

While there are some really intelligent self-modifying (e.g. packing) and self-checking code methods for Obj-C, Java/Dalvik, IPA, JAR, and APK -- it is my belief that they are all as easily subverted as their ASM, C, C++, PE, and ELF cousins. The capabilities of reverse engineers are unworldly in the year 2012.

My suggestion is to keep all of your intellectual property on the server-side and carefully authenticate and authorize every action or inaction, similar to how non-rootkit MMORPGs catch cheaters (e.g. something like, but not quite, PunkBuster).

Mobile apps should be overly simple and should not EVER trust the user -- just like every client-side app.

atdre
  • true - but mobile communications aren't all that great a lot of the time - the problem comes back as the code is forced to migrate to the client side to compensate for performance and communications issues – Mark Mullin Oct 06 '12 at 03:27
3

It might appear a fruitless endeavour to try and truly prevent reverse-engineering, but I think the notion here is to make it more difficult for would-be attackers. Below are some precautions you can take as a developer to better defend against decompilation on Android:

  1. Write some of the core features in native code (Android NDK)
  2. Use encryption
  3. Shift logic to server-side
  4. Employ various forms of obfuscation (split variables, promote scalars to objects, change variable lifetimes, change encoding, reorder instance variables, scramble identifiers, split/fold/merge arrays, modify inheritance relations) -- these are just data obfuscation examples; have a look at control obfuscations as well. You'll notice these methods promote sloppy code and increase maintenance overhead -- but people reversing your app are going to hate you.
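
As a toy illustration of one of the data obfuscations listed above (variable splitting), the flag below never appears plainly as a boolean; it is encoded in the relationship between two ints. This raises the bar only slightly, and the class name and encoding are invented here for illustration:

```java
// Variable-splitting sketch: a boolean is stored as two ints, where
// b == a encodes true and b == a + 1 encodes false. Neither field
// alone reveals the value at a glance.
public class SplitFlag {
    private int a, b;

    public void set(boolean value) {
        a = (int) (Math.random() * 1000);
        b = value ? a : a + 1;
    }

    public boolean get() {
        return a == b;
    }
}
```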

These are just a few examples; I would highly encourage you to read this paper regarding Java obfuscation techniques. The techniques can be adapted to any language. Good luck.

1. "A Taxonomy of Obfuscating Transformations," Computer Science Technical Report 148 (1997), https://researchspace.auckland.ac.nz/handle/2292/3491.