24

I have Android applications (mobile banking) that connect to my server and perform online transactions (via Internet/USSD/SMS). I want to make sure those clients have not been tampered with and are the original ones distributed by me.

Keep in mind that not all of my customers download the application via Google Play; some of them use third-party markets or download the APK from elsewhere.

Is there a way I can validate the integrity of the application (using a checksum or a signature) on the server side to make sure it's not tampered with? (E.g. that a trojan has not been implanted in the application, which was then redistributed.)

For suggested solutions:

  • Can they be implemented over all 3 communication channels (SMS/USSD/Internet) or are the solutions proprietary to one/some channels?

(I'm looking for exactly the technique referred to on this page: https://samsclass.info/android/chase.htm):

Chase's servers don't check the integrity of their Android app when it connects to their servers. It is therefore easy to modify the app, adding trojan code that does malicious things. An attacker who can trick people into using the trojaned app can exploit them.

This vulnerability does not affect people who are using the genuine app from the Google Play Store. It would only harm people who are tricked into installing a modified app from a Web site, email, etc.

Silverfox
  • 3,369
  • 2
  • 19
  • 39
  • 5
    Not readily, as your proof of integrity can itself be spoofed. For example, you could try some sort of challenge-and-response protocol, one where the response is not possible to generate without having access to the unmodified app. One approach would be to send a salt from the server to the app, which then has to generate a cryptographically-secure hash of the APK using that salt and send it back. However, this can be broken by having the hacked app pass the salt to its own server, which generates the hash on an unmodified APK, sends it to the app, which returns it to the server (see the sketch after these comments). – CommonsWare Jan 31 '16 at 18:10
  • 3
    It's called *remote attestation* and only works if the device contains treacherous hardware. There have been multiple attempts to introduce such features (TCPA, TPM, Intel SGX, ...), but I don't know if typical Android devices contain such a feature. – CodesInChaos Jan 31 '16 at 21:30
  • @CodesInChaos "Treacherous hardware" is loaded language. If I were you I'd try to avoid using that in objective discussions. – user253751 Jan 31 '16 at 21:41
  • @CommonsWare Or just let the hacked app include a copy of the original app, and use that to calculate the checksum. – user253751 Jan 31 '16 at 21:42
  • 2
    @immibis I consider acting in the interests of its owner the most important duty of a computer. Thus I consider it justified to call a piece of hardware that's willing to testify against its owner *treacherous*. But I'm open to other descriptions, if you have any good ideas. – CodesInChaos Jan 31 '16 at 21:46
  • 1
    @CodesInChaos Consider that the inability to load new code onto a device may be in the owner's interests in some cases. For example, their banking app can't be tampered with so that it will wire all their money to Nigeria when they open it. (Of course there are still other concerns, like making sure that the app the user is opening really is the banking app) – user253751 Jan 31 '16 at 21:52
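
For concreteness, here is a minimal Kotlin sketch of the salted-hash scheme from the comments above. The file handling and names are illustrative (on Android, the path of the running APK is available as context.packageCodePath); the closing comment explains the relay that defeats it:

```kotlin
import java.io.File
import java.security.MessageDigest

// Naive challenge-response: answer the server's challenge with
// SHA-256(salt || APK bytes). apkPath is illustrative; on Android you would
// pass context.packageCodePath.
fun saltedApkDigest(salt: ByteArray, apkPath: String): ByteArray {
    val md = MessageDigest.getInstance("SHA-256")
    md.update(salt)
    File(apkPath).inputStream().use { input ->
        val buffer = ByteArray(8192)
        var read = input.read(buffer)
        while (read >= 0) {
            md.update(buffer, 0, read)
            read = input.read(buffer)
        }
    }
    return md.digest()
}

// The relay attack: a trojaned app forwards the salt to the attacker's own
// server, which runs this same function over a pristine copy of the APK and
// returns the digest. A matching response therefore proves nothing about the
// code that is actually running.
```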

4 Answers

34

Use Android SafetyNet. This is how Android Pay validates itself.

The basic flow is:

  • Your server generates a nonce that it sends to the client app.
  • The app sends a verification request with the nonce via Google Play Services.
  • SafetyNet verifies that the local device is unmodified and passed the CTS.
  • A Google-signed response ("attestation") is returned to your app with a pass/fail result and information about your app's APK (hash and signing certificate).
  • Your app sends the attestation to your server.
  • Your server validates the nonce and APK signature, and then submits the attestation to a Google server for verification. Google checks the attestation signature and tells you if it is genuine.

If this passes, you can be fairly confident that the user is running a genuine version of your app on an unmodified system. The app should get an attestation when it starts up and send it along to your server with every transaction request.
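
As a rough illustration of the client side of this flow, a minimal Kotlin sketch might look like the following. It assumes the Play Services SafetyNet client library is on the classpath; the transport callback is a placeholder for your own networking code:

```kotlin
import android.content.Context
import com.google.android.gms.safetynet.SafetyNet

// Sketch only: request an attestation and forward the JWS result to your
// server. The nonce must come from your server and be bound to the pending
// transaction; apiKey is your Google API key.
fun requestAttestation(
    context: Context,
    nonce: ByteArray,
    apiKey: String,
    sendJwsToServer: (String) -> Unit
) {
    SafetyNet.getClient(context)
        .attest(nonce, apiKey) // asynchronous Play Services call
        .addOnSuccessListener { response ->
            // jwsResult is a Google-signed JWS describing device and APK state.
            sendJwsToServer(response.jwsResult)
        }
        .addOnFailureListener {
            // No attestation available (no Play Services, network error, ...):
            // decide whether to fail closed or degrade gracefully.
        }
}
```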

Note, however, this means:

  • Users who have rooted their phone will not pass these checks
  • Users who have installed a custom or third-party ROM/firmware/OS (e.g. Cyanogen) will not pass these checks
  • Users who do not have access to Google Play Services (e.g. Amazon devices, people in China) will not pass these checks

...and therefore will be unable to use your app. Your company needs to make a business decision as to whether or not these restrictions (and the accompanying upset users) are acceptable.

Finally, realize that this is not an entirely airtight solution. With root access and perhaps Xposed, it is possible to modify the SafetyNet library to lie to Google's servers, telling them the "right" answers to get a verification pass result that Google signs. In reality, SafetyNet just moves the goalposts and makes it harder for malicious actors. Since these checks ultimately have to run on a device out of your control, it is indeed impossible to design an entirely secure system.

Read an excellent analysis of how the internals of SafetyNet work here.

josh3736
  • 2,185
  • 2
  • 17
  • 22
  • 8
    +1 for "*In reality, SafetyNet just moves the goalposts and makes it harder for malicious actors.*" – Bergi Feb 01 '16 at 09:17
  • Is there any equivalent method for iOS ? – Kaizer Sozay Jul 14 '17 at 10:46
  • @KaizerSozay: No, Apple does not have an equivalent OS API. This is likely because SafetyNet is intended not only to find user modifications (whether intentional rooting or unintentional malware), but it's also intended to certify that the original manufacturer of the device met Google's security standards (a problem Apple does not have). There are of course [things you can do as an iOS app developer](https://www.google.com/search?q=ios+detect+jailbreak) to check for the presence of common integrity problems, but it's just as easy for the jailbreaker to lie to your app about their presence. – josh3736 Jul 14 '17 at 20:09
  • ...and this is an excellent time to point out that since this answer was originally posted, SafetyNet's security has been defeated; it is possible for an Android user to pass SafetyNet while still having root access. This is entirely unsurprising -- the goalposts were moved and the other team caught up. – josh3736 Jul 14 '17 at 20:13
  • Presumably you must go through this flow with every single API request, or your protection is worthless – Conor Mancone Sep 03 '20 at 12:08
  • Can we check the integrity of a particular app with SafetyNet? Isn't it for checking the integrity of the OS in general? – b4da May 27 '21 at 15:02
  • The attestation response gives you `apkCertificateDigestSha256`, which is the hash of the certificate used to sign the APK that made the attestation request (ie your app). You can use this hash to verify that the APK was one you signed, since presumably the system verifies the integrity of the APK signature as part of the SafetyNet validation process. (Again, this all assumes you can trust the OS and SafetyNet implementation, which you can't.) – josh3736 Jun 01 '21 at 20:08
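
To make that last comment concrete, here is a hedged Kotlin sketch of the server-side payload checks. The field names come from the SafetyNet attestation payload; the helper and its parameters are illustrative, and it assumes you have already validated the JWS signature and certificate chain against Google's attestation certificate:

```kotlin
import org.json.JSONObject
import java.util.Base64

// Sketch: inspect the attestation payload AFTER the JWS signature and its
// certificate chain have been validated; reading the payload without that
// validation proves nothing.
fun payloadLooksGenuine(
    jws: String,
    expectedNonceBase64: String,
    expectedCertDigest: String
): Boolean {
    // A JWS is header.payload.signature; the payload is base64url-encoded JSON.
    val payloadJson = String(Base64.getUrlDecoder().decode(jws.split(".")[1]))
    val payload = JSONObject(payloadJson)
    val digests = payload.optJSONArray("apkCertificateDigestSha256") ?: return false
    val certMatches = (0 until digests.length())
        .any { digests.getString(it) == expectedCertDigest }
    // The nonce field holds the base64 encoding of the bytes the server issued.
    return payload.optString("nonce") == expectedNonceBase64 &&
            payload.optBoolean("ctsProfileMatch") &&
            payload.optBoolean("basicIntegrity") &&
            certMatches
}
```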
23

The only thing the server can reliably determine about a device is its behaviour towards the server (what data it receives, and in what time patterns). Assuming an attacker has knowledge and control of all the elements that influence that behaviour, the attacker can create a malicious clone and the server will never know.

So, technically, this is impossible. Unless you employ support from the device's hardware, the unmodified APK file alone is sufficient to create a malicious clone (decompilation is easy enough, and ProGuard won't help much against an experienced reverse engineer).

As has been mentioned in the comments by @CodesInChaos and @OrenMilman:

You can include elements in that behaviour that are very hard for an attacker to get hold of, e.g. a TPM/TEE, and implement remote attestation. Assuming the TPM has no vulnerabilities (which is unlikely, but let's just assume), this would indeed be a solution. Also, security is pointless without a threat model. So if your threat model excludes attackers with full-time dedication, lots of money and possibly access to 0-days, you can consider such a mechanism secure.
I don't know which Android devices have a TPM and support such measures; I'll leave that research and amending this answer to someone else.

marstato
  • 2,237
  • 14
  • 11
  • The thing is, under the link I mentioned above, it said that Chase fixed this vulnerability somehow. So I assume there has to be a way. – Silverfox Jan 31 '16 at 18:31
  • 19
    No @Silverfox. They just say that they fixed it. It's hogwash. They can't even tell if they're talking to their app or some hand-written client running on a botnet or something. There's no security without physical security. – Neil Smithline Jan 31 '16 at 19:58
  • If you read the entire thing, they fixed it so that the APK no longer decodes using apktool, i.e. they only obfuscated the code. Nothing about verifying the app server-side. – AmazingDreams Feb 01 '16 at 13:30
  • Why "impossible"? What if the server only accepted client apps that send a valid [Knox Attestation](https://docs.samsungknox.com/whitepapers/knox-platform/attestation.htm) ([Here is a tutorial](https://seap.samsung.com/tutorial/get-started-knox-attestation))? Assuming Knox has no vulnerabilities (improbable, but not impossible, i think), the *Remote Attestation* it provides could be trusted, couldn't it? (Knox is just the example that came to mind for a system that can provide *Remote Attestation*.) – Oren Milman Feb 09 '19 at 09:12
  • @OrenMilman No, it can't be trusted. Again: the server that verifies the attestation can only verify the data it receives from the phone. The server has no way to tell whether that data is a result of an honest security check or something made up by a malicious app. Like literally: the exact same sequence of 1s and 0s could have been produced by both a valid, clean client and a malicious one. It's all eyewash. At best, it gets harder for an attacker. But there is **always** a way. – marstato Feb 09 '19 at 15:18
  • In the scenario I mentioned, a malicious app shouldn't have access to the Samsung Attestation Key. Only code that runs on the secure OS (which runs in TrustZone's secure world) should have access to that key, and only signed trusted apps are allowed to run on the secure OS. The key becomes inaccessible to everyone forever if an unapproved state was ever detected in the device. (I elaborated about it a bit [here](https://security.stackexchange.com/q/202055/188882).) Could you please point out where exactly this mechanism fails? – Oren Milman Feb 09 '19 at 15:41
  • @OrenMilman see [the 10 immutable laws of security](https://uptakedigital.zendesk.com/hc/en-us/articles/115000412533-10-Immutable-Laws-Of-Security-Version-2-0-): #1 "If a bad guy can persuade you to run his program on your computer, it's not solely your computer anymore." Let's assume there is a plethora of root-level exploits for Android devices, especially unpatched older ones (e.g. Stagefright). I don't see how Android root would not get access to that key. – marstato Feb 09 '19 at 16:06
  • An app that a bad guy wrote is not signed, so it won't run inside TrustZone's secure world, and won't have access to the Samsung Attestation Key. Similarly, almost all of the root-level exploits out there would allow malicious code to run in TrustZone's normal world, but the key is accessible only in secure world, so it should be out of their reach. I agree that Samsung Knox (or any similar mechanism) is probably not perfect, but theoretically it could be, if we keep it as minimal as possible, and keep fixing bugs in it. IIUC, this is the idea behind TEE. Please point out what I am missing. – Oren Milman Feb 09 '19 at 16:22
  • I have no detailed knowledge of ARM TrustZone. AFAIK TrustZone programs can be read and updated. Even if the signature key is inside a TPM, the problem merely shifts from between phone and server to between processor and TPM. – marstato Feb 09 '19 at 16:28
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/89489/discussion-between-oren-milman-and-marstato). – Oren Milman Feb 09 '19 at 16:51
3

I am arriving a little late to the party, but I think I can add some useful insights.

Is there a way I can validate the integrity of the application (using a checksum or a signature) on the server side to make sure it's not tampered with? (E.g. that a trojan has not been implanted in the application, which was then redistributed.)

For the strongest security between your app and the API server, you should use a Mobile App Integrity Attestation service alongside the SafetyNet solution and an OAuth2 service. It is also important to use certificate pinning to secure the communication channel between the API server and the mobile app, as covered in this series of articles about mobile API techniques.

The role of a Mobile App Integrity Attestation service is to guarantee at run-time that your app has not been tampered with and is not running on a rooted device, by using an SDK integrated into your app and a service running in the cloud.

On successful attestation of the app's integrity, a JWT token is issued and signed with a secret that only your API server and the Mobile App Integrity Attestation service in the cloud know.

If the app integrity attestation fails, the JWT is signed with a secret that the API server does not know.

The app must now send the JWT token in the headers of every API request. This allows the API server to serve only requests for which it can verify the signature of the JWT token, and to refuse them when verification fails.

Since the secret used by the Mobile App Integrity Attestation service is not known to the app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack. This is where this type of service shines in relation to the SafetyNet solution.

You can find such a service in Approov, which has SDKs for several platforms, including Android. The integration also needs a small check in the API server code to verify the JWT token, in order to protect the server against fraudulent use (a sketch of that check follows).
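
As a rough sketch of what that server-side check might look like, the following uses the jjwt library purely for illustration (any JWT library works, and the exact API varies by version); the function name and parameters are placeholders:

```kotlin
import io.jsonwebtoken.Jwts
import io.jsonwebtoken.JwtException

// Sketch of the "small check in the API server code": verify the JWT's
// signature with the secret shared between the API server and the
// attestation service.
fun requestIsFromGenuineApp(jwt: String, sharedSecret: ByteArray): Boolean =
    try {
        // Throws if the signature does not match or the token is malformed,
        // which is exactly what happens when attestation failed and the token
        // was signed with a secret this server does not know.
        Jwts.parser().setSigningKey(sharedSecret).parseClaimsJws(jwt)
        true
    } catch (e: JwtException) {
        false
    }
```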

Keep in mind that not all of my customers download the application via Google Play; some of them use third-party markets or download the APK from elsewhere.

With a Mobile App Integrity Attestation service like Approov, it does not matter where the app was installed from.

I have Android applications (mobile banking) that connect to my server and perform online transactions (via Internet/USSD/SMS). I want to make sure those clients have not been tampered with and are the original ones distributed by me.

and

For suggested solutions:

Can they be implemented over all 3 communication channels (SMS/USSD/Internet) or are the solutions proprietary to one/some channels?

So, assuming that your app talks directly with the third-party services, I suggest that you delegate that responsibility to the API server. This will prevent unauthorized use of your third-party services on your behalf, since the API server now serves only authentic requests from mobile apps that have passed the integrity challenges.

SafetyNet

The SafetyNet Attestation API helps you assess the security and compatibility of the Android environments in which your apps run. You can use this API to analyze devices that have installed your app.

OAUTH2

The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This specification replaces and obsoletes the OAuth 1.0 protocol described in RFC 5849.

Certificate Pinning

Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taking from Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
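
For illustration, a minimal pinning setup with OkHttp on Android might look like this; the hostname and pins below are placeholders (generate real pins from your own certificates, and include a backup pin so a key rotation does not lock users out):

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Sketch: pin the expected public keys for the API host. Placeholders only.
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=")
    .build()

// Connections through this client fail if the server's certificate chain
// does not contain one of the pinned keys.
val pinnedClient = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
```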

JWT Token

Token Based Authentication

JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.

Disclaimer: I work at Approov.

Exadra37
  • 156
  • 3
  • Welcome to Information Security Stack Exchange! And thank you for disclosing your affiliation with Approov. I still think the answer is somewhat centered around your company, you may want to be careful with that if you write more answers. That said, this looks like a useful contribution to the Q&A. – S.L. Barth Sep 25 '18 at 13:46
  • 1
    Thanks for the advice... I would like to point to other vendors doing Mobile App Integrity Attestation, but I am not aware of any. The article I pointed out about mobile API techniques has a lot of recommendations that are vendor agnostic. – Exadra37 Sep 25 '18 at 15:51
1

This problem is something that mobile games have to deal with for revenue reasons, and from what I can tell, they handle it by constantly updating the app, requiring the user to download and install a fresh patch every time before they start the game (usually a small download). These patches also add new content to the game. The patches also handle the updating themselves, so if an app has been modified, the patch fails.

So in theory (not sure how accurate this is), you basically write an app within an app. The internal app is the app that does the heavy lifting. The external app, every time it's started, downloads a patch/integrity validator, which then first verifies that the apps haven't been tampered with (usually through a checksum), then patches the internal app if necessary. It's similar to a bootstrap.
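
Roughly, the checksum step described above might look like the following minimal Kotlin sketch; the path and expected digest are illustrative placeholders:

```kotlin
import java.io.File
import java.security.MessageDigest

// Sketch: the freshly downloaded validator hashes the inner app's APK and
// compares it to a known-good digest obtained from the update server.
fun innerAppIsIntact(innerApkPath: String, expectedSha256Hex: String): Boolean {
    val digest = MessageDigest.getInstance("SHA-256")
        .digest(File(innerApkPath).readBytes())
    val actualHex = digest.joinToString("") { "%02x".format(it) }
    return actualHex.equals(expectedSha256Hex, ignoreCase = true)
}

// As noted below, an attacker who can redirect the validator download can
// ship a validator that simply returns true, so this only raises the bar.
```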

As mentioned by marstato, this still isn't perfect. For example, an attacker can redirect the requests to their own server and install customized patches that way. A possible way around that is to do this integrity validation before each transaction, but that would be rather slow.

Nzall
  • 7,313
  • 6
  • 29
  • 45