
I am trying to reason about how native apps can avoid the problems web apps have in dealing with the "Browser Cryptography Chicken and Egg" problem, which has been discussed numerous times on this site, perhaps most notably here: Solution to the ‘Browser Crypto Chicken-and-Egg Problem’?

Particularly, I am trying to determine how native apps deal with "targeted attacks", where different content is served to different users based on what is most beneficial to the attacker. Unlike a web app, a native app is not served to the user from a remote source each time the application is accessed; however, there are critical elements involved in verifying the integrity of the app which usually are served from a remote source, namely:

  1. the app binary itself, when first downloaded
  2. the app source code
  3. the publisher's digital signature for the app binary, and associated information for verifying the signature

Note: I am (possibly incorrectly) using the term "integrity" here to refer to an app which not only has not been altered since it was last signed, but which has also been proven to be built from particular, publicly auditable source code rather than from other, possibly secret source code.

There is a section of the publication An Analysis of the ProtonMail Cryptographic Architecture by Nadim Kobeissi which addresses this subject (emphasis added).

Note that in this application scenario, releases are cryptographically authenticated and tracked through an incremental version number, and that the delivery of client code is restricted purely to software update instances. Users are therefore able to audit whether they received the same binary for some version of the application as everyone else. Furthermore, the application distributor (Apple, Google) adds a second layer of authentication and its separate governance from ProtonMail renders targeted delivery of malicious code even more difficult for a malicious [server].

In order for the emphasized text to be true, it seems to me at least one of the following must also be true:

  1. the app binary has been published to a platform which is not controlled by the attacker
  2. the source code has been published to a platform which is not controlled by the attacker
  3. the publisher's digital signature which signs the application executable is published on a platform not controlled by the attacker

Kobeissi describes my 1st point as a secondary advantage of publishing apps to the Apple App Store or Google Play Store, and I am trying to determine whether he is implying that the first layer of security is provided by my 2nd or 3rd point.

Whatever the practicality of this, if the untrusted app publisher hosts all 3 of those elements on a platform they control, would the following scenario not hypothetically be possible?:

  • a "security specialist" receives from the app publisher website a binary, a digital signature and the app source code; the specialist reviews the source code and determines it is secure; the specialist builds the binary from source and determines it matches the publisher provided binary and the provided digital signature; by all apearances, all is well: the app binary can be trusted and for more paranoid users the digital signature can be trusted to verify the binary
  • meanwhile, a less technical user visits the same website and receives a different binary, digital signature and set of source code, all of which purports to be the same as that received by the previous user; this user does not have the knowledge required to audit the source code themselves, however, they are still capable of building the app from source; they proceed to do this, compare this built binary to the published binary and provided digital signature, and determine everything matches; again, by all appearances, all seems to be well; however, what this 2nd user does not realize is, the source code they were provided is compromised, does not match the code evaluated by the security-specialist and consequently the downloaded binary is also totally insecure; yet the user remains totally unaware of the attack
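The check both users perform can be sketched as a digest comparison. A minimal sketch (file names are hypothetical, and a bare SHA-256 digest stands in for full signature verification); note that because every artifact comes from the same attacker-controlled source, the check passes for both users even though they received different code:

```python
import hashlib

def sha256(path: str) -> str:
    """Return the hex SHA-256 digest of a file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def reproducible_build_check(built_binary: str, published_binary: str,
                             signed_digest: str) -> bool:
    """Both users run this: build from the source they were given, then
    compare against the binary and signed digest they were given.
    If all three artifacts came from the same (possibly targeted)
    source, the check passes regardless of what the source contains."""
    digest = sha256(built_binary)
    return digest == sha256(published_binary) and digest == signed_digest
```

The check is internally consistent but says nothing about whether the inputs match what anyone else received, which is exactly the gap in the scenario above.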

On the other hand, if even one of the three enumerated elements is published to a platform which is NOT controlled by the attacking publisher, then the attack described above can be thwarted.

Source Code is published independently

This may be prohibitive to non-technical users, but otherwise, under this threat model, any user can assume they have received the same source code as every other user rather than targeted source code; therefore, any user can assume they have access to the same source code as was reviewed by a security specialist. So, by building from source and checking the digest against the publisher-hosted signature, the validity of that signature can be determined. Likewise, the validity of the publisher-hosted binary can be verified independently by any user (again, assuming the technical skill to build the binary from source).
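The "same source code as everyone else" comparison can be sketched as a deterministic digest over the source tree, which any user can compute locally and compare with the digest of the independently published copy (a sketch; real projects would typically compare a signed release tarball hash or a git commit hash instead):

```python
import hashlib
import os

def source_tree_digest(root: str) -> str:
    """Deterministically hash a source tree: sorted relative paths plus
    file contents. Two users who received the same source get the same
    digest; a targeted, modified copy produces a different one."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            h.update(rel.encode() + b"\0")
            with open(path, "rb") as f:
                h.update(f.read())
            h.update(b"\0")
    return h.hexdigest()
```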

App binary is published independently

Similar to the previous case, the binary can be checked against the publisher-hosted digital signature. The user would again need to build from source, this time verifying that the publisher-provided source code matches the separately published binary.

This situation probably makes less sense given the goal of the attack is to get users to execute a compromised binary, but in any case the attack can be thwarted here.

Digital signature is published independently

Unlike the previous two cases, in this case I believe a single user, ideally the security specialist who evaluates the source code, can verify the validity of the independently hosted publisher signature with respect to the publisher-provided source code. Then, any user can check their publisher-provided binary against this same independently hosted publisher signature, and easily determine whether the binary has been compromised.
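The per-user check in this case can be sketched as comparing the local binary against a digest obtained out-of-band from the independent platform (a sketch; in practice this would be, e.g., a GPG or minisign detached signature rather than a bare pinned digest):

```python
import hashlib
import hmac

def verify_against_pinned_digest(binary_path: str,
                                 pinned_digest_hex: str) -> bool:
    """Check a publisher-provided binary against a digest obtained from
    a platform the publisher does not control. An attacker who can only
    tamper with the publisher's own hosting cannot make a modified
    binary pass this check."""
    h = hashlib.sha256()
    with open(binary_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # constant-time comparison, though timing is not the concern here
    return hmac.compare_digest(h.hexdigest(), pinned_digest_hex)
```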

Digital signatures from independent sources

The assumption up to this point has been that only the publisher's digital signature is available, but it would obviously be ideal to have a trusted third party whose signature on the binary could be independently published. For this analysis, I am interested in the cases where that layer of verification has not been provided.


Given the assumptions I explicitly and implicitly laid out, am I correct in claiming a native app where the binary, source code and digital signature are all hosted by the attacker is no more secure than a web application with respect to the Browser Cryptography Chicken and Egg problem?

tyhdev
  • WRT 'meanwhile, a less technical user...' - The digital signature is on the file(s) reviewed by TR (the trusted reviewer), and was created using TR's private key. If the file(s) downloaded by the user were changed/compromised by the attacker, then the signature made by TR would fail verification. The attacker could create his own signature, but presumably the attacker does not have TR's private key. The user trusts TR's public key, not the attacker's public key. So, even though the attacker's signature would verify using the attacker's public key, the user would know something is up, because it does not verify using TR's public key. – mti2935 Jun 12 '22 at 00:54
  • @mti2935 But otherwise, without a trusted reviewer, is it correct to say relying on the attacker to serve a digital signature (as well as the binary and source code) is the same as trusting the attacker full-stop? – tyhdev Jun 12 '22 at 02:18
  • It doesn't matter who serves the digital signature. What matters is who creates the digital signature. If you trust the signer, then it doesn't matter who serves the signature. If you don't trust the signer, then you can't trust the application, regardless of who serves the signature. – mti2935 Jun 12 '22 at 10:33

1 Answer


TL;DR: It's not a perfect zero trust model, but a native app is still better than a web app.


While it is certainly less than ideal, I wouldn't say it is as bad as a web application.

You are implicitly assuming the attacker is the service operator. Under this assumption, the only significant difference I can see between a native app and a web app is that a web app has a much larger attack window: the attacker can serve malicious code on any of the numerous occasions the web app is (re)loaded. With a native app, though, if the user manages to download a clean version the first time and then turns off automatic updates, the attacker will have a tough time compromising them.

However, the advantage of a native app is clearer when the attacker is not the service operator. With a web app, the server's TLS keys have to remain on the server, which means anybody who compromises the server, whether by hacking it or seizing it with a warrant, will have the ability to perform targeted MITM attacks at will.

Contrast that with a native app, where the private key can be kept on an air-gapped machine, and potentially even split between developers in different jurisdictions using Shamir's Secret Sharing. The difficulty for the attacker is significantly higher. While they can replace the public key on the webpage, this only works if they are targeting the victim on their first install, and also requires the attacker to be able to positively identify their target based on IP address and fingerprinting data alone.
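A toy sketch of how Shamir's Secret Sharing lets a signing key be split so that no single developer holds it (illustration only, not for production use; real deployments use vetted implementations and handle secrets larger than one field element):

```python
import random

# A Mersenne prime large enough to hold a 16-byte secret as a field element.
PRIME = 2**127 - 1

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them
    recover it, but fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        # Evaluate the polynomial at x via Horner's method.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With, say, a 3-of-5 split across developers in different jurisdictions, an attacker would have to compromise three parties before they could forge a release signature.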

nobody
  • Excellent answer (+1) explaining the threat model in a web app environment as compared with the threat model in a native app. See https://pageintegrity.net/ for an attempt at a solution to verify the integrity of web pages in a web app environment (FD, I am the developer). – mti2935 Jun 12 '22 at 10:40