I am trying to reason about how native apps can avoid the problems web apps have in dealing with the "Browser Cryptography Chicken and Egg" problem, which has been discussed numerous times on this site, perhaps most notably here: Solution to the ‘Browser Crypto Chicken-and-Egg Problem’?
Specifically, I am trying to determine how native apps deal with "targeted attacks", where different content is served to different users depending on what is most beneficial to the attacker. Unlike a web app, a native app is not served to the user from a remote source each time it is accessed; however, the elements critical to verifying the integrity of the app usually are served from a remote source, namely:
- the app binary itself, when first downloaded
- the app source code
- the publisher's digital signature for the app binary, and associated information for verifying the signature
Note: I am (possibly incorrectly) using the term "integrity" here to refer to an app which not only has not been altered since it was last signed, but which has also been proven to be built from particular, publicly auditable source code, rather than from other, possibly secret source code.
There is a section of the publication An Analysis of the ProtonMail Cryptographic Architecture by Nadim Kobeissi which addresses this subject (emphasis added).
Note that in this application scenario, releases are cryptographically authenticated and tracked through an incremental version number, and that the delivery of client code is restricted purely to software update instances. Users are therefore able to audit whether they received the same binary for some version of the application as everyone else. Furthermore, the application distributor (Apple, Google) adds a second layer of authentication and its separate governance from ProtonMail renders targeted delivery of malicious code even more difficult for a malicious [server].
For the bolded text to be true, it seems to me that at least one of the following must also hold:
- the app binary has been published to a platform which is not controlled by the attacker
- the source code has been published to a platform which is not controlled by the attacker
- the publisher's digital signature which signs the application executable is published on a platform not controlled by the attacker
Kobeissi describes the first point as a secondary advantage of publishing apps to the Apple App Store or Google Play Store, and I am trying to determine whether Kobeissi implies that the first layer of security is provided by my second or third point.
Whatever the practicality of this, if the untrusted app publisher hosts all three of those elements on a platform they control, would the following scenario not be hypothetically possible?
- a "security specialist" receives from the app publisher's website a binary, a digital signature, and the app source code; the specialist reviews the source code and determines it is secure; the specialist builds the binary from source and confirms it matches both the publisher-provided binary and the provided digital signature; by all appearances, all is well: the app binary can be trusted, and more paranoid users can use the digital signature to verify the binary
- meanwhile, a less technical user visits the same website and receives a different binary, digital signature, and set of source code, all of which purport to be the same as those received by the previous user; this user lacks the knowledge to audit the source code themselves, but they are still capable of building the app from source; they do so, compare the built binary to the published binary and provided digital signature, and find that everything matches; again, by all appearances, all seems well; however, what this second user does not realize is that the source code they were provided is compromised and does not match the code evaluated by the security specialist, so the downloaded binary is also insecure; yet the user remains entirely unaware of the attack
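The scenario above can be modeled with a toy sketch (this is not real code signing: the deterministic `build` function and digest-based "signature" are illustrative stand-ins for a reproducible build and the publisher's signing key, which the attacker also controls). Each user's local consistency check passes, yet the two users hold different artifacts:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build(source: bytes) -> bytes:
    # Hypothetical deterministic (reproducible) build: the binary is a
    # pure function of the source.
    return b"binary-for:" + hashlib.sha256(source).digest()

def publisher_sign(binary: bytes) -> str:
    # Stand-in for the publisher's signature; the attacker holds the
    # signing key, so they can sign anything they serve.
    return sha256(b"publisher-key" + binary)

def local_audit(source: bytes, binary: bytes, signature: str) -> bool:
    # What each user can check locally: rebuild from the source they
    # received and compare against the binary and signature they received.
    return build(source) == binary and publisher_sign(binary) == signature

# The specialist is served the honest artifacts...
honest_src = b"honest source"
honest_bin = build(honest_src)
honest_sig = publisher_sign(honest_bin)

# ...while the targeted user is served an internally consistent but
# compromised set.
evil_src = b"backdoored source"
evil_bin = build(evil_src)
evil_sig = publisher_sign(evil_bin)

assert local_audit(honest_src, honest_bin, honest_sig)  # specialist: passes
assert local_audit(evil_src, evil_bin, evil_sig)        # victim: also passes
assert sha256(honest_bin) != sha256(evil_bin)           # yet the artifacts differ
```

The point the sketch makes is that local consistency proves nothing about global consistency: only comparing digests *between* users (or against an independent platform) can reveal the split.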
On the other hand, if even one of the three enumerated elements is published to a platform which is NOT controlled by the attacking publisher, then the attack described above can be thwarted.
Source Code is published independently
This may be prohibitive for non-technical users, but otherwise, under this threat model, any user can assume they received the same source code as every other user rather than targeted source code; therefore, any user can assume they have access to the same source code that was reviewed by a security specialist. So, by building from source and checking the digest against the publisher-hosted signature, the validity of that signature can be determined. Likewise, any user can independently verify the validity of the publisher-hosted binary (again, assuming the technical skill to build the binary from source).
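As a sketch of this check (again with a toy deterministic `build` and a bare digest standing in for what a real signature attests, both illustrative assumptions): the user rebuilds from the independently hosted source and cross-checks the publisher-hosted artifacts, so a targeted binary cannot survive the comparison.

```python
import hashlib

def build(source: bytes) -> bytes:
    # Hypothetical deterministic (reproducible) build.
    return b"binary-for:" + hashlib.sha256(source).digest()

# Source obtained from an independent platform the attacker does not
# control, so every user (and the specialist) sees the same bytes.
independent_source = b"audited source"

# Artifacts hosted by the (untrusted) publisher; honest here.
published_binary = build(independent_source)
published_digest = hashlib.sha256(published_binary).hexdigest()  # what the signature attests

# Any user can rebuild and cross-check both publisher-hosted artifacts.
rebuilt = build(independent_source)
assert rebuilt == published_binary
assert hashlib.sha256(rebuilt).hexdigest() == published_digest

# A targeted, tampered binary fails the same check.
tampered = published_binary + b"\x90"
assert hashlib.sha256(tampered).hexdigest() != published_digest
```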
App binary is published independently
Similar to the previous case, the binary can be checked against the publisher-hosted digital signature. The user would again need to build from source, this time verifying that the publisher-provided source code matches the separately published binary.
This situation probably makes less sense, given that the goal of the attack is to get users to execute a compromised binary, but in any case the attack can be thwarted here.
Digital signature is published independently
Unlike the previous two cases, here I believe a single user, ideally the security specialist who evaluates the source code, can verify the validity of the independently hosted publisher signature with respect to the publisher-provided source code. Then, any user can check their publisher-provided binary against this same independently hosted publisher signature and easily determine whether the binary has been compromised.
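This case can be sketched the same way (a bare out-of-band digest is used as an illustrative stand-in for the independently hosted signature, which in reality would bind the same digest under the publisher's key): once the signature is pinned on a platform the attacker does not control, a targeted binary is immediately detectable by any user, with no rebuild required.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Fetched from the independent platform: the digest the genuine
# signature attests to (stand-in for verifying a real signature).
genuine_binary = b"genuine binary"
independent_signature_digest = digest(genuine_binary)

# What the publisher's (attacker-controlled) site actually serves:
served_to_specialist = genuine_binary       # honest copy
served_to_victim = b"backdoored binary"     # targeted copy

assert digest(served_to_specialist) == independent_signature_digest  # accepted
assert digest(served_to_victim) != independent_signature_digest      # detected
```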
Digital signatures from independent sources
The assumption up to this point was that only the publisher's digital signature was available, but obviously it would be ideal to have a trusted third party whose signature on the binary could be independently published. For this analysis, I am interested in the cases where this layer of verification has not been provided.
Given the assumptions I have explicitly and implicitly laid out, am I correct in claiming that a native app whose binary, source code, and digital signature are all hosted by the attacker is no more secure than a web application with respect to the Browser Cryptography Chicken and Egg problem?
Re: 'meanwhile, a less technical user...' - The digital signature is on the file(s) reviewed by TR, and was created using TR's private key. If the file(s) downloaded by the user were changed/compromised by the attacker, then the signature made by TR would fail verification. The attacker could create his own signature, but presumably the attacker does not have TR's private key. The user trusts TR's public key, not the attacker's public key. So, even though the attacker's signature would verify using the attacker's public key, the user would know something is up, because the signature does not verify against TR's public key.
– mti2935 Jun 12 '22 at 00:54