
I'm trying to get my head around how PKCE works in a mobile app and there's something I don't quite understand.

So from what I can gather, the client app creates a cryptographically secure random string known as the code verifier, which it stores. From this, the app then generates a code challenge. The code challenge is sent in an API request to the server along with the method used to generate it, e.g. S256 or plain. The server stores this challenge and method alongside the authorization_code it issues for the request in question.
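To check my understanding, here's a minimal sketch of those client-side steps in Python; the function names are just illustrative, not from any particular SDK:

```python
import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # RFC 7636 wants 43-128 characters from the unreserved set;
    # token_urlsafe(64) gives roughly 86 base64url characters.
    return secrets.token_urlsafe(64)

def make_code_challenge(verifier: str) -> str:
    # S256: BASE64URL(SHA256(ASCII(code_verifier))) with the '=' padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

verifier = make_code_verifier()            # kept locally by the app
challenge = make_code_challenge(verifier)  # sent in the authorization request
# The authorization request then carries code_challenge=<challenge> and
# code_challenge_method=S256 alongside the usual client_id, redirect_uri, etc.
```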

When the client then tries to exchange the code for an access token, it also sends the original code verifier in the request. The server retrieves the stored challenge and the method originally used to generate it for this particular code, applies the equivalent S256/plain transformation to the verifier, and compares the result against the stored challenge. If they match, it returns an access token.
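And on the server side, I imagine the check at the token endpoint looks roughly like this (a hypothetical function, not taken from any specific framework):

```python
import base64
import hashlib
import hmac

def verify_pkce(stored_challenge: str, stored_method: str, code_verifier: str) -> bool:
    if stored_method == "S256":
        digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
        computed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    else:  # "plain": the challenge is the verifier itself
        computed = code_verifier
    # Constant-time comparison to avoid leaking anything via timing.
    return hmac.compare_digest(computed, stored_challenge)

# The token endpoint looks up the challenge/method stored with the
# authorization code, runs this check, and only then issues the access token.
```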

What I don't get is how this is supposed to replace a secret in a client app. Surely if you wanted to spoof this you would just take the client_id as normal and generate your own code verifier and challenge, and you'd be in the same position as if PKCE wasn't required in the first place. What is PKCE actually trying to solve here, if the original idea was that it's basically a 'dynamic secret'? My assumption is it only helps if someone happens to be 'listening in' when the auth_code is returned, but if you're using SSL, is this even needed? It's billed as addressing the fact that you shouldn't store a secret in a public app, but since the client is responsible for generating it rather than a server, it doesn't feel like it's actually helping there.

TommyBs

2 Answers


The reason PKCE is important is that mobile operating systems allow apps to register as handlers for redirect URIs, so a malicious app can register for and receive redirects carrying the authorization code intended for a legitimate app. This is known as an Authorization Code Interception Attack.

Authorization Code Interception Attack

This is described by WSO2 here:

Since multiple applications can be registered as a handler for the specific redirect URI, the vulnerability of this flow, is that a malicious client could also register itself as a handler for the same URI scheme that a legitimate application handles. If this happens, it is a possibility that the operating system will parse the URI to the malicious client. The flow of this attack is illustrated in the following diagram.

In some operating systems such as Android, in step 5 of the flow, the user is prompted to select the application to handle the redirect URI before it is parsed using a "Complete Action Using" activity. This may avoid a malicious application from handling it, as the user can identify and select the legitimate application. However, some operating systems (such as iOS) do not have any such scheme.

To understand this better, here is a diagram and discussion from OpenID. You can see that the mobile System Browser is responsible for receiving the redirect URI and routing it to the correct app.

Native app Redirect URI routing

However, because mobile OSes can allow many apps to register for the same redirect URI, a malicious app can register for and receive a legitimate authorization code as shown in this diagram, also by WSO2:

PKCE Malicious App

Attack Mitigation by PKCE

PKCE mitigates this by requiring shared knowledge between the app that initiates the OAuth 2.0 authorization request (which obtains the auth code) and the app that exchanges the auth code for a token. In an Authorization Code Interception Attack, the malicious app receives the auth code but does not have the code verifier, so it cannot complete the token exchange.
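To make that concrete, here is a rough sketch of the token exchange (the endpoint URL is a placeholder and the code is illustrative, not tied to a specific library). An intercepted authorization code is useless on its own because the token endpoint also demands the code_verifier, which never left the legitimate app:

```python
import json
import urllib.parse
import urllib.request

def exchange_code(auth_code: str, code_verifier: str, client_id: str, redirect_uri: str) -> dict:
    data = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": auth_code,
        "code_verifier": code_verifier,  # the piece a malicious app cannot supply
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }).encode("ascii")
    request = urllib.request.Request("https://auth.example.com/token", data=data, method="POST")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())  # contains the access token on success
```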

Grokify
  • This all makes sense for mobile apps. Is there any security benefit of Auth Code + PKCE over the Implicit flow when it comes to Single Page Applications running in the browser? – jmrah Sep 21 '20 at 01:55
  • This does not make any sense at all. You say 'mobile OSes can allow many apps to register for the same redirect URI'. Are there any requirements for this? From your explanation, nothing is required. The rogue app (installed on the user's phone) can be listening to the redirect URI for tokens regardless! – user1034912 Mar 17 '21 at 03:13
  • PKCE adds information, so simply listening to the redirect URI doesn't provide enough information to compromise the user. – Grokify Mar 18 '21 at 20:54
  • @user1034912 With PKCE the access tokens are not transferred via a redirect URI; the legitimate app gets the access token by making an HTTP request to the auth server, providing the auth code gained via the redirect URI and also the secure code verifier it generated at the start of the process. – tkburbidge Jun 23 '21 at 19:59

This write-up Okta has on this subject explains this pretty well IMHO.

I believe it's because PKCE is intended for native applications (e.g. Android, iOS, UWP, Electron, etc.) where you leave the security context of your application and go to the browser to authenticate, and rely on the secure return to your application from the browser. You don't necessarily have TLS on the redirect back to your application (in the case of custom schemes, you are relying on the OS to bring the response back to your application) so in the event your authorization code goes somewhere malicious, the receiving app wouldn't be able to get an access token without the dynamic secret.

The merits of a dynamic secret on a public client are obvious here, and the assumption behind PKCE is that it is not difficult to intercept the response from the browser to your application.
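As a rough illustration of that split, the code challenge is the only PKCE value that travels through the browser and the custom-scheme redirect; the verifier stays in the app until the direct TLS call to the token endpoint. A minimal sketch, assuming a generic authorize endpoint and a placeholder custom scheme:

```python
import urllib.parse

def build_authorize_url(client_id: str, challenge: str, state: str) -> str:
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": "com.example.app://oauth/callback",  # custom scheme, placeholder
        "code_challenge": challenge,
        "code_challenge_method": "S256",
        "scope": "openid profile",
        "state": state,  # anti-CSRF value generated per request
    }
    return "https://auth.example.com/authorize?" + urllib.parse.urlencode(params)

# The app opens this URL in the system browser; only the authorization code
# comes back over the (unauthenticated) custom-scheme redirect, never the verifier.
```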

someone1
  • What's the point of hashing the code verifier? – Eric Eskildsen Nov 28 '18 at 19:31
  • If I had to guess, I'd say it's because you need to pass the code to the authorization endpoint securely, and since you're directing the user to a new application context, you cannot be sure that the code is securely transferred from your application to the browser. So you hash it, and after receiving the authorization code, your application can securely send both codes plainly over TLS directly to the authorization endpoint, and the server can compute the hash and compare it against what was initially sent. I hope that makes sense! Read more in the [rfc](https://tools.ietf.org/html/rfc7636) – someone1 Nov 29 '18 at 14:39
  • Hashing the code verifier makes sense. Why does the spec support the "plain" code challenge mode though, where code challenge == code verifier? Wouldn't the attacker just intercept the code challenge in this case, defeating its purpose? – Dmitry Pashkevich Jan 21 '19 at 15:23
  • From the RFC: If the client is capable of using "S256", it MUST use "S256", as "S256" is Mandatory To Implement (MTI) on the server. Clients are permitted to use "plain" only if they cannot support "S256" for some technical reason and know via out-of-band configuration that the server supports "plain". The plain transformation is for compatibility with existing deployments and for constrained environments that can't use the S256 transformation. – someone1 Jan 22 '19 at 00:23
  • Couldn't a malicious interceptor simply redirect the client with a hashed version of their own dynamic secret? Then capture the authorization code and make a token request of their own? – Raiden616 Apr 03 '19 at 07:04
  • I think the only assumption here was that another app could intercept the authorization code, not that another app could read the outgoing requests and redirect the user on its own based on that. – Randy Nov 07 '19 at 13:11
  • The code verifier may help identify the original requestor, BUT why is the OAuth server issuing tokens without a client secret? The client ID is considered public knowledge, so any malicious application can use an authorized client's client ID to get an access token using PKCE? – Jimm Dec 19 '20 at 21:54
  • @Jimm Because for public clients you can't have a secret stored, since anyone (namely an attacker) can download the app and disassemble the code to dig up the secret. As for a malicious app redirecting a client through a flow using its own malicious secret: in that case the malicious app would need to override the original redirection, which can't be done unless the attacker's app has an overly privileged level of access (in which case you have bigger problems). – J3STER Aug 31 '22 at 23:31
  • @Raiden616 Continuing my last answer, and also addressing you (because I can't tag more than one person per comment): a hacker app with normal app privileges can only redirect you under its own context, which means either it randomly redirects you to an authentication page (suspicious, and no careful person should put their credentials into a random popup), or the hacker app impersonates a legitimate app (with a similar-looking UI), which is a phishing attack, and that only happens if you download apps from sketchy places and don't check the file hash or store certificate. – J3STER Aug 31 '22 at 23:36
  • Finally, I want to thank @someone1, because your last sentence clarified my doubt, which was: "if a hacker app can intercept the auth_code, why would it be any different for the access_token?", and the answer is: the auth_code is returned from the auth server to the browser with an HTTP redirect to your app, and that redirection from browser to app is insecure. But the access_token travels directly from your app to the auth server over TLS, so no interception happens there. – J3STER Aug 31 '22 at 23:41