
We have a microservice architecture in which we secure communication between microservices via Machine-To-Machine (M2M) access tokens (these tokens are obtained using the Client Credentials grant flow).
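(For concreteness, obtaining one of these M2M tokens looks roughly like the sketch below; the token endpoint, client ID/secret, and scope names are placeholders, not our real values.)

```python
import requests

# Rough sketch of the Client Credentials grant described above.
# The endpoint, credentials, and scope are illustrative placeholders.
TOKEN_URL = "https://auth.example.com/oauth/token"

def get_m2m_token() -> str:
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "service-a",
            "client_secret": "<from a secrets manager>",
            "scope": "service-b:invoke",  # assumed scope naming convention
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```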

We do this for all communications between microservices, whether triggered by a user action or otherwise (e.g. scheduled task).

The issue we are now facing is that downstream microservices need some user context for actions which have been triggered by a user.

Our current plan is to pass along a user's ID token in a separate, bespoke header. This will allow downstream services to validate the ID token and extract user information.
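(Roughly, the plan looks like the sketch below; the `X-User-Id-Token` header name and the downstream URL are illustrative, not our real values.)

```python
import requests

# Sketch of the plan above: forward the user's ID token in a bespoke header
# alongside the M2M access token. Header name and URL are illustrative.
def call_downstream(m2m_token: str, user_id_token: str, payload: dict) -> dict:
    resp = requests.post(
        "https://service-b.internal/api/orders",
        headers={
            "Authorization": f"Bearer {m2m_token}",
            "X-User-Id-Token": user_id_token,  # bespoke, non-standard header
        },
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```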

[Diagram: Authorization example]

However, we have been warned that this is contrary to how ID tokens are intended to be used.

Cons-

  • ID token is being used in a non-standard way. It should be restricted to the SPA.

Pros-

  • Simple and convenient. ID token is already available and is signed.
  • User Access Token is not propagated, keeping it secure between SPA and Gateway.
  • Blast radius is reduced - tokens are limited in their use, so if a service is compromised, an attacker only gains access to the next level of services.

I feel like we are missing something obvious. We are securing our communication between microservices with M2M access tokens, but the alternative seems to be to replace this with a user access token which follows the request to all downstream services.

(We previously used network rules to limit communication between microservices, but this is no longer feasible due to using multiple VPCs and the large number of microservices we now have.)

My questions-

  1. What alternative options do we have?
  2. Have I missed any major downsides with our current approach?
Spongeboy

1 Answer


I think the advice you got not to send the ID token is good. In general, an ID token should never be forwarded to any entity other than the client that received it. Besides breaking this cardinal rule, there's no cryptographic or other relationship between the two tokens, which opens up a substitution attack vector. In other words, the micro-service receiving the two tokens has no way of knowing whether the ID token presented in the non-standard header was replaced by an attacker with one of their own, since that token is not related in any way to the access token it finds in the normal Authorization header of the same request.
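To make that concrete, here's a rough sketch (Python with PyJWT, purely illustrative) of the validation the receiving micro-service would end up doing; note that nothing in it ties the two tokens to each other:

```python
import jwt  # PyJWT

# Illustrative only: both tokens are validated, but each validation is
# independent, so a valid ID token for *any* user will pass.
def validate_request(m2m_token: str, id_token: str, public_key, my_audience: str):
    # M2M access token from the Authorization header
    m2m_claims = jwt.decode(m2m_token, public_key, algorithms=["RS256"],
                            audience=my_audience)
    # ID token from the bespoke header; its audience is the SPA, not this service
    id_claims = jwt.decode(id_token, public_key, algorithms=["RS256"],
                           audience="spa-client")
    # Nothing proves this ID token belongs with this access token: an attacker
    # who compromises the calling service can substitute any valid ID token.
    return m2m_claims, id_claims
```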

When working in a micro-services environment where you need to get user context info down into the bowels of the mesh, you have three options:

  1. Share access tokens among the micro-services
  2. Embed other access tokens in the one initially issued to the client (or the phantom equivalent)
  3. Exchange one token for another

You'll use option one when multiple micro-services make up the same security context. If they belong to the same bounded security context, then it's OK to share tokens among them; in that case, they will share the same "audience", and the access token will be audienced to that set of micro-services. An example of a shared context might be a handful of "payments" micro-services that work together to provide payments functionality. These all come into contact with the same product data, user data, etc., so from a security modeling perspective they are effectively the same entity.
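As a rough sketch of option one (service names, key handling, and claim checks are simplified assumptions), each micro-service in the shared context validates the same audience and accepts the same token:

```python
import jwt  # PyJWT

# Sketch of option 1: every micro-service in the "payments" security context
# validates the same audience, so one access token can be shared among them.
def validate_shared_token(access_token: str, public_key) -> dict:
    return jwt.decode(
        access_token,
        public_key,
        algorithms=["RS256"],
        audience="payments",  # audience covers the whole bounded context
    )
```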

When a micro-service calls another in some other security context, it will use one of the other two approaches. The first is the easier of the two but requires a priori knowledge of the call from one context to another. In other words, when the token is issued at the authorization server, it must be known that the receiver of that token is going to call a service in a foreign domain (i.e., some other security context). With this knowledge, the authorization server can embed a token for that purpose in the one it issues to the originating micro-service. This is very typical even if it sounds outlandish at first. Think again of that payments service: it's easy to foresee that it will always call an accounts service which may be in a different security context. For this reason, the authorization server may always issue a separate token scoped for accounts and embed that in another that is scoped to payments. It will issue this token within a token to any client that requests an access token with the "payment" scope.
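As an illustrative sketch (the `embedded_token` claim name and the URLs are assumptions; how the authorization server embeds the token is product-specific), the payments service would pull the embedded accounts-scoped token out of its own access token and forward that downstream:

```python
import jwt       # PyJWT
import requests

# Sketch of option 2: the token issued for the payments context carries an
# embedded token for the accounts context. Claim name and URLs are assumptions.
def call_accounts(payments_token: str, public_key, account_id: str) -> dict:
    claims = jwt.decode(payments_token, public_key, algorithms=["RS256"],
                        audience="payments")
    accounts_token = claims["embedded_token"]  # token within a token
    resp = requests.get(
        f"https://accounts.internal/api/accounts/{account_id}",
        headers={"Authorization": f"Bearer {accounts_token}"},
    )
    resp.raise_for_status()
    return resp.json()
```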

The other choice is to exchange one token for another. This will be done when the calls from one service to another are not known at the moment of token issuance. This can be the case, for instance, when the number of hops is very large, or the route is contextual and user-dependent. When exchanging one token for another, it's important that the scope of access be controlled. You don't want to exchange a low-power token for a very powerful one, for instance. So, think about scope enlargement and try to forbid or greatly control that during an exchange. A good article about batch processing that discusses exchange more can be found on Nordic APIs.
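A rough sketch of such an exchange using the standard OAuth 2.0 Token Exchange grant (RFC 8693); the endpoint, client credentials, audience, and scope values are placeholders:

```python
import requests

# Sketch of option 3: exchange the token this service holds for a new,
# narrowly scoped token audienced to the downstream service (RFC 8693).
def exchange_token(current_token: str) -> str:
    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": current_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": "accounts",
            "scope": "accounts:read",  # keep the exchanged token narrowly scoped
        },
        auth=("service-a", "<client secret>"),  # the exchanging service authenticates
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```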

(These three options are orthogonal to whether or not you use the phantom token or split token approaches. One of those may be used in conjunction with these token propagation techniques. If you use those techniques, which you probably should since you have a gateway in the mix, your front-end SPA won't be able to access the actual access token nor any others that may be embedded in it.)

Soufiane Tahiri
  • I'm dubious about this statement: "the micro-service receiving the two tokens has no way of knowing that the non-standard presentation of the ID token was replaced". The whole point of having Microservice A authenticate itself to Microservice B is that A is the intended client of B, and therefore B should be happy to operate on whatever user A requests. – Conor Mancone Jan 05 '21 at 17:02
  • For a specific example, imagine that B is responsible for sending an email to a given user, and A calls B along with the `user_id_token` (which B uses to get the email) and the email body. Since A is using a M2M authorization token, it makes perfect sense that the auth token presented by A is completely unrelated to the `user_id_token` it passes along - this is both intentional and the only way it can operate, because B is explicitly trusting A in the first place. How else could B possibly validate the request from A anyway? – Conor Mancone Jan 05 '21 at 17:04
  • The more I consider our situation, the more I realise that 2 big decisions have brought us to this point. 1 - Service-to-service security: because of our risk profile, we want an explicit whitelist of which microservice can talk to which microservice. We are using OAuth for this. 2 - Client-side simplicity: to keep things simple, we get a single JWT bearer per session for a user. This is a powerful token, so we have been reticent to pass it along, as it could be used to impersonate the user. I think we need to narrow the scope of our tokens, so there is less risk in passing them along. – Spongeboy Jan 06 '21 at 02:44
  • "there's no cryptographic or physical relationship between the two tokens" Our current architecture doesn't require this. The M2M token signifies that Service A can talk to Service B. We use the same M2M token for requests from multiple users. (This is not to excuse our overall approach...) – Spongeboy Jan 06 '21 at 02:48
  • Thanks for the link to the batch processing article. I think we should explore the idea of "bound tokens" more. We do utilise message queues. – Spongeboy Jan 06 '21 at 02:51
  • If the two tokens are not bound together, @ConorMancone, then a hacker need only pop service A to pop B. Because the ID tokens are bearer tokens, it's easy to substitute them for other bearer tokens. "How else could B... validate the request from A"? Ideally, service B will have ZERO trust in A. It should get a token from an authorization server that it trusts and that should include info about the user with a claim or scope that authorizes it to send email to that user. If you want to bind this to a certain requester, you should do an exchange and embed a proof made by A in that token. – Travis Spencer Jan 06 '21 at 07:37
  • You seem to be approaching this from the perspective of user-initiated requests, but that won't cover everything. There are any number of cases where action may be taken on a user's account that isn't initiated by an end user. As a result, there are countless cases where a service has no choice but to trust another service. In fact, it's impossible to build a real world application without extending trust. For example, if someone "pops" the auth server, they then pop *everything*. Trust is absolutely unavoidable. You can't build an application without it – Conor Mancone Jan 06 '21 at 15:35
  • ¯\_(ツ)_/¯ could be – Travis Spencer Jan 06 '21 at 19:47
  • @ConorMancone - the "batch processing article" addresses this. It suggests exchanging a short-lived bearer user JWT for a long-lived, narrow-scope, "bound" token. Their example is a scheduled payment. The bound token lasts for a year, but its scope is very narrow, down to the parameters of the call (in this case the amount of money). – Spongeboy Jan 07 '21 at 02:51