I'm trying to understand how best to apply the OAuth 2.0 grant types to a microservice architecture I am working on. Here's the situation...
I have a Single-Page Application/Mobile App acting as a client, running in a web browser (the browser acting as the user agent) or on a mobile phone. I use the Implicit Grant defined in RFC 6749, section 4.2 to authenticate a user and acquire an access token that the app uses to access some externally exposed API.
The architecture I am dealing with is a collection of microservices that call on one another. For example, consider an externally exposed API serviceA and internal APIs serviceB and serviceC. Let's say serviceA depends on serviceB which subsequently depends on serviceC (A --> B --> C).
My question is, what is the typical authorization flow for this situation? Is it standard to use Implicit Grant for the SPA to acquire an access token and then use the Client Credentials Grant defined in RFC 6749, section 4.4 to acquire an access token for the machine to machine interaction between serviceB and serviceC?
For your single-page application, use the Implicit grant, which is designed for browser applications: they cannot hold any secrets, and with the Implicit grant the tokens stay in the browser (they are returned in the hash fragment of the redirect URL).
For the mobile app, take a look at OAuth 2.0 for Native Apps (RFC 8252), which recommends the Authorization Code grant. It also describes implementation details for common platforms and security considerations.
There is a newer grant described in the OAuth 2.0 Token Exchange specification (RFC 8693) that would suit your needs for chained calls between services:
... An OAuth resource server, for example, might assume the role of the client during token exchange in order to trade an access token, which it received in a protected resource request, for a new token that is appropriate to include in a call to a backend service. The new token might be an access token that is more narrowly scoped for the downstream service or it could be an entirely different kind of token.
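If the authorization server supports it, the exchange itself is just a POST to its token endpoint. As a rough illustration (not tied to any particular vendor), here is a minimal sketch in TypeScript of what serviceB might do with the access token it received from serviceA; the endpoint URL, client credentials and audience value are placeholder assumptions:

// Sketch of an RFC 8693 token exchange: serviceB trades the access token it
// received from serviceA for a (possibly narrower) token to call serviceC.
// TOKEN_ENDPOINT, the client id/secret and the audience are placeholders.
// Requires Node 18+ for the global fetch API.
const TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token";

async function exchangeToken(incomingAccessToken: string): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: incomingAccessToken,
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    requested_token_type: "urn:ietf:params:oauth:token-type:access_token",
    audience: "serviceC", // the downstream service the new token is meant for
  });

  const response = await fetch(TOKEN_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      // serviceB authenticates as a confidential client
      Authorization:
        "Basic " + Buffer.from("serviceB:serviceB-secret").toString("base64"),
    },
    body,
  });
  if (!response.ok) throw new Error(`Token exchange failed: ${response.status}`);
  const json = await response.json();
  return json.access_token; // include this in the call to serviceC
}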
But I don't know whether Auth0 supports it. If it doesn't, I would probably pass the original access token from serviceA to serviceB and serviceC. The internal services could be secured at the network level too (e.g. they could be called only from other services).
If serviceB and serviceC are internal and will never be called from an external client, then the Client Credentials Grant would be a good candidate, since the calling service acts as both a resource server and a client.
You could also look at passing the same bearer token between services, provided the SPA (which requests the token initially) obtains consent for all scopes that may be used by the other services, and the "audience" of the token allows for all the possible resource servers (services).
I don't think either is best practice, and there are trade-offs with both approaches.
I would honestly recommend that each backend service implement the Authorization Code grant. That is, have an endpoint that exposes the redirect to your provider. Then have each frontend app go to that endpoint to trigger the OAuth flow. After the flow has completed, handle the authorization response in the callback URL and return a token, which the frontend stores somewhere.
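To make that concrete, here is a minimal sketch of those two endpoints using Express; the provider URLs, client id/secret, redirect URI and the way the token is handed back to the frontend are all placeholder assumptions rather than a definitive implementation:

import express from "express";

// Sketch of a backend exposing the redirect + callback endpoints for the
// authorization code flow. Requires Node 18+ for the global fetch API.
// All URLs, client credentials and the token hand-off are placeholders.
const app = express();
const AUTHORIZE_URL = "https://auth.example.com/oauth2/authorize";
const TOKEN_URL = "https://auth.example.com/oauth2/token";
const CLIENT_ID = "my-backend";
const CLIENT_SECRET = "my-secret";
const REDIRECT_URI = "https://api.example.com/auth/callback";

// 1. The frontend hits this endpoint to start the flow.
app.get("/auth/login", (_req, res) => {
  const url = new URL(AUTHORIZE_URL);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("client_id", CLIENT_ID);
  url.searchParams.set("redirect_uri", REDIRECT_URI);
  url.searchParams.set("scope", "openid profile");
  res.redirect(url.toString());
});

// 2. The provider redirects back here with ?code=...
app.get("/auth/callback", async (req, res) => {
  const code = req.query.code as string;
  const tokenResponse = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: REDIRECT_URI,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
    }),
  });
  const tokens = await tokenResponse.json();
  // Hand the access token back to the frontend (or set a session cookie instead).
  res.json({ access_token: tokens.access_token });
});

app.listen(3000);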
Hope this helps.
Related
We have a microservices environment using Identity Server 3. Identity is provided to HTTP microservices via bearer tokens in the Authorization HTTP header, where the token is a JWT. That JWT usually represents a logged-in end user, but it can also sometimes represent a system user that has authenticated via the client credentials flow.
Messages are published on a queue (RabbitMQ) by these microservices, to be processed asynchronously. Currently, we have a windows service which consumes those messages. It authenticates as a system user with client credentials, and sends that JWT in the auth header to other http microservices.
We would like to maintain the identity of the user that publishes the messages throughout the flow, including within the machine-to-machine (m2m) communication when a message is consumed from the queue and when that consumer calls other microservices. That is, when Service A (which was provided with a JWT) publishes a message to the queue, the Windows service should be able to impersonate the user represented in Service A's JWT, and should be able to provide a JWT representing that same user when calling Service B.
Service A (running as alice) --> RMQ
RMQ <-- Win Service (running as alice) --> Service B (running as alice)
Only clients with the correct claim should be able to impersonate a user in this way.
Which flow should I use in order to return the JWT to the Windows Service and how should this be achieved in Identity Server 3? I've managed to generate the JWT using Resource Owner flow, passing in a dummy username and password (overriding AuthenticateLocalAsync), although I've not yet attempted to check that the Win Service's client has a valid claim to impersonate. Or should this be a custom flow, implementing ICustomGrantValidator? Perhaps client credentials flow can be used?
Note that the original JWT can be provided with the message, or just the user id itself. The original JWT may have expired, which is why the windows service has to re-authenticate in some way.
My understanding is you want to propagate authenticated identities through a distributed architecture that includes async messaging via a RabbitMQ message broker.
When you send/publish messages to RabbitMQ you might consider including the JWT in the message headers, i.e. similar to how JWTs are included in HTTP headers for calls to protected HTTP routes. Alternatively, if you're feeling a bit lazy, you could just have the JWT directly on the message payload. Your async (Windows service) consumer could validate the JWT on its way through, or it might just pass it through on the subsequent HTTP requests to protected routes of 'other http microservices'.
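For illustration, here is a minimal sketch of that idea using Node and amqplib (rather than .NET/MassTransit as in your setup); the queue name and header key are arbitrary examples, not anything prescribed:

import * as amqp from "amqplib";

// Sketch: propagate the caller's JWT through RabbitMQ in a message header.
// Queue name, header key and broker URL are placeholder values.
const QUEUE = "work-items";

async function publishWithJwt(payload: object, jwt: string) {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  channel.sendToQueue(QUEUE, Buffer.from(JSON.stringify(payload)), {
    headers: { authorization: `Bearer ${jwt}` }, // the JWT travels with the message
  });
  await channel.close();
  await connection.close();
}

async function consume() {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  await channel.consume(QUEUE, (msg) => {
    if (!msg) return;
    const jwt = (msg.properties.headers?.authorization as string | undefined)
      ?.replace("Bearer ", "");
    // Validate the JWT here, or forward it on downstream HTTP calls.
    channel.ack(msg);
  });
}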
I'm not sure if your question about 'which flow should I use' is relevant, as presumably the user is already authenticated (via one of the OIDC/authN flows, e.g. authorization code grant, implicit, ROPC...) and you're just looking to propagate the JWT through the distributed architecture for authZ purposes...
In terms of sending custom message headers, I have done this with RabbitMQ and MassTransit, but it was for (OpenTracing) trace Id propagation between asynchronous message broker operations. Repo is on GitHub here - might give you some ideas about how to achieve this...
[edit] following clarification below:
Below are some options I can think of - each one comes with some security implications:
1. Give the async (windows service) consumer the JWT signing key. If you go down this path it probably makes more sense to use symmetric signing of JWTs - any service with the symmetric key would be able to re-create JWTs. You could probably still achieve the same result with asymmetric signing by sharing the private key, but (IMO) the private key should only be known to the authorization server when using asymmetric signing.
2. When the user authenticates, request a refresh token by adding offline_access to the list of scopes specified on the /token endpoint. You could then pass the refresh token through to the async (windows service) consumer, which would be able to use the refresh token to obtain a new access token if the previous one has expired (or just get a new one each time) - a sketch of that request follows after these options. There are probably some security considerations that you'd need to think about before going down this path.
3. Increase the timeout duration of the access tokens so that there's enough time for the async (windows service) consumer to handle the requests. Not sure if this is viable for your scenario, but it would be the easiest option.
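For the second option, the consumer's request for a fresh access token is a standard refresh-token grant. A minimal sketch follows; the token endpoint (an IdentityServer-style /connect/token), client id and secret are placeholder assumptions:

// Sketch: the async consumer redeems the refresh token it was handed for a
// fresh access token. Endpoint and client credentials are placeholders.
// Requires Node 18+ for the global fetch API.
async function refreshAccessToken(refreshToken: string): Promise<string> {
  const response = await fetch("https://identity.example.com/connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "windows-service-consumer",
      client_secret: "its-client-secret",
    }),
  });
  if (!response.ok) throw new Error(`Refresh failed: ${response.status}`);
  const json = await response.json();
  return json.access_token; // use in the Authorization header of downstream calls
}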
I have a microservices solution with an API gateway in front of the microservices. I want authorization to happen at the microservice level. I have an SPA client using the implicit flow to acquire a token and call the API. At the API I use the on-behalf-of flow to acquire a token to call the related microservice as the user. So to the microservice the call appears to come from the user, and authorization can be performed.
The issue is that if I don't use the on-behalf-of flow, I would need to give the API gateway app-level permission to access the microservice. If I give the API gateway app-level service-to-service capabilities (client credentials grant flow), then I would have to rely on the gateway to perform authorization, which seems like a really bad idea. It is also a really bad idea if another API needs to call mine in a similar scenario.
My question is this: what are some other approaches that would allow me to flow the user identity through to the microservice so proper authorization can occur?
I've seen examples of another header being used in Identity Server, but I'm not sure if that is done due to some lack of on-behalf-of options in Identity Server. By the way, I realize I could pass the user identity as a string, but I would lose all the signed-token benefits that provide a level of certainty that the token hasn't been tampered with.
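For reference, since the question talks about the on-behalf-of flow (which suggests the Microsoft identity platform), here is a minimal sketch of what that exchange can look like against the v2.0 token endpoint; the tenant, client id/secret and scope are placeholder assumptions:

// Sketch of an on-behalf-of request: the API trades the user's incoming access
// token for a new token scoped to the downstream microservice.
// Tenant, client credentials and scope are placeholder values.
// Requires Node 18+ for the global fetch API.
async function getTokenOnBehalfOf(incomingToken: string): Promise<string> {
  const response = await fetch(
    "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token",
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
        client_id: "api-gateway-client-id",
        client_secret: "api-gateway-secret",
        assertion: incomingToken, // the user's token received from the SPA
        scope: "api://downstream-service/.default",
        requested_token_use: "on_behalf_of",
      }),
    }
  );
  const json = await response.json();
  return json.access_token; // carries the user's identity to the microservice
}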
I'm trying to implement authentication/authorization in my solution. I have a bunch of backend services (including an identity service) behind an API gateway, a "backend for frontend" service, and an SPA (React + Redux). I have read about OAuth 2.0/OpenID Connect, and I can't understand why I shouldn't use the Resource Owner Password flow.
The client (my backend-for-frontend server) is absolutely trusted. I can simply send the user's login/password to the server, which forwards them to the identity server, receives the access token and refresh token, stores the refresh token in memory (session, Redis, etc.), and sends the access token to the SPA, which stores it in local storage. If the SPA sends a request with an expired access token, the server requests a new one using the refresh token and forwards the request to the API gateway with the new access token.
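Concretely, the flow described above boils down to the backend-for-frontend making a single resource owner password credentials request, something like the minimal sketch below; the token endpoint and client values are placeholder assumptions:

// Sketch of the resource owner password credentials request the BFF would make
// with the credentials forwarded from the SPA. Endpoint and client values are
// placeholders; offline_access is requested so a refresh token is returned.
// Requires Node 18+ for the global fetch API.
async function passwordGrantLogin(username: string, password: string) {
  const response = await fetch("https://identity.example.com/connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      username,
      password,
      client_id: "bff-client",
      client_secret: "bff-secret",
      scope: "openid profile api offline_access",
    }),
  });
  const { access_token, refresh_token } = await response.json();
  // access_token goes to the SPA; refresh_token stays server-side (session/Redis).
  return { access_token, refresh_token };
}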
I think that in my case flows with redirects would provide a worse user experience, and are too complicated.
What have I misunderstood? What potholes will I hit if I implement authentication/authorization as described above?
The OAuth 2.0 specification's introduction gives one key piece of information on the problem it tries to solve. I have highlighted a section below:
In the traditional client-server authentication model, the client requests an access-restricted resource (protected resource) on the server by authenticating with the server using the resource owner's credentials. In order to provide third-party applications access to restricted resources, the resource owner shares its credentials with the third party.
In summary, what OAuth wants to provide is an authorization layer that removes the requirement of exposing end-user credentials to a third party. To achieve this it presents several flows (e.g., Authorization Code flow, Implicit flow, etc.) to obtain tokens that are good enough to access protected resources.
But not all clients may be able to adopt those flows, and this is the reason the OAuth spec introduces the Resource Owner Password Credentials (ROPC) grant. This is highlighted in the following extract:
The resource owner password credentials grant type is suitable in cases where the resource owner has a trust relationship with the client, such as the device operating system or a highly privileged application. The authorization server should take special care when enabling this grant type and only allow it when other flows are not viable.
According to your explanation, you have a trust relationship with the client, and your flow seems like it would work fine. But from my end I see the following issues.
Trust
The trust is between the end user and the client application. When you release and use this as a product, will your end users trust your client and share their credentials with it? For example, if your identity server is Azure AD, will end users share their Azure credentials with your client?
Trust may not be an issue if you are using a single identity server and it will be the only one you will ever use, which brings us to the next problem:
Support for multiple identity servers
One advantage you get with OAuth 2 and OpenID Connect is the ability to use multiple identity servers. For example, you may move between Azure AD, IdentityServer, or other identity servers of the customer's choice (e.g., one they already use internally and want your app to use). Now if your application wants to consume such identity servers, end users will have to share their credentials with your client. Sometimes these identity servers may not even support the ROPC flow. And yet again, trust becomes an issue!
A solution?
Well, I see one good flow you can use. You have a front-end server and a back-end server, and I believe your client is the combination of both. If that's the case, you could try to adopt the authorization code flow. It's true that your front end is an SPA, but you have a back end you can utilise to obtain tokens. The only challenge is connecting the front-end SPA with the back end for the token response (pass the access token to the SPA and store the other tokens in the back end). With that approach, you avoid the above-mentioned issues.
Our APIs are being consumed by third-party daemon applications as well as client applications. For the third-party daemon applications we expose the API via the client credentials OAuth flow, and for the client application(s) we use the implicit grant OAuth flow.
The challenge we are facing is that in the case of the implicit grant flow, the user details are fetched from the access token. But when the same API is used with the client credentials flow, the user details cannot be fetched from the access token, as it contains only application-specific details.
What is the best API design approach to handle the above challenge?
Do I need two sets of APIs, one for integrating with client applications and one for integrating with server applications?
Will the use of an alternative OAuth flow help?
Refer to the Authentication scenarios for Azure AD documentation. As you stated correctly, user interaction is not possible with a daemon application, which requires the application to have its own identity. This type of application requests an access token by using its application identity and presenting its Application ID, credential (password or certificate), and application ID URI to Azure AD. After successful authentication, the daemon receives an access token from Azure AD, which is then used to call the web API.
The quintessential OAuth2 authorization code grant is the authorization grant that uses two separate endpoints. The authorization endpoint is used for the user interaction phase, which results in an authorization code. The token endpoint is then used by the client for exchanging the code for an access token, and often a refresh token as well. Web applications are required to present their own application credentials to the token endpoint, so that the authorization server can authenticate the client.
Hence, the recommended way is to have two versions of the API, implemented with two different types of authentication based on your scenario.
References:
Daemon or server application to web API
Understanding the OAuth2 implicit grant flow in Azure Active Directory (AD)
I'd like to authenticate a legacy Java (6) application against a Node.js one currently secured using Keycloak OIDC bearer-only (both apps belonging to the same realm).
I've been told to use the keycloak-authz-client library, resolving a Keycloak OIDC JSON like the one below:
{
  "realm": "xxx",
  "realm-public-key": "fnzejhbfbhafbazhfzafazbfgeuizrgyez...",
  "bearer-only": true,
  "auth-server-url": "http://xxx:80/auth",
  "ssl-required": "external",
  "resource": "resourceName"
}
However, the Keycloak Java client requires Java 8 and my current runtime is a JRE 6. Recompiling the lib including transitive dependencies does not look like a good idea, so I ended up using the Keycloak OAuth2 REST endpoints.
As far as I know OAuth2, I would go with a client_credentials flow, exchanging a client secret for an access_token once at application initialization and refreshing/renewing it when expired.
Coming to the Keycloak documentation:
Access Type
This defines the type of the OIDC client.
confidential
Confidential access type is for server-side clients that need to perform a browser login and require a client secret when they turn an access code into an access token (see Access Token Request in the OAuth 2.0 spec for more details). This type should be used for server-side applications.
public
Public access type is for client-side clients that need to perform a browser login. With a client-side application there is no way to keep a secret safe. Instead it is very important to restrict access by configuring correct redirect URIs for the client.
bearer-only
Bearer-only access type means that the application only allows bearer token requests. If this is turned on, this application cannot participate in browser logins.
It seems that the confidential access type is the one suitable for my needs (it should be used for server-side applications); however, I don't get how it relates to browser login (which in my mind is about authenticating via third-party identity providers such as Facebook and co).
The confidential client settings also require a valid redirect URI the browser will redirect to after successful login or logout. As the client I want to authenticate is an application, I don't see the point.
Generally speaking, I don't get the whole access type thing. Is it related only to the client, or to the resource owner as well? (Is my Node.js application stuck with bearer-only because existing clients use this access type? Will it accept bearer authentication using an access_token obtained with the client_credentials flow? I suppose it will.)
Can someone clarify the Keycloak OIDC access types and where I went wrong, if I did?
What is the proper way to delegate access for my legacy application to some resources (not limited to a specific user's) of another application using Keycloak?
You are mixing up the OAuth 2.0 concepts of Client Types and Grants. Those are different, albeit interconnected, concepts. The former refers to the application architecture, whereas the latter refers to the appropriate grant for handling a particular authorization/authentication use case.
One chooses and combines those options: first one chooses the client type (e.g., public, confidential), and then the grant (e.g., Authorization Code flow). Both client types share some of the same grants, with the caveat that a confidential client will also have to provide a client secret during the execution of the authentication/authorization grant.
From the OAuth 2.0 specification:
OAuth defines two client types, based on their ability to authenticate securely with the authorization server (i.e., ability to maintain the confidentiality of their client credentials):
confidential
Clients capable of maintaining the confidentiality of their credentials (e.g., client implemented on a secure server with restricted access to the client credentials), or capable of secure client authentication using other means.
public
Clients incapable of maintaining the confidentiality of their credentials (e.g., clients executing on the device used by the resource owner, such as an installed native application or a web browser-based application), and incapable of secure client authentication via any other means.
As one can read, the client type refers to the application architecture. Why do you need those types? The answer is to add an extra layer of security.
Let us look at the example of the Authorization Code Grant. Typically the flow is as follows:
The user goes to an application;
The user gets redirect to the Keycloak login page;
The user authenticates itself;
Keycloak checks the username and password and, if correct, sends an authorization code back to the application;
The application receives that code and calls Keycloak in order to exchange the code for tokens.
One of the "security issues" with that flow is that the exchange of code for tokens happens on the front-end channel, which, due to the nature of browsers, is susceptible to a hacker intercepting that code and exchanging it for the tokens before the real application does. There are ways of mitigating this, but that is out of the scope of this question.
Now, if your application is a single-page application, it cannot safely store a secret; therefore we have to use a public client type. However, if the application has a backend where the client secret can be safely stored, then we could use a confidential client.
So for the same flow (i.e., Authorization Code Grant), one can make it more secure by using a confidential client. This is because the application will now have to send a client secret to Keycloak as well, and this happens on the back-end channel, which is more secure than the front-end channel.
What is the proper way to delegate access for my legacy application to some resources (not limited to a specific user's) of another application using Keycloak?
The proper grant to use is the so-called Client Credentials Grant:
4.4. Client Credentials Grant
The client can request an access token using only its client credentials (or other supported means of authentication) when the client is requesting access to the protected resources under its control, or those of another resource owner that have been previously arranged with the authorization server (the method of which is beyond the scope of this specification).
Since this grant uses the client credentials (e.g., client secret), you can only use it if you have selected confidential as the client type.
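For example, against a Keycloak realm this boils down to a single POST to the realm's token endpoint. Below is a minimal sketch (in TypeScript for brevity, though the same request can be made from any HTTP client, including the legacy Java app); the server URL, realm and client credentials are placeholder assumptions, and the client is assumed to be confidential with service accounts enabled:

// Sketch of a client_credentials token request against a Keycloak realm.
// Base URL, realm and client credentials are placeholders.
// Requires Node 18+ for the global fetch API.
async function getServiceToken(): Promise<string> {
  const tokenEndpoint =
    "https://keycloak.example.com/auth/realms/my-realm/protocol/openid-connect/token";
  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "legacy-app",
      client_secret: "legacy-app-secret",
    }),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  const json = await response.json();
  return json.access_token; // send as "Authorization: Bearer <token>" to the Node.js API
}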