Keycloak: OAuth2, SSO and two different access tokens for two different REST API backends with different JWT claim requirements

My React SPA (running in the browser) needs to securely access two different REST API backends, which are written in two different programming languages, deployed to two different physical servers, and which parse/interpret JWT claims differently.
I want my user to enter username/password only once!
We use OpenID Connect with PKCE - the usual stuff.
Access tokens 1 and 2 should differ in the following ways:
'REST API 1' accepts the Keycloak user_id in the 'sub' claim of 'JWT access token 1', while 'REST API 2' expects the Keycloak user_name in the 'sub' claim of 'JWT access token 2'.
'REST API 1' accepts any expiration time in the 'exp' claim of 'JWT access token 1', while 'REST API 2' expects the 'exp' claim of 'JWT access token 2' to be at most 10 minutes from the current time.
The 'aud' and 'scp' claims should have different values, etc.
I cannot change 'REST API 1' and 'REST API 2' so that they accept JWTs of the same format.
I assume that I need to create two different clients for 'REST API 1' and 'REST API 2' under the same realm in Keycloak, am I right? But then how do I sign in to both clients at once and get two different access tokens from them?
Another suitable solution, I suspect, is using 'token exchange': https://www.keycloak.org/docs/latest/securing_apps/#_token-exchange
Can somebody advise something?
keycloak-js is used: https://www.keycloak.org/docs/latest/securing_apps/index.html#_javascript_adapter

At the time of writing this answer, keycloak-js doesn't support token exchange or signing in to multiple clients at once.
However, AxaGuilDEv/react-oidc (https://github.com/AxaGuilDEv/react-oidc) does support signing in to multiple clients at once. It has a demo app that shows a 'Multi auth' implementation: once you are logged in to one client, you can silently log in to another client within the same realm, so the user enters the username/password only once.
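For completeness, the 'token exchange' route mentioned in the question can be driven with a plain fetch call against Keycloak's token endpoint once the primary login has completed, provided the token-exchange feature is enabled for the realm. A minimal sketch, assuming placeholder realm and client names (older Keycloak versions prefix the path with /auth):

```typescript
// Sketch: after logging in as client "rest-api-1", exchange that access token
// for a second token whose audience (and mappers) belong to client "rest-api-2".
// Realm name, client IDs and base URL are placeholders.
async function exchangeToken(
  keycloakBaseUrl: string,
  realm: string,
  subjectToken: string
): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    client_id: "rest-api-1",                 // the client the SPA is already logged in to
    subject_token: subjectToken,             // the access token we already hold
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    audience: "rest-api-2",                  // the client whose token format we want
  });

  const response = await fetch(
    `${keycloakBaseUrl}/realms/${realm}/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body,
    }
  );
  if (!response.ok) {
    throw new Error(`Token exchange failed: ${response.status}`);
  }
  const json = await response.json();
  return json.access_token as string;        // shaped by rest-api-2's protocol mappers
}
```

Each of the two clients can then carry its own protocol mappers and access-token lifespan, which is how the different 'sub', 'exp', 'aud' and 'scp' requirements would be met.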

Related

Oauth2 authorization code flow: where do I store authorization code?

Good day! I am trying to implement my own authorization server using the OAuth2 standards. Reading the specification for the authorization code flow, a 3rd-party application requesting API access needs an authorization code from the authorization server, which will then be exchanged for an access token. My question is: once I generate an authorization code from my authorization server, where, conceptually, do I store it so that when a client app requests to exchange it for an access token, I can verify that the authorization code is valid?
You can store the code anywhere you want - in your server's memory (as an object in a map), in a database or in any other safe storage. If your server is just a single application (a single process with its own memory), you can store the codes in memory if you don't mind losing them during application restarts. But if you want to run multiple instances of your application (e.g. in Kubernetes) or the server is composed of multiple applications, you will need to use some external storage (a database, Hazelcast, Redis).
With the code, you will need to keep metadata such as client_id, validity, PKCE attributes (code_challenge_method, code_challenge) and such. When you receive a request to your token endpoint wanting to exchange the code for tokens, you need to find the code in your storage, compare the relevant metadata (client_id, PKCE code_verifier, client_secret) and issue tokens.
You should also keep the code with a timestamp saying when the tokens were issued, and you should be able to find which tokens were issued from that code: if you receive another /token exchange request with the same code, you should invalidate all the tokens issued from it, because the code was probably stolen.
It's good to read the OAuth2 Security RFC for all the considerations.
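A minimal in-memory sketch of the storage and validation described above, with illustrative field names (a multi-instance deployment would back the map with Redis or a database, as noted):

```typescript
import { createHash } from "node:crypto";

// Metadata kept alongside each issued code; field names are illustrative.
interface AuthCodeRecord {
  clientId: string;
  redirectUri: string;
  codeChallenge: string;           // PKCE code_challenge
  codeChallengeMethod: string;     // "S256" or "plain"
  expiresAt: number;               // epoch millis
  consumed: boolean;               // set once tokens have been issued from this code
}

// In-memory store; swap for Redis/Hazelcast/a database when running multiple instances.
const authCodes = new Map<string, AuthCodeRecord>();

function storeAuthCode(code: string, record: AuthCodeRecord): void {
  authCodes.set(code, record);
}

// Called by the /token endpoint when a client wants to exchange the code for tokens.
function redeemAuthCode(code: string, clientId: string, codeVerifier: string): AuthCodeRecord {
  const record = authCodes.get(code);
  if (!record || record.expiresAt < Date.now()) {
    throw new Error("invalid_grant");          // unknown or expired code
  }
  if (record.consumed) {
    // The same code presented twice: likely stolen, so also revoke every token issued from it.
    throw new Error("invalid_grant");
  }
  if (record.clientId !== clientId || !pkceMatches(record, codeVerifier)) {
    throw new Error("invalid_grant");          // metadata mismatch
  }
  record.consumed = true;                      // codes are single use
  return record;
}

function pkceMatches(record: AuthCodeRecord, codeVerifier: string): boolean {
  if (record.codeChallengeMethod === "S256") {
    // BASE64URL(SHA-256(code_verifier)) must equal the stored code_challenge.
    const hashed = createHash("sha256").update(codeVerifier).digest("base64url");
    return hashed === record.codeChallenge;
  }
  return codeVerifier === record.codeChallenge; // "plain"
}
```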
You can create a global map from client_id to auth codes and delete the codes after the access token is exchanged. This is a simple and valid solution as long as it is properly implemented and the auth codes are deleted correctly.
Since the exchange happens immediately after the code is issued, you don't need to worry about the heap filling up: each auth code is created and deleted within a very short period of time, freeing space. Say 1000 users log in every minute; a data structure of 1000 elements is perfectly acceptable in most cases, assuming the code expires about 1 minute after issuance (which should be the case).

OpenID Connect, oAuth2 - Where to start?

I am not sure which approach I should be taking in our implementation and need some guidance.
I have a REST API (api.mysite.com) built in the Yii2 Framework (PHP) that accesses data from mysite.com (database). On mysite.com our users will be able to create Connected Apps that will provision a client id + secret - granting access to their account (full scope?).
Based on my research, the next step seems to be setting up something to actually provide the bearer tokens to be passed to the api - I have been leaning towards oAuth2, but then I read that oAuth2 does not provide authentication. Based on this I think I need OpenID Connect in order to also provide user tokens because my API needs to restrict data based on the user context.
In this approach, it is my understanding that I need to have an Authentication Server - so a few questions:
Is there software I can install to act as an OpenID Connect/oAuth2 authentication server?
Are there specific Amazon Web Services that will act as an OpenID Connect/oAuth2 Authentication Server?
I am assuming the flow will be: App makes a request to the auth server with client id + secret and receives an access token. Access token can be used to make API calls. Where are these tokens stored (I am assuming a database specific to the service/software I am using?)
When making API calls would I pass a bearer token AND a user token?
Any insight is greatly appreciated.
Your understanding is not far from reality.
Imagine you have two servers. One is for authentication: it is responsible for generating the tokens based on an Authorization: Basic header carrying a base64-encoded ClientID/ClientSecret combo. This is basically application authentication. If you want to add user data as well, simply pass the username/password in the POST body, authenticate on the server side and then add some more data to the tokens, like the username, claims, roles, etc.
You control what you put in these tokens; if you use something like JWT (JSON Web Tokens), they are simply JSON bits of data.
Then you have a resource server; you hit it with an Authorization: Bearer header carrying the token you obtained from the authorization server.
Initially the tokens are not stored anywhere; they are issued for a period of time you control. You can, however, store them in a database if you really want to. Relying on expiration is much safer though: even if someone gets their hands on a token, it won't be usable for long! In my case I used 30 minutes for token validity.
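As a rough sketch of that issuing step (using the jsonwebtoken npm package purely for illustration; claim names here are examples, not a standard):

```typescript
import jwt from "jsonwebtoken";   // assumed library; any JWT library works the same way

// Issue a self-contained token. Nothing is persisted server-side; the
// 30-minute expiry bounds how long a leaked token remains usable.
function issueAccessToken(signingKey: string, clientId: string, username?: string): string {
  const claims: Record<string, unknown> = { client_id: clientId };
  if (username) {
    claims.username = username;   // user data added only after the password check succeeded
  }
  return jwt.sign(claims, signingKey, { algorithm: "HS256", expiresIn: "30m" });
}
```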
Now, you haven't specified which languages/frameworks you are looking into. If you use something like .NET, then look into IdentityServer: version 4 is for .NET Core, version 3 for anything below.
I also have a pretty long article on this subject if you are interested:
https://eidand.com/2015/03/28/authorization-system-with-owin-web-api-json-web-tokens/
Hopefully all this clarifies some of the questions you have.
-- Added to answer a question in comments.
The tokens contain all the information the resource server needs to authenticate them correctly; you don't need to store them in a database for that. As I already said, you can store them, but in my mind this makes them less secure. Don't forget you control what goes into a token, so you can add usernames if that's what you need.
Imagine this scenario: you want to authenticate the application and the user in the same call to the authorization server. Do OAuth2 the standard way, which means authenticating the application first based on the client id/client secret. If that passes, then do the user authentication next. Add the username or user id to the token you generate, along with any other bits of information you need. This means the resource server can safely assume that the username passed to it in the token has already been validated by the authentication server; otherwise no token would have been generated in the first place.
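On the resource-server side, that trust boils down to verifying the signature before reading the claim. A sketch, assuming an Express-style API and the same illustrative jsonwebtoken library as above:

```typescript
import express from "express";
import jwt, { JwtPayload } from "jsonwebtoken";   // assumed libraries, for illustration only

// The username claim is trusted only because the token's signature proves
// the authorization server put it there.
function requireUser(signingKey: string): express.RequestHandler {
  return (req, res, next) => {
    const header = req.headers.authorization ?? "";
    const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";
    try {
      const claims = jwt.verify(token, signingKey, { algorithms: ["HS256"] }) as JwtPayload;
      res.locals.username = claims.username;   // already validated by the authentication server
      next();
    } catch {
      res.status(401).json({ error: "invalid_token" });
    }
  };
}
```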
I prefer to keep these two separate myself, meaning I let the AS (Authorization Server) deal with the application-level security. Then on the RS (Resource Server) side you have an endpoint like ValidateUser, for example, which takes care of the user validation, after which you can do whatever you need. Pick whichever feels more appropriate for your project, I'd say.
One final point: ALWAYS make sure all your API calls (both the AS and the RS are really just APIs) are made over HTTPS, and never transmit any important information in the URL of a GET call, where it can end up in logs and browser history. Both the headers and the POST body are encrypted and secure over HTTPS.
This should address both your questions, I believe.

Queries Regarding OAuth2

A simple scenario: I have a typical architecture with a client, an authorization server (OAuth server) and a resource server. The client gets a token from the authorization server using the client_credentials grant and sends the token to the resource server, which serves the request. So if I have 2 APIs, the logged-in user can access either all of them or none, depending on whether the token is valid or invalid.
Is there a mechanism to grant access to just 1 API? The question is: can a token be API-specific, i.e. give access to 1 API and not the other?
The scope mechanism can be used to differentiate between permissions that are associated with the access token. E.g. there could be a scope for API A and one for API B. The client could ask for one of those scopes or both, and the token would then be valid for calling just one of the APIs or both, respectively.
See also https://www.rfc-editor.org/rfc/rfc6749#section-3.3, which doesn't say much about the semantics of scope, but in practice a scope is almost always associated with a (set of) permission(s).
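A hedged sketch of how that plays out on both sides (the exact claim, 'scope' as a space-delimited string or 'scp' as an array, depends on the authorization server; scope and client names are illustrative):

```typescript
// Resource-server side: API A only serves requests whose token carries the "api_a" scope.
function hasScope(claims: { scope?: string; scp?: string[] }, required: string): boolean {
  const granted = claims.scp ?? (claims.scope ? claims.scope.split(" ") : []);
  return granted.includes(required);
}

// Client side: request only the scope you need. A token granted just "api_a"
// will then fail API B's hasScope(claims, "api_b") check.
const tokenRequest = new URLSearchParams({
  grant_type: "client_credentials",
  client_id: "my-client",
  client_secret: "<secret>",     // placeholder
  scope: "api_a",
});
```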

When using Implicit Flow with a SPA, where do we actually create the account in our Database?

I'm trying to understand how OAuth2.0 Implicit Flow (with OIDC) works with a pretty simple SPA/Mobile client (aka Client) and my REST Api (aka Resource Server) and creating new accounts.
I more or less understand how the Client can request a token from an Auth Service (Auth0/Stormpath/IdentityServer/etc). It then uses this token to access restricted API endpoints.
But in all the examples I keep reading, the 'accounts' are created on these auth services (which is required, and I understand that), but nothing is created on my service (my Resource Server).
I need to create an account in my DB because I have user data/settings I wish to store (eg. orders they make, etc). Of course, I do NOT want to store any security information .. because that's why I'm using an external Auth Service.
So, would anyone explain how they use implicit flow and .. when a token (or more specifically, when OpenID Connect is used to get the user information) is returned, how you figure out whether a user exists or not and create one if it's new?
I also grok that the token issuer_id + sub are both required to determine a unique user from the point of view of an Auth Service.
Lastly, how do you prevent 'new account spam/abuse'? I'm assuming that at some point in your Client (which checks for a local-storage token before each REST API request, because we need to stick some token in the bearer header), when you decide to create a new user, my REST API (aka the Resource Server) will have an endpoint to create new users, like POST /account/ .. so how do you protect your server from getting spammed with random POSTs that create new accounts? IP + time-delay restriction?
where do we actually create the account in our Database?
Create a database table that includes an iss and a sub column. Set up those columns as a unique compound key that represents a user. Insert user accounts in there.
So, can anyone explain how they use implicit flow and .. when a token (or more specifically, when OpenID Connect is used to get the user information) is returned, you figure out if a user exists or not and creates one if it's new?
It sounds like you already know the answer.
Parse the id_token.
Retrieve the iss and sub.
Check your app's table for that iss + sub key.
If it's there, then that user exists. If it isn't, then create the user.
From the spec:
Subject Identifier: Locally unique and never reassigned identifier within the Issuer for the End-User, which is intended to be consumed by the Client.
https://openid.net/specs/openid-connect-core-1_0.html
The iss and sub act as a unique compound key that represents the user. Here is an example id_token.
{
alg: "RS256",
kid: "1e9gdk7"
}.
{
iss: "http://server.example.com",
sub: "248289761001",
aud: "s6BhdRkqt3",
nonce: "n-0S6_WzA2Mj",
exp: 1311281970,
iat: 1311280970
}.
[signature]
Treat the iss + sub compound key in the same way that you would treat any other unique user identifier.
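A sketch of that lookup-or-create step, with an assumed local users table keyed on (iss, sub) and an illustrative query layer (run it only after the id_token has been validated):

```typescript
// Assumed local schema: a "users" table with a unique compound key on (iss, sub).
interface User { id: number; iss: string; sub: string; email: string | null; }

interface IdTokenClaims { iss: string; sub: string; email?: string; }

// Minimal shape of an assumed query layer (e.g. a thin wrapper around node-postgres).
interface Db {
  queryOne<T>(sql: string, params: unknown[]): Promise<T | undefined>;
  insertOne<T>(sql: string, params: unknown[]): Promise<T>;
}

async function findOrCreateUser(db: Db, claims: IdTokenClaims): Promise<User> {
  const existing = await db.queryOne<User>(
    "SELECT * FROM users WHERE iss = $1 AND sub = $2",
    [claims.iss, claims.sub]
  );
  if (existing) {
    return existing;                           // user already known locally
  }
  // New (iss, sub) pair: create the local account that holds app data
  // (orders, settings, ...) but no credentials.
  return db.insertOne<User>(
    "INSERT INTO users (iss, sub, email) VALUES ($1, $2, $3) RETURNING *",
    [claims.iss, claims.sub, claims.email ?? null]
  );
}
```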
so how do you protect your server from getting spammed with random POSTs that create new accounts?
Set up Cross-Origin Resource Sharing (CORS) restrictions, so that only your SPA's domain is allowed to POST to the /api/account endpoint.
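A minimal sketch of that restriction, assuming an Express app and the cors middleware package (origin and path are placeholders). Note that non-browser clients ignore CORS, so per-IP rate limiting, as the question suggests, is usually layered on top as well:

```typescript
import express from "express";
import cors from "cors";   // assumed middleware package, for illustration

const app = express();

// Only the SPA's origin may POST to the account-creation endpoint.
app.use(
  "/api/account",
  cors({ origin: "https://app.example.com", methods: ["POST"] })
);
```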

OAuth and sticky sessions

I use OAuth 2 for authorization and need to implement it in a load balanced cluster. I've considered a few approaches, but it seems there is no way around a centralized approach. Here are my thoughts:
1. Balancing using source IP
Caching the tokens on one server and balancing by IP would be ideal; however, the IP cannot be assumed to be static. So when the same user tries to access services that require authorization from another IP with a valid token, it will fail, because the token is not cached on that machine. Also, other devices logged in as this user will not reach the same machine.
2. Balancing using a load balancing cookie
Also not really an option, since it cannot be assumed that every client implements cookie storage.
3. Balancing using the Authorization header
Balancing by hashing the Authorization: Bearer token header is problematic, because the first request (the one requesting the authorization token) has no Authorization header, so the following requests might not hit the same instance.
My current approach is to use a central Redis instance for authorization token storage.
Is there an option left, where a centralized approach can be avoided?
I think you still have two options to consider.
One is to balance by session ID. Application servers can usually be configured to manage sessions either by cookie or by a GET parameter added to every link, so this does not necessarily need cookie storage. Additionally, there are very few HTTP clients left that still do not implement cookie storage, so you may want to reconsider item 2 of your list.
The other one is using self-contained tokens, e.g. JSON Web Tokens (JWT) with signatures (JWS). Validation of self-contained tokens does not need a central database; each server instance can check token signatures on its own and extract the authorization details from the token itself. However, if you need support for revoking tokens, you may still need a central database to store at least a blacklist of revoked tokens.
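A sketch of that per-instance validation, using an illustrative JWT library; the revocation set stands in for the central blacklist mentioned above:

```typescript
import jwt, { JwtPayload } from "jsonwebtoken";   // assumed library, for illustration

// Stand-in for a central blacklist of revoked token IDs (jti claims).
const revokedJtis = new Set<string>();

// Each instance behind the load balancer validates tokens locally: only the
// issuer's public key has to be distributed, not a shared session store.
function validateLocally(issuerPublicKey: string, token: string): JwtPayload {
  const claims = jwt.verify(token, issuerPublicKey, { algorithms: ["RS256"] }) as JwtPayload;
  if (claims.jti && revokedJtis.has(claims.jti)) {
    throw new Error("token revoked");   // revocation is the one part that still needs shared state
  }
  return claims;
}
```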
Though I cannot provide you a full-fledged solution, hope this gives you some ideas.
