Is it really required to validate the access token by verifying its signature with the public key? I am asking this question in relation to Azure AD. My understanding is that the JWT can be validated to make sure it was indeed signed by Azure AD. If that is the case, are there any Azure AD endpoints where I can pass the token and get back a response saying whether it was signed by Azure AD? All the articles on the internet explain the manual way of grabbing the public key from the Azure AD endpoint and then performing the verification steps ourselves. Is there any automated way to validate access tokens?
It would be great if someone could also shed light on whether it is standard practice for APIs to validate access tokens before servicing a request.
It is considered a security best practice for APIs to validate JWT access tokens on every request. This approach is fast and scales to a microservices architecture, where Service A can forward the access token to Service B and so on.
The result can be termed a 'zero trust architecture', since calls from both internet clients and clients running within the back end involve digital verification before the API logic runs.
You are right that a certain amount of plumbing code is needed to verify JWTs. This is typically coded once in a filter, after which you don't need to revisit it. For examples in various technologies, see the Curity API Guides.
I can confirm that this approach works fine for Azure AD - please post back if you run into any specific problems.
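For instance, in ASP.NET Core the validation plumbing can be set up once with the JwtBearer middleware. A minimal sketch, assuming the standard Azure AD v2.0 authority format; the tenant ID, audience and route below are placeholders:

    using Microsoft.AspNetCore.Authentication.JwtBearer;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            // The middleware downloads Azure AD's signing keys from the
            // metadata endpoint and caches them - no manual key handling.
            options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
            options.Audience = "api://my-api-client-id"; // placeholder audience
        });
    builder.Services.AddAuthorization();

    var app = builder.Build();
    app.UseAuthentication();
    app.UseAuthorization();

    // Requests are rejected with 401 before this handler runs unless the
    // token's signature, issuer, audience and expiry all check out.
    app.MapGet("/orders", () => "protected data").RequireAuthorization();

    app.Run();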
Some Authorization Servers also support token validation via OAuth Introspection, but Azure AD does not support this currently.
Introspection is most commonly used with opaque access tokens (also unsupported by Azure). See the Phantom Token Approach for further details on this pattern.
Yes, you should validate it every time. The reason to use JWTs is to authorize requests. If you don't care who sent a request, and it doesn't matter whether it was a hacker or your customer, then don't use JWTs and OAuth. If you do care, you have to be sure the JWT was not changed by an attacker, so the signature has to be checked.
Ok, the user authenticates and the client gets the JWT from my IS4 instance. All that works. Now, for reasons I still cry about at night after being tormented by people who authoritatively claim to know OAuth but do not, the client is sending me the identity token JWT over the wire to an action, and I need to do some work based on the subject in it. I want to minimize the fallout of this decision and prevent a situation where someone plants a fake token on me, so I want to validate the JWT to make sure it came from me, that I am indeed the one who issued it. To simplify: I need to act as both the client and the server in the token validation process, while running on the IS4.
Since this is such a violation of the OAuth protocol, I am not sure this is supported out of the box, but here goes: is there a way to do this? I even tried to introspect the token, but that requires an authentication context, and I can't seem to get the client credentials flow working since I only use the openid/profile scopes, which are not supported by the client credentials flow (since the user is defined only in the JWT).
The receiver of a token should always validate the signature of the token to make sure it came from your IdentityServer. This is done automatically by most proper JWT libraries: the library downloads the public key from your IdentityServer and uses it to verify the token's signature.
If you are using ASP.NET, then the JwtBearer library will do that for you.
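If you ever have to do it by hand (as in your case, where the server validates a token it issued itself), a rough sketch with the Microsoft.IdentityModel libraries could look like the following; the discovery URL, issuer and audience are placeholders:

    using Microsoft.IdentityModel.Protocols;
    using Microsoft.IdentityModel.Protocols.OpenIdConnect;
    using Microsoft.IdentityModel.Tokens;
    using System.IdentityModel.Tokens.Jwt;

    string jwt = args[0]; // the incoming identity token to check

    // Download and cache the discovery document, including the public keys.
    var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
        "https://my-is4.example.com/.well-known/openid-configuration",
        new OpenIdConnectConfigurationRetriever());
    var discovery = await configManager.GetConfigurationAsync(CancellationToken.None);

    var parameters = new TokenValidationParameters
    {
        ValidIssuer = "https://my-is4.example.com", // placeholder issuer
        ValidAudience = "my-client-id",             // for an ID token: the client_id
        IssuerSigningKeys = discovery.SigningKeys   // public keys from discovery
    };

    // Throws if the signature, issuer, audience or lifetime checks fail.
    var principal = new JwtSecurityTokenHandler().ValidateToken(jwt, parameters, out _);
    Console.WriteLine($"issued by us for sub: {principal.FindFirst("sub")?.Value}");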
I'm looking for guidance and/or best practices for implementing step-up authentication. The generic scenario is as follows: a user logs in to a web app using some identity provider; the user then goes to some specific area of the web site which needs to be protected by additional MFA, for example OTP. All functionality for the website is via a REST API, authenticating with a JWT bearer token.
The best description of the flow I found is from Auth0, here. Basically, the user acquires an access token with a scope that is just enough to access the APIs that do not require additional protection. When there is a need to access a secure API, the authorization handler on the backend checks whether the token has the scope indicating that the user has completed the additional MFA check; otherwise it's just HTTP 401.
Some sources, including the one from Auth0, suggest using the amr claim as an indication of a passed MFA check. That means the identity provider must be able to return this claim in response to an access token request with the acr_values parameter.
Now, the detail that is bugging me: should the frontend know in advance the list of APIs that might require MFA and request the elevated permissions beforehand, or should the frontend treat an HTTP 401 response from the backend as a signal to request elevated permissions and try again?
Should the identity provider generate an additional, relatively short-lived token for accessing the restricted APIs? Then, if the frontend has two tokens, it must definitely know which token to use with which API endpoint. Or maybe the identity provider can re-issue the access token with a normal lifespan but elevated permissions? That sounds less secure than the first approach.
Finally, do I understand the whole process correctly or not? Is there some well-documented and time-tested flow at all?
CLIENTS
Clients can be made aware of authentication strength via ID tokens. Note that a client should never read access tokens - ideally access tokens should use an opaque / reference format. Most commonly the acr claim from the ID token is used.
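As an illustration, the client-side check might look like this; the acr value is made up, and full validation of the ID token is assumed to have happened at sign-in:

    using System.IdentityModel.Tokens.Jwt;

    string rawIdToken = args[0]; // the ID token the client already holds

    // Parse (not validate) the ID token just to read a claim from it.
    var idToken = new JwtSecurityTokenHandler().ReadJwtToken(rawIdToken);

    var acr = idToken.Claims.FirstOrDefault(c => c.Type == "acr")?.Value;
    bool strongAuthUsed = acr == "urn:example:otp"; // hypothetical acr value
    Console.WriteLine(strongAuthUsed ? "MFA already done" : "step-up needed");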
This can be a little ugly, and a better option can sometimes be to ask the API, eg 'can I make a payment?'. The client sends an access token and gets a JSON response tailored to what the UI needs. The API's job is to serve the client, after all.
APIs
APIs receive JWT access tokens containing scopes and claims. Often this occurs after an API gateway swaps an opaque token for a JWT.
The usual technique is that some logic in the Authorization Server omits high privilege scopes and claims, eg a payment scope or claim, unless strong authentication was used.
It is worth mentioning plain old API error responses here. Requests with an insufficient scope typically return 403, and a JSON error code in an error object can give the client a more precise reason.
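A minimal sketch of such a response in ASP.NET Core, assuming a hypothetical payment scope and error format:

    using System.Security.Claims;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddAuthentication().AddJwtBearer(); // configured as usual
    builder.Services.AddAuthorization();
    var app = builder.Build();

    app.MapPost("/payments", (ClaimsPrincipal user) =>
    {
        var scopes = user.FindFirst("scope")?.Value.Split(' ') ?? Array.Empty<string>();
        if (!scopes.Contains("payment"))
        {
            // Authenticated but not privileged enough: 403, plus a precise
            // machine-readable reason the client can act on (eg trigger step-up).
            return Results.Json(
                new { code = "insufficient_scope", message = "step-up required" },
                statusCode: StatusCodes.Status403Forbidden);
        }
        return Results.Ok(new { status = "accepted" });
    }).RequireAuthorization();

    app.Run();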
STEP UP
As you indicate, this involves the client running a new code flow, with different scope / claims and potentially also with an acr_values parameter. The Curity MFA Approaches article has some notes on this.
It should not be overused. The classic use case is payments in banking scenarios. If the initial sign-in / delegation used a password, the step-up request might ask for a second factor, eg a PIN, but should not re-prompt for the password.
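Mechanically, the step-up request is just another authorization request; a hypothetical sketch of how a client might build it, PKCE included (all values are illustrative):

    using System.Security.Cryptography;
    using System.Text;

    // PKCE verifier and challenge for the new code flow.
    static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    var verifier = Base64Url(RandomNumberGenerator.GetBytes(32));
    var challenge = Base64Url(SHA256.HashData(Encoding.ASCII.GetBytes(verifier)));

    // Same code flow as at sign-in, but asking for the payment scope and a
    // stronger authentication context via acr_values.
    var authorizeUrl =
        "https://idp.example.com/oauth/authorize" +
        "?client_id=my-spa" +
        "&response_type=code" +
        "&redirect_uri=" + Uri.EscapeDataString("https://app.example.com/callback") +
        "&scope=" + Uri.EscapeDataString("openid read write payment") +
        "&acr_values=" + Uri.EscapeDataString("urn:example:otp") +
        "&code_challenge=" + challenge +
        "&code_challenge_method=S256";

    Console.WriteLine(authorizeUrl);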
CLIENTS AND ACCESS TOKENS
It is awkward for a client to use multiple access tokens. If the original one had scope read then the stepped up one could use scope read write payment and completely replace the original one.
In this case you do not want the high privilege scope to hang around for long. This is yet another reason to use short lived access tokens (~ 15 minutes).
Interestingly, different scopes the user has consented to can also have different times to live, so that refreshed access tokens drop the payment scope.
ADVANCED EXAMPLE
Out of interest, here is a complicated but instructive article on payment workflows in Open Banking. A second OIDC redirect is used to get a payment scope after normal authentication, but also to record consent in an audited manner. In a normal app, however, this would be overkill.
I'm considering a microservice architecture and I'm struggling with authorization and authentication. I found a lot of resources about OAuth2 and OpenID Connect that claim to solve the issue, but it is not clear enough for me.
Let's consider the following architecture:
In my system I want to add a feature only for a certain group of users defined by role. I also want to know the name of the user, their email and id.
After my research I find the following solution to be a good start:
SPA application displays login form.
User fills in the form and sends a POST request to the authN & authZ server.
The server replies with an access token (a JWT) that contains the name, email, id and role of the user. The response contains a refresh token as well.
SPA application stores the token and attaches it to every request it makes.
Microservice 1 and Microservice 2 check if the token is valid. If so, they check if the role is correct. If so, they take user info and process the request.
How far away from a good solution am I? The login flow looks like the implicit flow with form post described here, but with implicit consent, and I'm not sure if that's fine.
Moving forward, I find passing user data (such as name and email) in the JWT not to be a good solution, as it exposes sensitive data. I found resources saying it is recommended to expose only a reference to the user in the token (such as an ID) and to replace such a token with a classic access_token in the reverse proxy / API gateway when sending a request to a microservice. Considering such a solution, I think the following scenario is a good start:
SPA application displays login form.
User fills in the form and sends a POST request to the authN & authZ server.
The server replies with an access token and a refresh token. The API gateway (in the middle) replaces the access token with an ID token and stores the claims from the access token in its cache.
SPA application stores the token and attaches it to every request it makes.
Handling a request, the API gateway takes the ID token and, based on the user ID, generates a new access token. The access token is sent to microservice 1 or microservice 2, which validate it as before.
How do you find these solutions? Is this a secure approach? What should I improve in the proposed flow?
Thanks in advance!
You are on the right tracks:
ZERO TRUST
This is an emerging trend, where each microservice validates a JWT using a library - see this article. JWT validation is fast and designed to scale.
CONFIDENTIAL TOKENS FOR CLIENTS
Internet clients should not be able to read claims that APIs use. The concept of swapping tokens in a gateway is correct, but the usual approach is to issue opaque access tokens here rather than ID tokens. At Curity we call this the Phantom Token Approach.
SECURE COOKIES IN THE BROWSER
One area to be careful about is using tokens in the browser. These days, SameSite=strict, HTTP-only cookies are preferred. This requires a more complex flow, though. See the SPA Best Practices for some recommendations on security.
SPAs should use the code flow by the way - aim to avoid the implicit flow, since it can leak tokens in the browser history or web server logs.
SUMMARY
All of the above are general security design patterns to aim for, regardless of your Authorization Server, though of course it is common to get there one step at a time.
Don't use your own login form. As Garry Archer wrote, use the auth code flow with PKCE, which is the recommended flow for applications running in a browser.
If you don't want to get an ID token, don't ask for the openid scope in the initial auth request. The type of issued access tokens (JWT or opaque) can often be configured on your OAuth2 server. So I see no need to issue new tokens at your gateway. Having more token issuers opens more ways of attacking your system.
Your backend modules can use the userinfo endpoint, which will give them info about the user and validate the token. This way, if the token has been invalidated (e.g. the user logged out), the request processing will not proceed. If you validate just the JWT signature, you will not know about the token being invalidated.
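As a sketch, a backend module could do that online check like this; the userinfo URL is a placeholder, and the exact path depends on your OAuth2 server:

    using System.Net.Http.Headers;

    string accessToken = args[0]; // the token received from the SPA

    var client = new HttpClient();
    var request = new HttpRequestMessage(
        HttpMethod.Get, "https://idp.example.com/connect/userinfo");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    var response = await client.SendAsync(request);
    if (!response.IsSuccessStatusCode)
    {
        // Expired, tampered with, or revoked (eg the user logged out).
        throw new UnauthorizedAccessException("userinfo rejected the access token");
    }

    // A valid token: the body carries sub, name, email, ... depending on scopes.
    Console.WriteLine(await response.Content.ReadAsStringAsync());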
If you plan to make requests between your backend modules as part of processing a user request, you can use the original access token received from your SPA, as long as your modules are in a safe environment (e.g. one Kubernetes cluster).
I am wondering whether, in the OpenID Connect auth code flow, there is still a need to validate the access_token and id_token, given that they are obtained by my web server rather than by the browser (i.e. using the back channel rather than the front channel).
By "auth code flow" I am referring to the flow where browser only receives an "authorization code" from the authorization server (i.e. no access_token, no id_token), and sends the auth code to my web server. My web server can therefore directly talk to the authorization server, presenting the auth code, and exchange it for the access_token and id_token. It looks like I can simply decode the access_token and id_token to get the information I want (mainly just user id etc.)
My understanding of the need to validate the access_token is that, because the access_token is not encrypted, there is a chance my web server could receive a forged token if it is transmitted through an insecure channel. Validating the token basically verifies that it has not been modified.
But what if the access_token is never transmitted over an insecure channel? In the auth code flow, the web server retrieves the access_token directly from the auth server, and the token is never sent to a browser. Do I still need to validate the token? What are the potential risks if I skip the validation in such flows?
You should always validate the tokens and apply the well-known validation patterns for tokens, because otherwise you open up your architecture to various vulnerabilities. For example, there is the man-in-the-middle issue, where an attacker intercepts your "private" communication with the token service.
Also, most libraries will do the validation automatically for you, so the validation is not a problem.
When you develop identity systems, you should follow the various best current practices, because that is what the users of the system expect from you.
As a client, you use HTTPS to get the public key from the IdentityServer, so you know you got it from the right server. To add additional security layers, you could also use client-side HTTPS certificates, so that the IdentityServer only issues tokens to clients that authenticate with a certificate.
In this way, a man-in-the-middle attack is pretty much impossible. However, in back-end data centers you sometimes don't use HTTPS everywhere internally; often TLS is terminated at a proxy. You can read more about that here.
When you receive an ID token, you typically create the user session from it. And yes, as the token is sent over the more secure back channel, you could feel pretty safe with that. But many attacks today still occur on the inside, and just to do a good job according to all best practices, you should validate tokens anyway: both the signature and the claims inside the token (expiry, issuer, audience...).
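In .NET terms those checks map onto TokenValidationParameters; a sketch where the issuer and audience are illustrative and DownloadSigningKeys is a hypothetical helper that would fetch the keys from the discovery document:

    using Microsoft.IdentityModel.Tokens;

    // Hypothetical helper: fetch the issuer's public keys from its
    // discovery document / JWKS endpoint over HTTPS.
    static ICollection<SecurityKey> DownloadSigningKeys() =>
        throw new NotImplementedException();

    var parameters = new TokenValidationParameters
    {
        ValidateIssuerSigningKey = true,                    // signature check
        IssuerSigningKeys = DownloadSigningKeys(),
        ValidateIssuer = true,
        ValidIssuer = "https://my-identityserver.example.com",
        ValidateAudience = true,
        ValidAudience = "my-web-app",        // for an ID token: your client_id
        ValidateLifetime = true,             // rejects expired tokens
        ClockSkew = TimeSpan.FromSeconds(30) // small tolerance for clock drift
    };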
For the access token, it is important that the API receiving it does validate it according to all best practices, because anyone can send requests with tokens to it.
Also, it's fun to learn :-)
PS: this video is a good starting point.
In pursuit of developing an OpenID Connect model for existing applications and back-end services, I am confused about whether to choose offline or online JSON token validation for the ID token & access token.
My OpenID Provider: Keycloak
My question is about the idea of token validation, so I am not discussing implementation details.
As per OIDC (OpenID Connect), an ID token is issued to the service requesting the resource once the user is authenticated. On the resource server side, is it really necessary to verify the token with the OpenID Provider (Keycloak), or is it enough to validate the token offline based on the public key?
If I go for the offline model of token validation, what are the potential implications / limitations I must face?
I am looking for the ideal situations in which to choose each model, with the trade-offs discussed.
The only advantage of online validation is the possibility that user rights have been revoked in the meantime. With offline validation you have proof that the token was issued by your Keycloak and that nobody tampered with it. Online validation for every request would be too much.
For example, a click in the frontend can result in many API calls, and there is no benefit in creating a dozen REST requests to Keycloak in the same second. The recommendation is to keep the token lifetime short.
You could implement token caching and validate the token online at short intervals, but what's the point if you can just lower the token lifetime in Keycloak?
So, to conclude: validate the token offline for the timeout duration (say 5 minutes; it should be configurable based on the use case), and beyond that period issue a new token.
Token validation is one aspect but it is not a complete security solution. You will often find that you need data from both the token and other sources to authorize access to resources properly.
So your solution depends on how you want to authorize and also on non functional requirements such as availability and performance.
My personal preference is offline due to its separation of concerns - see my write up for further details.
on the resource server side, is it really necessary to verify the token with the OpenID Provider (Keycloak)
Yes, you must validate the ID token as defined by the OpenID Connect (OIDC) protocol. Token validation has many steps, but it mostly involves signature validation and issuer validation. Once this is done, you can say the ID token is valid and hence mark the end user as authenticated. This is the core principle of OpenID Connect.
Regarding offline validation, validating against the public key is sufficient. This means, for example, that when your authorization server changes its key chain (which is rare, and done when there's a security breach), you have to somehow update your key again.
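One way to soften that limitation is to refresh the keys from the discovery document on a schedule rather than pinning a single key. A sketch with the Microsoft.IdentityModel configuration manager; the Keycloak realm URL and intervals are illustrative:

    using Microsoft.IdentityModel.Protocols;
    using Microsoft.IdentityModel.Protocols.OpenIdConnect;

    var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
        "https://keycloak.example.com/realms/myrealm/.well-known/openid-configuration",
        new OpenIdConnectConfigurationRetriever())
    {
        // Re-download the discovery document (and its keys) on this cadence,
        // so a rotated key chain is picked up without redeploying the service.
        AutomaticRefreshInterval = TimeSpan.FromHours(12),
        // Minimum gap between explicitly requested refreshes, eg after a
        // token arrives with an unknown 'kid'.
        RefreshInterval = TimeSpan.FromMinutes(5)
    };

    // On an unrecognised signing key, force an early fetch:
    configManager.RequestRefresh();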
Also, there is JWT encryption (RFC 7516), which adds an extra layer of security for token validation (if you are concerned about that). But if I am correct, Keycloak doesn't support this.
Advantage of online validation
You always rely on the authorization server to verify the token's validity.
Disadvantage of online validation
You create more traffic for the authorization server, and your application server requires one more API call.
Regardless, you always need the public key of the authorization server.