When creating GitLab "Applications" for 3rd party integrations, you also create access tokens to enable communication between GitLab and the 3rd party software.
As of GitLab 15, access tokens (and refresh tokens) expire after two hours. GitLab's docs seem to say that access tokens (or refresh tokens) are now expected to be refreshed every two hours in order to keep them alive.
This means that a lot of old 3rd party integrations are now breaking in my dev environments. Is it expected that the 3rd party software now implement some cron-job or other type of scheduled job to be able to keep tokens alive?
Or should 3rd party integrations request "Personal Access Tokens" that can have an extended lifetime, and avoid having to implement "refresh logic" for all potential customers that they support?
[should apps] implement some cron-job or other type of scheduled job to be able to keep tokens alive?
No. While this can work, you should only create live tokens when you actually need to use them. That's why you have a refresh token -- to get new tokens after the access token expires. There's no reason to constantly keep access tokens "fresh" and available if you can generate new tokens on-demand.
Your best bet is to build this "refresh logic" into your integrated applications. The fact that tokens do not expire by default in GitLab <15 is really more of a security bug/oversight than a feature.
Note also that refresh tokens remain valid, even after the access token expires. Per the docs (emphasis added):
To retrieve a new access_token, use the refresh_token parameter. Refresh tokens may be used even after the access_token itself expires. This request:
Invalidates the existing access_token and refresh_token.
Sends new tokens in the response.
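As a rough sketch, that on-demand refresh could look like the following (Python, using the requests library against GitLab's documented /oauth/token endpoint; the instance URL and credentials are placeholders):

```python
import requests

GITLAB_URL = "https://gitlab.example.com"  # placeholder: your GitLab instance

def refresh_access_token(client_id, client_secret, refresh_token, redirect_uri):
    """Exchange a refresh token for a new access/refresh token pair.

    Note: per the docs quoted above, this invalidates the existing
    access_token AND refresh_token, so the new refresh_token returned
    here must be persisted before the old one is discarded.
    """
    resp = requests.post(
        f"{GITLAB_URL}/oauth/token",
        data={
            "client_id": client_id,
            "client_secret": client_secret,
            "refresh_token": refresh_token,
            "grant_type": "refresh_token",
            "redirect_uri": redirect_uri,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # access_token, refresh_token, expires_in, ...
```

Call this lazily, right before an API request, whenever the stored access token is expired or about to expire; no scheduled job is needed.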
Or should 3rd party integrations request "Personal Access Tokens"
This is an option, but it's a completely different approach compared to using an OAuth application and, depending on what your integration needs, might require each individual user of your app to create a token and provide it 'manually'. The available scopes for PATs are also different than the scopes available for OAuth apps. So, it's possible, but for a number of reasons, less desirable than an OAuth application.
This might be acceptable for integrations that don't necessarily need to access resources on behalf of users, but can get away with a single token (say, a Group Token or a PAT belonging to a member of the customer's top-level GitLab group), or for solutions where you want to enroll project-by-project using a project token or similar.
However, if your integration needs to access resources on behalf of users, or only needs limited access via a scope that is not available for these access tokens (like the email or profile scope), this probably wouldn't be a good approach. You also can't make "trusted" integrations this way.
Related
I'm looking for guidance and/or best practices in implementing step-up authentication. The generic scenario is as follows: the user logs in to a web app using some identity provider; the user then goes to some specific area of the site which needs to be protected by additional MFA, for example OTP. All functionality for the website is exposed via a REST API, authenticating with a JWT bearer token.
The best description of the flow I found is from Auth0 here. Basically, the user acquires an access token with a scope that is just enough to access the APIs that do not require additional protection. When there is a need to access a secure API, the authorization handler on the backend checks whether the token has the scope indicating that the user has completed the additional MFA check; otherwise it's just HTTP 401.
Some sources, including the one from Auth0, suggest using the amr claim as an indication of a passed MFA check. That means the identity provider must be able to return this claim in response to an access token request with the acr_values parameter.
Now, the detail that is bugging me: should the frontend know in advance the list of APIs that might require MFA and request the elevated permissions beforehand, or should the frontend treat an HTTP 401 response from the backend as a signal to request elevated permissions and try again?
Should the identity provider generate an additional, relatively short-lived token to access the restricted APIs? Then, if the frontend has 2 tokens, it must definitely know which token to use with which API endpoint. Or maybe the identity provider can re-issue the access token with a normal lifespan but elevated permissions? That sounds less secure than the first approach.
Finally, do I understand the whole process right or not? Is there a well-documented and time-tested flow at all?
CLIENTS
Clients can be made aware of authentication strength via ID tokens. Note that a client should never read access tokens - ideally they should use an opaque / reference token format. Most commonly the acr claim from the ID token is used.
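For illustration, the client-side decision might be a simple claims check like this sketch (Python; the urn:example:mfa acr value and the use of the amr claim are assumptions that depend entirely on your identity provider):

```python
# Placeholder: the acr value your IdP returns after strong authentication.
MFA_ACR = "urn:example:mfa"

def needs_step_up(id_token_claims: dict) -> bool:
    """Decide from already-validated ID token claims whether to step up.

    acr describes the authentication context class; amr lists the
    authentication methods actually used (eg "pwd", "otp").
    """
    acr = id_token_claims.get("acr", "")
    amr = id_token_claims.get("amr", [])
    return acr != MFA_ACR and "otp" not in amr
```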
This can be a little ugly and a better option can sometimes be to ask the API, eg 'can I make a payment'? The client sends an access token and gets a JSON response tailored to what the UI needs. The API's job is to serve the client after all.
APIs
APIs receive JWT access tokens containing scopes and claims. Often this occurs after an API gateway swaps an opaque token for a JWT.
The usual technique is that some logic in the Authorization Server omits high privilege scopes and claims, eg a payment scope or claim, unless strong authentication was used.
It is worth mentioning plain old API error responses here. Requests with an insufficient scope typically return 403, though a JSON error code in an error object can be useful for giving the client a more precise reason.
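A minimal sketch of such an API check, using Flask as an example framework (the insufficient_scope error code and the g.claims convention, where an upstream JWT filter has already stored the verified claims, are illustrative assumptions):

```python
from flask import Flask, g, jsonify

app = Flask(__name__)

@app.route("/payments", methods=["POST"])
def create_payment():
    # Assumes an upstream filter has verified the JWT and put its claims
    # in g.claims before this handler runs.
    scopes = g.claims.get("scope", "").split()
    if "payment" not in scopes:
        # 403: authenticated, but not authorized. The error code gives
        # the client a precise reason, so it can trigger a step-up flow.
        return jsonify({"code": "insufficient_scope",
                        "message": "step-up authentication required"}), 403
    return jsonify({"status": "accepted"})
```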
STEP UP
As you indicate, this involves the client running a new code flow, with different scope / claims and potentially also with an acr_values parameter. The Curity MFA Approaches article has some notes on this.
It should not be overused. The classic use case is for payments in banking scenarios. If the initial sign in / delegation used a password, the step up request might ask for a second factor, eg a PIN, but should not re-prompt for the password.
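For illustration, the step-up request is just another authorization redirect with a broader scope and an acr_values hint, roughly like this sketch (endpoint, client id, redirect URI and acr value are all placeholders):

```python
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth/authorize"  # placeholder

state = secrets.token_urlsafe(16)  # CSRF protection; verify on the callback
params = {
    "response_type": "code",
    "client_id": "my-client-id",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid read write payment",  # now includes the high privilege scope
    "acr_values": "urn:example:mfa",       # ask the IdP for strong authentication
    "state": state,
}
step_up_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
# Redirect the browser to step_up_url; the IdP should prompt only for the
# second factor if the user already has a password session.
```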
CLIENTS AND ACCESS TOKENS
It is awkward for a client to use multiple access tokens. If the original one had scope read then the stepped up one could use scope read write payment and completely replace the original one.
In this case you do not want the high privilege scope to hang around for long. This is yet another reason to use short lived access tokens (~ 15 minutes).
Interestingly, different scopes the user has consented to can also have different times to live, so that refreshed access tokens drop the payment scope.
ADVANCED EXAMPLE
Out of interest, here is a complicated but instructive article on payment workflows in Open Banking. A second OIDC redirect is used to get a payment scope after normal authentication, but also to record consent in an audited manner. In a normal app this would be overkill, however.
Is it really required to validate the access token by decrypting it using the public key? I am asking this question in relation to Azure AD. I understand that the JWT token can be validated to make sure it was indeed signed by Azure AD. If this is the case, are there any Azure AD endpoints where I can pass the token and get a response on whether it was signed by it? All the articles on the internet explain the manual way of grabbing the public key from the Azure AD endpoint and then doing the decrypt steps ourselves. Is there any automated way to validate the access tokens?
It would be great if someone could shed light on whether it's standard practice for APIs to validate access tokens before servicing the request.
It is considered best security for APIs to validate JWT access tokens on every request. This approach is fast and scales to a microservices architecture, where Service A can forward the access token to Service B and so on.
The result can be termed a 'zero trust architecture', since both calls from internet clients and clients running within the back end involve digital verification before the API logic runs.
You are right that a certain amount of plumbing code is needed to verify JWTs. This is typically coded once in a filter and then you don't need to revisit it. For some examples in various technologies, see the Curity API Guides.
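As one example of such a filter in Python, here is a sketch using the PyJWT library with JWKS key retrieval (the {tenant} placeholder, audience and issuer must be replaced with the values from your own Azure AD app registration):

```python
import jwt  # PyJWT, installed with its optional 'cryptography' dependency

# Azure AD publishes its token signing keys at a JWKS endpoint.
JWKS_URL = "https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys"
jwks_client = jwt.PyJWKClient(JWKS_URL)  # caches keys between requests

def validate_access_token(token: str) -> dict:
    """Verify signature, expiry, issuer and audience; return the claims."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="api://my-api-client-id",  # placeholder: your API's App ID URI
        issuer="https://login.microsoftonline.com/{tenant}/v2.0",
    )
```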
I can confirm that this approach works fine for Azure AD - please post back if you run into any specific problems.
Some Authorization Servers also support token validation via OAuth Introspection, but Azure AD does not support this currently.
Introspection is most commonly used with opaque access tokens (also unsupported by Azure). See the Phantom Token Approach for further details on this pattern.
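For reference, a generic RFC 7662 introspection call looks roughly like this sketch (not applicable to Azure AD, per the above; the endpoint and resource server credentials are placeholders):

```python
import requests

def introspect(token: str) -> dict:
    """Ask the authorization server whether an opaque token is still valid."""
    resp = requests.post(
        "https://idp.example.com/oauth/introspect",  # placeholder endpoint
        data={"token": token},
        auth=("resource-server-id", "resource-server-secret"),  # client auth
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()
    if not result.get("active"):
        raise PermissionError("token is expired or revoked")
    return result  # scope, sub, exp, ... when active
```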
Yes, you should validate it every time. The reason to use a JWT is to authorize requests. If you don't care who sent the request, and it doesn't matter whether it was a hacker or your customer, then don't use JWT and OAuth. If you do care, you have to be sure that the JWT was not changed by some attacker, so the signature has to be checked.
I am implementing an App Store for my application where third-party developers can build their own apps based on my API. I have everything working and understand the concepts of OAuth 2.0, but I don't see how an external app can have indefinite access with an access token that expires after one hour. You can use a refresh token to request a new one, but that one expires after some time too.
So how can an external app continuously connect to my API when the user of that app allows it only once?
My authorization codes expire after 10 minutes, the access tokens after 1 hour and the refresh tokens after 2 weeks.
I don't see how the app can retrieve data after those periods of time without the user re-allowing/re-installing the application through OAuth.
How are bigger companies like Facebook etc. approaching this? Do they have an access token that never expires?
Expanding on my comment, the general recommendation when using bearer tokens is that their lifetime should be reduced in order to mitigate the impact of an access token being compromised.
On the other hand, asking the user for credentials every hour or so would be a UX nightmare, so OAuth 2.0 has the notion of refresh tokens, which will normally have a longer lifetime, allowing the application to request a new access token without requiring user intervention.
I'm unfamiliar with the implementation details around Facebook persistent tokens so I won't comment on that, but they are most likely safe. However, you're not Facebook, so my recommendation would be for you to follow public standards like OAuth 2.0/OpenID Connect instead of trying to provide a customized approach.
Regarding your comment about refresh tokens that never expire, it's an acceptable solution, but their lifetime is just one part of the equation. You should also consider whether they are multi-use or single-use, whether they can only be used by the client application they were issued to, the fact that they should not be used by browser-based applications due to the difficulty of ensuring secure storage, etc.
As a security best practice, when an engineer/intern leaves the team, I want to reset the client secret of my Google API console project.
The project has OAuth2 access granted by a bunch of people, and I need to ensure that those (grants as well as refresh tokens) will not stop working. Unfortunately, I've not been able to find documentation that explicitly states this.
Yes. Client Secret reset will immediately (in Google OAuth 2.0, there may be a few minutes delay) invalidate any authorization "code" or refresh token issued to the client.
Client secret reset is a countermeasure against abuse of revealed client secrets for private clients. So it makes sense to require re-grant once the secret is reset.
I did not find any Google document that states this explicitly either, but my experience shows that a reset will impact users; you can also run a test to confirm it.
In our work, we programmers do not touch the production secret; we have test clients. Only a very few product ops people can touch it. So I think you should try your best to narrow down the visibility of the secret within your team. Resetting it is not a good way.
For a B2B REST API servicing Enterprise clients who may have multiple applications using a Client ID/Secret:
If you send a request for an OAuth2 access token for a specific Client ID and Client Secret and receive an access token, then later send another request for a token with that same Client ID/Secret, should that invalidate the previous access token?
In other words, in this case, should a Client ID/Secret be able to request and use multiple valid access tokens? Are there different cases where this should be implemented or not?
OAuth2 is generally about a user delegating access to a client, so in the case where a client has many users (as it usually will), it will most definitely be using multiple access tokens since they will apply to different users.
Consider the situation where you grant access to your Google account to another online application (the client). Google issues an access token which might allow the client to read your contacts, for example, using Google's OAuth2 APIs (with your prior approval). Obviously it can only access your contacts with this token, not other people's. Google may issue many different access tokens to the same client, but each may correspond to a different user and/or resource.
The same authorization server may issue tokens for many different resources, so even in the case where there is no interaction with a user (as in the "client credentials" grant), a client may still need to manage multiple tokens.
Whether the authorization server invalidates a token when another is requested for the same user, audience, scope etc., would be implementation dependent. A client wouldn't usually need to do this and would normally use a refresh token to obtain a new token when its existing one was about to expire. I'd say it's generally more important that a user can invalidate existing tokens they have authorized, and that tokens can be invalidated for a particular client. Of course, this also requires that the resource server has some way of checking for token revocation before granting access.
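To illustrate the multiple-token case on the client side, here is a sketch of a client credentials client that caches one token per scope (endpoint and credentials are placeholders; this is one possible design, not a standard):

```python
import time
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"  # placeholder
_cache: dict[str, tuple[str, float]] = {}  # scope -> (token, expiry time)

def get_token(scope: str) -> str:
    """Return a cached token for this scope, or fetch a fresh one."""
    token, expiry = _cache.get(scope, (None, 0.0))
    if token and time.time() < expiry - 60:  # 60s clock-skew margin
        return token
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=("my-client-id", "my-client-secret"),
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    _cache[scope] = (body["access_token"], time.time() + body["expires_in"])
    return body["access_token"]
```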
Yes, a client can have several access tokens. It's meaningful, and we're actually using this pattern.
Consider that tokens may have different scopes, so a client may have a token with scope "res1" for one resource and another token with scope "res2" for a different resource.
Another use case may be to request a refresh token with several scopes, e.g. "read write", and use it to get a "read"-scoped access token to initialize a management GUI, then get a new access token for each write transaction.
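A sketch of that down-scoping refresh (RFC 6749 section 6 allows a scope parameter on the refresh request, as long as it does not exceed the originally granted scope; endpoint and credentials here are placeholders):

```python
import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",  # placeholder endpoint
    data={
        "grant_type": "refresh_token",
        "refresh_token": "stored-refresh-token",  # originally granted "read write"
        "scope": "read",  # ask for a narrower token for the read-only GUI
    },
    auth=("my-client-id", "my-client-secret"),
    timeout=10,
)
resp.raise_for_status()
read_only_token = resp.json()["access_token"]
```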
You can argue whether it's good design/implementation or not but it's definitely technically possible and not forbidden by the standard.