OAuth: what information should I store in tokens?

I am creating an OAuth server which issues tokens to users. However, I don't want to store tokens in the database, and I want the processing to be fast.
What I'm thinking is to embed various pieces of information in the token itself, so that when I decrypt it, the information is already enough to check permissions and scopes.
I'm a little worried about the token's length growing as I add more scopes.
Is this a good idea? If not, what can you recommend?

What you are describing here is the InMemoryTokenStore, which is the default implementation (in Spring Security OAuth). OAuth2 also already maintains the information required to check permissions and scopes in the token, in the form of the different access roles the authorization server grants to the various clients. I don't think you need to explicitly store anything extra in the token.
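
As a rough illustration of the self-contained approach the question describes, here is a minimal sketch using PyJWT; the claim names, secret handling, and lifetime are assumptions for the example, not part of any particular framework:

    import time

    import jwt  # PyJWT: pip install PyJWT

    SECRET = "replace-with-a-real-key"  # in practice prefer an asymmetric key pair

    def issue_token(user_id: str, scopes: list[str]) -> str:
        # Everything needed for authorization is embedded in the signed token,
        # so the server does not have to store tokens at all.
        now = int(time.time())
        claims = {
            "sub": user_id,
            "scope": " ".join(scopes),  # space-delimited scopes, a common convention
            "iat": now,
            "exp": now + 3600,  # a short lifetime limits the cost of having no revocation list
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def check_scope(token: str, required: str) -> bool:
        # Verification is purely cryptographic; no database lookup, so it stays fast.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        return required in claims["scope"].split()

The length concern in the question is real, though: the token grows with the number of scopes, which is one reason to group permissions into a small set of roles rather than enumerate every scope.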

Related

Securing URL with user owned resources in OAuth2

I'm aware of how OAuth2 and OIDC can use custom scopes and token introspection to secure a URL like this:
/users/me/documents
I can give this URL the documents:view scope and when receiving the token from the authenticated user, I can ask the authorization server if this user has the correct permissions. Then I can use the preferred_username claim or similar to see who /me actually is.
But what if I have a resource which is accessible by multiple users? Let's say a user has documents, but they can also be viewed by their direct manager. To retrieve the employee's documents as a manager, I'd need a URL like this:
/users/${userId}/documents
How could I enforce it in a way that only the resource owner and direct manager can view this resource? I don't want everyone to access everyone's documents by knowing the userId. I could grant access as a whole to all users with the manager role, but that's not specific enough.
I'm aware there's the UMA extension where users can grant access to their resources to other users, but here it's not the user who grants permission; it's the system that states, in this case, that managers can access their employees' documents.
Would it make sense to write a custom policy which extracts the ${userId} and performs the check? Or should this not be done by the authorization server at all and be done by the resource server instead? Perhaps a different approach to reach the same goal?
Finer-grained authorization like this is done with claims rather than scopes. There may be business rules around which docs a user can see, e.g.:
A user can access their own docs
An admin has view access to all docs
A manager can view docs for people they manage
In an access token this might be represented by these claims:
userId
role
Claims are often domain-specific like this, and the preferred option is to add them to tokens during token issuance. At Curity we have some good resources on this topic:
Claims Best Practices
CLAIMS AND AUTHORIZATION
The Authorization Server issues access tokens, and then APIs (resource servers) verify the access token and use the token data to apply authorization rules (which are often domain-specific) on every single request.
Claims are often used when dynamic behaviour is needed: they are runtime values that derive from the user identity, whereas scopes are fixed design-time values. In your example an API might also need to vary its SQL to retrieve documents based on the user identity.
There are more complex variations on this theme, such as an API calling a system like Open Policy Agent, so that the documents returned are determined by rules configured by a security administrator. That policy would still involve using claims from the access token, though.
EXAMPLE CODE
If it helps, here is some sample code of mine that shows the type of approach used when enforcing domain-specific authorization rules. Typically you need to filter collections and check access to individual items.
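
Since the original sample link is not reproduced here, the following is only a sketch of that pattern (the Document model, claim names, and rules are hypothetical): the API reads identity claims from the verified access token and applies them both to filter a collection and to check a single item.

    from dataclasses import dataclass

    @dataclass
    class Document:
        id: str
        owner_id: str

    def can_view(claims: dict, doc: Document, managed_user_ids: set[str]) -> bool:
        # The business rules listed above: admins see everything, users see
        # their own docs, managers see docs of the people they manage.
        if claims["role"] == "admin":
            return True
        if doc.owner_id == claims["userId"]:
            return True
        return claims["role"] == "manager" and doc.owner_id in managed_user_ids

    def list_documents(claims: dict, docs: list[Document], managed_user_ids: set[str]) -> list[Document]:
        # Filtering a collection reuses the same rule as the single-item check.
        return [d for d in docs if can_view(claims, d, managed_user_ids)]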

OIDC generalized scopes

Currently building up a microservice for handling auth-related stuff using OIDC.
Now we are thinking about access control and how to be as future-proof as possible. This auth server is only used for first-party applications (3x SPA, 2x native mobile app). On mobile we use the authorization_code grant. Each resource server validates the supplied token (an access token as a JWT) itself. But what happens when, in the future, we add a service which needs its own scope to check (e.g. notifications:read)? Since mobile app users are not used to logging in and out every time (updating the requested scopes via an app update would be a bad solution), is there any sweet solution for managing this scenario?
Per the specification, it's possible to change the requested scopes when refreshing a token, but only to request fewer scopes than originally granted, not more, so that's not an option.
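(For reference, a refresh request with a narrowed scope looks roughly like the sketch below; the endpoint and client credentials are placeholders.)

    import requests  # sketch only: URL and client credentials are placeholders

    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": "<stored refresh token>",
            # Per RFC 6749 section 6, this may only narrow the original grant,
            # never widen it, which is exactly the limitation described above.
            "scope": "profile email",
        },
        auth=("my-client-id", "my-client-secret"),
    )
    access_token = resp.json()["access_token"]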
For example, Facebook provides only four scopes for Instagram, e.g. instagram_basic or instagram_content_publish. Zalando includes only a NORMAL_USER scope in their tokens, whereas Wolt includes the user's roles as a claim.
I think there is some confusion as this scenario is not covered directly by OAuth2 or OIDC. What are your thoughts about this?
There is a standard for OAuth 2.0 Incremental Authorization, but your challenge is to find a token provider that supports it or a similar standard.
In a microservice architecture, the question is whether you should use the access token from the authorization code flow everywhere, or whether you should use the client credentials flow for service-to-service communication instead.
See also https://www.gmass.co/blog/oauth-incremental-authorization-is-useless/
It seems like you could use Token Exchange for that.
For example, it could work like this: whenever your resource server gets a token that lacks the notifications:read scope and was issued before date X (so before you started issuing that scope), it performs an exchange with the authorization server and (if the server decides it can perform the exchange) gets a new access token that contains the scope. If the token was issued after date X, you can safely assume that the scope was not granted.
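As a rough sketch of what that exchange request could look like (the endpoint, credentials, and server policy are assumptions; the grant and parameter names come from RFC 8693):

    import requests  # sketch only: URL and client credentials are placeholders

    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={
            # The token exchange grant defined by RFC 8693
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": "<the old access token without notifications:read>",
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            # Ask the authorization server to include the new scope, if its policy allows
            "scope": "notifications:read",
        },
        auth=("resource-server-client-id", "resource-server-secret"),
    )
    new_access_token = resp.json()["access_token"]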
Another solution would be to use claims for authorization decisions instead of scopes. The authorization server can issue, e.g., a claim notifications_read: true, or something like that. Whenever you refresh tokens, you can issue more claims in the new access token; the spec does not prevent that. Your resource servers should then check claims in the access token and can ignore scopes: the resource server would just check whether the token contains the notifications_read claim with the value true or not. This solution gets a bit more complicated if you require your users to give consent to scopes, but as you said you're using this only for first-party apps, I assume you're not using consent screens.
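A resource server check along those lines might look like this sketch (the claim name, key type, and algorithm are assumptions):

    import jwt  # PyJWT; the verification settings here are assumptions for the sketch

    def may_read_notifications(access_token: str, public_key: str) -> bool:
        # Verify the signature (and expiry) first, then authorize on the claim
        # alone; tokens issued before the claim existed simply fail the check.
        claims = jwt.decode(access_token, public_key, algorithms=["RS256"])
        return claims.get("notifications_read") is True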
You can have a look at this free course - this topic of using claims in authorization is covered in part 4.

OAuth2, client_credentials, and scope

I've been using OAuth2 for quite some time as a developer, but I still feel at times that I need to go back to basics. I have a general question around this topic, specifically when it comes to using scopes.
As I understand it, scopes are a kind of filter that can be applied to certain APIs, and the bearer token is required to possess any scope the API requires. In that sense, they offer an extra 'layer' of protection after the token itself. However, what I would like some feedback on is how best to protect an API that requires scopes. If a consumer obtains a valid bearer token from the issuer and discovers what scope a service requires, what's the best way to prevent them from making, for example, a POST on a restricted resource using a token obtained through client_credentials?
I believe I know a few solutions: require some sort of user token as well, and further lock down resources via authorization checks in your business logic (e.g. customers can only modify their own data; see the sketch below). Another option might be to require API registration and an approval workflow so only 'approved' clients can invoke a service, similar to how Akana and other API gateway products operate. But are there other things I'm missing? Are scopes really meant to serve as an authorization mechanism at all, or am I misinterpreting their real purpose? Because to me they don't seem to offer much.
Thanks in advance for any feedback.
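
A minimal sketch of that first solution (framework-free; the claim and scope names are made up for the example): a coarse scope check, followed by a fine-grained ownership check in business logic.

    def authorize_update(claims: dict, customer_id: str) -> None:
        # `claims` is the already-verified payload of the bearer token.
        # Layer 1, coarse-grained: the token must carry the required scope.
        if "customers:write" not in claims.get("scope", "").split():
            raise PermissionError("missing required scope")
        # Layer 2, fine-grained: a client_credentials token has no user subject,
        # and a user token may only touch its own record.
        if claims.get("sub") != customer_id:
            raise PermissionError("not the resource owner")

This reflects the common reading that scopes bound what a client is allowed to ask for, while per-resource access decisions belong to claims and business logic.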

JWT and server side token storage

Every article I've read vouching for the advantages of JWTs states that one of these advantages is the ability for an auth system to be distributed across multiple servers, i.e. you aren't relying on a central repository of user auth details to do a lookup on every request.
However, when it comes to implementation, I've read in many places that for added security you shouldn't rely on JWT signature verification alone, and that you should maintain a blacklist or whitelist of tokens generated by the server.
Doesn't this defeat the advantage I've listed above, as this list of tokens would need to be stored centrally where all servers can access it and it would require a lookup on each request?
How have people implemented this on their end?
You are making very good points in your question. Actually, it would make sense to store an OAuth token at a central location in order to make it easier to implement sign-out/revoke functionality. If you relied on the token signature alone, you couldn't implement such a feature. Suppose a user wanted to revoke an access token: if you had no central location/datastore for those tokens in which to invalidate it, and relied only on the token signature, the token would still be valid.
So indeed, when you want to build more advanced systems that depend on OAuth tokens, a central store for those tokens is a must.
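A common compromise, sketched below, keeps verification stateless and stores only the IDs of revoked tokens in one shared lookup; the Redis usage and the presence of jti and exp claims are assumptions of the example:

    import time

    import jwt    # PyJWT
    import redis  # sketch: one shared Redis instance acts as the central denylist

    store = redis.Redis(host="localhost", port=6379)

    def verify(token: str, secret: str) -> dict:
        # Stateless checks first: signature and expiry need no lookup at all.
        claims = jwt.decode(token, secret, algorithms=["HS256"])
        # Then a single fast lookup against revoked IDs only; this set is far
        # smaller than a whitelist of every issued token.
        if store.exists(f"revoked:{claims['jti']}"):
            raise PermissionError("token has been revoked")
        return claims

    def revoke(claims: dict) -> None:
        # Keep the denylist entry only until the token would expire anyway.
        ttl = max(claims["exp"] - int(time.time()), 1)
        store.setex(f"revoked:{claims['jti']}", ttl, 1)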

Using rails secret to salt authentication keys in devise

I am creating an Ember app that uses Devise for authentication. I'm really getting stuck on how all these different tokens come into play.
I'm reimplementing the recently deprecated :token_authenticatable Devise "strategy" using the method described here. I'd like to add token authentication to my API and sign requests to it with the user's token.
What I'm wondering is: even though it's using Devise.secure_compare to thwart timing attacks, it's still storing the authentication_token in plain text, so if anyone were to gain access to the database, those tokens could potentially be used to steal a session, no?
Devise seems to use two different types of "tokens" in the modules:
Creating a token with Devise.friendly_token and storing it as plain text, then doing a lookup by this token (as used in :rememberable).
Creating a salted token with Devise.token_generator (as seen in :confirmable).
The second method looks to me like the token is salted using Devise.secret_key, which is derived from the Rails secret in config/secrets.yml. That way the token is encrypted, and if the database were exposed for some reason, the tokens couldn't be used, right? That would be the equivalent of having a private key (the Rails secret) and a public key (the authentication_token).
I have quite a few concerns:
Should I use Devise.token_generator to create my authentication_tokens?
What is the word on security for these type of tokens?
How does the CSRF token factor into Devise?
Devise does a lot of things, and not necessarily the things your particular application needs, or in the way your application needs them. I found it wasn't a good fit for my application. The lack of support for (and removal of) API token authentication provided enough motivation to move on and implement what I needed. I was able to implement token auth from scratch fairly easily. I also gained full flexibility for managing user signup/workflows/invitations and so on, without the constraints and contortions required by Devise. I still use Warden, which Devise also uses, for its Rack middleware integration.
I've provided an example of implementing token authentication/authorisation on another stackoverflow question. You should be able to use that code as a starting point for your token authentication, and implement any additional token protection you require. I'm also using my oAuth token approach with Ember.js.
Also consider whether encrypting tokens is just hand-waving: depending on your deployment environment and how you manage your master key/secret, it may give a false sense of security. Remember too that encryption says NOTHING about the integrity/validity of the token or the related authentication/authorisation information, unless you also have a MAC/signature that covers everything used in your access decision. So while you may go to the trouble of protecting tokens from attackers who have access to your database, it may be trivial for those same attackers to inject bogus tokens or elevate privileges for existing users in your database, or simply steal or modify the real data, which may be what they really wanted to achieve!
I've made some lengthy comments about providing integrity and confidentiality controls for ALL authentication/authorisation information (which tokens are part of) on the Doorkeeper gem. I'd suggest reading the full issue to get an idea of the scope of the problem and the things to consider, because none of the gems currently do what should be done. I've provided an overview of how to avoid storing tokens on the server altogether, and some sample token generation and authentication code in a gist which also deals with timing-based attacks.
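
To illustrate the storage side of that advice, here is a framework-free sketch (not Devise's actual implementation; the key handling is an assumption): persist only a keyed digest of the token, so a database dump alone yields nothing usable, and compare digests in constant time.

    import hashlib
    import hmac
    import secrets

    SERVER_SECRET = b"derived-from-the-rails-secret"  # assumption: kept outside the DB

    def new_token() -> tuple[str, str]:
        # The raw token goes to the client; only a keyed digest is persisted,
        # so leaked rows cannot be replayed as credentials.
        raw = secrets.token_urlsafe(32)
        digest = hmac.new(SERVER_SECRET, raw.encode(), hashlib.sha256).hexdigest()
        return raw, digest

    def token_matches(presented: str, stored_digest: str) -> bool:
        digest = hmac.new(SERVER_SECRET, presented.encode(), hashlib.sha256).hexdigest()
        # Constant-time comparison, the same goal Devise.secure_compare serves.
        return hmac.compare_digest(digest, stored_digest)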
