Keycloak, sharing resources between clients - oauth-2.0

We're developing an application with a microservice-based architecture in which users can be members of organizations, and within each organization they may have resource-based access restrictions. An example is a recruiter who is a member of several organizations on the platform: in organization A they may see the list of all job postings and interviewers, while in organization B they can only see the job postings they are directly allowed to see.
Structure-wise, this means organizations contain users, and each user carries per-resource permissions within each organization.
All this seems easy to do with Keycloak: we create confidential clients (one for each microservice) and enable resource management on them. However, there are quite a few cases where different microservices (i.e., Keycloak clients) need to validate a user's access scopes to the same resource. An example would be a setup with two microservices, one for posting and managing job announcements and the other for managing applications and interviews: job-manager and application-manager. Now, when a new application is submitted, or an interviewer tries to access an application, application-manager has to make sure that the user has access to the job posting (a resource configured in the job-manager Keycloak client), which, I think, is not something Keycloak supports.
Scale-wise, we're speaking about X00k users, 4-5 times that many user-to-organization connections, and tens of millions of resources. So, to minimize the number of objects we create in Keycloak, we've decided to make use of attributes on resources, in which we store JSON structures.
So, how can one microservice verify a user's access to a resource managed by another microservice?

Try to enable fine-grained authorization:
https://www.keycloak.org/docs/latest/authorization_services/#_resource_server_enable_authorization
This allows for resource-based authorization. The resource does not necessarily have to be each individual resource you have; it can be an abstraction like an org_manager etc...
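For example (a minimal sketch; the host, realm, resource name, and the job-manager client ID are all placeholders), a microservice holding the user's access token can ask Keycloak's token endpoint for a permission decision via the UMA grant, and it can point the audience at another client's resources, which in principle covers the cross-service check asked about above:

    import requests

    KEYCLOAK = "https://keycloak.example.com"  # hypothetical host
    REALM = "myrealm"                          # hypothetical realm name

    def can_access(user_access_token: str, resource: str, scope: str) -> bool:
        """Ask Keycloak's token endpoint for a permission decision (UMA grant)."""
        resp = requests.post(
            f"{KEYCLOAK}/realms/{REALM}/protocol/openid-connect/token",
            headers={"Authorization": f"Bearer {user_access_token}"},
            data={
                "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
                "audience": "job-manager",        # resource server owning the resource
                "permission": f"{resource}#{scope}",
                "response_mode": "decision",      # boolean decision instead of an RPT
            },
        )
        return resp.ok and resp.json().get("result", False)

    # application-manager checking access to a job-manager resource:
    # can_access(token, "job-posting-42", "view")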
Alternatively, you can take the JSON you already have and ask an OPA agent for a decision against a policy you defined.
https://www.openpolicyagent.org/
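For the OPA route, the check is a plain HTTP call: you POST the attribute JSON as input to a policy path and read back the decision. A minimal sketch in Python, assuming a default OPA deployment and a hypothetical jobs/allow rule:

    import requests

    OPA_URL = "http://localhost:8181"  # OPA's default address; adjust per deployment

    def opa_allows(user_id: str, org_id: str, resource_attrs: dict) -> bool:
        """POST an input document to a policy path and read back the decision."""
        resp = requests.post(
            f"{OPA_URL}/v1/data/jobs/allow",   # 'jobs.allow' is a hypothetical rule
            json={"input": {
                "user": user_id,
                "org": org_id,
                "resource": resource_attrs,    # the JSON already stored on the resource
            }},
        )
        resp.raise_for_status()
        return resp.json().get("result", False)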

Related

Usage of Keycloak Authorization Services (or User-Managed Access in general)

For example, there is a record management application (blog, task manager, document manager like SharePoint or Google Docs, a generic CRUD application, ...). Users can create resources (objects, records, tasks, documents, ...). Users can be owners of the resources. And they can give permissions to other users to view, edit, delete resources.
It is a trivial authorization schema, implemented in many applications, and it seems that, in theory, we could use an existing authorization service. However, all of them seem unusable:
For example, there are millions of resources in my application. How do I transfer them to the Keycloak Authorization Service? And how do I keep the resource lists synchronized between the application and Keycloak?
Will Keycloak handle millions of resources?
Resource management UI doesn't look user-friendly. Is it usable by a non-expert?
Are there any real world examples of Keycloak Authorization Services usage, or User-Managed Access usage, ...?

How to secure an API with OIDC/OAuth

I'm trying to better understand how to make use of OIDC/OAuth in securing a RESTful API, but I keep getting lost in terminology. Also, when I research this question, most of the answers are for Single Page Apps, so for the purposes of this question assume the API will not be used by an SPA.
Assumptions:
Customers will access a RESTful API to interact with <Service>.
It is expected that customers will create automated scripts or custom applications in their own systems to call the API.
Once set up, it is not expected that a real person will be available to provide credentials every time the API is called.
<Service> uses a 3rd-party IDP to store and manage users.
The 3rd-party IDP implements OIDC/OAuth, and that is how it should be integrated into <Service>.
Questions:
What OIDC/OAuth flow should be used in this situation?
What credentials should be provided to the customer? client-id/client-secret or something else?
What tokens can/should be used to communicate information about the "user"? E.g. Who they are/what they can do.
How should those tokens be validated?
Can you point me to any good diagrams/resources that explain this specific use case?
Am I missing anything important in the workflow?
It sounds like these are the requirements, if I am not misunderstanding you. The solution involves more than just your own code, and this is more of a data-modelling question than an OAuth one.
R1. Your company provides an API to business partners
R2. Business partners call it from their own applications, which they can develop however they see fit
R3. User authentication will be managed by each business partner, resulting in a unique ID per user
R4. You need to map these user IDs to users + resources in your own system
OAUTH
Partner applications should use the client credentials flow to get an access token to call the API. Each business partner would use a different credential for their set of users.
Using your own IDP to store users does not seem to make sense, since you do not seem to have an authentication relationship with the actual end users.
Access tokens issued to business partners would not be user-specific by default. It is possible that a custom claim identifying the user could be included in access tokens, but this would have to be developed in a custom manner, such as via a custom header, since it is not part of the client credentials flow.
Access tokens would be verified in a standard OAuth manner to identify the partner - and possibly the end user.
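As a rough sketch of both steps (the endpoints, audience value, and the PyJWT-based verification are assumptions; some IDPs take an audience parameter on the token request, others use scopes):

    import jwt                     # PyJWT
    import requests
    from jwt import PyJWKClient

    TOKEN_URL = "https://idp.example.com/oauth/token"              # hypothetical IDP
    JWKS_URL = "https://idp.example.com/.well-known/jwks.json"
    AUDIENCE = "https://api.example.com"

    def get_partner_token(client_id: str, client_secret: str) -> str:
        """Client credentials grant: one credential per business partner."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "audience": AUDIENCE,   # some IDPs use 'scope' instead of 'audience'
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

    def verify_token(access_token: str) -> dict:
        """Standard verification: signature against the IDP's JWKS, plus audience."""
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(access_token)
        return jwt.decode(access_token, signing_key.key,
                          algorithms=["RS256"], audience=AUDIENCE)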
DATA
Model users in your own system to have these fields, then store resources (such as orders) mapped against the User ID:
User ID (your generated value)
Partner ID (company the user is from)
External User ID (an ID that is easy for partners to supply)
Typically each partner would also have an entry in one of your database tables that includes a Client ID, name etc.
If you can't include a custom user ID claim in access tokens, partners have to tell you which user they are operating on when they call the API, supplying the external user ID:
POST /users/2569/orders
Your API authorization needs to ensure that calls from Partner A cannot access any resources from Partner B. In the above data you have all the fields you need to enable this.
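A minimal sketch of that authorization check (the database helpers and the client_id claim name are assumptions; your IDP may use a different claim such as azp):

    def get_user_for_partner(db, partner_id, external_user_id):
        """Resolve (partner, external user ID) to your own user record.

        Scoping the lookup by partner_id is what keeps Partner A away from
        Partner B's users and resources.
        """
        # db is any DB-API-style helper; fetch_one/fetch_all are placeholders
        return db.fetch_one(
            "SELECT * FROM users WHERE partner_id = %s AND external_user_id = %s",
            (partner_id, external_user_id),
        )

    def list_orders(db, token_claims, external_user_id):
        partner_id = token_claims["client_id"]  # identifies the calling partner
        user = get_user_for_partner(db, partner_id, external_user_id)
        if user is None:
            raise PermissionError("unknown user for this partner")
        return db.fetch_all("SELECT * FROM orders WHERE user_id = %s", (user["id"],))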
SUMMARY
So it feels like you need to define the interface for your own APIs, based on how they will be called from the back end of partner apps. Hopefully the above hints help with this.

How many app registrations do I need in a microservice architecture

I have a microservice architecture where one Single Page Application accesses three different APIs.
I am securing those APIs via the Microsoft Identity Platform and therefore I also need service principals.
My first approach matches with all the examples I found on blogs or in the MS docs.
In this case I have one app registration for the client app and three additional ones for the APIs.
This has the following impact:
Each API has its own audience.
I get four service principals, one for each application.
I get three different places where I have to administer the user-role assignments (for example: User A can read assets from API A, etc.).
This works, but comes also with some problems:
The other admins, who manage which user is allowed to do what, are confused by the three different places where they have to assign roles. It would be nicer to have one central place.
The users' roles are not placed in the ID tokens, because only roles of the client application would go there... but I do not want to assign permissions in the client app again.
If API A wants to call API B or C, I need two access tokens for the other APIs.
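For example (a hypothetical MSAL sketch; the IDs and secret are made up), with three separate registrations each downstream API is its own audience, so API A has to request one token per target:

    import msal

    app = msal.ConfidentialClientApplication(
        "client-id-of-api-a",
        authority="https://login.microsoftonline.com/<tenant-id>",
        client_credential="<secret>",
    )

    # One token per downstream audience:
    token_for_b = app.acquire_token_for_client(scopes=["api://api-b/.default"])
    token_for_c = app.acquire_token_for_client(scopes=["api://api-c/.default"])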
This led me to a second idea.
In this approach I have one registration for all 3 APIs. This already solves problem 1 and problem 3, but it also gives me a strange feeling, because I have never found other people doing this.
Also, my ID tokens still do not tell me the roles, so to fix that I could go one step further, to a single app registration for everything.
Now one registration exposes an API and also consumes that API. This is possible and seems to solve my problems. I even get all roles for the users in my ID tokens AND in my access tokens now.
However, this contradicts all the other examples I found.
Which disadvantages does the last solution have?
Which of the three approaches should I choose?
Which disadvantages does the last solution have?
One thing that comes to mind: say you want API A to be able to edit data in, e.g., the MS Graph API, so you give it the application permission to read/write directory data.
Now with the shared app registration this permission has also been given to API B and API C.
So the principle of least privilege may be violated in the second and third options.
But it does make it easier to manage those APIs as you noticed.
The third option does open the door for the user to acquire access tokens for any APIs that you might want to call on behalf of the current user from your APIs.
So if you wanted API A to edit a user through the MS Graph API on behalf of the user, you'd have to require the read/write users delegated permission (scope) for your app.
This would allow the user to acquire this token from your front-end as well, even though that is not intended.
Now, they would not be able to do anything they couldn't otherwise do, since the token's permissions are limited based on the user's permissions, so this might not be a significant disadvantage.
Which of the three approaches should I choose?
As with many things, it depends :)
If you want absolute least privilege for your services, option 1.
If you want easier management, I'd go with option 3 instead of 2.
There was that one caveat I mentioned above about option 3, but it does not allow privilege escalation.

OAuth client implementation w/ multiple resources, multiple auth servers

I'm trying to understand OAuth best practice implementation strategies for systems requiring access to protected resources backed by different authorization servers. The default answer is to use the access tokens provided by each authorization service and write the logic to store them on an as-needed basis, but the use case of systems requiring multiple, federated protected resources seems common enough that there might be a protocol/framework-level solution. If so, I haven't been able to find it.
Here's a hypothetical example to clarify:
I'm a user with an account on Dropbox, Google Drive, and Boxx. I'd like to make a backend API to report the total number of files I own across all three systems, i.e., Result = FileCount(Dropbox) + FileCount(Drive) + FileCount(Boxx). How do I organize the system in such a way that I can easily manage authorizations? A few cases:
Single-account: If I only have, say, a Drive account, the setup is easy. There's one protected resource (my folders), one authorization server (Google), and so I only have one token to think about. By changing the authorization endpoints and redefining the FileCount function, I can make this app work for any storage client I care about (Dropbox, Google, Boxx).
Multi-account: If I want to aggregate data from each protected resource, I now need three separate authorizations, because each protected resource is managed by a separate authorization server. AFAIK, I can't "link" these clients to use a single authorization server. As a result, if I have N protected resources backed by N authorization servers, I'll have N access tokens to manage for a given request/session. Assuming this is true, what provisions do software frameworks provide to handle this (any example in any language is fine)? It just seems too common of a problem not to be abstracted.
The closest related question I can find is probably this one. The accepted answer seems completely reasonable: one application should not be able to masquerade as another without explicit consent. What I'm looking for is (I think) slightly different: some standard methodology/framework/approach to managing multiple simultaneous access tokens per session. I've also wondered about the possibility of an independent authorization server that can proxy the others as needed and manage the token bookkeeping (still requiring user consent for each), but I think this amounts to the same thing.
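To make the bookkeeping concrete, here is roughly what I'd otherwise hand-roll per session (a Python sketch; file_count and the provider names are placeholders):

    import time
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TokenEntry:
        access_token: str
        refresh_token: Optional[str]
        expires_at: float

    @dataclass
    class SessionTokens:
        """One entry per provider the user has authorized."""
        tokens: dict = field(default_factory=dict)

        def store(self, provider, access_token, refresh_token=None, expires_in=3600):
            self.tokens[provider] = TokenEntry(access_token, refresh_token,
                                               time.time() + expires_in)

        def get(self, provider):
            entry = self.tokens.get(provider)
            if entry is None or entry.expires_at < time.time():
                raise KeyError(f"re-authorize with {provider}")  # or run a refresh flow
            return entry.access_token

    def total_files(session, file_count):
        # file_count(provider, token) stands in for each provider's API call
        return sum(file_count(p, session.get(p))
                   for p in ("dropbox", "google", "boxx"))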
Thanks in advance.

OAuth2 and role-based access control

I have a Rails app acting as an OAuth 2.0 provider (using the oauth2-provider gem). It stores all the information related to users (accounts, personal information, and roles). There are 2 client apps that both authenticate through this app. The client apps can use the client_credentials grant type to find users by email and do other things that don't require an authorization code. Users can also log in to the client apps using the password grant type.
Now the issue we're facing is that the users' roles are defined globally on the resource host. So if a user is given an admin role on the resource host, that user is admin on both clients. My question is: what should we do to have more fine-grained access control? I.e. a user can be an editor for app1 but not for app2.
I guess the easy way to do this would be to change the role names like so: app1-admin, app2-admin, app1-editor, app2-editor, etc. The bigger question is: are we implementing this whole system correctly? That is, should we be storing so much info on the resource host, or should we denormalize the data onto the client apps?
A denormalized architecture would look like this: all user data on the resource host, localized user data on each client host. So user@example.com would have his personal info on the resource host and his editor role stored on client app1. If he never uses app2, it could remain completely oblivious to his existence.
The drawback to the denormalized model is that there would be a lot of duplication of data (account ids, roles) and code (User and Role models on each client, separate management interfaces, etc.).
Are there any drawbacks to keeping the data separate? The client apps are both highly trusted (we made them both), but we are likely to add additional client apps, which are not under our control, in the future.
The most proper way to use OAuth and similar external authorization methods, as I see it, is strictly for authentication purposes. All the business/authorization logic should be handled on your server side at all times, and you should always keep a central record of the user, linking to the external info per external type of auth service.
Having a multilevel/multipart set of access settings is also a must if you want your setup to be scalable and future-proof. This is a standard design that is separate from any authorization logic and always directly related to business rules.
Stack Overflow does something like this, asking you to create an actual account on the site after you log in using an external method.
Update: if the sites are really similar, you can subset this design to one object per application that holds the application-specific access rules. That object also has to inherit from a global object holding global rules (so you can, for example, impose a ban application-wide or enterprise-wide).
I would go for objects that contain access settings, plus roles that can relate to instances of both application-level settings and global settings, purely to automate/compact the assignment of access.
Actually, you can use this design even if the applications are not very similar. It will help you avoid redundant settings and roles that are meaningless business-wise. You can identify a role purely by the job title/purpose, and then impose your restrictions by linking it to an appropriate access-settings setup.
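A minimal sketch of that layering (names are illustrative, not a prescribed implementation): application-level settings inherit from a global object, so an enterprise-wide ban always wins:

    class AccessSettings:
        """Global (enterprise-wide) access settings."""
        def __init__(self, banned=False, permissions=None):
            self.banned = banned
            self.permissions = set(permissions or [])

        def allows(self, permission):
            return not self.banned and permission in self.permissions

    class AppAccessSettings(AccessSettings):
        """Per-application settings layered over the global object."""
        def __init__(self, global_settings, permissions=None):
            super().__init__(permissions=permissions)
            self._global = global_settings

        def allows(self, permission):
            if self._global.banned:        # a global ban overrides any app-level grant
                return False
            return super().allows(permission) or self._global.allows(permission)

    # A role is then just a named, reusable bundle of access settings:
    org_wide = AccessSettings(permissions=["profile:view"])
    app1_editor = AppAccessSettings(org_wide, permissions=["posts:edit"])
    assert app1_editor.allows("posts:edit") and app1_editor.allows("profile:view")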
