Restricting API access by client - OAuth

I have an identity server built with IdentityServer4. There is one main API, with several Angular web applications built by third-party customers that access this API.
Now I would like to create a second API, but it's only for internal use by OUR official plugin. I am trying to figure out how to lock it down so that only our app can access it. I am not a fan of security by obscurity, i.e. assuming that the third parties don't know it's there so won't try to access it.
My first thought was to add a new scope for this API, but by doing that it's going to pop up and ask the users for access to the data, which isn't really needed.
The only thing I can think of would be to check the client id somehow in the API and add a policy for it. This doesn't feel right either, as to my understanding a policy should be checking claims about the user and not the client itself.
services.AddAuthorization(options =>
{
    // pseudocode - IsClientId would somehow check the client id on the request
    options.AddPolicy("DevConsole", policy => IsClientId(xxxx));
});
Is it possible to lock down an API based upon a single client id, or am I going about this the wrong way?
Another idea I had was to add another claim that is only issued when they log in with this client id, which seems like overkill to me.
Example:
Let's say that I have an API endpoint that allows you to update the user's name. All users have access to their name, so this isn't a scope issue. However, only our official app is allowed to update the user's name; apps created by third-party developers do not have access to that endpoint.
So our official plugin has a client id of 123 and yours has a client id of 321. A user logged in through client 321 cannot access this endpoint; a user logged in through client 123 can.
I am starting to think this isn't possible because OAuth and OpenID Connect are completely user based: there is no way to authorize the request based upon the client the user authenticated with.

If I understand the problem correctly, I would create a new Client on Identity Server for the "main API" and a new Resource for the "internal API".
This would allow the "main API" to also be a client with the client credentials grant type, so it has an id + secret and is allowed to request a token for itself. In this case it will now request the newly created scope for the "internal API", and the users will have no knowledge that this entity even exists.
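Roughly, that machine-to-machine token request would look like the sketch below (the authority URL, client id and scope name are placeholders, not values from your setup):
// Assumed names: client "main-api", scope "internal_api", IdentityServer at https://id.example.com.
using System.Collections.Generic;
using System.Net.Http;

var http = new HttpClient();

// Standard OAuth2 client credentials request against IdentityServer4's /connect/token endpoint.
var response = await http.PostAsync("https://id.example.com/connect/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "client_credentials",
        ["client_id"] = "main-api",
        ["client_secret"] = "<secret>",
        ["scope"] = "internal_api"
    }));

// The JSON response carries the access_token that the main API presents to the internal API.
var tokenJson = await response.Content.ReadAsStringAsync();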

After going back and forth with this, it occurred to me that the client id is returned as a claim, so when I got in this morning I checked.
This should enable me to add a policy for only our official plugin.
services.AddAuthorization(options =>
{
    // "client_id" is present as a claim on the access token, so the policy can
    // simply require our official client's id (123 in the example above).
    options.AddPolicy("IsOfficalApp", policy => policy.RequireClaim("client_id", "123"));
});
This should enable me to lock down the API endpoints in question without requiring additional authorization from the users.
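Wiring it up on an endpoint would then look roughly like this (the controller, route, and parameter names are just placeholders for the update-name example above):
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class UsersController : ControllerBase
{
    // Only tokens issued to our official client (client_id "123" in the example) satisfy the policy.
    [Authorize(Policy = "IsOfficalApp")]
    [HttpPut("api/users/me/name")]
    public IActionResult UpdateUserName([FromBody] string newName)
    {
        // ... update the current user's name here ...
        return NoContent();
    }
}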

Related

Google oauth2Client.getToken is not returning id_token for other users

I'm implementing Google's 'code model' of OAuth 2 and having trouble getting users' email - I wonder if this is a scopes problem or my misunderstanding about how to set up the code model. This sequence of events is already working:
Client loads https://accounts.google.com/gsi/client
Client starts call to google.accounts.oauth2.initCodeClient
Client gets code
Client passes code to one of my server endpoints
Server has an oauth2Client set up using the config with client_id, client_secret, and redirect URL = 'postmessage'
Server exchanges the code from the client for tokens
Server does oauth2Client.setCredentials(tokens) - this contains an access_token, which is enough for the client to make API calls to, e.g., retrieve the user's Google Calendar
Server is able to do oauth2Client.getTokenInfo(tokens.access_token);
There are various places along the way that involve scopes; I am probably getting something confused here. The client's initial call (step 2 above) uses
scope: 'https://www.googleapis.com/auth/calendar',
My code path on the server does not define scopes anywhere.
In GCP, my project is set up with scopes
calendar.calendarlist.readonly, calendar.readonly and calendar.events.readonly
openid
/auth/userinfo.email
Here's the problem I'm encountering: when I go through this flow as a user and authenticate with the account that owns the GCP project (this is a Google Workspace email, in case that matters), the tokens object that the server receives (step 6 above) has access_token, refresh_token and id_token - the id_token can be decoded to yield the user's email, and the user's email is also in the response to oauth2Client.getTokenInfo(tokens.access_token).
However, when I go through the flow with my other (personal) Gmail account, the tokens object that the server receives is missing the id_token but has the access and refresh tokens. Question 1: why are the responses different?
Question 2: How can I get the email of the user on the server in the personal Gmail account case? I've tried having the server make a call to https://www.googleapis.com/oauth2/v2/userinfo?fields=id,email,name,picture with the access_token, but this fails. I am not sure if I'm supposed to declare scopes for oauth2Client somehow, or tap a Google API using a different method on the server.
I think I've had a breakthrough: in step 2 in my original post, when I did "Client starts call to google.accounts.oauth2.initCodeClient", I had set the scope of initCodeClient to just the calendar scope. When I changed it instead to scope: 'https://www.googleapis.com/auth/calendar https://www.googleapis.com/auth/userinfo.email openid', (scope takes a space-delimited list in this case), it allowed my server call to get the id_token for this user and oauth2Client.getTokenInfo to get a response with the user's email in it.
When I updated the scopes like that, the popup asking for authorization also updated to request all the scopes I wanted - previously, it was only asking for the Calendar scope, so it makes sense Google didn't want to return the email.
What I still don't understand is why my previous setup was working for the account that owns the GCP project. In other words, when I was first building it out with that owner account, the client was only noting the Calendar scope while the server was asking for all three scopes (i.e. there was a mismatch), and the server was still able to get an id_token and the user's email in getTokenInfo. Maybe the owner account has some special privilege?

How does keycloak determine which signature algorithm to use?

I'm writing an application that uses keycloak as its user authentication service. I have normal users, who log in to keycloak from the frontend (web browsers), and service users, who log in from the backend (PHP on IIS). However, when I log in from the backend, keycloak uses HS256 as its signature algorithm for the access token, and thus rejects it for further communication because RS256 is set in the realm and client settings. To get around this issue, I would like to "pretend to be the frontend" to get RS256 signed access tokens for my service users.
For security reasons, I cannot give the HS256 key to the application server, as it's symmetrical and too many people can access the server's code.
I am currently debugging the issue using the same user/pw/client id/grant type both on the frontend and the backend, so that cannot be the issue.
So far I have tried these with no luck:
copying the user agent
copying every single HTTP header (Host, Accept, Content-Type, User-Agent, Accept-Encoding, Connection, even Content-Length is the same as the form data is the same)
double checking if the keycloak login is successful or not - it is, it's just that it uses the wrong signature algorithm
So how does keycloak determine which algorithm to sign tokens with? If it's different from version to version, where should I look in keycloak's code for the answer?
EDIT: clarification of the flow of login and reasons why backend handles it.
If a user logs in, this is what happens:
client --[login data]--> keycloak server
keycloak server --[access and refresh token with direct token granting]--> client
client --[access token]--> app server
(app server validates access token)
app server --[data]--> client
But on some occasions the data in the fifth step is the list of users that exist in my realm. The problem with this is that keycloak requires one to have the view-users role to list users, which only exists in the master realm, so I cannot use the logged-in user's token to retrieve it.
For this case, I created a special service user in the master realm that has the view-users role, and gets the data like this:
client --[asks for list of users]--> app server
app server --[login data of service user]--> keycloak server
keycloak server --[access token with direct granting]-->app server
app server --[access token]--> keycloak server's get user list API endpoint
(app server filters detailed user data to just a list of usernames)
app server --[list of users]--> client
This makes the list of usernames effectively public, but all other data remains hidden from the clients - and for security/privacy reasons, I want to keep it this way, so I can't just put the service user's login data in a JS variable on the frontend.
In the latter list, step 4 is the one that fails, as step 3 returns a HS256 signed access token. In the former list, step 2 correctly returns an RS256 signed access token.
Thank you for the clarification. If I may, I will answer your question perhaps differently than expected. While you focus on the token signature algorithm, I think there are either mistakes in how your OAuth2 flows are being used, or you are facing some misunderstanding.
The fact that both the backend and frontend use "Direct Access Grants", which refers to the OAuth2 Resource Owner Password Credentials Grant, is either a false assumption or a mistake in your architecture.
As stated by Keycloak's own documentation (and also, slightly differently, in the official OAuth 2 references):
Resource Owner Password Credentials Grant (Direct Access Grants) ... is used by REST clients that want to obtain a token on behalf of a user. It is one HTTP POST request that contains the credentials of the user as well as the id of the client and the client’s secret (if it is a confidential client). The user’s credentials are sent within form parameters. The HTTP response contains identity, access, and refresh tokens.
As far as I can see the application(s) and use case(s) you've described do NOT need this flow.
My proposal
Instead, what I'd expect in your case for flow (1) is the Authorization Code flow ...
assuming that "Client" refers to normal users in a browser (redirected to Keycloak authentication from your front-end app),
and assuming you do not actually need the id and access tokens returned directly to your client without a good reason, since the flows allowing that are considered legacy/deprecated and no longer recommended. In that case we would be speaking of the Implicit Flow (and the Password Grant flow is also discouraged now).
So I think that the presented exchange (first sequence with points 1 to 5 in your post) is invalid at some point.
For the second flow (backend -> list users), I'd propose two modifications:
Allow users to ask the front-end application for the list of users; in turn, the front-end will ask the backend to return it. The backend, having a service account on a client with the view-users role, will be able to get the required data:
Client (logged) --> Request list.users to FRONTEND app --> Get list.users from BACKEND app
(<--> Keycloak Server)
<----------------------------------------- Return data.
Use the Client Credentials Grant (flow) for the backend <-> Keycloak exchanges in this use case. The app will have a service account to which you can assign specific scopes and roles. It will not work on behalf of any user (even though you could pass along the original requester another way!) but will do its work in a perfectly safe manner and be kept simple. You can even define a specific client for these exchanges that would be bearer-only.
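Purely as an illustration of those two hops (your backend is PHP, but the HTTP calls are the same; the base URL, realm and client names below are made up):
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;

var http = new HttpClient();

// 1) Client Credentials Grant: only the service client's own credentials, no user involved.
//    (Older Keycloak versions prefix the path with /auth.)
var tokenResponse = await http.PostAsync(
    "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "client_credentials",
        ["client_id"] = "user-list-service",
        ["client_secret"] = "<secret>"
    }));
var accessToken = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync())
    .RootElement.GetProperty("access_token").GetString();

// 2) Call the admin REST API with that token; the service account needs the view-users role.
var request = new HttpRequestMessage(HttpMethod.Get,
    "https://keycloak.example.com/admin/realms/myrealm/users");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var usersJson = await (await http.SendAsync(request)).Content.ReadAsStringAsync();
// The backend then filters this down to just the usernames before returning it to the client.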
After all, if you go that way you don't have to worry about token signatures or anything like that. This is handled automatically according to the scheme, flow and parties involved. I believe that by using the flows incorrectly you end up having to deal with tricky token issues. In my view that is the root cause, and addressing it will be more helpful than focusing on the signature problem. What do you think?
Did I miss something or am I completely wrong...?
You tell me.

OAuth2 Implementation

This is a bit of a theoretical question, however I'll try to be as detailed as possible. I've read a bunch of documentation on OAuth2/SSO implementation (I know they're not the same), so I need to get beyond hand-wavy to actual system design. Here's what I think an OAuth2 implementation should look like.
The core design involves a bunch of microservices (which I'm calling apps here) that all use the same authorization server.
To my understanding, these are the endpoints an auth server is supposed to provide.
Authorization Server
Endpoint for an app to register -> Once registered, the app is provided a client id and client secret (these are essentially permanent and don't change).
Endpoint for a user to register -> This request should come with the client id and client secret so that the authorization server can associate a user with an app.
Endpoint for a user to log in -> If the user is an authorized user then he/she is provided an access token.
Endpoint for user details -> If an authorized app (correct client id and secret) makes a request with an authorized user (correct access token) then a user blob is returned.
Resource server (App)
Now that the resource server has this basic user data, it can deserialize the JSON object into its own user class and then have a one-to-one mapping to things like user_address/user_location etc.
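To make that resource-server step concrete, here is roughly what I picture (the /userinfo URL, field names and types are placeholders, not a settled design):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

// Hypothetical shape of the "user details" blob returned by the auth server.
public record UserBlob(string Id, string Email, string Name);

public static class UserClient
{
    public static async Task<UserBlob?> FetchUserAsync(HttpClient http, string accessToken)
    {
        // The resource server presents the user's access token to the auth server's user-details
        // endpoint (per the design above, the app's client id/secret could accompany this request too)...
        var request = new HttpRequestMessage(HttpMethod.Get, "https://auth.example.com/userinfo");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // ...and deserializes the JSON into its own user class, which it can then map
        // one-to-one to local data such as user_address or user_location.
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<UserBlob>(json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}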
This is my understanding of OAuth2/SSO. I'd highly appreciate some help around the rough edges. TIA!
I haven't implemented OAuth2 myself, but the system I work on does use it, and what you describe seems to be the same as what we use:
We initialize the client with an endpoint and the client secret and ID, then use our user's credentials to get a token (or an error message if the user/client credentials are invalid). From there we use the app's endpoints to call our applications. From what I can see our Oauth2 methods seem to do what you describe in your question, it should be correct.
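Concretely, that first step is just an OAuth2 token request that carries the user's credentials plus the client id and secret; roughly something like this (all URLs and names below are placeholders):
using System.Collections.Generic;
using System.Net.Http;

var http = new HttpClient();

// "Initialize the client with an endpoint and the client secret and ID, then use the
// user's credentials to get a token" - expressed as a plain OAuth2 password-grant request.
var response = await http.PostAsync("https://auth.example.com/oauth/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "password",
        ["client_id"] = "my-app",
        ["client_secret"] = "<secret>",
        ["username"] = "alice",
        ["password"] = "<password>"
    }));

// On success the JSON body contains the access token; on bad user/client credentials, an error message.
var tokenJson = await response.Content.ReadAsStringAsync();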

Using immediate=true in Salesforce OAuth flow

I previously asked this question on the Salesforce Stack Exchange, where it was considered off-topic, so I'm asking here to see if I can get an answer.
Background
I am attempting to use the immediate parameter to check if a Salesforce user has already approved access when going through the Web Server OAuth Flow as documented on OAuth 2.0 Web Server Authentication Flow. My reasoning for this is that I do not want the login or consent prompts to appear so I can reject access if they have not already approved.
Once the callback page is hit, I am always receiving the parameter error=immediate_unsuccessful even if the user has approved the application before and is logged in.
I have attempted to check this via a customised Google OAuth 2 Playground, appending immediate=true or immediate=false to the authorize endpoint URL. With =false, the consent prompt shows and then you can grant access. With =true, it returns the same error as listed previously.
The Connected App that has been set up has api and refresh_token as the available scopes, users are able to authorize themselves and there are no ip restrictions set. The client id and secret from this app is then passed into the OAuth 2 Playground.
Below is a brief example of how my application redirects to the auth URL using Java and the Google OAuth client library. We initially authorize the client without the immediate parameter and then later call the same code with immediate=true (shown in the example).
AuthorizationCodeFlow authorizationCodeFlow = new AuthorizationCodeFlow.Builder(BearerToken.authorizationHeaderAccessMethod(),
httpTransport,
GsonFactory.getDefaultInstance(),
new GenericUrl("https://login.salesforce.com/services/oauth2/token"),
new ClientParametersAuthentication(CLIENT_ID, CLIENT_SECRET),
CLIENT_ID,
"https://login.salesforce.com/services/oauth2/authorize")
.setCredentialDataStore(StoredCredential.getDefaultDataStore(MemoryDataStoreFactory.getDefaultInstance()))
.build();
AuthorizationCodeRequestUrl authUrl = authorizationCodeFlow.newAuthorizationUrl()
.setRedirectUri("https://72hrn138.ngrok.io/oauth/callback")
.setScopes(ImmutableSet.<String> of("api", "refresh_token"))
.set("prompt", "consent")
.set("immediate", "true");
response.redirect(authUrl);
Question(s)
Are there any settings that I may have missed in Salesforce that would alleviate the error?
Is there any other option in the OAuth 2 spec that has to be set for the immediate option to work?
Does the immediate setting work?
I managed to solve this issue in the end. To allow the immediate=true option to work, the scopes have to be removed from the request. In the example provided you would amend the authUrl to the following:
AuthorizationCodeRequestUrl authUrl = authorizationCodeFlow.newAuthorizationUrl()
.setRedirectUri("https://72hrn138.ngrok.io/oauth/callback")
.set("prompt", "consent")
.set("immediate", "true");
I believe the theory is that defining scopes means you are asking for permission to use those scopes and therefore requires approval for those permissions. This clashes with the immediate option, which requires that the user is already logged in and the client id has already been approved in order to succeed.

Widget LTI -> API Authentication

I am working on an LTI widget that then needs to authenticate to the API to get additional information.
I'm struggling to figure out how to process the API user authentication and redirect back while retaining the LTI information.
The request string that is returned looks like:
Array ( [x_a] => **********************
[x_b] => **********************
[x_c] => *********************************** )
The issue is that I have my PHP LTI script set up to load only if it meets the following condition:
if(!isset($_REQUEST['lis_outcome_service_url'])
|| !isset($_REQUEST['lis_result_sourcedid'])
|| !isset($_REQUEST['oauth_consumer_key'])
)
x_a is the user id, x_b is the user key ... what is x_c?
Any suggestions appreciated!
My answer refers to the detailed topic on the IDKey Auth scheme for the Valence dev platform.
The part of the auth sequence you are referring to here is equivalent to the second stage of the sequence, just after the user has successfully authenticated (when you chain on the back of an LTI launch like this, you know that the user driving the user agent has already authenticated, because they wouldn't otherwise have been able to do the LTI launch) and the service sends back the long-lived user tokens to your service.
See steps 5 to 7 in the sequence notes, in the section called Using a third-party web application in the IDKey Authentication docs topic:
x_a={tokenID} – Unique ID associated with the long-lived token: the web application can provide this ID so that the service can precisely locate the web application/user context.
x_b={tokenKey} – Key associated with the long-lived token: the web application can use this as a key to generating session signatures.
x_c={tokenSig} – Token identity signature: the service joins (and delimits with an ampersand) the User ID (tokenID) and the User Key (tokenKey) to use as the base-string, and uses the Application Key as the key.
Note that you will need to use your Valence Application ID/Key pair in order to verify the token signature contained in x_c.
Remote plugins. Note that the Brightspace Remote Plugin service is a convenience service wrapper around LTI/external learning tools. The docs about Remote Plugins contain a fairly detailed walkthrough/sample that showcases a simple Python web-service Tool Provider implementation that receives a Brightspace LTI launch, and can turn around and use Valence API calls to get more information. You might find it useful to have a close look at that.
