OAuth2 "client credentials" grant: remote IP check?

I'm developing an API which only needs to be accessed by servers, as opposed to specific, human users. I've been using the client credentials grant which, if I'm not mistaken, is appropriate for this use case.
So the remote websites/apps, after registering their corresponding OAuth2 clients, simply request an access token using their client ID/secret combination, via an SSL POST request with HTTP Basic authentication.
Now I was wondering if it would be a good idea, during said access token request, to check the remote IP to make sure it actually belongs to the registered client (you'd state one or several IPs when declaring your app, and they would be checked against the remote IP of the server making the POST /token request).
I feel like this would be an easy way to make sure that, even if the client ID/secret are somehow stolen, they wouldn't be just usable from anywhere.
Being fairly new to the OAuth2 protocol, I need some input as to whether this is a valid approach. Is there a more clever way to do this, or is it straight up unnecessary (in which case, for what reasons)?
Thanks in advance

That's certainly a valid approach, but it binds the token tightly to the network layer and deployment, which may make it difficult to change the network architecture later. The way OAuth addresses your concern is through the so-called Proof-of-Possession extensions: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-pop-architecture.
It may be worth considering implementing that: even though the specification is not finalized yet, it binds the token to the client instead of the IP address, which safeguards against network changes and is more future-proof.
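For completeness, the allowlist check described in the question is simple to implement at the token endpoint. A minimal sketch (the registry structure and all names here are hypothetical, not part of any OAuth library):

```python
import ipaddress

# Hypothetical registry mapping client_id -> networks (CIDR notation)
# declared by the developer at registration time.
REGISTERED_NETWORKS = {
    "client-abc": ["203.0.113.0/24", "198.51.100.7/32"],
}

def ip_allowed(client_id: str, remote_ip: str) -> bool:
    """Return True if remote_ip falls within a network registered for client_id."""
    nets = REGISTERED_NETWORKS.get(client_id, [])
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in ipaddress.ip_network(net) for net in nets)

# In the POST /token handler, reject early on a mismatch, e.g.:
# if not ip_allowed(client_id, request_remote_addr):
#     respond with 401 before checking the client secret
```

Note that behind proxies or load balancers the observed remote address may be the proxy's, not the client's, which is one practical reason the answer above prefers binding the token to the client rather than to an IP.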

Related

PKCE vs DCR for OAuth2 and Mobile Applications

I’m developing my own OAuth2 + OpenID Connect implementation. I am a bit confused about how to handle OAuth flows for native (specifically, mobile) clients. So far, I am seeing that I need to use the Authorization Code Flow. However, based on my research, there are some details that seem to contradict each other (at least based on my current understanding).
First, standard practice seems to say that mobile apps are not inherently private and, as such, standard flows that make use of a back channel should not be used. As a workaround, the PKCE extension can be used (and utilize the built-in device browser as opposed to a web view, so the tokens and sensitive information are less likely to be leaked).
However, under the protocol’s Dynamic Client Registration specification, it is also mentioned that mobile apps should use this method of client registration to get a valid client ID and client secret... But why would we do this when an earlier section established that mobile applications are indeed public clients and can’t be trusted with confidential information like a client secret (which is exactly what this DCR mechanism gives us)?
So, what am I not understanding? These two things seem to contradict one another. One claims mobile apps are public and shouldn’t be trusted with a secret. Yet, in the recommended DCR mechanism, we assign them the secret we just established they can’t be trusted with.
Thanks.
A bit late, but I hope it helps. Part of the OAuth 2.0 protocol is two components: the client_id and the client secret. The client and server must agree on those two values outside the protocol, i.e. before the protocol starts. Usually the process is as follows: the client communicates with the Authorization Server over an out-of-band channel to obtain these values and register with the server. There are two ways this client registration can happen: statically and dynamically. Static means the client_id and secret do not change; the client gets them once when registering with the server. Dynamic client registration refers to registering a client_id every time the client wants to use the protocol, i.e. a client secret is generated for it each time (also over an out-of-band channel).
Now, Why use dynamic registration?
Dynamic client registration is better at managing clients across replicated authorization servers. The original OAuth use cases revolved around single-location APIs, such as those from companies providing web services. These APIs require specialized clients to talk to them, and those clients need to talk to only a single API provider. In these cases, it doesn’t seem unreasonable to expect client developers to put in the effort to register their client with the API, because there’s only one provider.
Does Dynamic Client registration offer any security advantages?
No: both are vulnerable when used by a JavaScript or native mobile client (JavaScript can be inspected, and mobile apps can be decompiled). Hence, both require PKCE as an extra layer of security.
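For reference, the PKCE pieces mentioned above are easy to generate. A minimal sketch following the S256 method from RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier,
    # within the spec's required 43-128 character range.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA256(verifier)), without '=' padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The client sends the code_challenge (plus `code_challenge_method=S256`) with the authorization request, keeps the verifier locally, and sends the plain verifier with the token request; the server recomputes the hash and compares. No secret ever has to be baked into the app.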

Why do communications between internal services need authorization like oauth if the outside world can't access the apis directly?

This is just a general question about microservice architecture. Why do 2 or more internal services still need token auth like oauth2 to communicate with each other if the outside world doesn't have access to them? Couldn't their apis just filter internal IP addresses instead? What are the risks with that approach?
Why do 2 or more internal services still need token auth like oauth2 to communicate with each other if the outside world doesn't have access to them?
You don't need OAuth2 or token authentication, but you should use it. It depends on how much you trust your traffic. In the "cloud age" it is common not to own your own datacenters, so there is another party that owns your server and network hardware. That party may make a misconfiguration, e.g. traffic from another customer gets routed to your server. Or maybe you set up your own infrastructure and make a misconfiguration, so that traffic from your test environment is unintentionally routed to your production service. There are new practices for handling this landscape, described in Google's BeyondCorp and in Zero Trust Networks.
Essentially, you should not trust the network traffic. Use authentication (e.g. OAuth2, OpenID Connect, JWT) on all requests, and encrypt all traffic with TLS or mTLS.
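To illustrate what "authenticate all requests" looks like concretely, here is a stdlib-only sketch of signing and verifying an HS256 JWT. This is illustrative only: a real service would use a vetted JWT library and also validate claims such as exp, iss, and aud.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Create an HS256-signed JWT carrying the given claims."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode("ascii"), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Check the signature and return the claims; raise ValueError if invalid."""
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode("ascii"), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(payload))
```

Every internal service holding the shared secret (or, with RS256, the public key) can verify the token itself, without trusting the network path the request arrived on.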
Couldn't their apis just filter internal IP addresses instead? What are the risks with that approach?
See above, maybe you should not trust the internal traffic either.
In addition, it is now common that your end-users are authenticated using OpenID Connect (OAuth2-based authentication), with JWTs sent in the Authorization: Bearer header. Most of your system will operate in a user context when handling the request; that context is carried in the JWT, and it is easy to pass the token along in requests to all services involved in the operation requested by the user.
For internal services it's usually less about verifying the token (which in theory was already done by the external-facing gateway/api), and more about passing through the identifying information on the user. It's very common for even internal services to want to know who the requesting/acting user is for permissions and access control, and sometimes it's just easier to tell every service creator who needs user-scoping to accept a JWT in the Authorization header than it is to say, "look for the user ID in the X-COMPANY-USER-ID header".
You can implement very granular role-based access control (RBAC) on the APIs exposed by microservices using OAuth, which you cannot do by filtering IP addresses.

Access Token/Authorization Between Microservices

I'm creating an online store REST API that will mainly be used by a mobile app. The plan is for a microservices architecture using the Spring Cloud framework and Spring Cloud OAuth for security.
My question is really on best practices for communication between microservices: Should I have each service register for their own token, or should they just pass the user's token around?
For example, I have 3 services: user-service, account-service, order-service.
I've been able to implement two procedures for creating an order: one passes the user's token around, and in the other each service gets its own token. I use Feign for both approaches.
So for option 1: order-service -> GET account-service/account/current
order-service calls the account-service which returns the account based on a userId in the token. Then the order-service creates an order for the account.
Or for option 2: order-service -> GET account-service/account/user-id/{userId}
order-service gets the userId from the sent token, calls the account-service with its own token, then creates the order with the retrieved account.
I'm really not sure which option is best to use. One better separates information but requires two Feign clients. The other doesn't require the two clients and makes it easier to block off certain endpoints from outside clients, but it requires extra endpoints to be created and almost every service to go digging into the Authentication object.
What are all your thoughts? Has anyone implemented their system in one way or another way entirely? Or perhaps I've got the completely wrong idea.
Any help is appreciated.
I have found the 3 options below:
If each microservice verifies the token, then we can pass the same token along. The problem is that the token can expire partway through the chain.
If we use the client_credentials grant, we have two issues: one is that we need to send the username/ID to the next microservice; the other is that we need to make two requests, first to get the access token and then the actual call.
If we do the token verification only in the API gateway (not in the microservices), then the API gateway needs to send the username to every microservice, and the microservice implementations need to change to accept that param/header.
When you do server-to-server communication, you're not really acting on behalf of a user; you're acting on behalf of the server itself. For that, client credentials are used.
Using curl, for example:
curl acme:acmesecret@localhost:9999/oauth/token -d grant_type=client_credentials
You should do the same with your HTTP client and you will get the access token; use it to call the other services.
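The same request can be built in code. A sketch using Python's standard library, reusing the example credentials and endpoint from the curl command (acme/acmesecret and localhost:9999 are placeholders, not real values):

```python
import base64
import urllib.parse
import urllib.request

def build_token_request(token_url: str, client_id: str, client_secret: str):
    """Build a POST request for the client_credentials grant with HTTP Basic auth."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode("utf-8")).decode("ascii")
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode("ascii")
    req = urllib.request.Request(token_url, data=body, method="POST")
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req

# Sending it (requires a running authorization server):
# import json
# with urllib.request.urlopen(build_token_request(
#         "http://localhost:9999/oauth/token", "acme", "acmesecret")) as resp:
#     token = json.load(resp)["access_token"]
```

The JSON response also carries `token_type` and `expires_in`, so the caller can cache the token and refresh it shortly before expiry instead of requesting a new one per call.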
You should use client tokens using the client_credentials flow for interservice communication. This flow is exposed by default on the /oauth/token endpoint in spring security oauth.
In addition to this, you could use private APIs that are not exposed to the internet and are secured with a role that can only be given to OAuth clients. This way, you can expose privileged APIs that are perhaps less restrictive and have less validation, since you control the data passed to them.
In your example, you could expose a public endpoint GET account-service/account/current (no harm in getting information about yourself) and a private API GET account-service/internal/account/user-id/{userId} that could be used exclusively by OAuth clients to query any existing user.

Is it OK for an OAuth client to determine the access_token server side and then pass the token to the browser?

I can find plenty of examples of single-page apps (which can't manage a client secret) and plenty of examples of old-school server-side apps (which can manage a client secret) using OAuth.
But for us and, I suspect, the majority of enterprisey systems, a system is both server-based and client-based.
We can easily (and securely) identify the client server-side, and we could then make the resulting (user) access_token available browser-side.
The question is, does doing this introduce a risk?
The client-server (the server-side component of the client) cannot guarantee that the browser is running its code - but it can guarantee that all access to the resource owner's data on the client has been approved by the resource owner.
Thanks.
The principle itself does not introduce a risk, but of course you need to take care with the method used to expose the access token to the browser. One such approach is documented here: https://hanszandbelt.wordpress.com/2017/02/24/openid-connect-for-single-page-applications/
It suggests exposing a server-side endpoint that can be called with a cookie and returns session information, which may include the access token.
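A minimal sketch of such an endpoint's logic (the session store and field names are hypothetical, and real code would live behind a web framework with CSRF protection): the browser presents its session cookie, and the server returns only the fields the single-page app needs.

```python
import json

# Hypothetical server-side session store: cookie value -> session data,
# including the access token obtained during the server-side OAuth flow.
SESSIONS = {
    "sess-1": {"user": "alice", "access_token": "at-abc123"},
}

def session_info(cookie_value):
    """Handle GET /session-info: return (status, JSON body) for the given cookie."""
    session = SESSIONS.get(cookie_value)
    if session is None:
        return 401, json.dumps({"error": "not authenticated"})
    # Expose only what the browser-side code actually needs,
    # never the whole server-side session.
    return 200, json.dumps({"access_token": session["access_token"]})
```

Because the cookie never leaves the first-party domain and the token is handed over only on an authenticated request, the client secret itself stays server-side.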

OAuth Access token Validation with Microservices

I am currently thinking about building an application following the microservice architecture. To authorize the user I was thinking of using the OAuth protocol. Now the question is where/when to validate the Access Token.
I basically see two possibilities:
Each microservice does it on its own (meaning one call that might involve 10 microservices would result in 10 token validations)
Introduce an API gateway (which needs to be there anyway, I guess) which does the token validation and passes on the user ID, scopes, ... that the other microservices trust and use (which also means there must be some kind of authentication between the API gateway and the microservices, e.g. a client secret!?)
As you probably have already guessed, I tend to go with the second approach. Is that a valid one? Do you have some practical experience with one of those approaches? Or would you suggest another approach?
I'm looking forward to your comments/remarks on that!
Thanks and regards!
You almost definitely want a public/private split in your microservices architecture. The public side should be authenticating the token, and the private side is used to service calls from other API calls. This way you are only authenticating once per call.
You can accomplish this by, as you said, creating a gateway service, which dispatches those calls to the private services. This is a very common pattern. We have found it useful to authenticate the gateway side to the private API with client certificate authentication, sometimes referred to as two-way SSL. This is a little more secure than a shared-secret (which can easily leak).
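The gateway pattern described here can be sketched as follows (the validation and dispatch functions are placeholders for whatever token library and HTTP client you use): the token is authenticated once at the edge, and only the verified identity is forwarded to the private API.

```python
def dispatch(validate, call_private, token: str, path: str):
    """Gateway step: authenticate the bearer token once, then call the
    private service with the verified identity instead of the raw token."""
    claims = validate(token)  # raises if the token is invalid or expired
    headers = {
        # Hypothetical internal header names; pick your own convention.
        "X-User-Id": claims["sub"],
        "X-User-Scopes": " ".join(claims.get("scope", [])),
    }
    return call_private(path, headers)
```

The private side then trusts these headers only because the channel itself is authenticated, e.g. with the client-certificate (two-way SSL) setup mentioned above, so a caller that bypassed the gateway could not forge them.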