I'm currently trying to figure out whether securing machine-to-machine OpenID Connect endpoints with something besides TLS (for example, basic authentication) is allowed. By machine-to-machine endpoints I mean, for example, the token endpoint (https://openid.net/specs/openid-connect-core-1_0.html#TokenEndpoint) or the well-known discovery endpoint (https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig).
So far I couldn't find anything on this topic in the OpenID Connect / OAuth2 specifications (e.g. https://openid.net/specs/openid-connect-core-1_0.html) saying whether it is allowed, disallowed, or discouraged.
There is no need to protect the discovery endpoint and the other public endpoints, as they are meant for public consumption by clients and APIs.
You should always use HTTPS/TLS with the browser because otherwise you will have problems with the cookies.
For machine-to-machine communication, you have the client credentials flow, which gives you a secure way to establish communication between two services.
How you secure the communication internally on the backend is up to you.
By backend, I mean where the services are hosted.
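As a rough sketch of what the client credentials flow looks like on the wire (the token endpoint URL, client id, secret and scope below are all placeholders, not real values):

```python
import base64
import urllib.parse

def build_client_credentials_request(token_url, client_id, client_secret, scope=None):
    """Build the parts of an OAuth2 client credentials token request
    (RFC 6749, section 4.4): the client authenticates with HTTP Basic
    auth and asks the token endpoint for a token on its own behalf."""
    credentials = f"{client_id}:{client_secret}".encode()
    headers = {
        "Authorization": "Basic " + base64.b64encode(credentials).decode(),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    form = {"grant_type": "client_credentials"}
    if scope:
        form["scope"] = scope
    return token_url, headers, urllib.parse.urlencode(form)
```

POSTing that body with any HTTP client returns a JSON document containing an access_token, which the calling service then sends to the other service as a bearer token.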
Related
I have an API hosted in Azure that uses the Microsoft identity platform for authorization. Now we need to integrate an APIM gateway for the API. APIM also provides OAuth authorization. So my question is: since the API will be deployed behind APIM, should I configure OAuth for my API in APIM, or can I continue to use the Microsoft identity platform, which is doing its job? So I am looking for the benefits of using OAuth from APIM rather than through the Microsoft identity platform. In other words, what would be the differences and pros of OAuth in APIM vs the Microsoft identity platform, which also relies on OAuth?
Each API should validate a JWT access token on every request, then use the token's scopes and claims to authorize access to resources. This is sometimes called a zero trust architecture.
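A minimal sketch of that per-request check, using a symmetric HS256 signature for brevity (real providers usually sign with RS256 and publish their keys via JWKS; the secret and scope names here are made up):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url encoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def validate_token(token: str, secret: bytes, required_scope: str) -> dict:
    """Verify an HS256 JWT signature, then check the scope claim.
    Raises ValueError on failure, returns the claims on success."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        raise ValueError("malformed token")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if required_scope not in claims.get("scope", "").split():
        raise ValueError("insufficient scope")
    return claims
```

An API would run this (plus expiry and audience checks) on every request before touching any resources.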
Another important requirement is to avoid revealing sensitive data in tokens, such as emails, to internet clients. The phantom token pattern has more info on this, and involves the use of an API gateway.
I would favour a solution where there is an API gateway in front of your APIs. This is a hosting best practice and also enables you to perform tasks such as cookie and token translation in the gateway.
APIM is one solution so I would favour that type of option if it improves your API security. There are other Azure options though, so it can be worth clarifying the types of things you want to do in gateways before choosing one. The API Gateway Guides may give you some ideas.
We are planning to use OpenID Connect / OAuth2 to handle access to a list of resource servers.
We want to use JWT as access token when a user is going to call one of the resource servers. The access token will be issued by an auth server in our landscape according to OpenId Connect / OAuth2 standards. Access will be granted or rejected to the calling user based on the JWT access token.
The standards are new for us so we are still reading and understanding.
We are currently searching for an option to do a lookup of the resource servers with a call to auth server. We would like to use it in order to simplify the clients.
Is there any option available in OpenId Connect / OAuth2 to help clients finding the available resource server? Is there any endpoint available in auth server to do that? Or can the answer with the JWT be enhanced to return the list of the resource servers?
Thanks in advance
Thorsten
With the use of JWTs, there is no need for an extra validation call to the authorization server. The token and its claims should be enough to validate access rights. You can customize the token claims as per your needs.
An Authorization Server does not know the list of APIs, since APIs are not usually registered as clients. There are some related concepts though. The IAM Primer article has a good overview on how these concepts fit together.
Each API covers an area of data and this maps neatly to the OAuth Scopes included in access tokens.
Each API has an audience, such as api.mycompany.com, which maps to the aud claim in an access token. This can enable related APIs to forward the same access token to each other and securely maintain the original caller's identity.
An API gateway is usually hosted in front of APIs, as a hosting best practice. This enables clients to use a single, easy to manage, API base URL, such as https://api.mycompany.com, followed by a logical path.
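To illustrate how the scope and audience claims combine at authorization time (the claim layout assumed here, with aud as string-or-list per RFC 7519 and scope as a space-delimited string, is a common convention, but providers vary):

```python
def token_allows(claims: dict, expected_audience: str, required_scope: str) -> bool:
    """Authorize a request from already-verified token claims: the aud
    claim must include this API's audience, and the scope claim must
    contain the scope this API requires."""
    aud = claims.get("aud", [])
    if isinstance(aud, str):
        aud = [aud]
    if expected_audience not in aud:
        return False
    return required_scope in claims.get("scope", "").split()
```

Because related APIs can share an audience such as api.mycompany.com, the same token passes this check in each of them while the scopes still restrict what each API allows.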
I am building a microservice project and need clarification on what to do in this situation:
For centralized authentication and authorization at the API gateway, every request must contain a JWT and pass through the API gateway to call other microservices. The gateway should also check which user has permission to access an API in another microservice. How can I handle these situations?
I will be using a specific tool for this.
Users will come through either a web browser or a mobile app. Your API gateway will be exposed to the external world. Most API gateways nowadays contain plugins for authentication and authorization. For example, you can use an OIDC plugin with the API gateway to authenticate users, which will return a JWT used to call the internal APIs. You can refer to the component diagram link below for an architecture diagram.
I'm developing a microservice in C++ (for low-latency reasons), and I'm beginning to dive into OpenID Connect and Keycloak. Developing in C++ means I have almost no library support for OpenID Connect, but I have (hopefully) got all the low-level details working (like proper JWT verification). I have to do all the communication flows and redirects myself.
So much for the background. Keep it in mind, because I need to know and implement details which a library would usually hide from a developer.
There are three parties in my application:
A web client W
Microservice A
Microservice B
General communication between those three: The web client W could be either a frontend UI or a mobile device using just the API as a service without having any sort of frontend. W connects to microservice A to manipulate and consume data from it. Microservice A exchanges data with microservice B and vice versa. W does not need to know about B.
So far I thought of the following architecture:
For the Web Client to Microservice A communication I'd use dedicated users and clients with access type "Public" in Keycloak to allow user/pw logins
For the Microservice A to Microservice B communication I'd use Access Type Bearer because they never initiate any login
Please advise if you think that does not sound right. My actual question, however, is what kind of login flow(s) is required and which steps in between I may be missing:
Is it ok to have an endpoint for the login on microservice A (https://servicea.local/login) which redirects the web client's requests to OpenID / Keycloak? E.g. the web client sends username, password, client id and grant type to the OpenID token request endpoint http://127.0.0.1:8080/auth/realms/somerealm/protocol/openid-connect/token?
Should the client take the token and add it to all subsequent calls as authorization token?
Should the Microservice implement a callback to retrieve the authorization information?
Should the flow for the client-to-service communication instead be changed so that the client provides an authorization code to the service, which the service then exchanges for an access token?
I would aim for an architecture where the role of your C++ APIs is just to verify tokens and serve data.
The client side is a separate solution though, requiring its own code for logging in + dealing with authorization codes and getting tokens. This should not involve your API - so to answer your questions:
Not recommended
Yes
No
No
These days all logins should occur via the system browser, whichever of these you are writing. This client-side code is probably not C++, and often requires more work to build than the API itself:
Web UI
Mobile UI
Console / Desktop App
If it helps my blog has a lot of write ups and code samples about these flows. In the following post, notice that the API is not called until step 13, once all login processing has been completed by the web client.
OAuth Message Workflow
Authentication (delegating to Keycloak) and then getting the token should be done by your UI by contacting Keycloak directly, and that token should be passed on from the UI to A, and from A to B.
Here are the OIDC endpoints that Keycloak provides:
https://www.keycloak.org/docs/latest/server_admin/index.html#keycloak-server-oidc-uri-endpoints
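As a sketch of the pass-through described above (the header handling follows the usual bearer-token convention and is not Keycloak-specific):

```python
def build_forwarded_headers(incoming_headers: dict) -> dict:
    """When service A calls service B, copy the caller's bearer token
    onto the outgoing request so B can validate the same JWT and still
    see the original user's identity."""
    auth = incoming_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise ValueError("missing bearer token")
    return {"Authorization": auth, "Accept": "application/json"}
```

Service B then verifies the token exactly as A did; neither service ever handles the user's credentials.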
I'm trying to understand how best to apply the OAuth 2.0 grant types to a microservice architecture I am working on. Here's the situation...
I have a Single-Page Application / Mobile App acting as a client running in a web browser (the browser acting as the user agent) or on a mobile phone. I use the Implicit Grant defined in RFC 6749, section 4.2 to authenticate a user and acquire an access token that the app uses to access some externally exposed API.
The architecture I am dealing with is a collection of microservices that call on one another. For example, consider an externally exposed API serviceA and internal APIs serviceB and serviceC. Let's say serviceA depends on serviceB which subsequently depends on serviceC (A --> B --> C).
My question is, what is the typical authorization flow for this situation? Is it standard to use Implicit Grant for the SPA to acquire an access token and then use the Client Credentials Grant defined in RFC 6749, section 4.4 to acquire an access token for the machine to machine interaction between serviceB and serviceC?
For your single page application, use the Implicit grant, which is designed for browser applications: they cannot hold any secrets, and with the Implicit grant the tokens stay in the browser (because they are returned in the hash fragment of the redirect URL).
For the mobile app, take a look at OAuth 2.0 for Native Apps, which recommends the use of the Auth code grant. It also describes implementation details for common platforms and security considerations.
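OAuth 2.0 for Native Apps pairs the Auth code grant with PKCE (RFC 7636), so that a native app, which cannot keep a client secret, can still prove that the token request comes from the same app that started the flow. Generating the verifier/challenge pair is only a few lines:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Create a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    The challenge is sent in the authorization request; the verifier is
    sent later with the token request, tying the two together."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The authorization server stores the challenge, then recomputes it from the verifier at token time and rejects the request on a mismatch.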
There is a new grant described in the OAuth 2.0 Token Exchange RFC that would suit your needs for chained calls between services:
... An OAuth resource server, for example, might assume
the role of the client during token exchange in order to trade an
access token, which it received in a protected resource request, for
a new token that is appropriate to include in a call to a backend
service. The new token might be an access token that is more
narrowly scoped for the downstream service or it could be an entirely
different kind of token.
But I don't know whether Auth0 supports it. If it doesn't, I would probably pass the original access token from serviceA to serviceB and serviceC. The internal services could also be secured at the network level (e.g. they could be called only from other services).
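The token exchange itself is an ordinary form POST to the token endpoint; a sketch of the request body per RFC 8693, with a placeholder audience value:

```python
import urllib.parse

def build_token_exchange_body(subject_token, audience):
    """Form body for an RFC 8693 token exchange request: the service
    trades the access token it received for a new token scoped to the
    downstream service named by the audience parameter."""
    return urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,
    })
```

So serviceB would send its incoming token as subject_token with audience serviceC, and use the returned token for the B-to-C call.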
If serviceB and serviceC are internal and will never be called from an external client, then the Client Credentials Grant would be a good candidate, as the client in that case is also a resource server.
You could also look at passing the same bearer token between services, provided the SPA (which requests the token initially) obtains consent for all scopes which may be used by the other services, and the "audience" of the token allows for all the possible resource servers (services).
I don't think either are best practice and there are tradeoffs with both ways.
I would honestly recommend that each backend service implement the Authorization Code grant. That is, have an endpoint exposing the redirect to your provider. Then have each frontend app go to that endpoint to trigger the OAuth flow. After the flow has been completed, handle the authorization part at the callback URL and return a token which will be stored somewhere on the frontend.
Hope this helps.