I am trying to enforce OAuth 2 in Bluemix for a suite of microservices via API Management. Am I right in the assumption that the API Management service will always act as an OAuth 2 Authorization Provider as opposed to only checking the validity of access tokens as an enforcement gateway? The latter would be what I really want to have though.
Ideally, I would like to specify my own OAuth provider implementation, with the means to issue, validate and revoke access tokens. The API Management service would then act as a proxy in front of my services, ensuring that incoming requests carry a valid token.
Reading the documentation, I am struggling to understand how API Management was designed. Could somebody point me in the right direction, or alternatively help me understand what the API Management service's role is in the OAuth flows listed under Security Schemes?
Thanks.
As far as I understand, applications that let us log in with our accounts from other services use OpenID Connect (a profile of OAuth 2.0).
OAuth is for authorization and OIDC is for authentication (it adds the ID Token and the UserInfo endpoint).
So, before OIDC, was it not possible to log in to an application using another application's account with plain OAuth? (If it was possible, how?)
If plain OAuth can't be used for authentication, what is/was it used for?
I mean what does it do with 'authorization' exactly?
What does it get from the resource service with the access token?
I have always found the jargon around this unhelpful, so I understand your confusion. Here is a plain-English summary:
OAuth 2.0
Before OIDC, apps used OAuth 2.0 to get tokens, and this involved optional user consent. The process of getting tokens was termed 'delegation'.
In practical terms, though, all real-world OAuth 2.0 providers also included authentication in order for their systems to be secure. How authentication is done is not defined in the OAuth specifications.
OAuth is primarily about protecting data, where scopes and claims are the mechanisms. These links provide further info:
IAM Primer
Scope Best Practices
Claims Best Practices
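To make the scopes-and-claims idea a bit more concrete, here is a small Python illustration of what a decoded access token payload might carry and how an API could authorize on it; the claim names and values beyond the registered ones (iss, sub, aud, exp) are made up for the example:

    # Illustrative only: a decoded OAuth 2.0 access token payload (JWT format).
    # iss/sub/aud/exp are registered claims; "scope", "role" and "tenant_id"
    # are example values, not anything a particular provider mandates.
    access_token_payload = {
        "iss": "https://issuer.example.com",   # who issued the token
        "sub": "user-12345",                   # the user (or client) it was issued for
        "aud": "https://api.example.com",      # the API it is intended for
        "exp": 1735689600,                     # expiry as a Unix timestamp
        "scope": "orders:read orders:write",   # coarse-grained areas of access
        "role": "customer",                    # finer-grained claims the API can authorize on
        "tenant_id": "acme-eu",
    }

    # The resource server (API) makes its authorization decisions from these values:
    def can_write_orders(payload: dict) -> bool:
        return "orders:write" in payload.get("scope", "").split()

    print(can_write_orders(access_token_payload))  # True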
OIDC
This just adds a clearer definition of how the messages before and after authentication should work:
A client simply includes an openid scope to use OpenID Connect
A client may force a login during a redirect via a prompt=login parameter
A client may request an authentication method via an acr_values parameter
The client receives an ID token (an assertion) once authentication is complete, can digitally verify it if required, and can then use the information in it (e.g. a user name)
OIDC still does not define how the actual authentication works though.
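As a rough sketch of the parameters listed above (the provider URL, client ID, redirect URI and acr value are hypothetical), a client might build the OpenID Connect authorization request like this:

    from urllib.parse import urlencode

    # Minimal sketch of an OpenID Connect authorization request. The provider
    # (idp.example.com), client_id and acr_values are made-up examples.
    params = {
        "response_type": "code",
        "client_id": "my-web-app",
        "redirect_uri": "https://app.example.com/callback",
        "scope": "openid profile email",   # "openid" is what switches on OIDC
        "prompt": "login",                 # force an interactive login
        "acr_values": "urn:example:mfa",   # request a particular authentication method
        "state": "xyz123",
        "nonce": "abc456",
    }
    authorize_url = "https://idp.example.com/authorize?" + urlencode(params)
    print(authorize_url)

    # Once the flow completes, the token endpoint returns an access token (for
    # calling APIs) plus an ID token asserting who was authenticated.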
Use them together
Pretty much all OAuth-secured apps (and libraries) these days use both together, so that authentication and delegation both use standards-based solutions. This gives you the best application features and design patterns for doing the security well.
I'm looking for an API management system that will integrate with Keycloak (Keycloak should provide all of the authentication).
Wicked.io (Kong-based) looks nice.
I've tried adding OAuth2 as an auth method, but wicked keeps generating its own client IDs and secrets.
Is there any way to do this with wicked?
wicked.haufe.io maintainer here.
Combining wicked/Kong with KeyCloak may not be a very good idea. Wicked solves a lot of the problems KeyCloak is intended to solve as well, and the overlap is quite big, e.g. doing identity federation (SAML, OAuth2) to an OAuth2 workflow and all that.
KeyCloak and wicked/Kong follow different principles though; KeyCloak issues tokens which are validated via a KeyCloak library inside your services. This more or less replaces the API Gateway - it's implemented inside your services, based on the KeyCloak library.
wicked/Kong, in contrast, is built up differently: the big differentiators are the API Portal wicked provides on top of Kong, and the fact that you have a dedicated API Gateway (Kong). Wicked provides you with its own client credentials, and it also wants to do the entire authentication and authorization bit. What you get in exchange is the self-service API Portal; if you don't need that, you will probably not need wicked.
What you can do is to federate a KeyCloak OAuth2 flow into wicked, by having KeyCloak act either as a SAML or OAuth2 identity provider. You would then register your KeyCloak Authorization Server as an identity provider with wicked (using a single service provider (SAML) or client (OAuth2)). The "but" here is that you would still need to provide an entry point to your services which do not go via the KeyCloak library.
wicked/Kong always works like this: it takes away the need to implement authentication/authorization inside your services; instead, you check the X-Authenticated-UserId and X-Authenticated-Scope headers. With wicked, the user ID will typically contain something like sub=<some id>, depending on which type of identity provider(s) you have configured wicked to use. But this approach usually replaces KeyCloak. The upside is that you have one single entry point to your services (=Kong), and a quite lightweight way to protect arbitrary services - not only services written in languages KeyCloak supports! - behind an API Gateway, while providing self-service access (with configurable plans, documentation, ...) via an API Portal.
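To illustrate what that looks like from the service's side, here is a minimal sketch (using Flask; the route, scope name and port are made-up examples) of a service that only trusts the headers the gateway injects:

    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)

    # Sketch of a service deployed behind wicked/Kong: it does no OAuth2 itself
    # and only reads the headers the gateway injects. The scope name is an example.
    @app.route("/orders")
    def list_orders():
        user_id = request.headers.get("X-Authenticated-UserId")    # e.g. "sub=<some id>"
        scope = request.headers.get("X-Authenticated-Scope", "")

        if not user_id:
            abort(401)   # the gateway did not authenticate the caller
        if "read_orders" not in scope.split():
            abort(403)   # authenticated, but missing the required scope

        return jsonify({"user": user_id, "orders": []})

    if __name__ == "__main__":
        # The service must only be reachable through the gateway (e.g. via network
        # policy); otherwise anyone could forge these headers.
        app.run(port=3001)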
All these things are obviously somewhat complicated; wicked is extremely flexible in its usage, but it's not really meant to be combined with KeyCloak (and vice versa). It all boils down to really understanding the use case and finding a solution architecture which solves it in the best way.
If your use cases involve an API Portal and API documentation (they probably should), going with wicked/Kong may be a good option. If they don't, you may be happier sticking with KeyCloak (which you can see as a headless API Management system with a decentralized gateway, if you want).
Disclaimer: My knowledge on KeyCloak is somewhat outdated; it may be that there are updates which also go into the API Portal direction, but of this I am not aware.
I'm trying to figure out the best way to deploy an API in Amazon API Gateway. I'm getting totally confused about the appropriate authorization to use.
The API will be used by our customers for their own custom developed apps.
We don't need to provide end-user authentication. This will be handled by our customers, based on the specific requirements of their apps.
What we need to do is provide a way for our customers' apps to authenticate against our API.
My understanding is that I have the following options:
AWS_IAM - This may not be appropriate, as it would mean adding customer credentials to our Amazon account.
Cognito User Pool Authorizer - This seems to be designed mostly for user authorization, rather than client authorization.
Custom Authorizer - Presumably this can be tailored to our specific requirements, but it would need a lot of code to be built from scratch.
API Key Authentication - Quick and easy, but simply relying on a key header doesn't seem particularly secure.
I had originally assumed that there would be some straightforward way to enable OAuth2 authentication. For our use case, the "Client Credentials" flow would have been suitable. However, from the research I've done, it sounds like OAuth2 authentication would require a custom authorizer Lambda. I'm really not keen on the idea of having to implement a full OAuth2 service just to authenticate the app; it would simply cost too much to build something like this.
Also, if we're writing our own full custom OAuth2 authorizer, and writing all of the functionality for the API itself, I'm not sure what value API Gateway is actually providing us.
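From what I can tell, a minimal token-validating custom authorizer would look roughly like the sketch below (Python; the audience and the way the signing key is obtained are my assumptions, and a real implementation would fetch keys from the issuer's JWKS endpoint):

    import os
    import jwt  # PyJWT; assumes RS256-signed access tokens from whatever issues them

    PUBLIC_KEY = os.environ["TOKEN_SIGNING_PUBLIC_KEY"]   # assumption: PEM public key in env
    EXPECTED_AUDIENCE = "https://api.example.com"         # assumption: our API's identifier

    def lambda_handler(event, context):
        # TOKEN-type authorizer: API Gateway passes the Authorization header value here.
        token = event["authorizationToken"].replace("Bearer ", "")
        try:
            claims = jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"],
                                audience=EXPECTED_AUDIENCE)
            effect, principal = "Allow", claims.get("sub", "client")
        except jwt.PyJWTError:
            effect, principal = "Deny", "anonymous"

        # API Gateway expects an IAM policy back from the authorizer.
        return {
            "principalId": principal,
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }],
            },
        }

The Lambda itself is small; the real work (and cost) would be in running the token issuer that our customers' apps authenticate against.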
Is there some best practice, or standard for authenticating API clients for API Gateway?
What we need to do doesn't seem like a particularly unusual thing, there must be some standard way people do this.
Any suggestions would be very much appreciated.
Does OpenID support a two-way exchange of tokens anywhere in the spec? Specifically, is there some way for both parties to share tokens with each other, so that they can use each other's services?
I've looked through the spec, but can't see anything detailing any scenarios like this.
An app I'm working on has integrated with a trusted OpenID provider, which we'll call Acme.
We'd also like to provide access tokens and refresh tokens to Acme, as they'd like to access features of our service as well.
It seems natural that, during our interactions to get tokens from Acme, we'd also like to expose our tokens to them.
Is this part of the spec in any way? Or is the only way to do this to become a full identity provider ourselves?
You could include the tokens as part of a request object, see: http://openid.net/specs/openid-connect-core-1_0.html#RequestObject, but that would depend on a pair-wise agreement with Acme, since they'd need to handle the non-standardized request object contents.
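For illustration, such a request object might look roughly like the sketch below (Python with PyJWT; the acme_partner_tokens claim, the secret and all identifiers are made up, and Acme would need to agree to interpret the extra claim since it's outside the spec):

    import time
    import jwt  # PyJWT

    # Non-standard sketch: an OIDC request object carrying our own tokens as an
    # extra, pair-wise agreed claim. Everything beyond the standard parameters is
    # an assumption that Acme would need to support explicitly.
    request_object = {
        "iss": "my-client-id",
        "aud": "https://acme.example.com",       # the OpenID Provider (Acme)
        "response_type": "code",
        "client_id": "my-client-id",
        "redirect_uri": "https://app.example.com/callback",
        "scope": "openid",
        "iat": int(time.time()),
        # Tokens issued by our own service, for Acme to call us back with:
        "acme_partner_tokens": {
            "access_token": "<access token issued by our service>",
            "refresh_token": "<refresh token issued by our service>",
        },
    }

    # Signed with the client secret shared with Acme (HS256 keeps the sketch simple;
    # a real deployment might sign with a registered key instead).
    signed_request = jwt.encode(request_object, "client-secret-shared-with-acme",
                                algorithm="HS256")

    # The signed JWT is then sent as the "request" parameter of the authorization call.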
The best way forward is to become a provider yourself so you can leverage all the features of the various flows without being dependent on a pair-wise agreement and accompanying implementation.
It sounds like you're confusing OpenID Connect and plain OAuth2 to some extent.
OpenID Connect is a specification for identifying end users to a client application, based on their authentication at the OpenID Provider. It's not clear from your question whether end users are even part of the picture, so even plain OAuth2 may not be relevant (unless you are just using the "client credentials" grant).
Neither spec says anything about mutual exchange of tokens. It would probably help if you describe the interactions you anticipate in more detail and which grants you expect to use. Who will authenticate to your identity provider and what would be a typical client application?
I am learning OAuth2 for the first time. I am going to use it to provide authentication for some simple web services using a two-legged approach.
According to what I have read, the flow goes like this: the web service client supplies some kind of credential to the OAuth server (I'm thinking of using a JWT). If the credentials are valid, the OAuth server returns an access token. The web service client then supplies the access token when calling the web service endpoint.
Here is my question: why not just supply the JWT when making a request to the endpoint? Why is OAuth's flow conceived this way? Why not just supply the JWT to the endpoint and use that for authentication? What is the advantage of the extra step of getting an access token?
Thanks!
You can certainly supply the JWT directly to the web service. The question is how you generate it in a way that the service trusts.
A JWT is an access_token, but not all access_tokens are JWTs.
Your client can issue a JWT, sign it with a key (or a cert) and then send it to the API. The advantage of having a 3rd party (an Issuer) is that you can separate authentication from issuing tokens. Clients can authenticate in multiple ways (e.g. usr/pwd, certs, keys, whatever) and then use the JWT to call your API.
The additional abstraction gives you more flexibility and management scalability. For example: if you have one consumer of your API, then you are probably OK with a single credential (or JWT, or whatever). If you plan for your APIs to be consumed by many clients, then handing that responsibility to a specialized component (e.g. the issuer) makes more sense.
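As a sketch of the "client issues and signs its own JWT" option above (the key pair is generated inline so the example runs; all names and claim values are illustrative):

    import time
    import jwt  # PyJWT
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a throwaway RSA key pair for the example; in practice the client
    # holds the private key and the API (or issuer) publishes the public key.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = key.private_bytes(serialization.Encoding.PEM,
                                    serialization.PrivateFormat.PKCS8,
                                    serialization.NoEncryption())
    public_pem = key.public_key().public_bytes(serialization.Encoding.PEM,
                                               serialization.PublicFormat.SubjectPublicKeyInfo)

    # Client side: issue and sign a short-lived token, sent as "Authorization: Bearer <token>".
    claims = {
        "iss": "reporting-client",
        "aud": "https://api.example.com",
        "iat": int(time.time()),
        "exp": int(time.time()) + 300,
    }
    token = jwt.encode(claims, private_pem, algorithm="RS256")

    # API side: verify the signature and claims before serving the request.
    verified = jwt.decode(token, public_pem, algorithms=["RS256"],
                          audience="https://api.example.com")
    print(verified["iss"])

With a separate issuer, the client authenticates to the issuer however it likes and receives a JWT signed with the issuer's key; the API-side code stays the same, it just trusts the issuer's public key instead of one key per client.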
OAuth, BTW, was designed for a specific use case: delegating access to an API to another system on your behalf. You grant system-A access to resources on system-B, on your behalf, with a permission scope.