Centralized management server for many systems (OAuth 2.0)

We intend to create a REST API that will be implemented on 100+ servers for use by a Centralized Management Portal (CMP). This CMP will itself have full access to the API (for scheduled tasks etc.), and authorization is done on the CMP itself.
As an added security measure, the API on all 100+ servers can only be accessed from the CMP's IP address.
In this circumstance, what would be the security advantage, if any, of using OAuth2 rather than a set of API keys (unique to each server) stored as environment variables on the CMP? Upon reading this, it seems that our use case is somewhat different.
Ultimately, we were thinking that we could just open the CMP only to a subset of IP addresses for those who need to access it; however, this may not be possible later down the track.
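For concreteness, here is roughly what we have in mind for the API-key option: the CMP keeps one key per server in an environment variable, and each server compares the presented key in constant time. All names below are illustrative, not an actual design:

    import hmac
    import os

    import requests


    def call_managed_server(base_url: str, key_env_var: str) -> dict:
        # CMP side: the per-server key lives only in an environment variable
        api_key = os.environ[key_env_var]  # e.g. SERVER42_API_KEY (illustrative)
        resp = requests.get(
            f"{base_url}/status",  # hypothetical endpoint
            headers={"X-Api-Key": api_key},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()


    def is_authorized(presented_key: str, expected_key: str) -> bool:
        # Managed-server side: constant-time comparison resists timing attacks
        return hmac.compare_digest(presented_key, expected_key)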

I would think about the API from the viewpoint of its clients:
How would a web or mobile client call the API securely?
How would the end user identity flow to the API?
If you don't need to deal with either of these issues, then OAuth doesn't provide compelling benefits, other than giving you some improved authorization mechanisms (see the sketch after this list):
Scopes
Claims
Zero Trust
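As a rough illustration of what those mechanisms buy you, here is a sketch of a scope-plus-claim check against a token payload that has already been signature-verified elsewhere (the scope and claim names are invented for illustration):

    def authorize(payload: dict, required_scope: str) -> bool:
        # Scope check: was the token ever granted this capability?
        scopes = payload.get("scope", "").split()
        if required_scope not in scopes:
            return False
        # Claim check: a finer-grained decision about the caller itself
        return payload.get("role") in ("admin", "operator")


    payload = {"scope": "servers:read servers:write", "role": "admin"}
    assert authorize(payload, "servers:write")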
USER vs. INFRASTRUCTURE SECURITY
I would use OAuth when user level security is involved, rather than for your scenario, which feels more like infrastructure security.
Some systems, such as AWS or Kubernetes, give you built-in infrastructure policies, where API hosts can be configured to only allow calls from hosts with a CMP role.
I would prefer this type of option for infrastructure security if possible, rather than writing code to manage API keys.

Related

Keycloak (or another system) implementing a composite IDP via automation/APIs to support a large number of IDPs

I would like to support hundreds of thousands of IDPs in my environment (for example, 300k SAML 2 IDPs or OIDC OPs). If I were to get picky, I would like to support multiple protocols for authentication (SAML, SAML 2, OAuth, OAuth 2, OIDC), but that isn't a strict requirement. Specifically, I want a login page on an IDP that supports authentication, password reset, and SSO to SPs for each IDP stack, so to speak.
To do this, I would like to be able to automate the deployment and configuration of new IDPs using APIs. Additionally, I have a single identity store that would back all of these IDPs, hopefully retaining global uniqueness across usernames by leveraging the IDP (maybe its name) as an additional differentiator; I'm thinking of something conceptually like an (IDP x Username) tuple. I would like each IDP to reuse a templated UI but have its own API endpoints, keys, etc. to support its respective flows securely.
I'm not an expert in Keycloak and would like some advice on the mechanisms I can use in Keycloak to support this, if at all, and on whether Keycloak could support this from a performance/volume perspective. I'm happy to write custom code/extensions, like UIs and storage integrations, but I would prefer to leave the identity management/IDP tasks to Keycloak where possible. I'm also assuming that deployment would not actually deploy a new artifact onto the network; it would just add new endpoints to a currently running Keycloak system. I would like validation of that assumption, if possible (the sketch below shows the kind of automation I have in mind).
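For reference, this is the kind of per-IDP automation I'm picturing, sketched against Keycloak's Admin REST API as I understand it (host, credentials, and theme name are placeholders, and the endpoints assume a recent Keycloak without the legacy /auth prefix):

    import requests

    BASE = "https://keycloak.example.com"  # placeholder host


    def admin_token(username: str, password: str) -> str:
        # Password grant against the built-in admin-cli client in the master realm
        resp = requests.post(
            f"{BASE}/realms/master/protocol/openid-connect/token",
            data={
                "grant_type": "password",
                "client_id": "admin-cli",
                "username": username,
                "password": password,
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]


    def create_idp_realm(token: str, realm_name: str) -> None:
        # Each "IDP" becomes a realm: its own endpoints, keys, and login theme
        resp = requests.post(
            f"{BASE}/admin/realms",
            json={"realm": realm_name, "enabled": True, "loginTheme": "my-template"},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()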
Long story short, I would like all the IDPs to support the same authentication flows/interaction patterns but look like unique IDPs on the network while limiting the number of deployed software components.
I'm happy to take pointers, like "this might work for you". I'm not really looking for a full design here.

Best practices for securely granting user credential access to other internal services (API key)?

I have a Ruby on Rails application with a database that stores sensitive user information (hashed with Devise). I now need to pass some of this sensitive information to another internal service on another server; that service uses it to make calls to third-party APIs, so it needs a way to request the information from the RoR app.
What's the best approach to something like this? My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint), the same way developer keys give access to a subset of API endpoints. Is this secure enough as long as I hash the API key? What's the best approach to passing sensitive information around between internal services?
Private APIs
My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint), the same way developer keys give access to a subset of API endpoints
Well, private endpoints or private APIs don't really exist if they're protected only by an API key. From a web app, you only need to view the HTML source code to find the API keys. On mobile devices, you can see how easy it is to reverse engineer API keys in this series of articles about Mobile API Security Techniques. While the articles are in the context of mobile devices, some of the techniques used are also valid for other types of APIs. I hope you can now see how someone could grab the API key and abuse the API you are trying to secure.
Now, even if you don't expose the API key in a mobile app or web app, the API is still discoverable, especially if the endpoint is served by the same API used for the other public endpoints. This is made even easier when you declare in robots.txt that bots should not access some of the endpoints, because that is the first place attackers look when trying to enumerate attack vectors against your APIs.
Possible Solutions
Private API Solution
What's the best approach to something like this? My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint)
In order to have a private API, the server hosting it needs to be protected by a firewall and locked down to the other internal server consuming it, with certificate pinning and possibly also by IP address. To be able to properly secure and lock down the internal server hosting the supposedly private API, it MUST not serve any public requests.
Certificate Pinning:
Pinning effectively removes the "conference of trust". An application which pins a certificate or public key no longer needs to depend on others - such as DNS or CAs - when making security decisions relating to a peer's identity. For those familiar with SSH, you should realize that public key pinning is nearly identical to SSH's StrictHostKeyChecking option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key.
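As a rough sketch of what that could look like between two internal servers, here is one way to pin by certificate fingerprint in Python. Trust comes from the pin rather than the CA chain (the SSH-style model described above), which is why chain verification is deliberately disabled; the fingerprint value is a placeholder:

    import hashlib
    import socket
    import ssl

    # Placeholder: SHA-256 of the internal server's DER-encoded certificate
    PINNED_SHA256 = "replace-with-the-known-good-hex-fingerprint"


    def assert_pinned(host: str, port: int = 443) -> None:
        # The fingerprint comparison below is the trust decision, so
        # CA-chain and hostname verification are disabled on purpose.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        if der_cert is None or hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
            raise ssl.SSLError(f"certificate pin mismatch for {host}")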
Database Direct Access Solution
What's the best approach to passing sensitive information around between internal services?
Personally, I would prefer to access the database directly from the other server and have the database software itself configured to only accept requests from specific internal servers, for specific users, with the least privileges possible to perform the action they need. Additionally, you would employ the firewall lock-down and use certificate pinning between the internal servers.
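A minimal sketch of that setup, assuming PostgreSQL and psycopg2 (role, table, and host names are placeholders; the per-server network restriction itself would live in pg_hba.conf or the firewall):

    import psycopg2

    # Run once as an administrator: a login role that can only read one table
    SETUP_SQL = """
    CREATE ROLE internal_reader LOGIN PASSWORD 'replace-me';
    GRANT CONNECT ON DATABASE app TO internal_reader;
    GRANT SELECT ON user_credentials TO internal_reader;
    """


    def read_rows():
        # The consuming internal service connects only as the restricted role
        conn = psycopg2.connect(
            host="db.internal",  # placeholder; also restricted in pg_hba.conf
            dbname="app",
            user="internal_reader",
            password="replace-me",
        )
        with conn, conn.cursor() as cur:
            cur.execute("SELECT user_id, third_party_token FROM user_credentials")
            return cur.fetchall()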
Conclusion
No matter what solution you choose, place your database with the sensitive data on a server that only hosts that database and is very well locked down to your internal network.
Anyone needing to access that sensitive data MUST have only read privileges for that specific database table.

OAuth client implementation w/ multiple resources, multiple auth servers

I'm trying to understand OAuth best practice implementation strategies for systems requiring access to protected resources backed by different authorization servers. The default answer is to use the access tokens provided by each authorization service and write the logic to store them on an as-needed basis, but the use case of systems requiring multiple, federated protected resources seems common enough that there might be a protocol/framework-level solution. If so, I haven't been able to find it.
Here's a hypothetical example to clarify:
I'm a user with an account on Dropbox, Google Drive, and Boxx. I'd like to make a backend API to report the total number of files I own across all three systems, i.e., Result = FileCount(Dropbox) + FileCount(Drive) + FileCount(Boxx). How do I organize the system in such a way that I can easily manage authorizations? A few cases:
Single-account: If I only have, say, a Drive account, the setup is easy. There's one protected resource (my folders), one authorization server (Google), and so I only have one token to think about. By changing the authorization endpoints and redefining the FileCount function, I can make this app work for any storage client I care about (Dropbox, Google, Boxx).
Multi-account: If I want to aggregate data from each protected resource, I now need three separate authorizations, because each protected resource is managed by a separate authorization server. AFAIK, I can't "link" these clients to use a single authorization server. As a result, if I have N protected resources backed by N authorization servers, I'll have N access tokens to manage for a given request/session (see the sketch below for the bookkeeping this implies). Assuming this is true, what provisions do software frameworks provide to handle this (any example in any language is fine)? It just seems too common a problem not to be abstracted.
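To make that bookkeeping concrete, this is the shape of what I'd otherwise write by hand: one token set per authorization server, keyed by provider (the names and aggregation are illustrative):

    from dataclasses import dataclass
    from typing import Dict, Optional


    @dataclass
    class TokenSet:
        access_token: str
        refresh_token: Optional[str] = None


    # One independently-authorized token set per authorization server
    tokens: Dict[str, TokenSet] = {}


    def provider_file_count(provider: str) -> int:
        token = tokens[provider].access_token
        # Placeholder for the provider-specific REST call made with `token`
        raise NotImplementedError


    def total_files() -> int:
        # Result = FileCount(Dropbox) + FileCount(Drive) + FileCount(Boxx)
        return sum(provider_file_count(p) for p in ("dropbox", "drive", "boxx"))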
The closest related question I can find is probably this one. The accepted answer seems completely reasonable: one application should not be able to masquerade as another without explicit consent. What I'm looking for is (I think) slightly different: some standard methodology/framework/approach to managing multiple simultaneous access tokens per session. I've also wondered about the possibility of an independent authorization server that can proxy the others as needed and manage the token bookkeeping (still requiring user consent for each), but I think this amounts to the same thing.
Thanks in advance.

Central JWT management system for my micro-service based architecture

We are building our applications in a microservices-based architecture. As is typical with microservices, we now see a lot of cross-service interactions happening between services.
In order to safeguard the endpoints, we plan to implement JWT-based authentication for these secure exchanges.
There are 2 approaches we see helping us achieve it:
Embed a JWT engine in each application to generate the token (consumer side) and evaluate it (provider side). With an initial exchange of keys, the token exchange works smoothly for any future communications.
Have an external (to the application) JWT engine that sits in between all microservice communications for the distributed application and takes care of the entire token life cycle, including encryption/decryption and validation.
There are a lot of ways to do option #1, as listed on https://jwt.io, but considering the overhead that token generation and management add to a microservice, we prefer to go with the 2nd option and have a centralised gateway.
After quite some research and looking at various API gateways, we have not yet come across a lightweight solution/tool that serves our need and gives us a centralised engine for one application comprised of many microservices.
Does anyone know of such a tool/solution?
If you have any other inputs on this approach, please let me know.
I also prefer option 2, but why are you looking for a framework?
The central application should only be responsible for managing the private key and issuing the tokens. Introducing a framework to solve one service could be excessive.
You could also think about implementing a validation service, but since the applications are yours, I suggest using an asymmetric key and verifying the token locally instead of executing remote validation requests to the central application. You can provide a simple library to your microservices to download the key and perform the validation. Embed any of the libraries from jwt.io or build it from scratch. Validating a JWT is really simple (see the sketch below).
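For example, a minimal sketch with the PyJWT library, assuming RS256 and a public key the microservice has already downloaded from the central application (the audience and issuer values are placeholders):

    import jwt  # PyJWT


    def validate(token: str, public_key_pem: str) -> dict:
        # Local verification only: no network round-trip to the central issuer
        return jwt.decode(
            token,
            public_key_pem,
            algorithms=["RS256"],            # pin the algorithm explicitly
            audience="my-microservice",      # placeholder
            issuer="https://auth.internal",  # placeholder
        )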
If you need to reject a token before its expiration time, for example using a blacklist, then a central service would be needed. But I do not recommend this scheme, because it breaks JWT statelessness.
Both scenarios could be implemented in Spring Cloud Zuul.
For more info:
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_router_and_filter_zuul
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_configuring_authentication_downstream_of_a_zuul_proxy

Spring Cloud OAuth2: Resource server with multiple Authorization server

We are developing an application in a microservice architecture which implements single sign-on using Spring Cloud OAuth2 against multiple OAuth2 providers like Google and Facebook. We are also developing our own authorization server, which will be integrated in the next release.
Now, for our microservices, which are resource servers, I would like to know how to handle multiple token-info-uri or user-info-uri values pointing at multiple authorization servers (e.g. Facebook or Google).
This type of situation is generally solved by having a middle-man: a single entity that your resource servers trust and that can be used to normalize any differences that surface from the fact that users may authenticate with distinct providers. This is sometimes referred to as a federation provider.
Auth0 is a good example on this kind of implementation. Disclosure: I'm an Auth0 engineer.
Auth0 sits between your app and the identity provider that authenticates your users. Through this level of abstraction, Auth0 keeps your app isolated from any changes to and idiosyncrasies of each provider's implementation.
(emphasis is mine)
It's not that your resource servers can't technically trust more than one authorization server; it's just that moving that logic out of the individual resource servers into a central location will make it more manageable and decoupled.
Also keep in mind that authentication and authorization are different things, although we are used to seeing them together. If you're going to implement your own authorization server, you should make that the central point that can do the following (a sketch of the normalization step follows the list):
handle multiple types of authentication providers
provide a normalized view of the user profile to downstream resource servers
provide the access tokens that can be used by your client application to make authorized requests to your microservices
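For instance, the normalization step can be as small as mapping each provider's profile shape onto your own. The field names below reflect typical Google (OIDC userinfo) and Facebook (Graph API /me) responses, but treat them as examples, not a contract:

    def normalize_profile(provider: str, raw: dict) -> dict:
        # Map provider-specific profiles onto one shape that downstream
        # resource servers can rely on. Field names are examples only.
        if provider == "google":
            return {"id": f"google|{raw['sub']}",
                    "email": raw.get("email"),
                    "name": raw.get("name")}
        if provider == "facebook":
            return {"id": f"facebook|{raw['id']}",
                    "email": raw.get("email"),
                    "name": raw.get("name")}
        raise ValueError(f"unknown provider: {provider}")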
