Internal API call between two applications behind Spring Cloud Gateway - spring-security

Behind Spring Cloud Gateway, we have two API applications, A1 and A2. The gateway handles OAuth2 authentication with Spring Security and passes the request, along with the token, to either A1 or A2 based on the route. A1 and A2 verify the token on each API call. Now A2 needs to call an A1 API.
What is the best way to implement such a scenario with OAuth2 enforced at the gateway?
Could A2 call A1 directly without going through the gateway, and if so, how would the call pass token validation?

This is an API architecture question, and such questions are ultimately almost always about clients rather than about the APIs themselves.
COMMON REQUIREMENTS
Core APIs / microservices should be able to call each other freely, without being impacted by OAuth constraints around scopes / audience.
What each API client can do with a token should be controllable: if someone steals a token from a web UI / mobile app, they should not be able to perform high-privilege operations outside the scope of that app.
MY OPINIONS
Separate APIs into two layers in order to control client access to Core APIs, especially high-privilege operations.
Apply OAuth security, such as token validation, in Entry Point APIs.
Decouple Core APIs (A1 and A2) from OAuth if possible. This is often achieved via a locked-down network / virtual private cloud.
Pass user context / claims to A1 and A2 via a non-OAuth mechanism such as HTTP headers, as in the sketch below.
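As a minimal sketch of that last point, assuming Spring Cloud Gateway with Spring Security's OAuth2 resource-server support (the X-User-* header names are hypothetical; agree your own internal contract):

```java
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationToken;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

// Gateway-side filter: once the token has been validated, forward selected
// claims to the Core APIs as plain HTTP headers, so A1 and A2 never need to
// parse or validate OAuth tokens themselves.
@Component
public class ClaimsRelayFilter implements GlobalFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        return exchange.getPrincipal()
                .cast(JwtAuthenticationToken.class)
                .map(auth -> {
                    ServerHttpRequest request = exchange.getRequest().mutate()
                            // Hypothetical header names; pick your own contract.
                            .header("X-User-Id", auth.getToken().getSubject())
                            .header("X-User-Scope", auth.getToken().getClaimAsString("scope"))
                            .build();
                    return exchange.mutate().request(request).build();
                })
                .defaultIfEmpty(exchange)
                .flatMap(chain::filter);
    }
}
```

This only stays safe if A1 and A2 are reachable exclusively from the gateway's network, which is the locked-down network point above.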
TO SUMMARISE
A generic gateway that allows any client to call any Core API operation based on a path is not a security design that will scale. My blog post explores this type of question.
Quite a deep topic, but I hope this gives you a few pointers on how to grow your APIs.

Related

Microservice architecture structure with docker-compose : .NET 6

An ex-employee planned a microservice architecture which is being implemented now. I have a few questions regarding the design, and I'd highly appreciate your feedback.
Explanation
The Dematerialized UI has a matching Dematerialized API.
The Dematerialized API validates the user and generates a token via the SSO library.
The Flight API does the I/O validation and validates the request via the Validate Request microservice.
The Flight API calls the Booking API to get some bookings based on the UserId.
The Flight API calls the Print Booking API to generate messages using the Generate Message microservice.
The Print Booking API must call the Data Access API to get data and then call the Generate PDF microservice.
The Data Access API calls the database for data.
My Project Structure
FlightBookingsMicroserice.V1          //solution
  ApiGatways                          //folder
    DMZ.API/DMZ.API.csproj            //folder/project
  BuildingBlocks
    EventBus/EventBus.csproj
    EventBus/EventBusRabbitMQ
  Services
    SSO
      SSO.API/SSO.csproj
      SSO.UnitTests
    Flight
      Flight.API/Flight.API.csproj
      Flight.UnitTets
    //Similar for all
    ValidationRequest
    Booking
    PrintBooking
      PrintBooking.API.csproj
    DataAccess
      DataAccess.API.csproj
    GeneratePDF
    GenerateMessage
  UI
    UI
  Docker-compose
Questions
1. Should I be using Ocelot in DMZ.API.csproj, the Flight API, and the Print Booking API?
2. Is my project structure a microservice way of development?
3. Should I continue to use ASP.NET Core Web API with .NET 6 for the Dematerialized API in orange, the Function APIs in blue, and the microservice projects in purple?
4. For validation, since the SSO token is passed from the Dematerialized UI, what if the token expires while CRUD operations have already been performed for some stages? (Rolling back changes is a hassle.)
5. Should each API access an identity server, validate the user passed to it, and generate its own token for its services in purple?
Thank you in advance.
The core question is whether you really need all those services and whether you are perhaps making things too complicated. The important thing is to consider carefully, and be able to justify, why you want to go down this route.
If you make synchronous API calls between the services, that creates coupling and, in the long run, a distributed monolith.
For question #4, you typically use one access token for the user to access the public service, and then a different set of internal tokens between services (machine-to-machine, also called the client credentials grant in OpenID Connect parlance) that have a totally different lifetime.
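To make that concrete, here is a hedged sketch of a service obtaining its own machine-to-machine token with plain Java; the token endpoint URL, client id, secret, and scope are all hypothetical placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientCredentialsDemo {
    public static void main(String[] args) throws Exception {
        // The calling service requests its own short-lived token from the
        // identity server, independent of any user token it received.
        String form = "grant_type=client_credentials"
                + "&client_id=booking-service"   // hypothetical client id
                + "&client_secret=change-me"     // load from secure config, not code
                + "&scope=flights.read";         // hypothetical scope

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://idp.example.com/connect/token")) // placeholder
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains access_token and expires_in; cache the token
        // and refresh it shortly before it expires.
        System.out.println(response.body());
    }
}
```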
q1: Ocelot is an API gateway, i.e. the entry point for your requests. It should be the first layer/service a user request meets, in front of your services, and it forwards the request to the right service according to its configuration. So it sits in front of all the services you have. Some architectures provide additional API gateways for different reasons, for example a specific gateway for mobile requests.
q2: Looking at the separate services (I can't tell what the Function API is, but I assume those are services too), yes. But microservices development is not just about separating things; it is about design and about identifying the services from the business context (Domain-Driven Design). It is very challenging to identify services, their size, and the way they communicate with each other (asynchronous vs. synchronous communication).
q3: Microservices are not about languages and frameworks. One of the benefits of a microservices architecture is that it is not language or framework dependent; multiple languages may be used across the microservices. Choosing a language depends on organisational policy or your own reasons. If you are a .NET developer, then go for .NET.
q4: All the services are registered with the identity server, and they validate a given token against it. The identity server generates tokens (there may be multiple tokens) with scopes. Requests from identified users always carry the token in the headers, and the services validate the incoming token by referring to the identity server. These tokens have a lifetime, and the identity server also issues refresh tokens for when the current token expires. Please look at the OAuth docs and RFCs. This series may also help (you can skip the basic topics; I learned a lot from it): https://www.youtube.com/watch?v=Fhfvbl_KbWo&list=PLOeFnOV9YBa7dnrjpOG6lMpcyd7Wn7E8V
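As a sketch of what that per-service validation can look like, here is a minimal resource-server configuration in Java with Spring Security, used purely for illustration (.NET has equivalent JwtBearer middleware; the issuer URI is a placeholder):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Every request must carry a valid JWT. With the property
// spring.security.oauth2.resourceserver.jwt.issuer-uri=https://idp.example.com
// (placeholder), signing keys are fetched from the identity server and
// cached, so each request is then validated locally.
@Configuration
public class ResourceServerConfig {

    @Bean
    SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```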

Direct Integration of Client App with Keycloak OIDC or via a microservice?

I have a setup like this:
Keycloak OIDC Server for Identity and Access Management Service - Running in Cloud - A
Backend RESTful Microservices - Running in Cloud - B
Backend RESTful Microservices - Running in On-prem servers across multiple locations - C
User Mobile app - Multiple users across locations - X
User web app - Multiple users across locations - Y
X and Y use the password grant to access B; i.e., X and Y call a login API on B with username and password, B obtains the access token from Keycloak and returns it in the response, and X and Y then use it on further API calls towards B to get authorized.
Now, this is the doubt that I have:
Should we do the same for C? That is, should there be an API in B for C to call, posting its client-id and client-secret (client-credentials grant) to get the access token? Is this a good pattern / valid implementation?
The need for this method of access:
The Ops team is planning to hide A from being exposed to the internet, so B will act as an abstraction layer for it.
Is keeping the IAM service from being exposed to the internet a good idea? I have never seen an IAM service abstracted like this before. Please clarify.
HOSTING
The usual hosting best practice is to place the Identity system behind a reverse proxy / gateway such as NGINX Plus. This is because the Identity system connects to sensitive data sources, whereas the reverse proxy can be the entry point, e.g. in a DMZ. You can then limit the OIDC endpoints exposed to the internet.
Avoid writing home-grown proxying to the Identity system, since that is likely to be less resilient than a battle-tested system such as NGINX. See the IAM Primer for an overview.
SECURITY DESIGN PATTERNS
A reverse proxy also supports plugins that can do many utility security jobs, such as translating secure cookies to access tokens, so it is a highly useful part of the architecture. See this article for an example plugin written in the high-level Lua programming language.

Centralized management server for many systems

We intend to create a REST API that will be implemented on 100+ servers for use by a Centralized Management Portal (CMP). This CMP will itself have full access to the API (for scheduled tasks etc.) and the authorization is done on the CMP itself.
As an added security measure, the APIs on all 100+ servers can only be accessed from the CMP's IP address.
In this circumstance, what would be the security advantage, if any, of using OAuth2 rather than a set of API Keys (unique for each server) that is stored as environment variables on the CMP? Upon reading this, it seems that our use case is somewhat different.
Ultimately, we were thinking that we could just open the CMP only to the subset of IP addresses that need to access it; however, this may not be possible later down the track.
I would think about the API from the viewpoint of its clients:
How would a web or mobile client call the API securely?
How would the end user identity flow to the API?
If you don't need to deal with either of these issues, then OAuth doesn't provide compelling benefits, other than giving you some improved authorization mechanisms (see the sketch after this list):
Scopes
Claims
Zero Trust
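For instance, here is a hedged sketch of what scope-based authorization buys you, assuming a Java/Spring resource server; the cmp.admin scope and the endpoint are hypothetical:

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// Each operation can demand a specific scope from the access token, so a
// stolen low-privilege token cannot invoke high-privilege operations.
// Requires @EnableMethodSecurity on a configuration class; Spring maps JWT
// scopes to authorities with the SCOPE_ prefix.
@RestController
public class MaintenanceController {

    @PreAuthorize("hasAuthority('SCOPE_cmp.admin')")  // hypothetical scope
    @PostMapping("/servers/restart")                  // hypothetical endpoint
    public String restartServers() {
        return "restart scheduled";
    }
}
```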
USER v INFRASTRUCTURE SECURITY
I would use OAuth when user-level security is involved, rather than for your scenario, which feels more like infrastructure security.
Some systems, such as AWS or Kubernetes, give you built-in infrastructure policies, where API hosts could be configured to only allow calls from hosts with a CMP role.
I would prefer this type of option for infrastructure security if possible, rather than writing code to manage API keys.

Central JWT management system for my micro-service based architecture

We are building our applications in a microservices-based architecture. As is typical with microservices, we now see a lot of cross-service interaction between services.
In order to safeguard the endpoints, we plan to implement JWT-based authentication for such secure exchanges.
There are two approaches we see to achieving this:
Embed a JWT engine in each application to generate the token (consumer side) and evaluate it (provider side). With an initial exchange of keys, the token exchange should work smoothly for any future communication.
Have a JWT engine external to the applications that sits in between all microservice communications for the distributed application and takes care of the whole token life cycle, including encryption/decryption and validation.
There are a lot of options for approach #1, as listed on https://jwt.io, but considering the overhead that token generation and management adds to a microservice, we prefer to go with the second approach and have a centralised gateway.
After quite some research, looking at various API gateways, we have not yet come across a lightweight solution/tool that serves our need for a centralised engine for one application comprised of many microservices.
Does anyone know of such a tool/solution?
If you have any other inputs on this approach, please let me know.
I also prefer option 2, but why are you looking for a framework?
The central application should only be responsible for managing the private key and issuing the tokens. Pulling in a framework to solve one service could be excessive.
You could also implement a validation service, but since the applications are yours, I suggest using an asymmetric key and verifying the token locally instead of making remote validation requests to the central application. You can provide a simple library to your microservices that downloads the key and performs the validation. Embed any of the libraries from jwt.io or build it from scratch; validating a JWT is really simple, as the sketch below shows.
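A hedged sketch of that local verification, using the Nimbus JOSE + JWT library (the JWKS URL is a hypothetical placeholder for wherever the central application publishes its public keys; any jwt.io library offers the equivalent):

```java
import java.net.URL;

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.jwk.source.RemoteJWKSet;
import com.nimbusds.jose.proc.JWSVerificationKeySelector;
import com.nimbusds.jose.proc.SecurityContext;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.proc.DefaultJWTProcessor;

public class LocalTokenValidator {

    public static void main(String[] args) throws Exception {
        // Fetch (and cache) the central application's public keys once;
        // after that every token is verified locally, with no network call
        // to the central application per request.
        DefaultJWTProcessor<SecurityContext> processor = new DefaultJWTProcessor<>();
        processor.setJWSKeySelector(new JWSVerificationKeySelector<>(
                JWSAlgorithm.RS256,
                new RemoteJWKSet<>(new URL("https://auth.example.com/.well-known/jwks.json"))));

        // Verifies the signature plus the standard time-based claims (exp, nbf).
        JWTClaimsSet claims = processor.process(args[0], null);
        System.out.println("token subject: " + claims.getSubject());
    }
}
```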
If you need to reject a token before its expiration time, for example using a blacklist, then a central service would be needed. But I do not recommend that scheme, because it breaks JWT statelessness.
Both scenarios could be implemented in Spring Cloud Zuul.
For more info:
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_router_and_filter_zuul
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_configuring_authentication_downstream_of_a_zuul_proxy

OAuth 2 Enforcement in Bluemix API Management

I am trying to enforce OAuth 2 in Bluemix for a suite of microservices via API Management. Am I right in the assumption that the API Management service always acts as an OAuth 2 authorization provider, as opposed to only checking the validity of access tokens as an enforcement gateway? The latter is what I really want.
Ideally I would like to specify my own OAuth provider implementation, with the means to issue, validate, and revoke access tokens, and then have the API Management service act as a proxy for my services, ensuring that incoming requests carry a valid token.
Reading the documentation, I am struggling to understand how API Management was designed. Could somebody point me in the right direction, or alternatively help me understand what API Management's role is in the OAuth flows listed in the Security Schemes?
Thanks.
