What is the use of each particular WSO2 product? (OAuth)

I am new to WSO2 products. To secure a REST web service using OAuth2, different combinations of WSO2 products are used. I need to know the exact role of the Identity Server, and when we need the WSO2 ESB, when we need the Data Services Server, and when we need the Application Server.

WSO2-IS: You can use this as the IDP for all your applications. WSO2 Identity Server is the central backbone that connects and manages multiple identities across applications, APIs, the cloud, mobile, and Internet of Things devices, regardless of the standards on which they are based.
WSO2-ESB: An ESB is an architecture: a set of rules and principles for integrating various applications (mostly heterogeneous systems) together over a bus to achieve a purpose. WSO2 Enterprise Service Bus is a lightweight, high-performance, and comprehensive ESB. It effectively addresses integration standards and supports all integration patterns, enabling interoperability among various heterogeneous systems and business applications.
WSO2-DSS: Behind most application silos are heterogeneous data stores. Most businesses require secure and managed data access across these federated data stores, data service transactions, data transformation, and validation. An organization's data exposed as a service, decoupled from the infrastructure where it is stored, is called a data service in service-oriented architecture (SOA).
You can refer to the WSO2 docs for more information on each product:
http://wso2.com/products/identity-server/
http://wso2.com/products/enterprise-service-bus/
http://wso2.com/products/data-services-server/

Microservice architecture structure with docker-compose : .NET 6

An ex-employee planned a microservice architecture which is being implemented now. I have a few questions regarding the design and I'd highly appreciate your feedback.
Explanation
The Dematerialized UI has a matching Dematerialized API.
The Dematerialized API validates the user and generates a token via the SSO library.
The Flight API does the I/O validation and validates the request via the Validate Request microservice.
The Flight API calls the Booking API to get bookings based on the UserId.
The Flight API calls the Print Booking API to generate messages using the Generate Message microservice.
The Print Booking API must call the Data Access API to get data and then call the Generate PDF microservice.
The Data Access API calls the database for data.
My Project Structure
FlightBookingsMicroserice.V1            // solution
  ApiGatways                            // folder
    DMZ.API/DMZ.API.csproj              // folder/project
  BuildingBlocks
    EventBus/EventBus.csproj
    EventBus/EventBusRabbitMQ
  Services
    SSO
      SSO.API/SSO.csproj
      SSO.UnitTests
    Flight
      Flight.API/Flight.API.csproj
      Flight.UnitTets
    //similar for all
    ValidationRequest
    Booking
    PrintBooking
      PrintBooking.API.csproj
    DataAccess
      DataAccess.API.csproj
    GeneratePDF
    GenerateMessage
  UI
    UI
  Docker-compose
Questions
1. Should I be using Ocelot in DMZ.API.csproj, the Flight API, and the Print Booking API?
2. Is my project structure a microservice way of development?
3. Should I continue to use ASP.NET Core Web API with .NET 6 for the Dematerialized API in orange, the Function API in blue, and the Microservice projects in purple?
4. For validation, since the SSO token is passed from the Dematerialized UI, what happens if the token expires while CRUD operations have already been performed for some stages (rolling back changes is a hassle)? Should each API access an identity server, validate the user passed to it, and generate its own token for its services in purple?
Thank you in advance.
The core question is whether you really need all those services, and whether you are perhaps making things too complicated. I think the important thing is to really consider, and make sure you can justify, why you want to go down this route.
If you do synchronous API calls between the services, that creates coupling and in the long run a distributed monolith.
For question #4, you typically use one access token for the user to access the public service, and then a different set of internal tokens (machine-to-machine, also called client credentials in OpenID Connect parlance) between services, with a totally different lifetime.
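As a rough sketch of what that machine-to-machine call can look like in .NET (the token endpoint, client id, secret, and scope below are all placeholders, not anything taken from the question):

// Minimal sketch: fetching an internal token with the OAuth2 client
// credentials grant. Endpoint, client id, secret and scope are hypothetical.
using System;
using System.Collections.Generic;
using System.Net.Http;

var http = new HttpClient();
using var response = await http.PostAsync(
    "https://identity.example.com/connect/token",   // hypothetical token endpoint
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "client_credentials",
        ["client_id"] = "flight-api",               // hypothetical client registration
        ["client_secret"] = "<secret>",
        ["scope"] = "booking.read"                  // hypothetical scope
    }));
response.EnsureSuccessStatusCode();

// The response body is JSON containing access_token, token_type and expires_in.
Console.WriteLine(await response.Content.ReadAsStringAsync());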
q1: Ocelot is an API gateway, which is the entry point for your requests. It should be the first layer/service met by a user request, sitting in front of your services, and it forwards each request to a service according to its configuration. So it sits in front of all the services you have. Some architectures provide an additional API gateway for different reasons, for example a specific gateway for mobile requests.
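A minimal sketch of wiring Ocelot into the gateway project (this assumes the standard Ocelot NuGet package and a route file named ocelot.json, both conventions rather than anything from the question):

// Program.cs of the gateway (e.g. DMZ.API) - minimal Ocelot wiring
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// ocelot.json holds the route table: which upstream paths map to which services
builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();

// Each incoming request is matched against the route table and forwarded downstream
await app.UseOcelot();
app.Run();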
q2: Looking at it as separate services (I can't understand "Function API", but I assume those are services too), yes. But microservices development is not just about separating things; it's about design and identifying the services from the business context (Domain-Driven Design). It is very challenging to identify services, their size, and the way they communicate with each other (asynchronous vs. synchronous communication).
q3: Microservices are not about languages and frameworks. One of the benefits of a microservices architecture is that it is not language or framework dependent; multiple languages may be used across microservices. Choosing a language depends on organizational policy or your own reasons. If you are a .NET developer, then go for .NET.
q4: All the services are registered with the identity server and validate a given token against it. The identity server generates tokens (there may be multiple tokens) with scopes. A request from an identified user always carries the token in its headers, and the services validate the incoming token by referring to the identity server. These tokens have a lifetime, and the identity server also generates refresh tokens for when the current token expires. Please look at the OAuth docs and RFCs. This series may also help (you can skip the basic topics; I learned a lot from it): https://www.youtube.com/watch?v=Fhfvbl_KbWo&list=PLOeFnOV9YBa7dnrjpOG6lMpcyd7Wn7E8V
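For the validation side, here is a minimal sketch of a resource service validating incoming JWTs against the identity server (this assumes the Microsoft.AspNetCore.Authentication.JwtBearer package and that the identity server issues JWTs with standard discovery metadata; the authority URL and audience are placeholders):

// Program.cs of a resource service (e.g. Flight.API) - token validation sketch
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // The middleware downloads the identity server's signing keys from
        // its discovery document and validates each incoming token with them.
        options.Authority = "https://identity.example.com"; // hypothetical identity server
        options.Audience = "flight.api";                    // hypothetical API resource name
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

app.MapGet("/bookings", () => "...").RequireAuthorization();
app.Run();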

Centralized management server for many systems

We intend to create a REST API that will be implemented on 100+ servers for use by a Centralized Management Portal (CMP). This CMP will itself have full access to the API (for scheduled tasks etc.) and the authorization is done on the CMP itself.
As an added security measure, all the 100+ servers' API can only be accessed from the CMP's IP Address.
In this circumstance, what would be the security advantage, if any, of using OAuth2 rather than a set of API keys (unique to each server) that are stored as environment variables on the CMP? Upon reading this, it seems that our use case is somewhat different.
Ultimately, we were thinking that we could just open the CMP only to a subset of IP addresses for those who need to access it; however, this may not be possible later down the track.
I would think about the API from the viewpoint of its clients:
How would a web or mobile client call the API securely?
How would the end user identity flow to the API?
If you don't need to deal with either of these issues then OAuth doesn't provide compelling benefits, other than giving you some improved authorization mechanisms:
Scopes
Claims
Zero Trust
User vs Infrastructure Security
I would use OAuth when user level security is involved, rather than for your scenario, which feels more like infrastructure security.
Some systems, such as AWS or Kubernetes, give you built-in infrastructure policies, where API hosts could be configured to only allow calls from hosts with a CMP role.
I would prefer this type of option for infrastructure security if possible, rather than writing code to manage API keys.
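For reference, the API-key approach the question describes would amount to roughly the following on each of the 100+ servers (a sketch only; the header name and environment variable are hypothetical, and it assumes each server also holds its own key to validate against):

// Minimal sketch of the API-key alternative: each server checks a
// per-server key, sent by the CMP in a request header, against a key
// it holds in an environment variable.
using System.Security.Cryptography;
using System.Text;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    var expected = Environment.GetEnvironmentVariable("CMP_API_KEY") ?? "";
    var presented = context.Request.Headers["X-Api-Key"].ToString();

    // Fixed-time comparison avoids leaking key material through timing
    if (!CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(expected),
            Encoding.UTF8.GetBytes(presented)))
    {
        context.Response.StatusCode = StatusCodes.Status401Unauthorized;
        return;
    }
    await next();
});

app.MapGet("/status", () => "ok");
app.Run();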

Do OAuth and OIDC make sense in a scenario where you need single sign-on on a microservices mesh and every one of the services is owned by the same company?

From a theoretical point of view, it looks like the scenario fits: decentralized authorization consumed by multiple resource servers.
But when analyzing the intention behind "scopes", I am not so sure, because the typical use case I always see is telling a third-party app what data it can consume (read/write) on other (micro)services of the authorization server's company.
Is disregarding scopes a sign of misusing OAuth2/OIDC? Or is using scopes in the following way (microservice1:read, microservice2:write) or (microservice1:big_role, microservice2:admin_role) also perverting their purpose?
Ultimately I like to think of OAuth in these terms:
Managing Data Security
Enabling modern and productive apps
Microservices Security
Most software companies build APIs, Web UIs and Mobile UIs. The OAuth family of technologies provides options for securing all of them based on modern JSON based messages.
In doing so you can externalise some of the really hard stuff, such as multi-factor authentication, managing privacy, and auditing. OAuth also tends to fit well with general architecture goals such as performance.
Scopes
These represent areas of data and what you can do with that data, as described in this Curity article. There is nothing wrong with using scopes in a basic way to start with, then gradually evolving them based on data sensitivity and who the client is.
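To make that concrete in .NET terms (the scope name is the hypothetical one from the question; this assumes JWT bearer authentication and that granted scopes arrive as a space-delimited "scope" claim, which is the usual OAuth2 convention):

// Minimal sketch: enforcing a scope on a resource server.
using System.Linq;
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(o => o.Authority = "https://idp.example.com"); // hypothetical issuer

builder.Services.AddAuthorization(options =>
{
    // "scope" is conventionally a single space-delimited claim value
    options.AddPolicy("CanReadMicroservice1", policy =>
        policy.RequireAssertion(ctx =>
            (ctx.User.FindFirst("scope")?.Value ?? "")
                .Split(' ')
                .Contains("microservice1:read")));
});

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

app.MapGet("/microservice1/data", () => "...")
   .RequireAuthorization("CanReadMicroservice1");
app.Run();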
Summary
There is a learning curve to OAuth, but once you're over it your apps tend to have the best capabilities.

Spring Cloud OAuth2: Resource server with multiple Authorization server

We are developing an application in a microservice architecture which implements single sign-on using Spring Cloud OAuth2 with multiple OAuth2 providers such as Google and Facebook. We are also developing our own authorization server, which will be integrated in the next release.
Now, on our microservices, which are resource servers, I would like to know how to handle multiple token-info-uri or user-info-uri values for multiple authorization servers (e.g. for Facebook or Google).
This type of situation is generally solved by having a middle-man: a single entity that your resource servers trust, which can be used to normalize any differences that surface from the fact that users may authenticate with distinct providers. This is sometimes referred to as a federation provider.
Auth0 is a good example on this kind of implementation. Disclosure: I'm an Auth0 engineer.
Auth0 sits between your app and the identity provider that authenticates your users. Through this level of abstraction, Auth0 keeps your app isolated from any changes to and idiosyncrasies of each provider's implementation.
(emphasis is mine)
It's not that your resource servers can't technically trust more than one authorization server, it's just that moving that logic out of the individual resource servers into a central location will make it more manageable and decoupled.
Also keep in mind that authentication and authorization are different things, although we are used to seeing them together. If you're going to implement your own authorization server, you should make it the central point that can:
handle multiple types of authentication providers
provide a normalized view of the user profile to downstream resource servers
provide the access tokens that can be used by your client application to make authorized requests to your microservices

What is the recommended binding to use with Silverlight and iPad clients?

I am starting a new product that will require a .NET-based server (using WCF) hosted on Azure. I would like to have basic authentication and security features. The clients all have "rich" UIs but are not necessarily Microsoft ones.
We intend to have the first client application written in Silverlight, but we want to keep our options open to implement clients for iOS and Android in the future, so we do not want to use WCF-specific features, but rather protocols that are easily available in other environments.
Of course, with the Silverlight client, we hope to get as much done for us automatically as possible. We intend to only communicate through web services.
Which bindings are recommended for such a scenario?
How would you implement security? (Assume we need basic security: users log in with an encrypted username and password, and perhaps there is some basic built-in role management, although this is optional.)
Suggestions?
You could use WCF to implement a REST interface.
The binding would have to be basicHttpBinding (to be open to all platforms), using SSL to secure the line.
Managing credentials could be done using tokens passed back and forth after authentication, much like an HTTP session. You could pass the token using a cookie, but the token could be part of the API or the headers as well. See Best Practices for securing a REST API / web service.
This would grant you the power of .NET and WCF without losing interoperability.
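A minimal sketch of the service side with that binding (the contract, implementation, and base address are placeholders, not anything from the question):

// Minimal sketch: a self-hosted WCF service over basicHttpBinding with
// Transport (SSL) security. Names and the base address are hypothetical.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    string GetBooking(int id);
}

public class BookingService : IBookingService
{
    public string GetBooking(int id)
    {
        return "booking-" + id;
    }
}

public static class Program
{
    public static void Main()
    {
        // basicHttpBinding speaks SOAP 1.1 over HTTP, which Silverlight,
        // iOS and Android HTTP stacks can all consume.
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);

        using (var host = new ServiceHost(typeof(BookingService),
            new Uri("https://example.com/booking"))) // hypothetical base address
        {
            host.AddServiceEndpoint(typeof(IBookingService), binding, "");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}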
