WSO2 ESB has Mediation Sequences and Proxy Services for implementing EAI patterns. I am new to WSO2 ESB and can't tell when to use a Mediation Sequence versus a Proxy Service; both seem to work well in most use cases. When should I use each?
A Sequence (Mediation Sequence) is a series of Mediators. A message comes into the sequence and passes through each mediator in the order they appear in the sequence. A Mediation Sequence is therefore the generic building block of WSO2 ESB.
In theory, a Mediation Sequence can process any type of message (binary, JSON, XML) that passes through it, provided its mediators can successfully handle those messages.
Therefore a Mediation Sequence can be used to:
1. Proxy messages to/from a web service
2. Proxy messages to/from a REST service
3. And many more applications...
A Proxy Service is a special module in WSO2 ESB designed to fulfill requirement 1 above (proxying messages to/from a web service).
A Proxy Service is therefore a specialized Mediation Sequence with built-in support for Web Service Endpoints.
WSO2 ESB lets you create Proxy Services in different ways for different types of requirements, e.g.:
WSDL-based proxy - used to create a proxy service from a given WSDL
Pass-through proxy - used to create a proxy service from just an endpoint URL
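For example, a pass-through style proxy that simply forwards to a backend is only a few lines of Synapse configuration. A minimal sketch (the proxy name and backend URL here are assumptions, not from the question):

    <proxy name="SimpleStockQuoteProxy" transports="http https" startOnLoad="true">
        <target>
            <!-- mediate the request, then hand it to the backend endpoint -->
            <inSequence>
                <send>
                    <endpoint>
                        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                    </endpoint>
                </send>
            </inSequence>
            <!-- forward the backend response to the original caller -->
            <outSequence>
                <send/>
            </outSequence>
        </target>
    </proxy>

Anything beyond plain forwarding is then just a matter of adding mediators to the inSequence/outSequence.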
If you need to proxy a service, and also mediate and perform various operations on the messages, you can use a Proxy Service.
A Sequence is a set of mediators (a tree of mediators) through which you can send messages. If you think of mediators as building units, you can arrange them in order and define the result as a sequence that is reusable later. You can then reference the sequence inside a proxy service and let messages flow through the mediators defined in it.
At a high level, a proxy also appears as a service to the service consumer, but it actually calls the real endpoint to get the work done.
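To illustrate the reuse, a named sequence can be defined once and then referenced from a proxy's target. A minimal sketch (the sequence name, proxy name, and endpoint key are assumptions):

    <sequence name="LogAndForward">
        <!-- reusable mediation logic: log the full message, then send it on -->
        <log level="full"/>
        <send>
            <endpoint key="BackendEndpoint"/>
        </send>
    </sequence>

    <proxy name="OrderProxy" transports="http https" startOnLoad="true">
        <!-- the proxy delegates its in-flow to the reusable sequence -->
        <target inSequence="LogAndForward">
            <outSequence>
                <send/>
            </outSequence>
        </target>
    </proxy>

The proxy is what consumers see as a service; the sequence stays a reusable unit you can share between proxies.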
An ex-employee designed a microservice architecture that is now being implemented. I have a few questions regarding the design and would highly appreciate your feedback.
Explanation
The Dematerialized UI has a matching Dematerialized API.
The Dematerialized API validates the user and generates a token via the SSO Library.
The Flight API does the I/O validation and validates the request via the Validate Request microservice.
The Flight API calls the Booking API to get bookings based on the UserId.
The Flight API calls the Print Booking API to generate messages using the Generate Message microservice.
The Print Booking API must call the Data Access API to get data and then call the Generate PDF microservice.
The Data Access API calls the database for data.
My Project Structure
FlightBookingsMicroservice.V1            // solution
  ApiGateways                            // folder
    DMZ.API/DMZ.API.csproj               // folder/project
  BuildingBlocks
    EventBus/EventBus.csproj
    EventBus/EventBusRabbitMQ
  Services
    SSO
      SSO.API/SSO.csproj
      SSO.UnitTests
    Flight
      Flight.API/Flight.API.csproj
      Flight.UnitTests
    // similar for all
    ValidationRequest
    Booking
    PrintBooking
      PrintBooking.API.csproj
    DataAccess
      DataAccess.API.csproj
    GeneratePDF
    GenerateMessage
  UI
    UI
  docker-compose
Questions
Should I be using Ocelot in DMZ.API.csproj, Flight API, and Print Booking API?
Is my project structure a microservice way of development?
Should I continue to use ASP.NET Core Web API with .NET 6 for the Dematerialized API (orange in the diagram), the Function APIs (blue), and the microservice projects (purple)?
For validation, since the SSO token is passed from the Dematerialized UI, what happens if the token expires while CRUD operations have already been performed for some stages? (Rolling back changes is a hassle.)
Should each API access an identity server, validate the user passed to it, and generate its own token for its services (purple in the diagram)?
Thank you in advance.
The core question is whether you really need all those services, or whether you are perhaps making things too complicated. The important thing is to consider carefully and make sure you can justify why you want to go down this route.
If you make synchronous API calls between the services, that creates coupling and, in the long run, a distributed monolith.
For question #4, you typically use one access token for the user to access the public service, and then a different set of internal tokens (machine-to-machine, also called client credentials in OpenID Connect parlance) between services; these have a totally different lifetime.
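A minimal sketch of such a machine-to-machine token request in C# (the token endpoint URL, client id, secret, and scope are assumptions; any OpenID Connect-compliant identity server exposes an equivalent endpoint):

    // Request a client-credentials token for service-to-service calls.
    // (.NET 6 top-level statements and implicit usings assumed.)
    using var http = new HttpClient();
    var response = await http.PostAsync("https://identity.example/connect/token",
        new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "flight-api",      // hypothetical client registration
            ["client_secret"] = "<secret>",
            ["scope"] = "booking-api"          // the downstream service's scope
        }));
    var json = await response.Content.ReadAsStringAsync();
    // json holds access_token and expires_in; send that token as a Bearer
    // header on calls from Flight API to Booking API, and cache it until it
    // expires - its lifetime is independent of the user's token.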
Q1: Ocelot is an API gateway, i.e. the entry point for your requests. It should be the first layer/service a user request hits in front of your services, and it forwards the request to the appropriate service according to its configuration. So it sits in front of all the services you have. Some architectures provide an additional API gateway for specific reasons, for example a dedicated gateway for mobile requests.
Q2: Looking at them as separate services (I don't fully understand "Function API", but I assume those are services too), yes. But microservice development is not just about separating things; it is about design and identifying the services from the business context (Domain-Driven Design). It is very challenging to identify the services, their size, and the way they communicate with each other (asynchronous versus synchronous communication).
Q3: Microservices are not about languages and frameworks. One of the benefits of a microservices architecture is that it is not language- or framework-dependent; multiple languages may be used across the microservices. Choosing a language depends on organization policy or your own reasons. If you are a .NET developer, then go for .NET.
Q4: All the services are registered with the identity server, and they validate a given token against it. The identity server generates tokens (there may be multiple tokens) with scopes. Requests from identified users always carry the token in the headers, and the services validate the incoming token by referring to the identity server. These tokens have a lifetime, and the identity server also issues refresh tokens for when the current token expires. Please look at the OAuth docs and RFCs. This series may also help (you can skip the basic topics; I learned a lot from it): https://www.youtube.com/watch?v=Fhfvbl_KbWo&list=PLOeFnOV9YBa7dnrjpOG6lMpcyd7Wn7E8V
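In ASP.NET Core, the validation side is mostly configuration. A minimal sketch (the authority URL and audience are assumptions; requires the Microsoft.AspNetCore.Authentication.JwtBearer package):

    // Program.cs: validate incoming bearer tokens against the identity server.
    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Authority = "https://identity.example"; // hypothetical identity server
            options.Audience = "flight-api";                // this service's audience
        });

    var app = builder.Build();
    app.UseAuthentication();   // rejects requests with missing or expired tokens
    app.UseAuthorization();

With Authority set, the middleware fetches the identity server's signing keys and validates token signature, lifetime, and audience on every request.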
IBM has MQIPT (IBM MQ Internet Pass-Thru), which acts as an MQ forwarder/reverse proxy to implement messaging solutions between remote sites across the internet. Is there an equivalent for Solace?
Solace has all kinds of fancy advanced features for load balancing and hybrid/multi-site deployments like bridges and dynamic message routing, but I don't really know those, and where's the fun in having everything ready-made and pre-solved for you anyway? :-)
So here I am going to assume you want to roll your own solution and use an actual reverse proxy:
You can switch to HTTP-based protocols and use any regular HTTP reverse proxy. Solace message brokers have a REST messaging interface; alternatively, if your application already uses the Solace API for messaging (or needs its advanced features), you can switch to HTTP streaming or WebSockets as the transport by modifying the scheme portion of the broker URL in your application configuration (http:// or ws:// instead of tcp://). Note that this only lets you balance sessions, not individual messages within a single elephant flow.
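As a concrete sketch, the only application-side change is the broker URL in your connection configuration (the host names are assumptions; 55555 and 8008 are the common broker defaults for SMF and web transport, but check your own broker's settings):

    # direct SMF connection over TCP
    host = tcp://broker.example:55555
    # the same session tunnelled over WebSockets, via any HTTP reverse proxy
    host = ws://reverse-proxy.example:8008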
Preface
I am currently trying to learn how microservices work and how to implement container replication and API gateways. I've hit a roadblock, though.
My Application
I have three main services for my application.
API Gateway
Crawler Manager
User
I will be focusing on the API Gateway and Crawler Manager services for this question.
API Gateway
This is a docker container running a Go server. The communication is all done with GraphQL.
I am using an API Gateway because I expect to have different services in my application each having their own specialized API. This is to unify everything.
All it does is proxy requests to the appropriate service and return the response to the client.
Crawler Manager
This is another docker container running a Go server. The communication is done with GraphQL.
More or less, this behaves like another API gateway. Let me explain.
This service expects the client to send a request like this:
{
  # In production 'url' will be encoded in base64
  example(url: "https://apple.example/") {
    test
  }
}
The url can only link to one of these three sites:
https://apple.example/
https://peach.example/
https://mango.example/
Any other site is strictly prohibited.
Once the Crawler Manager service receives a request and the link is one of those three, it decides which other service should fulfill the request. So in that way it behaves much like another API gateway, but a specialized one.
Each URL domain gets its own dedicated service for processing. Why? Because each site varies quite a bit in markup, and each site needs to be crawled for information. Since their markup varies, I'd like a service for each of them so that if a site is updated, the whole Crawler Manager service doesn't go down.
As far as querying goes, each site will return a response formatted identically to the other sites.
Visual Outline
Problem
Now that we have a bit of an idea of how my application works, I want to discuss my actual issues here.
Is having a sort of secondary API gateway standard and good practice? Is there a better way?
How can I replicate this system and have multiple Crawler Manager service family instances?
I'm really confused about how I'd actually create this setup. I looked at clusters in Docker Swarm / Kubernetes, but with the way I have it set up, it seems like I'd need to make clusters of clusters. That makes me question my design overall. Maybe I shouldn't try to keep them so structured?
At a very generic level, if service A calls service B that has multiple replicas B1, B2, B3, ... then it needs to know how to reach them. The two basic options are to have some sort of service registry that can return all of the replicas so the caller picks one, or to put a load balancer in front of the second service and reach it directly. Setting up the load balancer is usually a little easier: the service call can be a plain HTTP (GraphQL) call, and in a development environment you can simply omit the load balancer and have one service call the other directly.
                                 /-> service-1-a
Crawler Manager --> Service 1 LB --> service-1-b
                                 \-> service-1-c
If you're willing to commit to Kubernetes, it essentially has built-in support for this pattern. A Deployment is some number of replicas of identical pods (containers), so it would manage the service-1-a, -b, -c in my diagram. A Service provides the load balancer (its default ClusterIP type provides a load balancer accessible only within the cluster) and also a DNS name. You'd configure your crawler-manager pods with perhaps an environment variable SERVICE_1_URL=http://service-1.default.svc.cluster.local/graphql to connect everything together.
(In your original diagram, each "box" that has multiple replicas of some service would be a Deployment, and the point at the top of the box where inbound connections are received would be a Service.)
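A minimal sketch of that pattern (the names, image, and port are assumptions):

    # Deployment: three identical replicas of the site-1 crawler
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: service-1
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: service-1
      template:
        metadata:
          labels:
            app: service-1
        spec:
          containers:
            - name: service-1
              image: example/service-1:latest   # hypothetical image
              ports:
                - containerPort: 80
    ---
    # Service: a stable in-cluster name plus a load balancer over the replicas
    apiVersion: v1
    kind: Service
    metadata:
      name: service-1
    spec:
      selector:
        app: service-1
      ports:
        - port: 80
          targetPort: 80

With that in place, the crawler manager reaches any replica through the stable name (http://service-1/ within the namespace, or the full service-1.default.svc.cluster.local form), and Kubernetes spreads the connections across the pods.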
In plain Docker you'd have to do a bit more work to replicate this, including manually launching the replicas and load balancers.
Architecturally what you've shown seems fine. The big "if" to me is that you've designed it so that each site you're crawling potentially gets multiple independent crawling containers and a different code base. If that's really justified in your scenario, then splitting up the services this way makes sense, and having a "second routing service" isn't really a problem.
I have a Windows service that listens on a queue; when there is a new message, it parses the message and stores it in its own storage.
It's "uni-directional" in the sense that it just listens on a queue, but doesn't expose any endpoint and it doesn't interact with other services.
Is this considered a micro-service?
As the name implies, any service that is not monolithic and that can be independently built and deployed can be a microservice.
There is also the twelve-factor approach to what can be called a true microservice: https://www.nginx.com/blog/microservices-reference-architecture-nginx-twelve-factor-app/
Microsoft has an article about that:
https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices
"Services communicate with each other by using well-defined APIs. Internal implementation details of each service are hidden from other services."
I have an ASP.NET Core MVC API hosted in an Azure App Service. The API has several endpoints. Is it possible to expose only one of the endpoints to the internet, but keep the rest of the endpoints locked down and only consumable by clients from restricted IP ranges?
You could write a custom middleware that blocks requests that are not from a set of whitelisted IPs (using HttpContext.Connection.RemoteIpAddress). To allow certain endpoints, you could tag your controllers/methods with a custom attribute and skip the IP check for them.
Here is an example of how you can implement the middleware.
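A minimal sketch (the PublicEndpoint attribute name, the hard-coded allow list, and the pipeline wiring are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    // Hypothetical marker attribute: endpoints tagged with it skip the IP check.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
    public class PublicEndpointAttribute : Attribute { }

    public class IpWhitelistMiddleware
    {
        private readonly RequestDelegate _next;
        // Assumed allow list; in practice load it (or CIDR ranges) from configuration.
        private static readonly HashSet<string> AllowedIps = new() { "203.0.113.17" };

        public IpWhitelistMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            // Endpoints marked [PublicEndpoint] are reachable from anywhere.
            var endpoint = context.GetEndpoint();
            if (endpoint?.Metadata.GetMetadata<PublicEndpointAttribute>() is null)
            {
                var remoteIp = context.Connection.RemoteIpAddress?.ToString();
                if (remoteIp is null || !AllowedIps.Contains(remoteIp))
                {
                    context.Response.StatusCode = StatusCodes.Status403Forbidden;
                    return;
                }
            }
            await _next(context);
        }
    }

    // Registration (after routing, so GetEndpoint() is populated):
    // app.UseRouting();
    // app.UseMiddleware<IpWhitelistMiddleware>();

Note that behind a load balancer or front-end proxy (as in Azure App Service), you may need the forwarded-headers middleware so RemoteIpAddress reflects the real client rather than the proxy.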