Implement a distributed deployment approach in Message Broker

I am building a broker that I want to deploy on three servers, with the flow running on only one server at a time: the flow that is running is the active flow and the others are passive. When the flow runs, it should send an email saying that the flow is running on server X.
Can we achieve this in the broker through a flag?
That is, how do we implement a distributed deployment approach?

The best approach would be to use a multi-instance broker so that only one node is active at any one time. You could then create the email based on the local hostname, either in the onInitialize() method of a JavaCompute node (JCN) or in a branch or subflow of a flow driven by a TimeoutControl node.
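A minimal sketch of the JCN variant, assuming the IBM Integration Bus Java plugin API and a javax.mail-capable SMTP relay on the classpath (the SMTP host and addresses below are hypothetical):

import java.net.InetAddress;
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;

public class NotifyActiveInstance extends MbJavaComputeNode {
    @Override
    public void onInitialize() throws MbException {
        // Runs when the flow starts, i.e. on whichever instance of the
        // multi-instance broker has just become active.
        try {
            String host = InetAddress.getLocalHost().getHostName();
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // hypothetical relay
            Message msg = new MimeMessage(Session.getInstance(props));
            msg.setFrom(new InternetAddress("broker@example.com"));
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress("ops@example.com"));
            msg.setSubject("Flow is now active on " + host);
            msg.setText("The flow started on server " + host + "; other instances are passive.");
            Transport.send(msg);
        } catch (Exception e) {
            e.printStackTrace(); // don't block flow start-up on a failed notification
        }
    }

    @Override
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // Pass the message straight through; this node exists for its
        // onInitialize() side effect.
        getOutputTerminal("out").propagate(assembly);
    }
}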

Related

Routing a client's connection to a specific instance of a SignalR backend within a Kubernetes cluster

While trying to create a web application for shared drawing I got stuck on a problem regarding Kubernetes and scaling. The application uses an ASP.NET Core backend with SignalR for sharing the drawing data across its users. For scaling out the application I am using a deployment for each microservice of the system. For the SignalR part though, additional configuration is required.
After some research I have found out about the possibility to sync all instances of the SignalR backend, either through Azure's SignalR Service or through a Redis backplane. The latter I have gotten to work on my local minikube environment. I am not really happy with this solution for the following reasons:
My main concern is that like this I have created a hard bottleneck in the system. Unlike in a chat application, where data is sent only once in a while, messages are sent for every few points drawn in the shared drawing experience by any client. Simply put, a lot of traffic can occur, and all of it has to pass through the single Redis backplane.
Additionally, it seems unnecessary to me to make all instances of the SignalR backend talk to each other. In this application, shared drawing only occurs in small groups of, let's say, up to 10 clients. Groups of this size can easily be hosted on a single instance.
So, without syncing all instances of the SignalR backend, I would have to route the client's connection, based on the SignalR group name, to the right instance of the SignalR backend when the client tries to join a group.
I have found out about StatefulSets, which allow me to have a persistent address for each backend pod in the cluster. I could then associate the SignalR group IDs with the pod addresses they are running on in, let's say, another lookup microservice (a sketch of that idea follows). The problem with this is that the client needs to be able to reach the right pod from outside of the cluster, where that cluster-internal address does not really help.
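Purely to illustrate that lookup idea (a minimal sketch, in Java only for illustration; every name below is hypothetical): a registry that pins each drawing group to a StatefulSet pod's stable DNS name, which an edge gateway could consult before proxying the connection.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical group -> pod registry for the StatefulSet idea: pods get
// stable names signalr-0, signalr-1, ... from the StatefulSet controller.
public class GroupRegistry {
    private final int podCount;
    private final Map<String, String> groupToPod = new ConcurrentHashMap<>();
    private final AtomicLong nextPod = new AtomicLong();

    public GroupRegistry(int podCount) {
        this.podCount = podCount;
    }

    // Returns the stable cluster DNS name that hosts this group,
    // assigning unseen groups round-robin across the pods.
    public String podFor(String groupId) {
        return groupToPod.computeIfAbsent(groupId, id -> {
            long n = nextPod.getAndIncrement() % podCount;
            return "signalr-" + n + ".signalr.default.svc.cluster.local";
        });
    }
}

Since the pod address is cluster-internal, a common pattern is for only a gateway or ingress to consult this registry and proxy the WebSocket traffic, so nothing but the gateway has to be reachable from outside the cluster.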
I am also wondering if there isn't an altogether better approach to the problem, since I am very new to the world of Kubernetes. I would be very grateful for your thoughts on this issue and any hint towards a (better) solution.

Laravel events behind load balancer - how to make the event visible to all the servers in the autoscale group

I have an application running Laravel 6.1. There are clients which connect to it via Laravel WebSockets and listen for events. I have an external service which sends POST requests to this server, which then raises an event that the websocket clients see. I am in the dev stage and it's not been deployed yet; this is what I'm currently researching. I use Docker, so there's an nginx container, a PHP container, and a MySQL container (in production, the containers will use RDS, though).
This works fine in development, but the plan is to deploy on ECS with Elastic Beanstalk, as it enables multiple containers per EC2 instance. I was planning on having these instances auto-scale behind a load balancer, so my question is: how can I make the incoming events be raised and visible on all the servers? For example, the POST request may hit one instance, and the clients connected to that instance would see that the event was raised, but the clients connected to another instance would not see the raised event. Is this accurate? I'd imagine the events will have to be sent to some kind of "queue" which is monitored by all instances, but I am not sure how to implement that with Laravel, or whether there's a simpler, faster way.
Based on the comments.
The proposed solution involves the use of SNS instead of SQS.
The reason is that SNS allows delivery of a message to multiple recipients at the same time. In contrast, SQS is designed to deliver a message to only one recipient, unless it is used in a fan-out architecture.
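As a rough illustration of that fan-out shape (a sketch using the AWS SDK for Java v2; the topic ARN and payload are placeholders): each application instance subscribes its own SQS queue or HTTP endpoint to the topic, so a single publish reaches all of them.

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;

public class EventFanOut {
    public static void main(String[] args) {
        // One publish to the topic is delivered to every subscriber
        // (e.g. one SQS queue per app instance); a bare SQS queue would
        // instead hand the message to a single consumer only.
        try (SnsClient sns = SnsClient.create()) {
            PublishRequest request = PublishRequest.builder()
                    .topicArn("arn:aws:sns:us-east-1:123456789012:app-events") // placeholder
                    .message("{\"event\":\"SomethingHappened\",\"payload\":{}}")
                    .build();
            sns.publish(request);
        }
    }
}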

How can I implement a sub-api gateway that can be replicated?

Preface
I am currently trying to learn how microservices work and how to implement container replication and API gateways. I've hit a block, though.
My Application
I have three main services for my application.
API Gateway
Crawler Manager
User
I will be focusing on the API Gateway and Crawler Manager services for this question.
API Gateway
This is a Docker container running a Go server. The communication is all done with GraphQL.
I am using an API gateway because I expect the different services in my application to each have their own specialized API; the gateway is there to unify everything.
All it does is proxy requests to the appropriate service and return the response to the client.
Crawler Manager
This is another Docker container running a Go server. The communication is done with GraphQL.
More or less, this behaves similarly to another API gateway. Let me explain.
This service expects the client to send a request like this:
{
  # In production 'url' will be encoded in base64
  example(url: "https://apple.example/") {
    test
  }
}
The url can only link to one of these three sites:
https://apple.example/
https://peach.example/
https://mango.example/
Any other site is strictly prohibited.
Once the Crawler Manager service receives a request and the link is one of those three, it decides which other service should fulfill the request. So, in that way, it behaves much like another API gateway, but a specialized one.
Each URL domain gets its own dedicated service for processing it. Why? Because each site varies quite a bit in markup, and each site needs to be crawled for information. Because their markup varies, I'd like a dedicated service for each of them, so that if a site is updated, the whole Crawler Manager service doesn't go down.
As far as the querying goes, each site will return a response formatted identically to the other sites.
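A sketch of how that routing decision might look (written in Java here purely for illustration, since the services themselves are Go; the backend URLs are hypothetical):

import java.net.URI;
import java.util.Map;

public class CrawlerRouter {
    // Allowlist: each permitted host maps to its dedicated crawler service.
    private static final Map<String, String> SERVICES = Map.of(
            "apple.example", "http://apple-crawler/graphql",
            "peach.example", "http://peach-crawler/graphql",
            "mango.example", "http://mango-crawler/graphql");

    // Returns the backend responsible for the URL, or throws for any
    // host outside the three permitted sites.
    public static String route(String url) {
        String host = URI.create(url).getHost();
        String backend = SERVICES.get(host);
        if (backend == null) {
            throw new IllegalArgumentException("URL not permitted: " + url);
        }
        return backend;
    }
}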
Visual Outline (diagram omitted)
Problem
Now that we have a bit of an idea of how my application works I want to discuss my actual issues here.
Is having a sort of secondary API gateway standard and good practice? Is there a better way?
How can I replicate this system and have multiple Crawler Manager service family instances?
I'm really confused about how I'd actually create this setup. I looked at clusters in Docker Swarm / Kubernetes, but with the way I have it set up, it seems like I'd need to make clusters of clusters. That makes me question my design overall. Maybe I shouldn't think about keeping them so structured?
At a very generic level, if service A calls service B that has multiple replicas B1, B2, B3, ... then it needs to know how to call them. The two basic options are to have some sort of service registry that can return all of the replicas, and then pick one, or to put a load balancer in front of the second service and just directly reach that. Usually setting up the load balancer is a little bit easier: the service call can be a plain HTTP (GraphQL) call, and in a development environment you can just omit the load balancer and directly have one service call the other.
                                  /-> service-1-a
Crawler Manager --> Service 1 LB --> service-1-b
                                  \-> service-1-c
If you're willing to commit to Kubernetes, it essentially has built-in support for this pattern. A Deployment is some number of replicas of identical pods (containers), so it would manage the service-1-a, -b, -c in my diagram. A Service provides the load balancer (the default ClusterIP type is accessible only within the cluster) and also a DNS name. You'd configure your crawler-manager pods with perhaps an environment variable SERVICE_1_URL=http://service-1.default.svc.cluster.local/graphql to connect everything together.
(In your original diagram, each "box" that has multiple replicas of some service would be a Deployment, and the point at the top of the box where inbound connections are received would be a Service.)
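For example, the Crawler Manager's side of that wiring could be as small as reading the injected variable and posting a GraphQL query to it; a sketch using Java's built-in HTTP client, reusing the query from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Service1Client {
    public static void main(String[] args) throws Exception {
        // Injected by the Deployment spec; the ClusterIP Service behind
        // this DNS name load-balances across service-1-a/-b/-c.
        String url = System.getenv().getOrDefault(
                "SERVICE_1_URL", "http://service-1.default.svc.cluster.local/graphql");

        String query = "{\"query\":\"{ example(url: \\\"https://apple.example/\\\") { test } }\"}";
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}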
In plain Docker you'd have to do a bit more work to replicate this, including manually launching the replicas and load balancers.
Architecturally what you've shown seems fine. The big "if" to me is that you've designed it so that each site you're crawling potentially gets multiple independent crawling containers and a different code base. If that's really justified in your scenario, then splitting up the services this way makes sense, and having a "second routing service" isn't really a problem.

How to know which server a Storm bolt is running on

I have a question related to Apache Storm. Currently we use several servers to run Storm, and our application needs Facebook/Twitter tokens.
We want a design like this: each token belongs to a specific server, and when a bolt receives a tuple, it requests a token that is specific to that running bolt instance. This is to prevent token blocking when different servers use the same token within a short time.
Does anyone know how to achieve this? Is there any way to know which server a running bolt instance is on? Thanks a lot.
If you want one token per bolt instance then add an instance variable to your bolt class to hold that token and initialize/cleanup that token at the appropriate times in the bolt lifecycle.
If you want to have a token for each machine then you can create a singleton bean to hold one token for the entire JVM. Note that if you want to have more than one worker on a single machine then you need to be happy with multiple tokens per machine (one per JVM on the machine), or build a stand-alone middleware server that owns the token and handles requests from the multiple JVMs on the machine. Even if that is acceptable, you'll still need to work out how to make all of the bolt instances in a single JVM/worker share the one token for that JVM.
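A sketch of the first option, one token per bolt instance, using the standard bolt lifecycle (Storm 2.x signatures; the token service is a hypothetical stand-in):

import java.net.InetAddress;
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class TokenBolt extends BaseRichBolt {
    private transient OutputCollector collector;
    private transient String token; // one token per bolt instance

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
        try {
            // The hostname tells the (hypothetical) token service which
            // server's token pool this bolt instance should draw from.
            String host = InetAddress.getLocalHost().getHostName();
            this.token = TokenService.acquireFor(host);
        } catch (Exception e) {
            throw new RuntimeException("could not acquire API token", e);
        }
    }

    @Override
    public void execute(Tuple input) {
        // ... call Facebook/Twitter using this.token ...
        collector.ack(input);
    }

    @Override
    public void cleanup() {
        TokenService.release(token); // hand the token back on shutdown
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // this sketch emits nothing downstream
    }

    // Hypothetical stand-in for a per-server token store.
    static class TokenService {
        static String acquireFor(String host) { return "token-for-" + host; }
        static void release(String token) { /* return token to the pool */ }
    }
}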

How to shut down one instance of an App Service in Azure

I have deployed an ASP.NET MVC application to an App Service in Azure and scaled it out to 2 instances.
Sometimes I need to restart an instance, but I can only find a way to restart the whole web app. Is there a way to restart one instance only? Even removing the instance and then creating a new one would work for me.
There is no super clean way to do this, but it is still possible to achieve with the following steps:
Go to the Web App in the portal
Choose Process Explorer from Tools menu
You'll see processes for all instances. You can right-click specific w3wp processes and kill them, which effectively restarts the site. You don't have to kill the Kudu process (the one with the K icon) if you only want to restart the site; kill Kudu as well if you also want WebJobs restarted.
You can now restart an instance of an App Service Plan from the App Service Plans - Reboot Worker page in the Azure docs. You can restart the instance directly from that page using the 'Try it' feature (or by calling the REST operation yourself, as sketched after these steps):
Visit the Reboot Worker page
Log in using an account from the Azure tenant containing the App Service Plan
Click 'Try it'
In the right-hand pane, enter the name of the App Service Plan and the resource group which contains the plan
Select the Azure subscription which contains the App Service Plan
Enter the name of the worker machine (instance) you wish to restart. This value typically starts with RD and may be found using the metric and diagnostic tools for the Web App in the Azure Portal.
Click the green Run button below the request preview.
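A hedged sketch of that REST call from code (the subscription, resource group, plan, worker name, and api-version below are placeholders to substitute, and the bearer token must come from Azure AD):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RebootWorker {
    public static void main(String[] args) throws Exception {
        // An Azure AD access token scoped to https://management.azure.com/.
        String token = System.getenv("AZURE_BEARER_TOKEN");
        // Placeholder identifiers; the worker name is the RD... value
        // from the Web App's metric and diagnostic tools.
        String url = "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000"
                + "/resourceGroups/my-rg/providers/Microsoft.Web/serverfarms/my-plan"
                + "/workers/RD0000DEADBEEF/reboot?api-version=2022-03-01";

        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode()); // expect a 2xx status on success
    }
}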
If you are using App Services then, unfortunately, this is not possible. You can only update the number of instances.
As an alternative, you can decrease the number of instances and then increase it back again. Or, if you want more granular control for any reason, you can deploy the web app on IaaS Virtual Machine workloads and set up the instances manually.
At the time of this posting, there is a Health Check (Preview) feature mentioned in the Azure Portal under the "Diagnose and solve problems -> Best Practices" blade for an App Service.
Health Check feature automatically removes a faulty instance from rotation, thus improving availability.
This feature will ping the specified health check path on all instances of your webapp every 2 minutes. If an instance does not respond within 10 minutes (5 pings), the instance is determined to be unhealthy and our service will stop routing requests to it.
It is highly recommended for production apps to utilize this feature and minimize any potential downtime caused due to a faulty instance.
Note: the Health Check feature only works for applications that are hosted on more than one instance. For more information, check the documentation below.
You can restart individual instances using "Advanced Application Restart", which you can find under diagnostic tools for your App Service in the Azure Portal.
