Configure Load Balancer as broadcaster - amazon-elb

This may be a naive question so please bear with me. :)
In the below diagram, there is a load balancer in front of 3 instances...
Is it possible to configure the load balancer (ELB/Google Load Balancer/Azure load balancer...basically any) to forward/broadcast every request to all instances...
If not load balancer, what component natively provided by IaaS providers can broadcast such requests to all instances?
I saw a similar question, but it was from 2016, and I was hoping something had changed since then.

Nothing has changed. The purpose of a Load Balancer is to distribute traffic, not to broadcast traffic.
The closest AWS service that matches your diagram is Amazon Simple Notification Service (Amazon SNS).
It uses a publish/subscribe model:
An Amazon SNS Topic is created
Various subscriptions can be attached to the topic (Email, SMS, AWS Lambda, HTTP endpoint, Amazon SQS queue)
A message is sent to the topic and it is forwarded to all subscribers
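For illustration, here is a minimal boto3 sketch of that publish/subscribe flow; the topic name, HTTPS endpoint, and queue ARN are all hypothetical:

# Minimal sketch of SNS publish/subscribe with boto3; all names and ARNs are hypothetical.
import boto3

sns = boto3.client("sns")

# Create a topic and attach subscriptions; each subscriber receives every message
topic_arn = sns.create_topic(Name="broadcast-demo")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="https",
              Endpoint="https://instance-1.example.com/events")
sns.subscribe(TopicArn=topic_arn, Protocol="sqs",
              Endpoint="arn:aws:sqs:us-east-1:123456789012:demo-queue")

# A single publish is delivered to every subscription on the topic
sns.publish(TopicArn=topic_arn, Message="hello, everyone")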
See: What is Amazon Simple Notification Service

Related

How can I combine a Django REST backend and an event-driven backend?

I am developing an app that's going to have two capabilities. First of all, it has to be able to ingest a huge load of events (think millions per minute). It's also going to have a typical REST frontend/backend where I'll have the classic authentication flows, some dashboard based on an analysis of the ingested events, etc. For the REST portion of the app, I'm using Django + Postgres and React on Docker. For the event-driven backend, since Django is too slow to handle clients sending me millions of requests, I was thinking of using Kafka + a streaming database like Materialize.
However, I'm still unclear on how I would route requests to these different endpoints. Do I need to write an endpoint in something fast like Golang/Rust to send client payloads to Kafka? Or do clients communicate with Kafka directly? Is something like Nginx the right way to route these requests? I would still need to have some sort of reference to the Postgres DB in order to verify the API Key used in the client request is valid.
be able to ingest a huge load of events (think millions per minute).
Through HTTP, or Kafka? If Kafka, then you can use Kafka Connect to directly write into the database without needing any frontend web server.
Django is too slow to handle clients sending me millions of requests
Based on your own benchmarks? Using how many instances?
Do I need to write an endpoint in something fast like Golang/Rust to send client payloads to Kafka?
Python can send producer requests to Kafka (a minimal sketch follows below). There's nothing preventing you from using Kafka libraries in Django, although nothing in your question seems very specific to Kafka versus other message queues, especially when you've only referenced needing one or two databases.
If you're going to pick a different "service worker" language, you may as well write your HTTP server in that language as well...
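As a concrete illustration, a sketch with the kafka-python package (the broker address and topic name are assumptions, not part of the question):

# Sketch: producing to Kafka from Python, e.g. inside a Django view.
# Broker address and topic name are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def ingest_event(payload: dict) -> None:
    # Fire-and-forget send; batching happens in the background.
    # Call producer.flush() on shutdown to drain pending messages.
    producer.send("events", value=payload)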
Or do clients communicate with Kafka directly?
Web clients? No, at least not without an HTTP proxy.
Is something like Nginx the right way to route these requests?
Unclear what you mean by "requests" here. Kafka requests - no, database requests - probably not, load balancing of HTTP requests - yes.
have some sort of reference to the Postgres DB in order to verify the API Key used in the client request is valid
Okay fine, use a Postgres client in your backend code.
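For instance, a minimal sketch with psycopg2; the connection string and the api_keys table are hypothetical:

# Sketch of an API-key check against Postgres using psycopg2.
# The DSN and the api_keys table/columns are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app user=app host=localhost")

def is_valid_api_key(api_key: str) -> bool:
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM api_keys WHERE key = %s", (api_key,))
        return cur.fetchone() is not None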

Laravel events behind load balancer - how to make the event visible to all the servers in the autoscale group

I have an application running Laravel 6.1. There are clients which connect to it via Laravel WebSockets and listen for events. I have an external service which sends POST requests to this server, which then raises an event that the websocket clients see. I am still in development and it has not been deployed yet; this is what I'm currently researching. I use Docker, so there's an nginx container, a PHP container, and a MySQL container (in production, the containers will use RDS though).
This works fine in development, but the plan is to deploy to ECS with Elastic Beanstalk, as it enables multiple containers per EC2 instance. I was planning on having these instances auto-scale behind a load balancer, so my question is: how can I make the incoming events be raised and visible on all the servers? For example, the POST request may hit one instance, and the clients connected to that instance would see that the event was raised, but the clients connected to another instance would not see the raised event. Is this accurate? I'd imagine the events will have to be sent to some kind of "queue" which is monitored by all instances, but I'm not sure how to implement that with Laravel, or if there's a simpler, faster way.
Based on the comments.
The proposed solution involves using SNS instead of SQS.
The reason is that SNS allows delivery of messages to multiple recipients at the same time. In contrast, SQS is designed to deliver each message to a single recipient, unless it is used in a fan-out architecture.
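A minimal boto3 sketch of that fan-out: one SQS queue per instance, all subscribed to a single SNS topic, so every instance receives every event. All names are hypothetical, and the queue policy that allows SNS to send to the queue is omitted:

# Sketch of SNS -> SQS fan-out; names and ARNs are hypothetical.
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="laravel-events")["TopicArn"]

# Each autoscaled instance owns one queue and subscribes it to the topic
queue_url = sqs.create_queue(QueueName="instance-1-events")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing once delivers a copy of the message to every subscribed queue
sns.publish(TopicArn=topic_arn, Message='{"event": "OrderShipped"}')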

Using SignalR with Azure Table Storage - What architecture?

I have a smart grid system where multiple hardware devices are sending raw sensor data to an Azure Queue. Each device sends a single data packet once every minute. Multiple Worker Roles process the data packets on the queue and push the data to Table Storage. I have a Web Role which holds the application for users to view their device data and a host of other alerts and messages relating to their smart energy system. At the moment the web application just uses ajax polling at one minute intervals to get the latest data updates and any other messages and alerts. Instead of using ajax 'pulling', I'd like to use SignalR instead and 'push' the updates from the cloud when they become available. I'm not sure what the overall architecture might look like.
So far I have added a SignalR Hub to my Web Role, just to see if I could do that. And it works fine. However, how do I trigger updates from this Hub when there are changes in Table Storage? Should I host the Hub with the Worker Roles that process the raw data, and then make a cross-domain SignalR connection from the web app (client)? Can I even associate an endpoint with a Worker Role? If I have many Worker Roles wouldn't I only be able to connect to one of them, and therefore miss data updates from other Worker Roles?
Perhaps I should create a separate Web Role to host the SignalR hub, but then how do I communicate the changes from the Worker Roles that process the raw data to the hub? Maybe I need to include another Azure Queue that takes messages from the Worker Roles regarding data updates, alerts, and any other messaging, and that queue is processed by the SignalR server. However, would this approach be scalable? If I have multiple instances of the SignalR server processing the message queue(s), would they share the same endpoint and be aware of all the client connections across the instances? Or maybe the Worker Roles themselves connect as clients to the SignalR server and the messages are forwarded from there to the clients.
Is SignalR even the right approach to take if data is being generated at a predictable rate of once every minute for each device? Maybe for updates of this regular data ajax 'pulling' is the best approach, and I should just be using SignalR for the infrequent alerts and messages - although, again, how do I communicate these events from the Worker Roles to the SignalR server?
What overall architecture would suit my needs here?
EDIT 06-09-2014 Half the problem solved
I came across http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-windows-azure-service-bus which seems to be exactly what I am after. This deals with the problem of multiple Hub server (Web Role) instances. Now I just need a SignalR client library that can run on the Worker Roles so that they can notify the Hub that new data is available, and the Hub class can then be enhanced to route the new data to the appropriate connected web clients.
EDIT 06-10-2014 A workable solution found
I have added an answer to my question of "What architecture". I thought a quick summary of my setup might be useful. I have many remote devices associated with different users posting real-time data to Azure Queues. The data posted to these queues are parsed and saved to Table Storage, by a number of Worker Roles. Web Roles provide the MVC5 web application for the users (clients) to log on and review their data. I wanted a mechanism by which when new data was posted, any connected clients would receive a real-time notification (and data tables and charts in the client apps could be updated accordingly). SignalR with Service Bus scaleout proved to be the answer.
The first part of the solution I needed was to deploy a SignalR hub that the clients could connect to, to receive any notifications sent. I couldn't use the basic SignalR solution, as the MVC5 web app is hosted on a Web Role that will likely have more than one instance - the problem was how to keep all these instances synced so that whatever instance a client was connected to, they'd still receive the notifications. SignalR scaleout with Azure Service Bus proved to be the answer to that part of the problem. Details of how to set this up can be found at: http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-windows-azure-service-bus - it was VERY easy to set up.
The second part of the problem was how to generate the notifications originating from the Worker Roles (my queue data processors). First I needed to be able to host OWIN in my Worker Roles - the instructions provided at http://www.asp.net/aspnet/overview/owin-and-katana/host-owin-in-an-azure-worker-role were more than sufficient. Once this was done I created an empty Hub instance with the same name as the one deployed on my Web Roles (it was empty because I didn't expect to have any clients connected to it directly), and changed the Startup class to:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Point SignalR at the Service Bus backplane so that messages published
        // from this Worker Role reach the Hub instances running on the Web Roles
        String connectionString = "[Service Bus Connection String]";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "[App Name]");
        app.MapSignalR();
    }
}
With this in place if I want to send a notification out to the clients, from the Worker Roles, I do something like:
// Resolve the hub context and invoke a client-side method on every connection
var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
context.Clients.All.clientMethod("[Message]");
What really happens is that a copy of the message gets pushed to the backplane (Service Bus) and is picked up by the Web Roles and pushed out to the connected clients. In reality I will check who is online (in the Web Role Hub instance I override the OnConnected method to save the user's connection id in their profile which is stored in Table Storage), and only create notifications that are associated with those users to reduce SignalR traffic.

Failover IP when server on DNS supplied IP fails iOS

My iOS app uses a single hard-coded URL, api.xyz.com, to find our REST service. At the moment there are just two servers running this service, and we use Amazon Route 53 DNS. But I've found that the timeout of an hour (or more) is too long in case one of our servers fails; we don't want to leave users in the dark that long.
The alternative would be to implement a failover mechanism in the app. To be honest, I don't like the idea of pulling this low-level DNS-related logic into the app, but I don't see another solution at the moment.
So my question is: How do I implement such a failover mechanism on iOS? I'm using AFNetworking for my REST API.
Or, are there better alternatives on server side? At the moment the servers are individually rented ones, so no Amazon, Google, ... cloud service.
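One common client-side pattern, independent of platform, is to keep an ordered list of base URLs and fall back to the next one on connection failure. A sketch of the idea in Python (the backup host is hypothetical; in the app, the equivalent logic would live in the request's failure handler):

# Platform-agnostic sketch of client-side failover across an ordered host list.
# Host names are hypothetical.
import requests

HOSTS = ["https://api.xyz.com", "https://api-backup.xyz.com"]

def get_with_failover(path: str, timeout: float = 5.0) -> requests.Response:
    last_error = None
    for host in HOSTS:
        try:
            return requests.get(host + path, timeout=timeout)
        except (requests.ConnectionError, requests.Timeout) as err:
            last_error = err  # server unreachable; try the next host
    raise last_error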

Communication between Rails processes

Consider the tic-tac-toe game built with Nginx as a reverse proxy and having multiple Rails backends. Each client sets up a WebSocket connection with one of the Rails backends. If two clients playing a game are each connected to a different Rails backend, then a move sent to one backend needs to be routed to the other backend so it can be pushed over the other websocket, as shown in the picture below.
In Rails what is the idiomatic way to communicate between two Rails backends?
In this situation you should set up a separate WebSocket server and connect both users and Rails servers to it. This way you will be able to handle all users from one server without worrying about sharding.
In case of high traffic you could also set up several WebSocket servers and implement some kind of queue or message bus between them to propagate new messages - for example, a master server that only handles message propagation, and slave servers that connect to it and forward every message received from users. Note that in such a configuration the master server should not handle connections from users; it serves only to propagate messages between the slaves.
Finally, to answer your last question directly: there is usually no need for Rails servers to contact each other directly - unlike WebSocket servers, they operate on a request-response basis, so exchanging information via the database is enough in most cases. If you really need immediate propagation, then solutions like AMQP should help.
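For example, a minimal sketch of AMQP fan-out between backends using RabbitMQ and the Python pika client (the exchange name, broker address, and payload are assumptions; the same pattern is available from Ruby clients such as bunny):

# Sketch: propagating a game move to all backends via an AMQP fanout exchange.
# Exchange name, broker address, and payload are hypothetical.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="moves", exchange_type="fanout")

# Publisher side: the backend that received the move broadcasts it
channel.basic_publish(exchange="moves", routing_key="", body='{"cell": 4}')

# Consumer side: every backend binds its own exclusive queue to the exchange
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="moves", queue=queue)
channel.basic_consume(
    queue=queue,
    on_message_callback=lambda ch, method, props, body: print("move:", body),
    auto_ack=True,
)
# channel.start_consuming()  # blocks; run in each backend's consumer process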
