Using a load balancer to dispatch messages from Redis pub/sub with Docker

I have several Python applications that all connect to a Redis server and consume messages using the pub/sub mechanism. I have containerized the applications with Docker, and I would like to scale each application by replicating the number of container instances. The challenge is that I don't want each container to act as an independent subscriber to Redis; I would essentially like to load balance the network traffic so that, when a message is published, only one container per service receives it.
Let's take the simple example of two services, Service A and Service B. Both services need to be subscribed to the same topic so that each is notified when a message is published to that topic. Each service will process the message differently; in other words, the same message will trigger two different outcomes, one executed by Service A and one by Service B. Now, imagine an architecture in which these services consist of replicated containers, call them workers. Say Service A consists of two workers, A1 and A2, and Service B consists of three workers, B1, B2, and B3 (perhaps it requires more processing power per message than Service A, so it needs more workers for the same message load). My use case requires that both Service A and Service B subscribe to the same topic so that both receive updates as they come in, but only one worker per service should handle each message. For example, when a message comes in, worker A1 handles it for Service A while B3 handles it for Service B.
Overall this feels like it should be pretty straightforward: I essentially have multiple applications, each of which needs to scale horizontally and should handle network traffic as if it were sitting behind a load balancer.
I intend to deploy these applications with something like Amazon ECS, where each application is essentially a service with task replication, and all services connect to a centralized Redis cache acting as a message broker. In a situation like this, from the limited research I've done, it would be nice to just put a network load balancer in front of each service, so that published messages would be directed to what looks like a single subscriber but is, behind the scenes, a collection of workers acting like they're pulling off a task queue.
I haven’t had much luck finding examples of this kind of architecture, or for that matter any examples of tasks that use something like Redis in the way I’m imagining. This is an architecture I’ve more or less dreamed up, so I could just be thinking about this all wrong, but at the same time it doesn’t seem like a crazy use case to me. I’m looking for any advice about how this could be accomplished and/or if what I’m talking about just sounds insane and there’s a better way.
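To pin down the behavior I'm after, here is a minimal sketch using Redis Streams consumer groups, which (from what I've read) deliver each message to exactly one consumer per group; the stream, group, and worker names are just placeholders:

```python
# Minimal sketch: one consumer group per service, so every service sees
# every message, but only one worker per service handles it.
# Assumes the redis-py package; all names are placeholders.
import redis

r = redis.Redis(host="localhost", port=6379)
STREAM = "events"

def ensure_group(group):
    # Create the consumer group if it doesn't exist yet.
    try:
        r.xgroup_create(STREAM, group, id="$", mkstream=True)
    except redis.ResponseError as e:
        if "BUSYGROUP" not in str(e):
            raise

def worker(group, consumer):
    ensure_group(group)
    while True:
        # ">" asks for messages never delivered to this group before;
        # each message goes to exactly ONE consumer in the group.
        for _stream, messages in r.xreadgroup(
                group, consumer, {STREAM: ">"}, count=1, block=5000):
            for msg_id, fields in messages:
                print(f"{consumer} handled {fields}")
                r.xack(STREAM, group, msg_id)

# Publisher side: r.xadd(STREAM, {"payload": "..."})
# Run worker("service-a", "A1"), worker("service-a", "A2"),
# worker("service-b", "B1"), etc., one per container.
```

With one group per service, scaling a service would just mean adding more consumers to its group, which is the "workers behind a load balancer" behavior described above.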

Related

Calling specific instances of a docker service

Not exactly sure how to ask this question, or if this is a valid approach. I am learning all about Docker, containers, etc. From what I have read, it is great for creating individual microservices that perform various tasks, such as BasketService, CartService, etc., each of which can live in its own Docker container on a VM. If hosted on a Linux VM, I think the URL calls from my UI would be something along the lines of https://MyLinuxVM/BasketService/{controller}.
My Question:
Now let's say I have only one service. We will call it MyService, and it needs to have multiple instances, e.g. four: MyService1, MyService2, MyService3, MyService4, all exactly the same. From my client, would the following assumption be correct?
I can call https://MyLinuxVM/MyService1/{controller} or https://MyLinuxVM/MyService2/{controller} to send to a specific container instance?
Why:
I feel this may help explain why I am doing this, and possibly help everyone understand my problem in the first place. I have 4 physical devices I need to communicate with. We will call them Device1, Device2, Device3, Device4. Each device has its own IP address and its own set of "Tools" connected to it on various ports (10-20 ports per device).
From our UI, the users can click a button that sets some torque values for the tool in their hand by sending the data to the MVC backend, which forwards it to the "correct" background worker/container, which then transforms the data into a byte[] and passes it along to its dedicated device. I am not sure if I need multiple background workers in a single container, or just a single configurable container with a single background worker that gets deployed multiple times depending on the number of devices we have running in the shop.
I have read a lot of things on creating different worker services that do different tasks, but I need multiple instances of a worker service that can be configured (preferably from db tables) to send to a specific device.
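In case it helps, here is a hypothetical sketch of the "single configurable container deployed multiple times" option: each deployed instance is told which device it owns via an environment variable and looks up the connection details from a db table (the schema and all names are made up):

```python
# Hypothetical sketch of one configurable worker image. Each deployed
# instance owns exactly one device, selected by the DEVICE_ID
# environment variable; connection details come from a db table.
import os
import socket
import sqlite3

DEVICE_ID = os.environ["DEVICE_ID"]  # e.g. "Device3", set per container

def device_address(device_id):
    # Illustrative schema: devices(id TEXT, ip TEXT, port INTEGER)
    conn = sqlite3.connect("config.db")
    row = conn.execute(
        "SELECT ip, port FROM devices WHERE id = ?", (device_id,)
    ).fetchone()
    conn.close()
    return row  # (ip, port)

def send_torque(payload: bytes):
    # Forward the byte[] transformed from the UI data to the device.
    ip, port = device_address(DEVICE_ID)
    with socket.create_connection((ip, port), timeout=5) as sock:
        sock.sendall(payload)
```

The MVC backend would then route a request to the right worker by device id, rather than by knowing which container is which.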

Notify backend that container has been deployed

I want to create containers which start small web services. Developers on our team should then upload small images which contain different services. A main backend system then uses these services.
My problem is: when a developer uploads a new service, how does the backend service know there is a new service it can use? Previously, when service X was requested and there was no service for that functionality, the backend just returned a simple message. Once a service that does X has been uploaded, the main backend should use it. But how does the backend know the service is there and should be used?
You can add some notification mechanism to your small web service. But what do you do when the service goes down unexpectedly, or the network drops out for a short time? You need to add logic to the clients for refreshing the connection. Docker recommends this approach as well:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
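A minimal sketch of that advice in application code, with arbitrary backoff parameters and a placeholder connect function:

```python
# Minimal retry sketch: re-establish the connection at startup and after
# any failure, with exponential backoff. Parameters are arbitrary.
import time

def connect_with_retry(connect, attempts=10, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return connect()  # e.g. open a DB or service connection
        except OSError:
            time.sleep(base_delay * 2 ** attempt)  # back off, then retry
    raise RuntimeError("dependency still unavailable after retries")
```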

How to use Kubernetes for a multiplayer online game with websockets?

If I develop an online real-time game with websockets, with players served by different containers, how do I sync data when containers are added or removed while people are playing?
Does Kubernetes have any good features for this case?
ThatBrianDude already gave an awesome answer, and mine will not be that good. But I think your last comment gave us more hints about the architecture you have in mind. I hope my humble answer will shed light on more ideas for your game. Here are some suggestions:
First, avoid keeping any state in the websocket apps.
The basic idea with containers is that they should be stateless.
— ThatBrianDude
So why not use caches and a messaging layer to help you with that? Imagine the following situations:
Situation 1: if the client sends an action to the websocket server, the server should put it in a queue/topic (some other service will process it later on).
Situation 2: The server might also listen to a(some) topic(s) for some types of messages, and send them back to the clients that need that information.
Situation 3: when the client asks for information or if the websocket server needs some information to send to the client, the server must read it from a cache, as reading from DB might be slow for a multiplayer game.
Situation 4: eventually a container is killed. The clients connected to that server will receive a connection error, and should reconnect. That means another handshake, and the player might feel it, depending on what the game was doing, so killing a container should not happen that often. But that would be just it, no information is lost.
This way, the websocket server containers are totally stateless, and the messaging topics and caches will help you to provide all the information the containers need, and to keep websockets, persistence, and processing isolated and scalable.
Summing up, the information would flow like this:
clients are showering the websocket server containers with actions
websocket servers just send them to the messaging layer
processing containers (which can be scaled too!) receive those messages, process them, save the results to the database and/or to a cache, and eventually send more messages to other topics
(optional) websocket servers receive those messages and send them to the clients.
Or like this:
clients ask for information or websocket servers periodically need to send the world state to clients
websocket servers look up the information in the cache
and send it to the clients.
Or even like this:
Some processing servers are independent of messages, they just read the game/world state (from the cache?) periodically
they process the physics and mechanics of the game
and save the result back to the cache, which the websocket servers will periodically send to the clients, or publish the result to a topic so the websocket servers can listen to it and send it to the clients.
Lastly, don't forget the suggestion to have one machine responsible for one game/world. It would be nice if each processing server (or each thread of a server) works with one game/world. That would make it easier to persist things without the need to sync stuff.
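Summing up Situation 1 in code, a websocket server container could be as small as this minimal sketch (assuming recent versions of the third-party websockets and redis packages; the queue name is a placeholder):

```python
# Minimal sketch of Situation 1: a stateless websocket container that
# only forwards client actions to the messaging layer. Assumes recent
# "websockets" and "redis" packages; "game:actions" is a placeholder.
import asyncio

import redis.asyncio as redis
import websockets

r = redis.Redis()

async def handler(ws):
    # No game state lives here, so this container can be killed,
    # replaced, or scaled out at any time (Situation 4).
    async for action in ws:
        await r.rpush("game:actions", action)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.get_running_loop().create_future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

Everything else (processing, persistence, world state) lives behind the queues and the cache, as described above.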
The basic idea with containers is that they should be stateless.
This means that any persistent data your game might have (high scores, etc.) must be saved to a persistent DB, whereas other temporary data, like the current in-game score or nickname, can stay in the memory of the container and be gone once the container dies.
how do I sync data when containers are added or removed while they are playing?
This sounds like you want to use multiple containers to compute one game world?
That's a whole other beast on its own, but you might want to take a look at SpatialOS, which pretty much allows for massive multiplayer worlds and is designed for games that require more than one machine per world.
If that's not what you are looking for, I would recommend keeping one machine responsible for one game/world, as you will avoid a lot of complexity when you try to sync stuff later on.

Which approach is better for discovering container readiness?

This question is discussed many times but I'd like to hear some best practices and real-world examples of using each of the approaches below:
Designing containers which are able to check the health of dependent services. A simple script like wait-for-it can be useful when developing this kind of container, but it isn't suitable for more complex deployments. For instance, the database could accept connections while the migrations haven't been applied yet.
Making the container able to post its own status to Consul/etcd. All dependent services then poll an endpoint that contains the status of the needed service. Looks nice, but seems redundant, doesn't it?
Managing the startup order of containers with an external scheduler.
Which of the approaches above is preferable with or without an orchestrator like Swarm or Kubernetes in the delivery process?
I can take a stab at the kubernetes perspective on those.
Designing containers which are able to check the health of dependent services. A simple script like wait-for-it can be useful when developing this kind of container, but it isn't suitable for more complex deployments. For instance, the database could accept connections while the migrations haven't been applied yet.
This sounds like you want to differentiate between liveness and readiness. Kubernetes provides both types of probes, which you can use to check health and to wait before serving any traffic.
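As an illustration, the endpoint behind such a readiness probe can check more than plain connectivity; this hypothetical sketch only reports ready once the database accepts connections and a migrations table is populated (names and schema are made up):

```python
# Hypothetical readiness endpoint for a readiness probe to hit: 200 only
# once the DB accepts connections AND migrations have been applied.
from http.server import BaseHTTPRequestHandler, HTTPServer
import sqlite3

DB_PATH = "app.db"  # placeholder for the real database

def ready() -> bool:
    try:
        conn = sqlite3.connect(DB_PATH)
        # Accepting connections is not enough; also check migrations.
        count = conn.execute(
            "SELECT COUNT(*) FROM schema_migrations").fetchone()[0]
        conn.close()
        return count > 0
    except sqlite3.Error:
        return False

class Probe(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if ready() else 503)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Probe).serve_forever()
```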
Making the container able to post its own status to Consul/etcd. All dependent services then poll an endpoint that contains the status of the needed service. Looks nice, but seems redundant, doesn't it?
I agree. Having to maintain state separately is not preferred. However, in cases where it is absolutely necessary, if you really want to store the state of a resource, it is possible to use a third party resource.
Managing the startup order of containers with an external scheduler.
This seems mostly tangential to the discussion. However, Pet Sets, soon to be replaced by Stateful Sets in Kubernetes v1.5, give you a deterministic order of initialization for pods. For containers in a single pod, there are init containers, which run serially and in order before the main container runs.

Is it necessary to use as few queues as possible? And solutions for web messaging

I read in a forum that when implementing any application using AMQP it is necessary to use as few queues as possible. So would I be completely wrong to assume that if I were cloning Twitter I would have a unique, durable queue for each user signing up? It just seems the most natural approach, and if I shouldn't assign a unique queue to each user, how would one design something like that?
What is the most common approach for web messaging? I see RabbitHub and Rabbit WebHooks, but WebHooks doesn't seem like a scalable solution. I am working with Rails, and my AMQP server is running as a daemon.
In RabbitMQ, queues are quite cheap. They're effectively lightweight Erlang processes, and you can run tens to hundreds of thousands of queues on a single commodity machine (e.g. my laptop). Of course, each will consume a bit of RAM, but queues that haven't been used recently will hibernate, so they'll consume as little memory as possible. In addition, if Rabbit runs low on memory for messages, it will page old messages to disk.
The above only applies to a single machine. RabbitMQ supports a form of lightweight clustering: when you join several Rabbit nodes into a cluster, each can see the queues and exchanges on the other nodes, but each runs only its own queues. So you'll be able to have even more queues! (Up to the limit of Erlang clusters, which is usually a few hundred nodes.) A cluster forms a logical broker distributed over several machines; clients connect to it and use it transparently through any of the nodes.
That said, having a durable queue for each user seems a bit strange: in AMQP, you cannot browse messages while they're on the queue; you can only get/consume messages, which takes them off the queue, and publish, which adds them to the end of the queue. So you can use AMQP as a message router, but you can't use it as a sort of message database.
Here is a thread that just talks about that: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-February/003041.html
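To illustrate the "message router, not message database" point, here is a minimal sketch with the pika client (the per-user queue name is a placeholder): once a consumer gets a message, it is gone from the queue, so the queue cannot double as message history.

```python
# Minimal sketch with the third-party pika client; "user.42" is a
# placeholder per-user queue name.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Durable queues are cheap, so one per user is feasible in itself.
ch.queue_declare(queue="user.42", durable=True)
ch.basic_publish(exchange="", routing_key="user.42", body=b"tweet 1")

# Getting a message removes it from the queue; there is no "browse".
method, props, body = ch.basic_get(queue="user.42", auto_ack=True)
print(body)                                           # b'tweet 1'
print(ch.basic_get(queue="user.42", auto_ack=True))   # (None, None, None)

conn.close()
```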
