Docker Swarm - Route a request to ALL containers

Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many Docker containers. The client can send requests to the swarm and have it respond. However, in some cases the client needs to change a state on all server instances, so I would either need to broadcast a message or have all the Docker containers talk to each other (similar to MPI), which I'm trying to avoid.

There is no built-in way to turn a unicast packet into a multicast packet, nor any common third-party way of doing so (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state of all containers in a single service? Or the actual underlying OS? All containers in all services?
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service and then stored in a queue, where a backend worker picks it up and "changes the state on all server instances" for you.
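If what you really need is the broadcast itself, a message broker's pub/sub fan-out is one way to get it. Below is a minimal sketch using NATS; the nats:4222 address, the subject name, and the github.com/nats-io/nats.go client library are all assumptions, not anything from the question.

```go
// Sketch: each container in the service subscribes to a NATS subject;
// a message published once on that subject reaches every container
// (pub/sub fan-out). Server address, subject, and client library are
// assumptions for illustration.
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://nats:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Plain Subscribe (no queue group) means every container gets
	// every message, i.e. a broadcast.
	if _, err := nc.Subscribe("state.updates", func(m *nats.Msg) {
		log.Printf("applying state change: %s", m.Data)
	}); err != nil {
		log.Fatal(err)
	}
	select {} // keep the container running
}
```

The front-facing service that receives the client's request then publishes once with nc.Publish("state.updates", payload), and every subscribed instance applies the change.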

It depends on your specific use case. One way to do it is to run docker service update --force, which causes all of the service's containers to restart. If your containers fetch the changed information at startup, this has the required effect.
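For example (the service name is a placeholder):

```sh
# Restart every task of the service; each container re-reads its
# configuration on startup, picking up the changed state.
docker service update --force my-service
```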

Related

Share data between two docker containers sharing the same network

I have a requirement to build two applications (in Golang). The first application just receives data via UART and sends it to the second application; the second application should receive the data and process it.
I have already completed receiving data via UART in the first application; now I'm looking for a better way to get data from the first module to the second. They both run as Docker containers and share the same Docker network.
I was thinking of creating a REST API in the second application, and the first application would simply send data with an HTTP call, but is there a better way? Any other option that can take advantage of the Docker network?
In general, yes, sockets are what you need: plain TCP/UDP, an HTTP server (RESTful API or not), gRPC, etc. (a minimal HTTP example is sketched below).
Or you can start another container running a message queue (NATS, Kafka, RabbitMQ, etc.) and write pub-sub logic. Or you can use a database.
Or you can mount a shared Docker volume into both containers and communicate via files.
None of these are unique to Golang; they will work with any language.
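For the plain-HTTP option, here is a minimal sketch of the sending side. The container name processor, port 8080, and the /data path are assumptions; on a shared Docker network, the first container can resolve the second by its container name via Docker's embedded DNS.

```go
// First application: forward a UART payload to the second container
// over HTTP. "processor", the port, and the path are hypothetical
// names, resolved by Docker's embedded DNS on the shared network.
package main

import (
	"bytes"
	"log"
	"net/http"
)

func forward(payload []byte) error {
	resp, err := http.Post("http://processor:8080/data",
		"application/octet-stream", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := forward([]byte("sensor reading")); err != nil {
		log.Fatal(err)
	}
}
```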

Google Cloud Run Container Networking

I have a system of apps/services in docker containers that, when I bring them up using docker-compose, talk to each other using a bridge network.
Workers start up and register themselves with a manager. The manager assigns the workers work to do. In order to do this, the workers need to know where the manager is, and the manager needs to know where the workers are.
I want to deploy them all to Google Cloud Run.
At the moment, in Docker via docker-compose, they talk to each other using their container names. For example, the worker might call http://manager:5000/register?name=worker1&port=5000 to register on startup, and then the manager can call http://worker1:5000 to send work, all thanks to the fact that they're connected to the same bridge network.
How does this work with Google Cloud Run? As far as I can see, when you create a service linked with a container, you get a permanent URL to communicate with your app once it has started. The app in the container doesn't know what the URL is.
Can I use the service names to communicate with each other in the same way as a docker bridge network?
Cloud Run currently does not support hostname-based service discovery for other services in your project.
For now, your best bet is to configure the service URLs your app depends on through environment variables or something like that.
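For example, the worker's URL could be injected into the manager at deploy time. The service names, image path, and URL below are placeholders; --set-env-vars is the relevant gcloud run deploy flag:

```sh
gcloud run deploy manager \
  --image gcr.io/my-project/manager \
  --set-env-vars WORKER_URL=https://worker-abc123.run.app
```

The manager then reads WORKER_URL from its environment instead of relying on hostname discovery.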
In fact, you can't orchestrate the workers in the same way. Cloud Run services only respond to HTTP requests; when an instance is spawned, there is no registration with a manager.
If you want to perform several tasks in parallel, perform several HTTP requests.
If you want strong isolation between the different instances of the same service, set the concurrency parameter to 1 (only one HTTP request is processed at a time by a given instance of the service).
For information, you can have up to 100 instances of the same service.
So, deploy a manager service and a worker service, and have the manager make HTTP requests to the workers with the right parameters for the job at hand.
Take care with job duration: for now, the timeout can be set to 900 seconds (15 minutes) at most.
About naming, the pattern is: https://<service-name>-<project-hash>.run.app/
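Putting those pieces together, here is a sketch of the manager side: it fans work out as parallel HTTP requests to the worker service, and with concurrency set to 1 each in-flight request is handled by its own worker instance. WORKER_URL and the /work endpoint are assumptions for illustration.

```go
// Sketch of a Cloud Run "manager" dispatching parallel jobs to a
// "worker" service via HTTP. Cloud Run scales worker instances up
// to match the number of concurrent requests.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"
)

func main() {
	workerURL := os.Getenv("WORKER_URL") // e.g. https://worker-abc123.run.app
	if workerURL == "" {
		log.Fatal("WORKER_URL not set")
	}

	var wg sync.WaitGroup
	for job := 0; job < 5; job++ {
		wg.Add(1)
		go func(job int) {
			defer wg.Done()
			resp, err := http.Get(fmt.Sprintf("%s/work?job=%d", workerURL, job))
			if err != nil {
				log.Printf("job %d failed: %v", job, err)
				return
			}
			resp.Body.Close()
			log.Printf("job %d: %s", job, resp.Status)
		}(job)
	}
	wg.Wait()
}
```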

On-prem docker swarm deployment with HA

I’m doing on-prem deployments using Docker Swarm, and I need application and DB high availability.
As far as application HA is concerned, it works great within Docker (service discovery and load balancing), but I’m not sure how to use it on my network. I mean, how can I assign a virtual IP to all of my Docker managers so that if any of them goes down, that virtual IP automatically points to another Docker manager in the cluster? I don’t want a single point of failure in my architecture, which is why I’m not inclined to use any (single) reverse-proxy solution in front of my swarm cluster (because, to my understanding, if nginx/HAProxy goes down, the whole system goes into the abyss. I would love to learn that I’m wrong).
Secondly, I use WebSockets in my application for push notifications, which doesn’t behave normally with all the load-balancing stuff because socket handshakes get disrupted.
I want a solution to these problems without writing anything HA-specific and non-generic (like hardcoded IPs) into the code. Any suggestions? I hope I explained my problem correctly.
Docker Flow Proxy or Traefik can be placed on the set of swarm nodes you want to receive traffic for incoming connections, using DNS routing to get packets to the correct containers. Both have a sticky-sessions option (I know Docker Flow Proxy does; I'm not sure about Traefik).
Then you can do one of the following:
1. If your incoming connections are just client HTTP/S requests, use DNS round robin with multiple A records, which works great.
2. Buy an expensive fault-tolerant hardware reverse proxy like an F5.
3. Use some network-layer IP failover at the OS and physical-network level (not really a Docker feature), though I'm not sure how well that would work with Swarm; see the keepalived sketch below.
Option 2 is the typical solution in private datacenters that need full HA at all layers.
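One common tool for option 3 (not named in the answer, so take it as a suggestion) is keepalived, which uses VRRP to float a virtual IP across the manager nodes: clients always connect to the VIP, and it fails over automatically if the node holding it dies. A minimal sketch of /etc/keepalived/keepalived.conf on each manager; the interface name and addresses are assumptions:

```
vrrp_instance swarm_vip {
    state BACKUP              # let VRRP elect a master among the managers
    interface eth0            # NIC that should carry the VIP (assumption)
    virtual_router_id 51
    priority 100              # give each manager a different priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.250/24      # the virtual IP your clients connect to
    }
}
```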

Is it possible to do webserver affinity in a Docker Swarm?

I have a Docker container that is a REST API webserver. I want to use this webserver in a Docker Swarm. A couple of the REST API calls are used in an asynchronous pattern: the first call provides data for processing and returns a request identifier; the second call uses the request identifier to check on the processing and get the results when processing is done. Since there is no connection between any of the webservers in the Docker Swarm, how can I force the second REST API call back to the Docker instance that was used for the first call? Is there any way to ensure webserver affinity for these two REST API calls in a Docker Swarm?
With Docker Swarm Mode and Ingress networking, connections are processed with round robin load balancing, and this isn't configurable. If the connection remains open, which is the case for most web browsers, you'll find that requests go back to the same instance.
You can use a reverse proxy in front of your application that is aware of each instance of the service. Docker has this with their HRM tool in the EE offering, and many of the other reverse proxies, like Traefik, offer similar sticky-session options.
If you can, a better design would be to utilize an external cache for any persistence, e.g. redis. This way you can perform a rolling update of your application without breaking all the sessions.
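Here is a sketch of that cache-based design, assuming a redis service on the overlay network and the github.com/redis/go-redis/v9 client (both assumptions): the replica that accepts the job writes its state to Redis under the request ID, so any replica can answer the status call and affinity is no longer needed.

```go
// Persist async-job state in Redis so any swarm replica can answer
// the status call. Service name "redis" and the go-redis client are
// assumptions for illustration.
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "redis:6379"})

// startJob is called by whichever replica accepted the processing request.
func startJob(ctx context.Context, requestID string) error {
	return rdb.Set(ctx, "job:"+requestID, "processing", time.Hour).Err()
}

// jobStatus can be served by any replica, since state lives in Redis.
func jobStatus(ctx context.Context, requestID string) (string, error) {
	return rdb.Get(ctx, "job:"+requestID).Result()
}

func main() {
	ctx := context.Background()
	if err := startJob(ctx, "req-42"); err != nil {
		log.Fatal(err)
	}
	status, err := jobStatus(ctx, "req-42")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("status:", status)
}
```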

How to implement failover with MQTT brokers?

I'm implementing a few devices and services with a fair bit of data running around my local network, with all publishers and most subscribers inside the firewall. Because it's easy, I'm starting with a CloudMQTT subscription. Ideally, I'd like that to remain the primary (to serve a couple of external clients), but if the internet goes down, I'd like an internal server to act as a hot backup, handling all publishing and, for internal clients, all subscriptions. I'm not sure if bridging can help?
Is there any way to implement this? In my head, it's something like the way DNS works: you can have a local server, and to the extent it knows your answer, it can serve it, but it has somewhere to go for further answers.
Run a broker on each site and bridge it to the CloudMQTT instance. This way, things can always communicate locally even if the internet is down.
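For example, with Mosquitto as the local broker (the broker choice, host, and credentials here are placeholders, not from the answer), the bridge is a few lines in mosquitto.conf:

```
# mosquitto.conf bridge section (host and credentials are placeholders)
connection cloudmqtt
address your-instance.cloudmqtt.com:1883
remote_username your-user
remote_password your-password
# mirror all topics in both directions at QoS 0
topic # both 0
cleansession false
try_private true
```

Local clients keep talking to the local broker either way; the bridge simply syncs with CloudMQTT whenever the internet link is up.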
