I have a requirement to build two applications (in Golang): the first application just receives data via UART and sends it to the second application for processing; the second application should receive the data and process it.
I have already completed receiving data via UART in the first application; now I'm looking for a better way to get data from the first module to the second. They both run as Docker containers and share the same Docker network.
I was thinking of creating a REST API in the second application so that the first application can simply send data with an HTTP call, but is there a better way to do this? Is there any other option that can take advantage of the Docker network?
In general, yes, sockets are what you need: plain TCP/UDP, an HTTP server (RESTful API or not), gRPC, etc.
Or you can start another container running a message queue (NATS, Kafka, RabbitMQ, etc.) and write pub-sub logic. Or you can use a database.
Or you can mount a shared Docker volume between both containers and communicate via files.
None of these approaches is unique to Golang; they will work with any language.
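As a minimal sketch of the plain-TCP option (assuming the second container is reachable on the Docker network under the hypothetical service name app2, listening on port 9000), the first application can push each UART frame over a single connection and reconnect on failure:

```go
package main

import (
	"bufio"
	"log"
	"net"
	"time"
)

func main() {
	frames := make(chan []byte)
	// In the real application this channel would be fed by the UART reader;
	// here a dummy frame is emitted every second just for illustration.
	go func() {
		for {
			frames <- []byte("sensor-reading")
			time.Sleep(time.Second)
		}
	}()

	for {
		// "app2:9000" is a hypothetical service name and port from the
		// compose file; Docker's embedded DNS resolves it on the shared network.
		conn, err := net.Dial("tcp", "app2:9000")
		if err != nil {
			log.Printf("dial app2: %v (retrying)", err)
			time.Sleep(2 * time.Second)
			continue
		}
		w := bufio.NewWriter(conn)
		for frame := range frames {
			if _, err := w.Write(append(frame, '\n')); err != nil {
				log.Printf("write: %v (reconnecting)", err)
				break
			}
			w.Flush()
		}
		conn.Close()
	}
}
```

The receiving side in the second application is just a net.Listen(":9000") accept loop that scans newline-delimited frames. An HTTP or gRPC endpoint works the same way; the key point is that Docker's embedded DNS lets containers on the same network address each other by service name.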
I have been assigned a problem statement which goes as follows:
I am building a platform-as-a-service from scratch which has pipelined execution. Here, pipelined execution means that the output of one service can be the input of another service. The platform can offer a number of services, which can be pipelined together. The output of a service can be the input to multiple services.
I am new to this field, so how to go about this task is not very intuitive to me.
After researching a bit, I found out that I can use Docker to deploy services in containers. So I installed Docker on Ubuntu, installed a few images, and ran them as services (for example, MongoDB). What I am thinking is that I need to run the services in containers and define a way of passing input to and output from these services. But how exactly do I do this using Docker containers? As an example, I want to send a query as input to MongoDB (running as a service) and get an output, which I then want to feed into another service.
Am I thinking in the right direction? If not, in what direction should I be thinking of going about implementing this task?
Is there a standard way of exchanging data between services? (For example, the output of one service as the input to another.)
Is there something that Docker offers that I can leverage?
NOTE: I cannot use any high-level API which does this for me. I need to implement it myself.
I have a system of apps/services in docker containers that, when I bring them up using docker-compose, talk to each other using a bridge network.
Workers start up and register themselves with a manager. The manager assigns the workers work to do. In order to do this, the workers need to know where the manager is, and the manager needs to know where the workers are.
I want to deploy them all to Google Cloud Run.
At the moment, in docker via docker-compose, they talk to each other using their container names. For example the worker might call: http://manager:5000/register?name=worker1&port=5000 to register on startup, and then the manager can call http://worker1:5000 to send work. All thanks to the fact that they're connected to the same bridge network.
How does this work with Google Cloud Run? As far as I can see, when you create a service linked with a container, you get a permanent URL to communicate with your app once it has started. The app in the container doesn't know what the URL is.
Can I use the service names to communicate with each other in the same way as a docker bridge network?
Cloud Run currently does not support hostname-based service discovery for other services in your project.
Currently, your best bet is to configure service URLs that your app depends on using environment variables or something like that.
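For example (the variable and service names below are hypothetical), you could set the manager's URL as an environment variable when deploying the worker and read it at startup:

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// MANAGER_URL is a made-up variable, injected at deploy time, e.g.:
	//   gcloud run deploy worker --set-env-vars MANAGER_URL=https://manager-<project-hash>.run.app
	managerURL := os.Getenv("MANAGER_URL")
	if managerURL == "" {
		log.Fatal("MANAGER_URL is not set")
	}

	// Same registration call as before, just with a configured URL instead
	// of a Docker DNS name.
	resp, err := http.Get(managerURL + "/register?name=worker1&port=5000")
	if err != nil {
		log.Fatalf("registering with manager: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("registered, status %s", resp.Status)
}
```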
In fact, you can't orchestrate the workers in the same way. Cloud Run services reply to HTTP requests; when an instance is spawned, there is no registration with a manager.
If you want to perform several tasks in parallel, perform several HTTP requests.
If you want strong isolation between the different instances of the same service, set the concurrency parameter to 1 (only one HTTP request is processed at a time by an instance of the service).
For information, you can have up to 100 instances of the same service.
So, deploy a manager service and a worker service. The manager service performs HTTP requests to the worker with the right parameters for doing the right job.
Take care of the job duration: for now, the timeout can be set to 900 seconds (15 min) at most.
About the naming, the pattern is the following: https://<service-name>-<project-hash>.run.app/
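A rough sketch of that manager side (the worker URL, route, and job payload are all made up for illustration): read the worker's Cloud Run URL from configuration and send one HTTP request per job:

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"sync"
)

type job struct {
	ID      int    `json:"id"`
	Payload string `json:"payload"`
}

func main() {
	// WORKER_URL would look like https://worker-<project-hash>.run.app and be
	// configured as an environment variable on the manager service.
	workerURL := os.Getenv("WORKER_URL")

	jobs := []job{{1, "part-a"}, {2, "part-b"}, {3, "part-c"}}
	var wg sync.WaitGroup
	for _, j := range jobs {
		wg.Add(1)
		go func(j job) {
			defer wg.Done()
			body, _ := json.Marshal(j)
			// One request per job; Cloud Run spins up worker instances as needed.
			resp, err := http.Post(workerURL+"/work", "application/json", bytes.NewReader(body))
			if err != nil {
				log.Printf("job %d: %v", j.ID, err)
				return
			}
			resp.Body.Close()
			log.Printf("job %d: %s", j.ID, resp.Status)
		}(j)
	}
	wg.Wait()
}
```

Each request can land on a different worker instance, which Cloud Run scales automatically; there is no registration step at all.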
I have a Docker container that is a REST API webserver. I want to use this webserver in a Docker Swarm. A couple of the REST API calls are used in an asynchronous pattern. That is, the first call provides data for processing and is returned a request identifier. The second call uses the request identifier to check on the processing and get the results when processing is done. Since there is no connection between any of the webservers in the Docker Swarm, how can I force the second REST API call back to the Docker instance that was used in the first REST API call? Is there any way to ensure webserver affinity for these two REST API calls in a Docker Swarm?
With Docker Swarm Mode and ingress networking, connections are processed with round-robin load balancing, and this isn't configurable. If the connection remains open, which is the case for most web browsers, you'll find that requests go back to the same instance.
You can use a reverse proxy in front of your application that is aware of each instance of the service. Docker has this with their HRM tool in the EE offering, and many of the other reverse proxies, like traefik, offer similar sticky session options.
If you can, a better design would be to utilize an external cache for any persistence, e.g. redis. This way you can perform a rolling update of your application without breaking all the sessions.
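A hedged sketch of that design using the go-redis client (the key layout and routes are invented for this example): whichever replica handles the status request can look the state up in Redis, so instance affinity stops mattering:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/redis/go-redis/v9"
)

// "redis" is a hypothetical service name for a Redis container reachable on
// the same overlay network; the "status:" key prefix is made up.
var rdb = redis.NewClient(&redis.Options{Addr: "redis:6379"})

func submit(w http.ResponseWriter, r *http.Request) {
	id := fmt.Sprintf("req-%d", time.Now().UnixNano())
	// Record the request as pending; any replica can read this key later.
	rdb.Set(r.Context(), "status:"+id, "pending", time.Hour)
	go process(id) // stand-in for the real asynchronous work
	fmt.Fprint(w, id)
}

func process(id string) {
	time.Sleep(5 * time.Second) // pretend to do the work
	rdb.Set(context.Background(), "status:"+id, "done", time.Hour)
}

func status(w http.ResponseWriter, r *http.Request) {
	val, err := rdb.Get(r.Context(), "status:"+r.URL.Query().Get("id")).Result()
	if err == redis.Nil {
		http.Error(w, "unknown request id", http.StatusNotFound)
		return
	} else if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, val)
}

func main() {
	http.HandleFunc("/submit", submit)
	http.HandleFunc("/status", status)
	http.ListenAndServe(":8080", nil)
}
```

With a rolling update, new replicas see the same keys, so in-flight requests survive a restart of the application containers.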
Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many docker containers. The client can send requests to the swarm and have it respond. However, in some cases, the client needs to change a state on all server instances and therefore I would either need to be able to broadcast a message or have all the Docker containers talk to each other similar to MPI, which I'm trying to avoid.
There is no built-in way to turn a unicast packet into a multicast packet, nor any common third-party way of doing so (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state on all containers in a single service?
Or the actual underlying OS? All containers on all services? etc.
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service, and then it's stored in a queue system where a backend worker would pick it up and "change the state on all server instances" for you.
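For illustration only, one concrete variation of that idea is a pub/sub broker where every replica subscribes, so the client's request still hits a single replica but the state change fans out to all of them. The sketch below assumes a NATS container named nats on the same overlay network; the subject name and route are made up:

```go
package main

import (
	"log"
	"net/http"

	"github.com/nats-io/nats.go"
)

func main() {
	// "nats" is a hypothetical service name; "state.update" is a made-up subject.
	nc, err := nats.Connect("nats://nats:4222")
	if err != nil {
		log.Fatal(err)
	}

	// Every replica of this service subscribes, so one publish reaches all of them.
	if _, err := nc.Subscribe("state.update", func(m *nats.Msg) {
		log.Printf("applying new state: %s", m.Data)
		// ... update this instance's in-memory state here ...
	}); err != nil {
		log.Fatal(err)
	}

	// Whichever replica receives the client's request publishes the change.
	http.HandleFunc("/set-state", func(w http.ResponseWriter, r *http.Request) {
		if err := nc.Publish("state.update", []byte(r.URL.Query().Get("value"))); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```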
It depends on your specific use case. One way to do it is to run docker service update --force, which will cause all containers to restart. If your containers fetch the changed information at startup, this would have the required effect.
I want to create containers which start small web services. Developers on our team should then upload small images which contain different services. A main backend system then uses these services.
My problem is: when a developer uploads a new service, how does the backend service know there is a new service it can use? Before, when service X was supposed to be used and there was no service for that functionality, it just returned a simple message. When there is a service uploaded to do X, the main backend should use that service. But how does it know it is there and should be used?
You can add some notification mechanism to your small web services. But what do you do when a service goes down unexpectedly, or the network is down for a short time? You need to add some logic to the clients for re-establishing the connection.
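One way that notification idea could look (every name, route, and payload here is made up): new services POST to a registry endpoint on the backend when they start, and the backend falls back to its simple message while nothing is registered:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

// registry maps a capability name (e.g. "X") to the URL of the service that
// provides it. Names and routes here are invented for the sketch.
var (
	mu       sync.RWMutex
	registry = map[string]string{}
)

// Newly uploaded services POST here on startup, e.g.
//   POST /register {"name": "X", "url": "http://service-x:8080"}
func register(w http.ResponseWriter, r *http.Request) {
	var body struct{ Name, URL string }
	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	mu.Lock()
	registry[body.Name] = body.URL
	mu.Unlock()
}

// The backend checks the registry before each call and falls back to the
// simple message when nothing has been registered for that capability.
func handleX(w http.ResponseWriter, r *http.Request) {
	mu.RLock()
	url, ok := registry["X"]
	mu.RUnlock()
	if !ok {
		fmt.Fprint(w, "service X is not available yet")
		return
	}
	resp, err := http.Get(url + "/do-x")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	fmt.Fprintf(w, "service X answered with status %s", resp.Status)
}

func main() {
	http.HandleFunc("/register", register)
	http.HandleFunc("/x", handleX)
	http.ListenAndServe(":8080", nil)
}
```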
And, for the reconnection part, Docker recommends this approach:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.

To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.

The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
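A minimal retry loop in Go following that advice (the driver, DSN, and retry interval are placeholder choices):

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // hypothetical choice of Postgres driver
)

// connectWithRetry keeps trying until the database answers a ping,
// both at startup and after any lost connection.
func connectWithRetry(dsn string) *sql.DB {
	for {
		db, err := sql.Open("postgres", dsn)
		if err == nil {
			if err = db.Ping(); err == nil {
				return db
			}
			db.Close()
		}
		log.Printf("database not ready: %v, retrying in 2s", err)
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// "db" would be the database container's name on the Docker network.
	db := connectWithRetry("postgres://user:pass@db:5432/app?sslmode=disable")
	defer db.Close()
	log.Println("connected")
}
```

Exponential backoff or a retry cap can be layered on top; the important part is that the application, not the orchestrator, owns the reconnection logic.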