Examples:
Moment 1:
Docker runs container A, listening on 32781 (published host port) -> 8000 (service port).
The Consul health check passes via a TCP connection (10s interval).
Moment 2:
Docker restarts container A and starts container B at almost the same time (less than 10s apart).
Now port 32781 belongs to container B (the port was reused), and the restarted container A gets a different port.
But on the next health-check cycle, port 32781 still accepts connections, so Consul keeps treating container A as healthy.
How can I solve this issue?
It seems to me you have to deregister a service and its health checks on container restart. The Consul API provides this capability; you just have to use it in your microservices. Exactly how to make it work depends on the way your services are built. Otherwise, there is no way for Consul to determine that a service was restarted on another port.
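For illustration, a minimal sketch against the Consul agent HTTP API; the service ID web-A, service name web, and new port 32790 are made-up values (use whatever your registration code used):

# deregister the stale instance before its old port gets reused
curl -X PUT http://localhost:8500/v1/agent/service/deregister/web-A

# re-register under the same ID with the newly assigned port
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{"ID": "web-A", "Name": "web", "Port": 32790, "Check": {"TCP": "localhost:32790", "Interval": "10s"}}'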
I need some clarification regarding the use of HEALTHCHECK on a Docker service.
Context:
We are experimenting with a multi-node MariaDB cluster, and by utilizing HEALTHCHECK we would like the bootstrapping containers to remain unhealthy until bootstrapping is complete. We want this so that front-end users don't access a container in the service until it is fully online and synced with the cluster. The issue is that bootstrapping relies on the network between containers in order to do a state transfer, and it won't work when a container isn't accessible on the network.
Question:
When a container's status is either starting or unhealthy, does HEALTHCHECK completely kill network access to and from the container?
As an example, when a container is healthy I can run the command getent hosts tasks.<service_name>
inside the container, which returns the IP addresses of the other containers in the service. However, when the same container is unhealthy, that command does not return anything… hence my suspicion that HEALTHCHECK kills the network at the container level (as opposed to at the service/load-balancer level) when the container isn't healthy.
Thanks in advance
I ran some more tests and found my own answer. Basically, Docker does not kill container networking while the container is in either the starting or the unhealthy phase. The reason the getent hosts tasks.<service_name> command does not work during those phases is that it resolves container IPs through the service, and unhealthy containers are not assigned to the service.
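For reference, a minimal sketch of the kind of HEALTHCHECK the context above implies; checking Galera's wsrep_local_state_comment is an assumption about how "fully synced" is detected, and the password variable is illustrative:

# Dockerfile: report healthy only once the node has synced with the cluster
HEALTHCHECK --interval=10s --timeout=5s --retries=3 \
  CMD mysql -uroot -p"$MYSQL_ROOT_PASSWORD" \
      -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | grep -q Synced

Since networking stays up while the container is starting or unhealthy, the state transfer can proceed; only the service-level DNS entry excludes the container.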
I wrote a simple peer-to-peer system in which, when a node starts,
the node looks for a free port and then makes its service accessible on that port,
and it registers its URL, including the port, with a central server, so that other nodes know how to connect to it.
I have the impression this is a typical kind of task that Docker is useful for, so I thought of making a container for these peers (my previous experience with Docker has only been writing a hello-world container).
Ideally I would map (publish) my exposed port to a host port from within the container, using the same code that I am running now. I can imagine that is simply not possible, in which case I could work around it by starting the image from a script that checks for available host ports and then runs the container on an appropriate free one. If the former is possible, however, that would be even better. To be explicit, I do something like the following in Python:
port = 5001
while not port_is_free(port):  # port_is_free: e.g. try to bind a socket, catching OSError
    port += 1
The second part really has to be taken care of from within the container. Assume it has been started with the command docker run -p 5005:80 p2p-node; I then need to find out, from within the container, the published port 5005 that the exposed port 80 is mapped to.
Searching this site and the internet, it looks like more people are interested in doing the same, but I couldn't find a solution, nor a confirmation that this simply cannot be done.
So this is the main question I want to ask: how can I see which published ports my exposed ports are mapped to from within a running docker container?
Some of your requirements are not clear to me.
However, if you only want to know which host port is mapped to your container's port, you can simply pass it in as an environment variable with -e VAR=val. Just an idea.
Start the container:
docker run -p 5005:80 -e HOST_PORT=5005 p2p-node
Access the variable from inside the container:
echo $HOST_PORT
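In the asker's Python code, reading it back is a one-liner; a sketch, assuming the variable name HOST_PORT from above:

import os

host_port = int(os.environ["HOST_PORT"])  # the published port passed in at docker run time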
There is docker-py, a Python client library for the Docker API.
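As a sketch of how it could help here: the container name p2p-node-1 is made up, and the code needs access to the Docker socket (e.g. run the container with -v /var/run/docker.sock:/var/run/docker.sock):

import docker  # pip install docker

client = docker.from_env()
container = client.containers.get("p2p-node-1")
# container.ports looks like {'80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '5005'}]}
print(container.ports)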
It is not clear why you want the host port. In Docker, containers can communicate with each other without having to expose ports on the host machine.
As long as the peer apps are containerized, you don't need the published port. The containers can be connected via a Docker network, and the internal port can be used for communication between them.
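For example, a sketch in which p2p-node is the image from the question and the network name p2p-net is illustrative:

docker network create p2p-net
docker run -d --net p2p-net --name node1 p2p-node
docker run -d --net p2p-net --name node2 p2p-node
# node2 can now reach node1 at node1:80 by container name, with no ports published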
I have a Docker container that listens on a socket, let's say UDP port 20000 (this is incoming IoT data over UDP).
This app can (and should) be load-balanced.
I publish it to my Docker host and expose the port. Docker assigns some random port on the Docker host.
I need to add this container to the pool on my load balancer, which sits outside of the Docker network.
How do I automate this? Any time a new instance of this container starts I need to add it to the pool. When it dies I need to remove it from the pool.
The pattern that worked for me uses two pieces:
Registrator, which detects containers as they come online and registers them in a service registry (e.g. Consul).
A load balancer that watches the registered services and changes its configuration accordingly. In my particular case it was HAProxy backed by Consul Template, which did a great job of automating all of this.
In general, this pattern can vary in its details, but it will usually look something like that.
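As a sketch of the wiring: the Registrator invocation follows its README, while the template file names and the reload command are illustrative:

# Registrator watches the Docker socket and registers published ports in Consul
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500

# Consul Template rewrites the HAProxy config and reloads it on every change
consul-template -template "haproxy.ctmpl:haproxy.cfg:systemctl reload haproxy"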
Use case: an haproxy container running with Docker Compose. I want the container to discover which hosts are available in order to regenerate the haproxy config and reload it.
I know there will be one or more containers named server1 and server2 available. From inside the haproxy container I can query DNS for server1 and receive more than one IP address. Is that the only way to know when a new server1 container becomes available or dies? I know I can use the Docker API from Python running inside a container that has the Docker host socket mapped to it, but I'm not sure that will work when running on Swarm.
The perfect solution would be an API or command that lets me register an event handler that is called when a new container joins the network.
One solution is to use Registrator (https://github.com/gliderlabs/registrator), Consul, and Consul Template:
Consul is a service-discovery tool.
Consul Template watches Consul, updates the HAProxy config, and reloads it.
Registrator listens to the Docker Engine and updates Consul whenever a container comes up or goes down.
For a full tutorial on how to implement it, you can refer to my blog: https://sonnguyen.ws/microservices-with-docker-swarm-and-consul/
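For illustration, a minimal haproxy.ctmpl fragment for the server1 containers from the question (the backend name is made up); Consul Template renders one server line per healthy instance, so the pool shrinks and grows as containers come and go:

backend server1_pool
    {{ range service "server1" }}
    server {{ .Node }} {{ .Address }}:{{ .Port }} check{{ end }}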
I am not sure I understand the Docker port concept. Say I have an application inside a container that listens on port 6000 for TCP connections. This container is on server A.
I want to connect to the application from another server, B. But I want to start multiple instances of the same container on server A; the internal port should stay 6000 while the external port changes.
E.g.:
container 1 6000->9660
container 2 6000->9661
...
So from the outside, the application should be reachable on 9660, 9661, ... Is this possible? I tried:
docker run -p 9660:6000 ...
However, the client could not connect. Any ideas?
I forgot to
EXPOSE 6000
inside my Dockerfile. Now it works :)
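For reference, the resulting pattern looks like this; the image name my-app is illustrative:

docker run -d -p 9660:6000 my-app   # instance 1, reachable at serverA:9660
docker run -d -p 9661:6000 my-app   # instance 2, reachable at serverA:9661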