Does Docker HEALTHCHECK disable container networking when unhealthy?

I need some clarification regarding the use of HEALTHCHECK on a Docker service.
Context:
We are experimenting with a multi-node MariaDB cluster, and by using HEALTHCHECK we would like bootstrapping containers to remain unhealthy until bootstrapping is complete. We want this so that front-end users don't reach a particular container in the service until it is fully online and synced with the cluster. The issue is that bootstrapping relies on the network between containers in order to do a state transfer, and it won't work if a container isn't accessible on the network.
Question:
When a container's status is either starting or unhealthy, does HEALTHCHECK completely kill network access to and from the container?
As an example, when a container is healthy I can run the command getent hosts tasks.<service_name>
inside the container, which returns the IP addresses of the other containers in the service. However, when the same container is unhealthy, that command does not return anything. Hence my suspicion that HEALTHCHECK kills the network at the container level (as opposed to at the service/load-balancer level) when the container isn't healthy.
Thanks in advance

I ran some more tests and found my own answer. Docker does not kill container networking while the container is in either the starting or the unhealthy state. The reason the getent hosts tasks.<service_name> command returns nothing during those states is that the lookup resolves container IP addresses through the service's DNS entry, and containers that aren't healthy are not assigned to the service, so they are excluded from the results.
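For reference, a minimal sketch of the kind of HEALTHCHECK this scenario calls for. The credentials and the wsrep_local_state_comment probe are assumptions for a Galera-based cluster, not details from the question:

```dockerfile
# Hypothetical sketch: report healthy only once Galera says the node is Synced.
# The health user/password and the probe query are assumed, not from the question.
HEALTHCHECK --interval=10s --timeout=5s --start-period=60s --retries=3 \
  CMD mysql -uhealth -phealth -e \
      "SHOW STATUS LIKE 'wsrep_local_state_comment'" \
      | grep -q Synced || exit 1
```

While the probe fails, the task is left out of the tasks.<service_name> DNS entry and the service VIP, but the container's own network stack stays up, so the state transfer between nodes keeps working.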

Related

How to monitor a process that is running inside a container

I am new to Docker containers, and my question is how to monitor a process that is running inside a container. For example, I have a container running Apache in it. How would I know if the Apache process inside the container got killed while the container itself is still running?
How can we ensure that a specific process inside the container is running, and how do we get an alert if that process goes down?
The Dockerfile reference has the answer:
https://docs.docker.com/engine/reference/builder/
More specifically, the HEALTHCHECK directive:
https://docs.docker.com/engine/reference/builder/#healthcheck
Essentially, when the process started by your container's ENTRYPOINT exits, the container dies:
https://docs.docker.com/engine/reference/builder/#entrypoint
But, in any case, a process running inside a container is also visible in the host's process list, so you can safely use the output of ps aux | grep httpd on the host to monitor your Apache PIDs.
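If you want Docker itself to track the process, a HEALTHCHECK along these lines works. The process name is an assumption; depending on the base image it may be httpd or apache2:

```dockerfile
# Hypothetical sketch: mark the container unhealthy when no httpd process exists.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD pgrep httpd > /dev/null || exit 1
```

docker ps will then show the container as healthy or unhealthy, and docker inspect exposes the log of recent probe results.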
In production, you don't just use docker run; you use a container orchestrator such as Kubernetes, where you define health checks (liveness and readiness probes) and the orchestrator takes care of the rest: it will restart the container if Apache fails for some reason.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
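As a sketch, a Kubernetes liveness probe for an Apache container might look like this. The pod name, image, and port are assumptions for illustration:

```yaml
# Hypothetical sketch: restart the container when Apache stops answering on :80.
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - name: apache
    image: httpd:2.4
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```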

Start docker container after another container exits

So I have two services run in docker containers, configured in a docker-compose.yaml. There is a dependency between them. Unlike regular dependencies where one container must be up before the other container starts, I have a service which must finish before starting the other service: service 1 updates a DB and service 2 reads from the DB.
Is there some way to perform this type of dependency check?
Both containers will start at the same time, but you could have the code in the second container wait for the first container to signal that it has finished before doing its own work. See here:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
Sidecar containers in Kubernetes Jobs?
"Sidecar" containers in Kubernetes pods
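Note that the Compose specification (Compose v2) can express this dependency directly with the long form of depends_on. A sketch, with the image names assumed:

```yaml
# Hypothetical sketch: service2 starts only after service1 exits with code 0.
services:
  service1:
    image: my-db-migrator   # assumed: runs the DB update, then exits
  service2:
    image: my-db-reader     # assumed: reads from the DB
    depends_on:
      service1:
        condition: service_completed_successfully
```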

Access kubernetes service from a docker container which is run by mesos

I have mesos-master (mesosphere/mesos-master) and mesos-slave (mesosphere/mesos-slave) running inside my Kubernetes cluster.
The Mesos slave starts the Docker containers (Docker is accessed by mounting /usr/bin/docker from the host) with my data-processing application (short-lived, 1-5 min), which needs to access other Kubernetes services. In short, I need to access Kubernetes DNS from a container.
Is it possible to do that?
Thanks
I found only one way:
I resolve the "kube-dns.kube-system" host into an IP address. Then I inject "metadata.namespace" into the environment variable KUBERNETES_NAMESPACE, and finally I pass --dns RESOLVED_IP and --dns-search ${KUBERNETES_NAMESPACE}.svc.cluster.local, so a Mesos-launched Docker container is able to talk to the services.
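The flag construction can be sketched as a small shell helper. The function name and the example kube-dns service IP are assumptions, not part of the original setup:

```shell
#!/bin/sh
# Hypothetical sketch: build the --dns/--dns-search arguments for docker run
# from a resolved DNS server IP and the injected KUBERNETES_NAMESPACE value.
build_dns_args() {
  dns_ip="$1"
  namespace="$2"
  printf -- '--dns %s --dns-search %s.svc.cluster.local' "$dns_ip" "$namespace"
}

# In the real setup the IP would come from something like:
#   getent hosts kube-dns.kube-system | awk '{print $1}'
build_dns_args 10.96.0.10 default
# → --dns 10.96.0.10 --dns-search default.svc.cluster.local
```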

high availability with docker swarm mode

I have a problem using Docker swarm mode.
I want to have high availability with swarm mode.
I think I can do that with rolling update of swarm.
Something like this...
docker service update --env-add test=test --update-parallelism 1 --update-delay 10s 6bwm30rfabq4
However there is a problem.
My Docker image has an entrypoint. Because of this, there is a short delay before the service (I mean the Docker container) is really up. But docker service thinks the service is already running, because the status of the container is 'Up', even while the entrypoint is still doing work. So some containers return errors when I try to connect to the service.
For example, I create a Docker service named 'test', scale it to 4, and publish port 8080. I can access test:8080 in a web browser. Then I try a rolling update with --update-parallelism 1 --update-delay 10s. After that, when I try to connect to the service again, one container returns an error, because the Docker service thinks that container is already running even though it isn't up yet due to the entrypoint. And after 10s another container returns an error, because the update has moved on and the Docker service again thinks that container is already up.
So, is there any solution to this problem?
Should I add some nginx configuration to disconnect clients from the failing container and reconnect them to another one?
The HEALTHCHECK Dockerfile command works for this use case. You specify how Docker should check if the container is available, and it gets used during updates as well as checking service levels in Swarm.
There's a good article about it here: Reducing Deploy Risk With Docker’s New Health Check Instruction.
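As a sketch, assuming the service answers HTTP on port 8080 and curl is available in the image:

```dockerfile
# Hypothetical sketch: the task stays in "starting" until the app answers,
# so Swarm's rolling update waits for real readiness, not just "container Up".
HEALTHCHECK --interval=5s --timeout=3s --start-period=30s --retries=3 \
  CMD curl -f http://localhost:8080/ || exit 1
```

With this in place, docker service update only moves on to the next task once the replacement reports healthy, and you can make a failed update roll back with --update-failure-action rollback.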

How to kill networking to a docker container?

What approach can I use to kill networking to a docker container (ie: make it unreachable from the host OS)? A typical approach for a non-container would be to alter iptables, but for Docker I'm not sure how to go about this.
It's mostly this way by default. If you don't expose any ports and don't run network services in the OS (usually you just run your application), there's nothing to reach in the container.
You might clarify precisely what you mean by "reachable". Reachable from where, and for what purpose? If you don't publish any ports, your container is not reachable from any other host. Your container may still be "reachable" from other containers within the Docker network on the host, so if your concern is other containers on the same Docker host, Docker provides the --icc=false flag to disable inter-container communication, which is enabled by default. More info here in the docs.
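If you actively want to cut an already-running container off the network, two hedged options. The container name, network name, and IP address are placeholders for illustration:

```shell
# Hypothetical sketch: detach the container from its Docker network entirely
docker network disconnect bridge my_container

# ...or, from the host, drop traffic to the container's IP using the
# DOCKER-USER chain that Docker reserves for user-defined iptables rules
iptables -I DOCKER-USER -d 172.17.0.2 -j DROP
```

docker network connect reverses the first approach; deleting the iptables rule reverses the second.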
