So I have two services running in Docker containers, configured in a docker-compose.yaml. There is a dependency between them. Unlike regular dependencies where one container must be up before the other container starts, here one service must finish before the other service starts: service 1 updates a DB and service 2 reads from the DB.
Is there some way to perform this type of dependency check?
Both containers will start at the same time, but you could have the code in the second container wait for the first container to signal that it has finished before it starts its own work. See here:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
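A minimal sketch of that wait-for-a-signal idea, using a shared volume and a marker file (the service names, images and the run-update / run-reader commands below are hypothetical placeholders):

# docker-compose.yaml
version: "3.8"
services:
  db-updater:
    image: my-db-updater            # hypothetical image that performs the DB update
    volumes:
      - shared:/shared
    # write a marker file once the update has finished
    command: sh -c "run-update && touch /shared/done"
  db-reader:
    image: my-db-reader             # hypothetical image that reads from the DB
    volumes:
      - shared:/shared
    # poll for the marker before doing any real work
    command: sh -c "until [ -f /shared/done ]; do sleep 2; done; exec run-reader"
volumes:
  shared:

Both containers start together, but the second one blocks in its command until the first one signals that it has finished.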
I need some clarification regarding the use of HEALTHCHECK on a docker service.
Context:
We are experimenting with a multi-node mariadb cluster and by utilizing HEALTHCHECK we would like the bootstrapping containers to remain unhealthy until bootstrapping is complete. We want this so that front-end users don’t access that particular container in the service until it is fully online and sync’d with the cluster. The issue is that bootstrapping relies on the network between containers in order to do a state transfer and it won’t work when a container isn’t accessible on the network.
Question:
When a container’s status is either starting or unhealthy does HEALTHCHECK completely kill network access to and from the container?
As an example, when a container is healthy I can run the command getent hosts tasks.<service_name>
inside the container which returns the IP address of other containers in a service. However, when the same container is unhealthy that command does not return anything… Hence my suspicion that HEALTHCHECK kills the network at the container level (as opposed to at the service/load balancer level) if the container isn’t healthy.
Thanks in advance
I ran some more tests and found my own answer. Basically, Docker does not kill container networking while the container is in the starting or unhealthy state. The reason the getent hosts tasks.<service_name> command does not work during those phases is that it resolves the container IP addresses through the service, and the service does not have the unhealthy container(s) assigned to it.
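For illustration (the service name my-db and the container ID are placeholders), the two views can be compared like this:

# inside a container on the same overlay network: resolves only the tasks
# that the service currently has assigned, i.e. healthy containers
getent hosts tasks.my-db

# on the host / a manager node: the container's own health state, which is
# reported independently of the service DNS
docker inspect --format '{{.State.Health.Status}}' <container_id>

So a starting or unhealthy container keeps its network access; it is just not listed under the service name yet.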
Let's say I have 1 stack with 3 services each running in my Docker Swarm. When I update a service, Docker creates new containers and deletes the old ones.
When a new container is created Docker automatically sends web traffic to the container even if it's not ready yet.
Is there a possibility to block traffic until the container is fully operational, e.g. wait until the entrypoint has finished? Otherwise data will be sent to a container that can't handle it yet, which results in errors.
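For reference, a common way to handle this (a sketch only, with placeholder names, not a verified answer for this particular stack) is to give the service a healthcheck with a start_period, since swarm only routes service traffic to tasks that report healthy, as discussed above:

# docker-compose.yml snippet (compose file format 3.4+ for start_period)
services:
  web:
    image: my-web-app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s    # grace period while the entrypoint is still warming up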
I have a docker swarm setup with a typical web app stack (nginx & php). I need redis as a service in docker swarm. The swarm has 2 nodes and each node should have the web stack and the redis service. But only one redis container should be active at a time (and be able to communicate with each web stack); the other one must be there in standby mode so that if the first redis fails, it can take over quickly.
When you work with Docker Swarm, having a backup, standby container would be considered an anti-pattern. A more recommended approach for deploying a reliable container with Swarm is to add a HEALTHCHECK command to your Dockerfile. You can set a specific interval (a start period) after which the health check takes effect, so that your container has time to warm up.
Now, combine the HEALTHCHECK functionality with the fact that Docker Swarm always maintains the specified number of containers. Make your healthcheck script exit with code 1 if the container becomes unhealthy. As soon as Swarm detects the failing health check, it kills the container and, to maintain the number of replicas, spins up a new one.
The replacement process is fast and works seamlessly. Run multiple replicas in case the warm-up time is long; this prevents your service from becoming unavailable while one of the containers is down.
Example of a healthcheck command:
HEALTHCHECK --interval=5m --timeout=3s CMD curl -f http://localhost/ || exit 1
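The warm-up window mentioned above corresponds to the --start-period option, so a variant of that command could look like this (the 30s value is only an illustration):

HEALTHCHECK --start-period=30s --interval=30s --timeout=3s --retries=3 CMD curl -f http://localhost/ || exit 1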
I can create a docker container by command
docker run <<image_name>>
I can create a service by command
docker service create <<image_name>>
What is the difference between these two in behaviour?
When would I need to create a service over container?
The docker service command replaces docker run in a Docker Swarm. docker run was built for single-host use; its whole focus is on local containers on the host you are talking to, whereas in a cluster the individual containers are irrelevant. We use swarm services to manage the multiple containers in a cluster, and Swarm orchestrates the containers of those services for us.
docker service create is mainly used in Docker swarm mode. docker run has no concept of scaling up or down. With docker service create you can specify the number of replicas to create using the --replicas flag; Swarm will then create and manage multiple replicas of the container across different nodes. There are several such options for managing multiple containers under docker service create and the other docker service ... commands.
One more note: docker services are for container orchestration systems (swarm). They have built-in failure recovery, i.e. a container is recreated on failure, whereas docker run would never recreate a container if it fails. When the docker service commands are used, we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue and, when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation and a lot of intelligence built in.
In short, use docker service create when in swarm mode and docker run when not in swarm mode. You can read up on Docker Swarm to understand docker services better.
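For example, creating and scaling a replicated service looks like this (web and nginx are just example names):

docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls                  # shows desired vs. running replica counts
docker service scale web=5         # change the number of replicas
docker service ps web              # list the individual tasks/containers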
There is no real difference. In the official documentation you can read "Services are really just containers in production".
Services can be declared in "docker-compose.yml" and can be started from it. Once started, they will run as containers.
It is just a common way to name parts of your stack.
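A minimal example of such a declaration (service and image names are arbitrary):

# docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  cache:
    image: redis

Locally this is started with docker-compose up; on a swarm the same file can be deployed with docker stack deploy -c docker-compose.yml mystack, where each service then runs as one or more containers.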
In a docker swarm environment what will happen if the container dies because of an internal error? Will the task be reborn?
It depends.
With Swarm Mode, introduced in Docker 1.12, the orchestrator will start a new container (possibly on another node) when it detects that the current state doesn't match the target state.
With the prior, container-based Swarm solution (classic Swarm), Swarm itself won't restart the container, but the host running the container may restart it if you pass a flag like --restart=always.
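To make the two behaviours concrete (the image and service names are placeholders):

# classic / standalone engine: the local daemon restarts the container
docker run -d --restart=always my-image

# swarm mode: the restart behaviour is part of the service definition
docker service create --name my-service --restart-condition on-failure my-image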