How to achieve a rolling update with docker-compose?

I have the following setup in docker-compose:
nginx for proxying to the frontend and backend and serving static content
backend app on port 8080 (Spring Boot)
frontend app on port 4000 (Node for SSR)
mysql used by the backend
The frontend can be updated relatively quickly using
docker-compose up -d --no-deps frontend
Unfortunately, the backend takes about 1 minute to start.
Is there an easy way to achieve lower downtime without having to change the current setup too much? I like how simple it is right now.
I would imagine something like:
Start a new instance of the backend
Wait until it starts (it could be on a timer or via a health check)
Close the previously running instance

Swarm is the right way to go, but this is still (painfully) doable with docker-compose.
First, ensure your proxy can do service discovery. You can't use container_name (you can't use it in Swarm either), because you will be increasing the number of containers of the same service. Proxies like Traefik or nginx-proxy use labels to do this.
Then run docker-compose up -d --scale backend=2 --no-recreate; this creates a new container with the new image without touching the running one.
After it's up and running, docker kill old_container, then docker-compose up -d --scale backend=1 --no-recreate just to reset the number.
EDIT 1
docker kill old_container should be docker rm -f old_container
EDIT 2
How to handle runs where the container count has become uneven (e.g. a previous deploy left an extra container behind):
We always want to kill the oldest container first.
docker rm -f $(docker ps --format "table {{.ID}} {{.Names}} {{.CreatedAt}}" | grep backend | sort -k 3 | awk -F " " '{print $1}' | head -1)
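Alternatively, rather than grepping container names, you can filter on the com.docker.compose.service label that docker-compose attaches to its containers and rely on docker ps listing newest containers first. A minimal sketch, assuming the service is named backend as above:
# docker ps lists containers newest-first, so after filtering on the compose
# service label the oldest matching container is the last line of output
OLDEST=$(docker ps --filter "label=com.docker.compose.service=backend" --format '{{.ID}}' | tail -n1)
docker rm -f "$OLDEST"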

Here is the script I've ended up using:
# remember the ID of the backend container that is currently running
PREVIOUS_CONTAINER=$(docker ps --format "table {{.ID}} {{.Names}} {{.CreatedAt}}" | grep backend | awk -F " " '{print $1}')
# start a second backend container with the new image alongside the old one
docker-compose up -d --no-deps --scale backend=2 --no-recreate backend
# give the new container time to boot (the backend needs about a minute)
sleep 100
# gracefully stop, then remove, the old container
docker kill -s SIGTERM $PREVIOUS_CONTAINER
sleep 1
docker rm -f $PREVIOUS_CONTAINER
# reset the desired scale back to a single backend container
docker-compose up -d --no-deps --scale backend=1 --no-recreate backend
# rebuild and restart the nginx proxy so it routes to the new container
docker-compose stop http-nginx
docker-compose up -d --no-deps --build http-nginx
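The fixed sleep 100 can also be replaced by polling the new container until it answers. A minimal sketch, assuming the backend listens on port 8080 and exposes a health endpoint at /actuator/health (both are assumptions; adjust them to your app):
# the newest backend container is listed first by docker ps
NEW_CONTAINER=$(docker ps --filter "label=com.docker.compose.service=backend" -q | head -n1)
NEW_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $NEW_CONTAINER)
# retry for up to ~60 seconds until the health endpoint responds, otherwise abort the deploy
curl --silent --fail --retry 60 --retry-delay 1 --retry-connrefused http://$NEW_IP:8080/actuator/health || exit 1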

Related

list all containers with restart policies

Is it possible to list all docker containers with a restart policy set? I could inspect the running containers (How to quickly show policies of all docker containers), but what if one has been previously stopped, or has an on-failure policy and has completed successfully?
Exactly, so do not stop at running containers - inspect all containers.
Just list all containers with that policy.
docker ps -a -q | xargs docker inspect | jq -r '.[] | select(.HostConfig.RestartPolicy.Name != "no") | .Name'
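A small variation of the same jq filter, in case you also want to see which policy each container has and whether it is currently running:
# print name, restart policy and current state for every container whose policy is not "no"
docker ps -a -q | xargs docker inspect | jq -r '.[] | select(.HostConfig.RestartPolicy.Name != "no") | "\(.Name)\t\(.HostConfig.RestartPolicy.Name)\t\(.State.Status)"'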
How does docker system prune work with this?
It ignores restart policies entirely (see below).
Will it skip the removal of any containers that are stopped, but have a restart policy set?
No, a stopped container is also removed. Observe the following terminal transcript:
$ docker run --restart always -d --name test alpine
7f22c1f1439b9a211ac85bf10b4da563aaa8f76a2229c290f1fceb0404ded2a8
kamil@leonidas /home/kamil
$ docker stop test
test
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7f22c1f1439b alpine "/bin/sh" 3 seconds ago Exited (0) Less than a second ago test
$ docker ps -a | grep test
7f22c1f1439b alpine "/bin/sh" 10 seconds ago Exited (0) 7 seconds ago test
$ yes y | docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] Deleted Containers:
7f22c1f1439b9a211ac85bf10b4da563aaa8f76a2229c290f1fceb0404ded2a8
Total reclaimed space: 41B
$ docker ps -a | grep test
(no output)

How to delete specific running docker containers in batch

I need to run more than 70 docker containers at once. Later, these containers need to be stopped.
At the moment I can docker stop all of them with the shell command docker stop $(docker ps -f since=<last docker before>). It works OK, but if there are any containers started after mine, I have a problem as the above code will stop them too.
Is there any way I can stop all of my running containers with some kind of specific search?
I know there is a docker ps -f label=<some label>, but I just haven't figured out how to use it yet.
If you're launching many containers at the same time, launch them all with
docker run --label=anyname other-docker-args-of-yours image:tag
And when you want to stop all of those containers, just do
docker stop $(docker ps -f label=anyname | awk 'NR>1 {print$1}')
where anyname is the label name you provide during the docker run command, and
awk 'NR>1 {print$1}' skips the column header (CONTAINER ID) and prints just the container IDs.
Edit-1:
I later realized that you can get the list of container IDs without awk as well. I'd consider using the line below.
docker stop `docker ps -qaf label=anyname`
If you want to include stopped containers as well, add a to the options, i.e. use -qaf instead of -qf.
-q to print container IDs alone.
-a for all containers including stopped.
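Putting it together, a minimal sketch; the label batch=demo and the alpine sleep command are just placeholders:
# start a few labelled containers
for i in 1 2 3; do
  docker run -d --label batch=demo alpine sleep 600
done
# later, stop and remove exactly those containers and nothing else
docker stop $(docker ps -q --filter label=batch=demo)
docker rm $(docker ps -aq --filter label=batch=demo)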

docker container not removing from docker

I am trying to remove a container. When I run docker-compose rm it completes fine, but when I then run docker ps the container still shows up:
root@datafinance:/tmp# docker-compose rm
Going to remove tmp_zookeeper_1_31dd890a1cbf
Are you sure? [yN] y
Removing tmp_zookeeper_1_31dd890a1cbf ... done
root@datafinance:/tmp# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03b08e4ef0b3 confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 14 hours ago Up 14 hours docker_c_zookeeper_1_7c953dce7d69
Use docker-compose ps; it only shows containers launched by this docker-compose project. If it shows no container, that means this container was not launched from this docker-compose.yaml.
The error 'Error starting userland proxy: listen tcp 0.0.0.0:32181: bind: address already in use' means port 32181 is occupied, either by another docker container or by some other process. You could use docker rm -f $(docker ps -qa) to delete all containers, or use netstat -oanltp | grep 32181 to find out which process is really occupying 32181.
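If the port really is held by another container, a narrower alternative to deleting all containers is to filter on the published port; the publish filter is available in reasonably recent Docker versions:
# show which container (if any) publishes port 32181, then remove only that one
docker ps --filter "publish=32181"
docker rm -f $(docker ps -q --filter "publish=32181")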
Finally, if for some reason you are still not able to delete the container, you can use service docker restart or systemctl restart docker to bring all containers down, then repeat the docker rm xxx step above.
After the above steps, you can run docker-compose up -d to bring your service back up.
Try this:
docker rm -f 03b08e4ef0b3
DANGER
You may also try the following, but be aware that it will delete everything (containers, images, networks, ...):
docker system prune -a -f
When nothing else has helped, your last resort is to restart the Docker daemon:
service docker restart
and then repeat the steps above.
I think what you are looking for is:
docker-compose down
which removes the containers after stopping them, according to this.
According to this, docker-compose rm removes "stopped" containers. If your container(s) are running, I think it won't remove them, to prevent accidents.
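In short, a minimal sketch of the two ways to get rid of a project's containers:
# stop and remove the project's containers (and its default network) in one go
docker-compose down
# or in two steps: rm only removes containers that are already stopped
docker-compose stop
docker-compose rm -f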

Docker deployment - one machine - no downtime

I have only one small web project to run through Docker and only one machine, where I can't use virtualization (and I don't really need it either). I would like to know how I can deploy my application to a VPS with Docker without any downtime.
For now, I am just using a repository and creating the docker containers with docker-compose (including some production configuration through a specific .yaml file).
I guess the best option would be to use Swarm, but I think that's not possible since I can only use one machine.
Single-machine deployments are a great use case for Swarm. You can do rolling updates of your services, which makes zero-downtime service updates possible (assuming you're running at least 2 containers of a service).
Obviously, you won't have hardware- or OS-level fault tolerance, but Swarm is a better solution for production than the docker-compose CLI.
See all my reasons for using Swarm in this case in my GitHub AMA on the subject: Only one host for production environment. What to use: docker-compose or single node swarm?
See my YouTube video on an example of rolling updates.
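For reference, a minimal sketch of what such a rolling update could look like on a single-node Swarm; the image name myorg/backend and its tags are placeholders, and it assumes a Docker version recent enough to support --update-order:
# turn the single machine into a one-node swarm
docker swarm init
# run two replicas and, on updates, start the new task before stopping the old one
docker service create --name backend --replicas 2 --update-order start-first --update-delay 10s myorg/backend:1.0
# later: roll out a new image with no downtime
docker service update --image myorg/backend:1.1 backend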
Here's a simple approach we’ve used in production with just nginx and docker-compose: https://engineering.tines.com/blog/simple-zero-downtime-deploys
Basically, it’s this bash script:
reload_nginx() {
docker exec nginx /usr/sbin/nginx -s reload
}
zero_downtime_deploy() {
service_name=tines-app
old_container_id=$(docker ps -f name=$service_name -q | tail -n1)
# bring a new container online, running new code
# (nginx continues routing to the old container only)
docker-compose up -d --no-deps --scale $service_name=2 --no-recreate $service_name
# wait for new container to be available
new_container_id=$(docker ps -f name=$service_name -q | head -n1)
new_container_ip=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)
curl --silent --include --retry-connrefused --retry 30 --retry-delay 1 --fail http://$new_container_ip:3000/ || exit 1
# start routing requests to the new container (as well as the old)
reload_nginx
# take the old container offline
docker stop $old_container_id
docker rm $old_container_id
docker-compose up -d --no-deps --scale $service_name=1 --no-recreate $service_name
# stop routing requests to the old container
reload_nginx
}

Swarm rescheduling after adding new node

With the new version of Rancher, is it possible to tell Docker Swarm (1.12+) to redistribute containers when I add a new node to my infrastructure?
Suppose I have 4 nodes with 5 containers on each; if I add a 5th node, I'd like my containers redistributed so that there are 4 on each node.
When a node crashes or shuts down (scaling my cluster down), the rescheduling triggers fine, but when I scale up by adding 1 or more nodes, nothing happens.
This is not currently possible. What you can do is update a service with docker service update (e.g. by adding an environment variable).
A new feature coming in Docker 1.13 will be a forced update of services, which will update the service and force redistribution of its containers across nodes, so something like docker service update --force $(docker service ls -q) might be possible (I haven't tried this yet, so I can't confirm it).
You can find more info about this feature in this blogpost
I was having the exact same issue, but rather than being caused by adding a new node, it came from a failure of the underlying shared storage between nodes (I was using shared NFS storage for sharing mount points of read-only configs).
As of Docker version 17.05.0-ce, build 89658be, docker service update --force $(docker service ls -q) does not work.
scale-up.sh:
#!/bin/bash
# ask how many replicas each service should run
echo "Enter the amount by which you want to scale your services (NUMBER), followed by ENTER: "
read SCALENUM
# iterate over all service names (column 2 of docker service ls, skipping the header row)
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
echo "Scaling up "$OUTPUT" to "$SCALENUM
docker service scale $OUTPUT=$SCALENUM
done
scale-down.sh:
#!/bin/bash
# scale every service down to zero replicas (this makes them unavailable)
for OUTPUT in $(docker service ls | awk '{print $2}' | sed -n '1!p')
do
echo "Scaling down "$OUTPUT" to 0"
docker service scale $OUTPUT=0
done
Note that the second script SCALES DOWN the service, making it unavailable. You can also use the following command as a starting point for other scripting you may need, as this prints out the service name independently of the other columns in a typical docker service ls command:
$(docker service ls | awk '{print $2}' | sed -n '1!p')
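As a side note, newer Docker CLIs can format the service list directly, which avoids the awk/sed column parsing; a sketch, assuming SCALENUM is set as in scale-up.sh above:
# iterate over the service names directly, with no header row to strip
for OUTPUT in $(docker service ls --format '{{.Name}}')
do
docker service scale $OUTPUT=$SCALENUM
done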
I hope this helps!
