Docker stack deploy is not updating existing containers

I am deploying 4 containers using docker stack deploy as below:
docker stack deploy --compose-file compose.yml --with-registry-auth myapp
The first time, the containers are created from the latest images on the registry, no problem.
But when I push new images to the registry and run the command again, the containers are not recreated with the latest images.
I am using the latest tag on my images. I know it is not the recommended way to do things, but from what I have read in the documentation, docker stack deploy, when using the latest tag, will check the image's SHA against the registry; if it differs, the containers will be recreated using the latest images. In my case, that's not happening. Am I missing something here?
I also get an error/warning when I run docker stack deploy once the stack is already up:
Updating service service_name (id: some_hash_value)
image docker.pkg.github.com/username/repository/image-name:latest could not be accessed on a registry to record
its digest. Each node will access docker.pkg.github.com/username/repository/image-name:latest independently,
possibly leading to different nodes running different
versions of the image.

I encountered the same error message when I started using a new Docker registry. The new registry's SSL certificate was not trusted by Docker,
so I got this error until I added the new registry to the insecure-registries section of /etc/docker/daemon.json.
I've seen nobody mention this solution on this question or other similar ones, so I hope it helps.
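For reference, a minimal /etc/docker/daemon.json sketch of that fix (the registry host and port are placeholders):

{
  "insecure-registries": ["my-registry.example.com:5000"]
}

After editing the file, restart the daemon (e.g. sudo systemctl restart docker) so the setting takes effect.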

Related

Docker swarm cannot find image

I deployed a Docker swarm with 6 nodes. I built some images and I am trying to add them as services to the swarm. I have 5 microservices. When I run them on one host with docker-compose, everything works fine. Then I run the command docker service create rate --with-registry-auth and I get the following message.
image rate:latest could not be accessed on a registry to record
its digest. Each node will access rate:latest independently,
possibly leading to different nodes running different
versions of the image.
yyf9m49xw3enwano1scr55ufc
overall progress: 0 out of 1 tasks
1/1: No such image: rate:latest
When I run docker images, the rate image appears (rate is the repository name). I also tried with the image ID, but that didn't work either. The only images I can add to the swarm are public ones.
There is an issue (https://github.com/moby/moby/issues/35187) on the Moby project about this.
If you have already tried --with-registry-auth and it didn't solve the problem, you can manually log in on each cluster worker node and pull the Docker images.
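A minimal sketch of that workaround, assuming the image has already been pushed to a private registry (the registry hostname is a placeholder):

# run on each worker node
docker login registry.example.com
docker pull registry.example.com/rate:latest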
image rate:latest could not be accessed on a registry to record
its digest. Each node will access rate:latest independently,
possibly leading to different nodes running different
versions of the image.
This error indicates you are trying to run an image that was never pushed to a registry. Push your images to a registry first; then you can run them on any node in the cluster (which will pull any missing images from that registry). If the registry requires authentication to pull the image, run docker stack deploy --with-registry-auth ..., but you must first push the image and specify the pushed image name (which will not be rate:latest, since you do not have access to push to the official library on Docker Hub).
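A sketch of that workflow; the myuser namespace is a placeholder for a repository you can push to:

# tag the local image under a pushable name, push it, then create the service
docker tag rate:latest myuser/rate:latest
docker push myuser/rate:latest
docker service create --with-registry-auth --name rate myuser/rate:latest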

Gitlab-CI error upon deploying Docker Image on swarm mode

Hi, I have a problem with updating/changing the image of my service on a server running Docker in swarm mode.
Here is the process of manually updating the service (consolidated as a shell sketch after these steps):
push the project to GitLab from the local machine.
pull the project from GitLab on the server.
build a Docker image as my-project:latest
tag my-project:latest as registry.gitlab.com/my-group/my-project:staging
push the image using docker push registry.gitlab.com/my-group/my-project:staging
run docker stack deploy -c ~/docker-stack.yml api --with-registry-auth
and it works fine.
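The sequence above, consolidated as a shell sketch (it assumes the project has already been pulled onto the server and that docker login registry.gitlab.com has been run there):

docker build -t my-project:latest .
docker tag my-project:latest registry.gitlab.com/my-group/my-project:staging
docker push registry.gitlab.com/my-group/my-project:staging
docker stack deploy -c ~/docker-stack.yml api --with-registry-auth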
However, if I move the commands above into a gitlab-ci.yml, the job finishes successfully, but I get an error when it tries to update the service.
Updating service api_backend (id: r4gqmil66kehzf0oehzqk57on)
image registry.gitlab.com/my-group/my-project:staging could not be accessed on a registry to record
its digest. Each node will access registry.gitlab.com/my-group/my-project:staging independently,
possibly leading to different nodes running different
versions of the image.
Also, the GitLab runner executes commands with the shell executor.
I have tried different solutions; as you can see, I'm even using the --with-registry-auth flag.
To summarize: everything works fine if I enter the commands manually, but I get an error when I use gitlab-ci.yml.

How to push a Docker Compose stack to a Docker Swarm without uploading containers to a registry?

I've researched around before asking here, but all answers lead me to the same conclusion:
Build your Docker Compose stack locally
Tag and push the images to a registry (either a private or public one like Docker Hub)
Push the stack to the swarm using docker stack deploy --compose-file docker-compose.yml stackdemo
From here, the stack picks up and "pulls" the images from the registry and runs the containers
Is there no straightforward way to make the following (I think common sense) scenario work seamlessly?
Docker swarm manager has access (SSH keys) to pull the project from Git.
It periodically pulls the project and builds it "locally" using docker-compose up
When the build succeeds (containers are ready), it pushes the stack to the swarm using docker stack deploy, propagating the images to all worker nodes.
In that way, the original "source code" is only known by the Manager Node and only it has direct access to the Git repository.
Maintaining a registry (or paying for a cloud one) seems like a huge disadvantage of using Docker in Swarm Mode.
Side note: I've tried the approach of deploying a registry as a service within the stack and tagging + pushing the images to 127.0.0.1/myimage, but that led to its own set of problems, e.g. the fact that worker nodes that do not have an instance of the registry container running have no way to pull the image (the registry would need to be replicated to all nodes).
Use the docker save and docker load commands to transfer images from your dev machine to all of your swarm machines.
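A sketch of that approach (the image name and host are placeholders):

# on the dev machine: export the image and copy it to a swarm node
docker save myimage:latest | gzip > myimage.tar.gz
scp myimage.tar.gz user@worker-node:/tmp/myimage.tar.gz
# on the swarm node (repeat for every node): import the image
ssh user@worker-node 'gunzip -c /tmp/myimage.tar.gz | docker load'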
Docker Swarm is an orchestrator intended for managing a cluster of Docker nodes. When a service is deployed, the Docker engine can start it on any node in the cluster (any node that satisfies the placement constraints, if provided). Now, if you don't have a registry and the images are only available locally on a node, Docker can't verify that all nodes will point to the same version of the image. Hence, swarm pulls the image from a registry and then deploys it to the nodes.
Having a registry also helps in keeping copies of images: the registry keeps every version even if the images are pruned on one or all of the Docker nodes. You can also enable backups of the registry, so there is no risk of losing any image that was built and pushed to it.
Starting a registry (at least on localhost) is very easy, but that's not what you asked.
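For instance, the Docker stack-deploy tutorial linked below starts a throwaway registry as a swarm service with a single command:

docker service create --name registry --publish published=5000,target=5000 registry:2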
Coming to your question: you can keep the stack compose file in the same directory as your Dockerfile, and in the compose file define the service with a build section. Note that docker stack deploy itself ignores the build key, so you build the image beforehand (for example with docker-compose build) and the compose file simply names it:
version: "3.9"
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
Here, the line
build: .
builds the image with the given name, tagged for the registry on localhost; the image is not pushed automatically, which needs to be done manually (for example with docker-compose push).
So, in the Dockerfile you can run git clone (or git pull) and then the commands that build your code, and so on.
Here's a link which provides simple steps to start a registry & build image on the fly while deploying the stack:
https://docs.docker.com/engine/swarm/stack-deploy/
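Roughly, the workflow in that tutorial is: build the images with Compose, push them to the local registry, then deploy the stack (the stack name matches the compose file above):

docker-compose build
docker-compose push
docker stack deploy --compose-file docker-compose.yml stackdemo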
Also, swarm does not work without a registry, and hence it's not possible to just save and load an image and use it with swarm orchestration.

How to get transferable docker compose stack without dockerhub

I have a few Docker images composed together in a stack using a docker-compose.yml.
Now I want to transfer the whole Docker Compose stack to another host machine without uploading it to Docker Hub,
and deploy it on Docker Swarm.
I saw there is a thing called docker-compose bundle; would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images is on every machine for consistency. Using a registry will be vastly easier.
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that you can only have one private repository; the rest must be public or paid.
Thankfully, there are other (free) alternatives:
Deploy your own private registry (a minimal sketch follows this list). Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it as a "simple" Docker registry and push and pull your Docker images there.
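A minimal self-hosted registry sketch using the official registry:2 image (myimage is a placeholder):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest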

docker service update vs docker stack deploy with existing stack

I have a question about using Docker swarm mode commands to update existing services after having deployed a set of services using docker stack deploy.
As far as I understood, every service is pinned to the SHA256 digest of its image at the time of creation, so if you rebuild and push an image (with the same tag) and run docker service update, the service image is not updated (even though the SHA256 is different). On the contrary, if you run docker stack deploy again, all the services are updated with the new images.
I managed to update the service image also by using docker service update --image repository/image:tag <service>. Is this the normal behavior of these commands, or is there something I didn't understand?
I'm using Docker 17.03.1-ce
The docker stack deploy documentation says:
"Create and update a stack from a compose or a dab file on the swarm. This command has to be run targeting a manager node."
So the behaviour you described is as expected.
The docker service update documentation is not so clear, but as you found yourself, it only picks up the new image when run with --image repository/image:tag <service>, so the flag is necessary to update the image.
You therefore have two ways to accomplish what you want: run docker stack deploy again, or run docker service update with an explicit --image flag.
It is normal and expected behavior for docker stack deploy to update the images of existing services to whatever digest the specified tag points to.
If no tag is present, latest is assumed, which can be problematic at times, since the latest tag is not well understood by many people and can thus lead to unexpected results.
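A sketch of the explicit update mentioned above (the image and service names are placeholders):

# re-resolve the tag against the registry and roll the service to the new digest
docker service update --with-registry-auth --image repository/image:tag stackname_servicename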
