We are using Docker Swarm on developers' machines for development. A Docker service uses e.g. the foo:beta image.
When a developer builds a new feature for foo, they build a new image locally under the same name (the SHA is different).
However, we have not been able to update the service to use the new image version. We tried
docker service update --force --image <component>
w/o success.
We are running the latest edge docker build: 17.05.0-ce-rc1-mac8 (16582)
The key is to use a local tag for images that does not exist in the remote repository. When Swarm can't find the image by the given tag in the remote repo, it will use the local one.
For that purpose, we tag all developer-related images also with e.g. a dev tag that only exists on the developer's machine. This way we can rebuild the image and, by forcing the service update, update the running code.
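A minimal sketch of that workflow, assuming the service is named foo and the image is rebuilt locally as foo:beta (both names are placeholders):
docker build -t foo:beta .                        # rebuild the image locally
docker tag foo:beta foo:dev                       # the dev tag exists only on this machine
docker service update --force --image foo:dev foo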
I've set up a Docker Swarm consisting of two VMs on my local network (1 manager, 1 worker). On the manager node, I created a private registry service, and I want to deploy a number of locally built images from my local dev machine (which is not in the swarm) to that registry. The Swarm docs and the dozens of examples I've read on the Internet don't seem to go beyond the basics: running commands inside the manager node, building, tagging, and pushing images from the manager's local cache to the registry on that same node. I have the uneasy feeling that I'm missing something right in front of my face.
I see that my machine could simply join the swarm as a manager, owning the registry. The other nodes would automagically receive the updates and my problem would go away. But does it make sense for a production swarm, a cluster of nodes serving production code, to depend on my dev home machine, even as a non-worker, manager-only node?
Things I've tried:
Retagging my local image as <my.node.manager.ip>/my_app:1.0.0, followed by docker-compose push. I can see this does push the image to the manager's registry, but the service fails to start with the message "No such image: <my.node.manager.ip>/my_app:1.0.0".
Creating a context and, from my machine, running docker-compose --context my_context up --no-start. This (re)creates the image in the manager node's local cache, which I can then push to the registry, but it feels very unwieldy as a deploy process.
Should I run a remote script in the manager node to git pull my code and then do the build/push/docker stack deploy?
TL;DR: What are the expected steps to deploy an image/app to a Docker Swarm from a local dev machine outside the swarm? Is this possible? Is it supported by Docker Swarm?
After reading a bit more on private registries and tags, I could finally wrap my head around the tagging needed for my use case to work. My first approach was halfway right, but I had to change my deploy script to (a rough sketch follows the list):
extract the image field from my docker-compose.yml (in the form localhost:5000/my_app:${MY_APP_VERSION-latest}, to circumvent the "No such image" error)
create a second tag for pushing to the remote registry, replacing "localhost" with my manager node's address (where the registry is)
tag my locally built image with that tag and docker-compose push it
deploy the app with docker --context <staging|production> stack deploy my_app
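For reference, a minimal sketch of that deploy script, assuming the registry is published on port 5000 of the manager and treating <my.node.manager.ip> and MY_APP_VERSION as placeholders (plain docker push is used here instead of docker-compose push):
MY_APP_VERSION=1.0.0
# image name as referenced in docker-compose.yml, resolvable from every swarm node
LOCAL_IMAGE=localhost:5000/my_app:$MY_APP_VERSION
# the same image addressed from the dev machine outside the swarm
REMOTE_IMAGE=<my.node.manager.ip>:5000/my_app:$MY_APP_VERSION
docker tag $LOCAL_IMAGE $REMOTE_IMAGE
docker push $REMOTE_IMAGE
docker --context staging stack deploy -c docker-compose.yml my_app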
I'm answering myself since I did solve my original problem, but would love to see other DevOps implementations for similar scenarios.
Hi, I have a problem with updating/changing the image of my service on a server running Docker Swarm mode.
Here is the process of manually updating the service.
push the project to GitLab from the local machine
pull the project from GitLab on the server
build a Docker image as my-project:latest
tag my-project:latest as registry.gitlab.com/my-group/my-project:staging
push the image using docker push registry.gitlab.com/my-group/my-project:staging
run docker stack deploy -c ~/docker-stack.yml api --with-registry-auth
and it works fine.
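Put together, the manual sequence on the server looks roughly like this (assuming the branch is master and docker login registry.gitlab.com has already been done):
git pull origin master
docker build -t my-project:latest .
docker tag my-project:latest registry.gitlab.com/my-group/my-project:staging
docker push registry.gitlab.com/my-group/my-project:staging
docker stack deploy -c ~/docker-stack.yml api --with-registry-auth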
However, if I move the commands above into a gitlab-ci.yml, the job finishes successfully but I get an error when it tries to update the service.
Updating service api_backend (id: r4gqmil66kehzf0oehzqk57on)
image registry.gitlab.com/my-group/my-project:staging could not be accessed on a registry to record
its digest. Each node will access registry.gitlab.com/my-group/my-project:staging independently,
possibly leading to different nodes running different
versions of the image.
Also, the GitLab runner is executing the commands with the shell executor.
I have tried different solutions; as you can see, I'm even using the --with-registry-auth flag.
To summarize:
everything works fine if I run the commands manually, but I get this error when I use gitlab-ci.yml.
I have an Angular application and I have created a Docker image of it, which I have published to Azure Container Registry (ACR).
I want to pull the image from ACR, deploy it to Azure App Service, and change the images and CSS files inside the Docker container at runtime.
I want to know if it is possible to update the images/CSS files at runtime, as we do with the docker cp command on localhost.
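For context, the localhost workflow referred to above is a plain docker cp into the running container; the container name and path below are only examples:
docker cp ./styles.css my_angular_container:/usr/share/nginx/html/styles.css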
I would suggest using CI/CD for this purpose.
Just create a webhook in ACR. Then, whenever the image gets updated, the Web App will automatically be "notified" and pull in the new change.
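A rough sketch of that webhook with the Azure CLI, assuming the registry is named myregistry and the Web App's CD webhook URL has been copied from its Deployment Center (all values are placeholders):
az acr webhook create --registry myregistry --name appserviceupdate --actions push --uri 'https://$myapp:<password>@myapp.scm.azurewebsites.net/docker/hook'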
I have a doubt about using Docker Swarm mode commands to update existing services after having deployed a set of services with docker stack deploy.
As far as I understood, every service is pinned to the SHA256 digest of the image at the time of creation, so if you rebuild and push an image (with the same tag) and you try to run docker service update, the service image is not updated (even if the SHA256 is different). On the contrary, if you run docker stack deploy again, all the services are updated with the new images.
I managed to update the service image also by using docker service update --image repository/image:tag <service>. Is this the normal behavior of these commands, or is there something I didn't understand?
I'm using Docker 17.03.1-ce
Docker stack deploy documentation says:
"Create and update a stack from a compose or a dab file on the swarm. This command has to be run targeting a manager node."
So the behaviour you described is as expected.
The docker service update documentation is not so clear, but as you yourself found, the image is only updated when you pass --image repository/image:tag <service>, so the flag is necessary to update the image.
You have two ways to accomplish what you want.
It is normal and expected behavior for docker stack deploy to update the images of existing services to whatever hash the specified tag points to.
If no tag is present, latest is assumed, which can be problematic at times, since the latest tag is not well understood by many people and thus leads to unexpected results.
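A short sketch of both update paths described above; stack, service, and image names are placeholders:
# option 1: re-run the stack deploy; the tag is re-resolved to the new digest
docker stack deploy -c docker-compose.yml mystack
# option 2: update the service explicitly, passing the image tag
docker service update --image repository/image:tag mystack_service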
A colleague found out about Docker and wants to use it for our project. I started using Docker for testing. After reading an article about Docker Swarm, I wanted to test it.
I have installed 3 VMs (Ubuntu Server 14.04) with Docker and Swarm. I followed some how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works. I can launch, for example, a basic Apache container (the image was pulled from Docker Hub), but I want to use my own image (an Apache server with my website).
I tried to load an image (after saving it as a .tar), but this option isn't supported by the clustering mode; the same goes for the import option.
So my question is: can I use my own image without pushing it to Docker Hub, and how do I do this?
If your own image is based on a Dockerfile that you build, you can execute the build command on your project while targeting the swarm.
However, if the image wasn't built from a Dockerfile but created manually, you need a registry in between that you can push to: either Docker Hub or some other registry solution like https://github.com/docker/docker-registry.
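A minimal sketch of the registry route, assuming a self-hosted registry reachable by every node as registry.local:5000 and a locally built image named my-apache-site (both placeholders):
# run a private registry as a plain container on one reachable host
docker run -d -p 5000:5000 --name registry registry:2
# tag and push the local image so the swarm nodes can pull it
docker tag my-apache-site registry.local:5000/my-apache-site
docker push registry.local:5000/my-apache-site
# note: nodes may need registry.local:5000 added to their insecure-registries config if the registry isn't served over TLS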