docker service update vs docker stack deploy with existing stack

I have a question about using docker swarm mode commands to update existing services after having deployed a set of services using docker stack deploy.
As far as I understand, every service is pinned to the SHA256 digest of the image at the time of creation, so if you rebuild and push an image (with the same tag) and you run docker service update, the service image is not updated (even though the SHA256 is different). On the contrary, if you run docker stack deploy again, all the services are updated with the new images.
I managed to update the service image also by using docker service update --image repository/image:tag <service>. Is this the normal behavior of these commands, or is there something I misunderstood?
I'm using Docker 17.03.1-ce

Docker stack deploy documentation says:
"Create and update a stack from a compose or a dab file on the swarm. This command has to be run targeting a manager node."
So the behaviour you described is as expected.
The docker service update documentation is not so clear, but as you found yourself, it only picks up a new image when run with --image repository/image:tag <service>, so the flag is necessary to update the image.
You have two ways to accomplish what you want.
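In other words (using the placeholder names from the question), either redeploy the whole stack, which re-resolves each tag to its current digest, or update a single service explicitly:
$ docker stack deploy -c docker-compose.yml <stack_name>
$ docker service update --image repository/image:tag <service>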

It is normal and expected behavior for docker stack deploy to update the images of existing services to whatever hash the specified tag is linked to.
If no tag is present, latest is assumed - which can be problematic at times, since the latest tag is not well understood by most people, and can thus lead to some unexpected results.
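To see which digest a running service is actually pinned to, you can inspect it (the service name is a placeholder); the image is recorded as tag@sha256-digest:
$ docker service inspect <service> --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}'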

How to restart a docker stack container by cron?

Does anyone know if there is a way to have docker swarm restart one service that is part of a stack without restarting the whole stack?
Doing docker stack deploy again is, for me, the way to go to update services. As in Francois' answer, and also in my own experience, doing so updates only the services that need to be updated.
But sometimes, it seems easier when testing stuff to only restart a single service. In my case, I had to clear the volume and update the service to start it fresh. I'm not sure if there is a downside to the method I will describe. I tested it on my development stack and it worked great for me.
Get the id of the service you want to tear down, then use docker service update --force <id> to force the update of the service, which effectively re-deploys it:
$ docker stack services <stack_name>
ID NAME ...
3xrdy2c7pfm3 stack-name_api ...
$ docker service update --force 3xrdy2c7pfm3
The --force flag will force the service to update causing it to restart.
Scale to 0 and back up:
docker service scale myservice=0
docker service scale myservice=10
Looking at the docker stack documentation, the extended description says:
Create and update a stack from a compose or a dab file on the swarm
From this blog article: docker stack works in a similar way as docker compose. It's idempotent. If the stack is already deployed, docker stack deploy will restart only those services whose digest or tag has been updated:
From my experience, when I deploy the same stack again with one service changing, only the updated service will be restarted.
BUT... there seem to be some limitations to the changes that are taken into account (some report bugs with image tags), so give it a try and see if it works as expected.
You can also use docker service update if you want to be sure that only the targeted service is updated with your changes.
You can also refer to this similar SO QA.
As per the example in the documentation for rolling updates:
$ docker service update --image redis:3.0.7 redis
However, that only works if your image is already on the local machines. If not then you need to use --with-registry-auth to send registry authentication details to the swarm agents. See details in the docker service update documentation.
$ docker service update --with-registry-auth --image redis:3.0.7 redis
To restart a single service (with a rolling restart to avoid downtime in case the service has multiple replicas) in an already configured, existing stack, you can do:
docker service update --force stack_service_name
I don't recommend running the same command again (in another shell) until this one completes (because otherwise the rolling restart isn't guaranteed; it might restart all replicas of that service).
The docker service update command also checks for a newer version of the image:tag you are trying to use.
If your registry requires auth, also pass --with-registry-auth argument, like so:
docker service update --force --with-registry-auth stack_service_name
If you don't pass this argument, the service will still be restarted, but the check won't be made and the service will keep using the old container image without pulling the new one first - which might be what you want.
In case you also want to switch to a different image tag (or a completely different image), you can do that from here too, but remember to also change the tag in your docker-stack.yml, or your next docker stack deploy will revert it back to the version defined there:
docker service update --with-registry-auth --force --image nginx:edge stack_service_name
Alternatively, remove the stack entirely:
docker stack rm stack_name
and redeploy it:
docker stack deploy -c docker-compose.yml stack_name

Docker stack deploy is not updating existing containers

I am deploying 4 containers using docker stack deploy as below:
docker stack deploy --compose-file compose.yml --with-registry-auth myapp
For the first time, the containers are built using the latest image on the registry, no problem.
But when I push new images to the registry and run the commands again, the containers are not rebuilt using the latest images.
I am using the latest tag on my images. I know it is not the recommended way to do things, but from what I have read in the documentation, docker stack deploy, when using the latest tag, will check the image SHA against the registry, and if it is different the containers will be rebuilt using the latest images. But in my case, that's not happening. Am I missing something here?
I also get an error/warning when I run docker stack deploy once the stack is already up:
Updating service service_name (id: some_hash_value)
image docker.pkg.github.com/username/repository/image-name:latest could not be accessed on a registry to record its digest. Each node will access docker.pkg.github.com/username/repository/image-name:latest independently, possibly leading to different nodes running different versions of the image.
I encountered the same error message when I started using a new docker registry. The new registry's SSL certificate was not considered secure by docker.
So I got this error until I added my new registry to the insecure-registries section of /etc/docker/daemon.json.
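For reference, the relevant daemon.json entry looks like this (the registry host is a placeholder), after which the docker daemon must be restarted:
{
  "insecure-registries": ["my-registry.example.com"]
}
$ sudo systemctl restart docker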
I've seen nobody mentioning this solution on this question or other similar ones, so I hope this can help.

Save and later restore Docker stack state

I'm using docker stack deploy -c docker-compose.yml somestack to deploy to a docker swarm. However, I can later scale it with docker service scale somestack_someservice=5 (or whatever). So now docker-compose.yml no longer reflects the system. My question is, is there any way to save off the current configuration of the stack, and then later reapply it, similar to how I originally created it (with docker-compose.yml)?
There is no direct way to generate a docker-compose.yml file, although you can use the
docker service inspect --pretty <service-name> command to obtain all of the configuration in text/JSON format.
With some effort, you could reconstruct a docker-compose.yml from it.
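For example, to read back a setting that was changed after deployment, such as the replica count (the service name is illustrative):
$ docker service inspect somestack_someservice --format '{{.Spec.Mode.Replicated.Replicas}}'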
Also, consider the lack of this feature an advantage: if you want to make some adjustments, make them in docker-compose.yml first and then call docker stack deploy to apply them.

Local development and swarm service image update

We are using Docker Swarm on developers' machines for development. A docker service is using e.g. the foo:beta image.
When a developer builds a new feature for foo, they build a new image of the container locally, under the same name (the sha is different).
However, we have not been able to update the service to use the new image version. We tried
docker service update --force --image <component>
without success.
We are running the latest edge docker build: 17.05.0-ce-rc1-mac8 (16582)
The key is to use a local tag for images that does not exist on the remote repository. When Swarm can't find the image by the given tag on the remote repo, it will use the local one.
For that purpose, we also tag all developer-related images with e.g. a dev tag that only exists on the developer's machine. This way we can update the image and, by forcing the service update, update the running code.
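A minimal sketch of that workflow, assuming an image named foo and a stack service named mystack_foo (both names illustrative):
$ docker build -t foo:dev .
$ docker service update --force --image foo:dev mystack_foo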

Upgrade docker container to latest image

We are trying to upgrade a docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a docker image having version 1.1
Create a container with image 1.1
Now we have fixed some issue in image 1.1 and uploaded it as 1.2
After that I want to update the container running 1.1 to 1.2
Below are the steps I thought I would follow:
Pull the latest image
Inspect the docker container to get all the info (ports, mapped volumes, etc.)
Stop the current container
Remove the current container
Create a container with the values from step 2, using the latest image.
The problem I am facing is that I don't know how to use the output of the docker inspect command while creating the container.
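For concreteness, the manual version of those steps looks like this; every name, port, and volume below is a placeholder you would read out of the docker inspect output from step 2:
$ docker pull myrepo/myapp:1.2
$ docker stop myapp
$ docker rm myapp
$ docker run -d --name myapp -p 8080:80 -v mydata:/data myrepo/myapp:1.2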
What you should have done in the first place:
In production environments with lots of containers, you will lose track of docker run commands. In order to keep up with the complexity, use docker-compose.
First you need to install docker-compose. Refer to the official documentation for that.
Then create a yaml file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm and mysql).
Having done all that, when you want to upgrade containers to newer versions, you just change the version in the yaml file, and do a docker-compose down and docker-compose up.
Refer to the compose documentation for more info.
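A minimal compose sketch of that idea (image name, port, and service name are placeholders). To upgrade, bump the tag from 1.1 to 1.2 in the yaml file and recreate the containers:
version: "3"
services:
  app:
    image: myrepo/myapp:1.1
    ports:
      - "8080:80"
    restart: unless-stopped
$ docker-compose down
$ docker-compose up -d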
What to do now:
Start by reading the docker inspect output, then gather the facts below (example commands follow the list):
Ports published (host and container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
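Each of these can be pulled out of docker inspect with a Go template (the container name is a placeholder):
$ docker inspect mycontainer --format '{{json .HostConfig.PortBindings}}'
$ docker inspect mycontainer --format '{{json .NetworkSettings.Networks}}'
$ docker inspect mycontainer --format '{{json .Mounts}}'
$ docker inspect mycontainer --format '{{json .Config.Cmd}}'
$ docker inspect mycontainer --format '{{json .Config.Env}}'
$ docker inspect mycontainer --format '{{.HostConfig.RestartPolicy.Name}}'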
Then try to create a docker-compose yaml file with those facts on a test machine, and test your setup.
When confident enough, roll it out in production and keep the latest compose yaml for later reference.
