Missing tag after docker stack deploy

I am researching the new docker stack deploy command, along with the new Compose file v3 format, and it seems like docker stack deploy could almost replace docker-compose up -d.
One difference I found is quite odd: the running container shows the image name but not the tag.
Inspect Compose Up
This is a snippet from running docker inspect on a container created via docker-compose up -d.
{
  "Command": "node server.js",
  "Image": "styfle/notification-service:v1.0.0"
}
Inspect Stack Deploy
This is a snippet from running docker inspect on a container created via docker stack deploy -c docker-compose.yml --with-registry-auth=true tst.
{
  "Command": "node server.js",
  "Image": "styfle/notification-service@sha256:827e6a274c5ee2b941dde402f82069c2da644927cac53c0b2cd5acacb739f949"
}
Why is the tag (in this case, the suffix :v1.0.0) missing from the Image, and can it be found somewhere else? I'm using Docker CE 17.03.1-ce-win5 (10743).

Docker services use image pinning to ensure every node in the swarm runs the same image. If a tag is moved, or points to different images on different nodes, the sha256 digest ensures that every node uses exactly the v1.0.0 image that existed when you created the service. For more details, see Docker's documentation on the subject.
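To answer the second part: the tag is not lost, it is still recorded in the service spec, where the image is stored as name:tag@digest. A quick check (a sketch - it assumes the compose service is called notification-service, so the stack names the service tst_notification-service):
docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' tst_notification-service
# styfle/notification-service:v1.0.0@sha256:827e6a274c5ee2b941dde402f82069c2da644927cac53c0b2cd5acacb739f949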


docker-compose vs creating and running an image

I'm new to docker and trying to understand what's best for my project (a webapp).
So far, I understand that I can either:
use docker-compose up -d to start a container defined by a set of rules in a docker-compose.yaml
build an image from a Dockerfile and then create a container from this image
If I understand correctly, docker-compose up -d allows me (via volumes) to mount files (e.g. my application) into the container. If I build an image, however, I can embed my application natively in it (with a Dockerfile and a COPY instruction).
Is my understanding correct? How should I choose between these two options?
Docker Compose is simply a convenience wrapper around the docker command.
Everything you can do in docker compose, you can do with plain docker commands.
For example, these docker commands:
$ docker build -t temp .
$ docker run -i -p 3000:80 -v $PWD/public:/docroot/ temp
are similar to having this docker compose file:
version: '3'
services:
  web:
    build: .
    image: temp
    ports: ["3000:80"]
    volumes:
      - ./public:/docroot
and running:
$ docker-compose up web
Although docker compose's advantages are most obvious when using multiple containers, it can also be used to start a single container.
My advice to you is: Start without docker compose, to understand how to build a simple image, and how to run it using the docker command line. When you feel comfortable with it, take a look at docker compose.
As for best practice regarding copying files into the container versus mounting them - the answer is both, and here is why:
When you are in development mode, you do not want to rebuild the image on every code change. This is where the volume mount comes into play. However, your final docker image should contain your code so it can be deployed anywhere else. After all, this is why we use containers, right? This is where COPY comes into play.
Finally, remember that when you mount a volume into the container, it will "shadow" the contents of that folder in the container - this is why using both a mount and COPY works the way you expect.
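A quick way to see that shadowing in action, using the temp image built above (a sketch - it assumes the Dockerfile COPYs your code into /docroot):
docker run --rm temp ls /docroot                            # the files baked in by COPY
docker run --rm -v $PWD/public:/docroot temp ls /docroot    # the mounted host files shadow them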
Docker-compose is just a container orchestrator.
It just provides you a simple way to create multiple related containers. The relationship between containers can be volumes, networks, start order, environment variables, etc. (see the sketch below).
In the background, docker-compose uses plain docker. So anything you can do using docker-compose (mounting volumes, using custom networks, scaling) can be done with plain docker commands - it is just harder.
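As a sketch of those relationships (all service, network, and volume names here are made up):
version: '3'
services:
  web:
    image: temp
    depends_on:
      - db              # start order
    environment:
      - DB_HOST=db      # environment variables
    networks:
      - backend         # shared network
  db:
    image: mysql:5.6
    volumes:
      - dbdata:/var/lib/mysql   # named volume
    networks:
      - backend
networks:
  backend:
volumes:
  dbdata: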

Docker stack deploy latest image works even when image is deleted

I have a docker compose file with a container that uses the latest tag:
code_site:
  image: code_site:latest
  deploy:
    restart_policy:
      condition: any
  volumes:
    - ../../data_to_backup/code_site/drupal_sites:/drupal_www/sites
    - drupal_core:/drupal_www/core
    - php_fpm_socket:/var/run/php-fpm7
  networks:
    - main_net
I have a process which rebuilds the image. I use it when I want to make changes to my site.
I am investigating a problem with the docker stack deploy command:
docker stack deploy --compose-file=docker-compose.yml code_site
(The stack name happens to match the image name but this is a coincidence.)
If I go through the following process:
Delete the code_site:latest image (docker rmi code_site:latest)
Rebuild a new code_site:latest image
Redeploy the stack
It will bring up the OLD version of the container. This is confusing, especially as I have deleted the old version.
I went further: I deleted the code_site image entirely and then ran the stack deploy command.
The stack still deploys successfully, running the old version of the container.
I can use the docker images command and verify that there is no image named code_site:latest, so I have no idea how the stack could possibly deploy.
Can anyone explain how the image is coming back from the dead, and what method I should use to get rid of it permanently and force docker stack to use the real image?
Update 1
code_site is a locally built image
I am running on a swarm but there is only one node in the swarm
docker stack deploy will pull the latest image from your docker registry, since --resolve-image always is set by default, so the tag is always resolved to the latest pushed image. If you don't want this, run:
docker stack deploy --resolve-image never [rest of deploy command]
However, to make changes easier to maintain, I would suggest using version tags for your images in your registry, such as code_site:v1; when the code changes, push a new version tagged code_site:v2 and deploy the new image/version using the normal deploy command, without --resolve-image never.
Also, if you plan to add nodes to your swarm, you will need to change your command to docker stack deploy --with-registry-auth to allow the other nodes to pull the image from your repo.
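Applied to the deploy command from the question, that would look like this:
docker stack deploy --compose-file=docker-compose.yml --resolve-image never code_site
# and once more nodes join the swarm and the image lives in a registry:
docker stack deploy --compose-file=docker-compose.yml --with-registry-auth code_site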
Update 1
If you are 100% sure you do not have an image named code_site:latest in your docker registry, then this should work.
Run:
docker rm $(docker ps -aq)
docker volume rm $(docker volume ls -q)
To check for lingering containers/services:
List Existing Services
docker service ls
List Running Stacks
docker stack ls
List All Containers
docker ps -aq
Then redeploy with the deploy command.
Alternatively, to update your service without removing old containers/volumes/images, you can just rebuild your image and then update your service, without removing anything.
This will update your service using the new image - no need to stop, remove, then update; just update.
Docker Service Update Command
Run after new image is built:
docker service update [SERVICE NAME] --image [IMAGE NAME] --force
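In the question, the stack and the service are both named code_site, so the generated service name should be code_site_code_site (stack name plus service name); the concrete command would then be something like:
docker service update code_site_code_site --image code_site:latest --force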

Docker Swarm Deploy a local Dockerfile

I am trying to deploy a stack of services in a swarm on a local machine for testing purposes, and I want to build the docker image whenever I run or deploy a stack from the manager node.
Is what I am trying to achieve possible?
On Docker Swarm you can't build an image specified in a Docker Compose file:
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images. - from docker docs
You need to create the image with docker build (on the folder where the Dockerfile is located):
docker build -t imagename --no-cache .
After this command, the image (named imagename) is available in the local image store.
You can then use this image in your Docker Compose file like the following:
version: '3'
services:
  example-service:
    image: imagename:latest
You need to build the image with docker build. Docker swarm doesn't use tags to identify images; instead it remembers the image id (hash) of an image when executing stack deploy, because a tag might change later on but the hash never does.
Therefore you should reference the hash of your image, as shown by docker image ls, so that docker swarm will not try to find your image on some registry.
version: '3'
services:
  example-service:
    image: imagename:97bfeeb4b649
While updating a local image you will get an error like the one below:
image IMAGENAME:latest could not be accessed on a registry to record
its digest. Each node will access IMAGENAME:latest independently,
possibly leading to different nodes running different
versions of the image.
To overcome this issue, start the service forcefully as follows:
docker service update --image IMAGENAME:latest --force SERVICENAME
In the example above that is:
docker service update --image imagename:97bfeeb4b649 --force SERVICENAME

How to upgrade docker container after its image changed

Let's say I have pulled the official mysql:5.6.21 image.
I have deployed this image by creating several docker containers.
These containers have been running for some time until MySQL 5.6.22 is released. The official image of mysql:5.6 gets updated with the new release, but my containers still run 5.6.21.
How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?
After evaluating the answers and studying the topic I'd like to summarize.
The Docker way to upgrade containers seems to be the following:
Application containers should not store application data. This way you can replace the app container with a newer version at any time by executing something like this:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s). Read more about it:
About volumes (Docker docs)
Tiny Docker Pieces, Loosely Joined (by Tom Offermann)
How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which guarantees reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).
I'd like to thank everybody who gave their answers, so we could see all different approaches.
I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers using entirely docker-managed containers. Creating a new docker container with --volumes-from <container> gives the new container (with the updated image) shared ownership of the docker-managed volumes.
docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql
By not immediately removing the original my_mysql_container, you keep the ability to revert to the known-working container if the upgraded container doesn't have the right data or fails a sanity test.
At this point, I'll usually run whatever backup scripts I have for the container to give myself a safety net in case something goes wrong:
docker stop my_mysql_container
docker start my_mysql_container_tmp
Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container
The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can assume the namesake of the original to make everything as tidy as it was to begin with.
There are two major advantages to using this pattern for upgrading docker containers. First, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Second, you are never left without a working docker container, so if the upgrade fails, you can easily revert to how things were before by spinning up the original container again - see the rollback sketch below.
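If the sanity check fails, the rollback is just the mirror image of the swap:
docker stop my_mysql_container_tmp
docker start my_mysql_container
docker rm my_mysql_container_tmp    # discard the failed upgrade once the original is healthy again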
Just to provide a more general (not MySQL-specific) answer...
In short
Synchronize with service image registry (https://docs.docker.com/compose/compose-file/#image):
docker-compose pull
Recreate the container if the docker-compose file or the image has changed:
docker-compose up -d
Background
Container image management is one of the reasons for using docker-compose
(see https://docs.docker.com/compose/reference/up/)
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
The data management aspect is also covered by docker-compose through mounted external "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or data containers.
This leaves potential backward compatibility and data migration issues untouched, but these are "applicative" issues, not Docker-specific ones, which have to be checked against release notes and tests...
I would like to add that if you want to do this process automatically (download, stop, and restart a new container with the same settings, as described by @Yaroslav), you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
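Watchtower itself runs as a container with access to the Docker socket; per its README, starting it looks roughly like this:
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower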
Assume the following for this answer:
The database name is app_schema
The container name is app_db
The root password is root123
How to update MySQL when storing application data inside the container
This is considered bad practice because if you lose the container, you lose the data. Although it is bad practice, here is a possible way to do it:
1) Do a database dump as SQL:
docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql
2) Update the image:
docker pull mysql:5.6
3) Update the container:
docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6
4) Restore the database dump:
docker exec app_db sh -c 'exec mysql -uroot -proot123' < database_dump.sql
How to update MySQL container using an external volume
Using an external volume is a better way of managing data, and it makes it easier to update MySQL: losing the container will not lose any data. You can use docker-compose to facilitate managing a multi-container Docker application on a single host:
1) Create the docker-compose.yml file in order to manage your applications:
version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    image: busybox   # assumption: a data-only container needs some minimal image to be valid
    volumes:
      - /my/data/dir:/var/lib/mysql
2) Update MySQL (from the same folder as the docker-compose.yml file):
docker-compose pull
docker-compose up -d
Note: the last command above will update the MySQL image, then recreate and start the container with the new image.
A similar answer to the one above:
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
Here's what it looks like using docker-compose when building a custom Dockerfile.
Build your custom Dockerfile first, appending a new version number to differentiate it, e.g. docker build -t imagename:version . This will store your new version locally.
Run docker-compose down
Edit your docker-compose.yml file to reflect the new image name you set at step 1.
Run docker-compose up -d. It will look locally for the image and use your upgraded one.
-EDIT-
My steps above are more verbose than they need to be. I've since optimized my workflow by including the build: . parameter in my docker-compose file. The steps look like this now:
Verify that my Dockerfile is what I want it to look like.
Set the version number of my image name in my docker-compose file.
If my image isn't built yet: run docker-compose build
Run docker-compose up -d
I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.
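As a sketch, the relevant part of such a docker-compose file might look like this (the service name web and the tag are placeholders):
version: '3'
services:
  web:
    build: .               # build from the local Dockerfile
    image: imagename:1.1   # bump this tag for each new version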
If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.
You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than the one you design yourself.
Taken from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/
You can update all your existing images using the following command pipeline:
docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull
Make sure you are using volumes for all the persistent data (configuration, logs, or application data) that relates to the state of the processes inside the container. Update your Dockerfile, rebuild the image with the changes you want, and restart the containers with your volumes mounted in their appropriate places.
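As a generic sketch of that cycle (the image, container, and volume names are placeholders):
docker build -t myimage:latest .      # rebuild with your Dockerfile changes
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer -v appdata:/var/lib/app myimage:latest   # same volume, new image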
I tried a bunch of things from here, but this is what eventually worked for me.
If you have AutoRemove: On on the containers so you can't stop and edit them, or a service is running that can't be stopped even momentarily, you must:
Pull the latest image: docker pull [image:latest]
Verify that the correct image was pulled; you can see the unused tag in the Portainer Images section.
Update the service using Portainer or the CLI, making sure you use the latest version of the image; Portainer will give you the option to do so.
This not only updates the container with the latest image, but also keeps the service running.
This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.
Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.
So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.
The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.
I'd be interested in hearing whether there is any prior art that addresses this scenario.
Update
This is mainly for querying containers, not for updating them - rebuilding images is the proper way to make changes.
I had the same issue, so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.
It uses docker-py to communicate with running docker containers and update packages, or to run any arbitrary single command.
Examples:
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec
By default this will run the date command in all running containers and return the results, but you can issue any command, e.g. docker-run exec "uname -a"
To update packages (currently only using apt-get):
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update
You can create an alias and use it as a regular command, e.g.:
alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'
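With the alias in place, the earlier examples shorten to:
docker-run exec "uname -a"
docker-run update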
