My docker data-only container is empty

The scenario I most feared has happened: my data-only docker container is suddenly empty.
This is not serious: it's a development machine and I have a backup. But this is what I fear most, because I know that I still have holes in my understanding of Docker.
I have read in this answer the following:
Docker containers will persist on disk until they are explicitly deleted with docker rm.
Here are the containers I'm interested in (from a docker ps command):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
478e59ecd218 dockerlocal_mongo_instance "/entrypoint.sh mongo" About an hour ago Exited (137) 12 minutes ago dockerlocal_mongo_instance_1
0ca49f6629cb tianon/true "/true" 3 hours ago Exited (0) About an hour ago dockerlocal_mongo_data_1
I have 1) a mongo container which references the data-only container, and 2) the data-only container itself. I recently ran docker rm a couple of times on the mongo container, dockerlocal_mongo_instance_1, which references the data-only container.
I can see from the output of the docker ps command (see above) that the data-only container was created '3 hours ago', but I created it about two weeks ago. Somehow my original one has gone. My question is: how could this happen? What other possibilities are there?
I have checked my bash command history and the docker rm command was run only on the mongo container, not on the data-only container - which for obvious reasons I have been extremely careful not to touch.
Can anyone shed any light on this? I must have misunderstood something fundamental here.
I would be grateful for any other possible scenarios that could cause the data-container to be trashed and re-created in this way.
Docker compose .yml file (relevant bits):
mongo_data:
  image: tianon/true
  volumes:
    - /data/db

mongo_instance:
  build: mongodb
  volumes_from:
    - mongo_data
  ports:
    - "27017:27017"
  environment:
    - MONGODB_USER=$S_USER_NAME
    - MONGODB_PASS=$S_USER_PASSWORD
  # command: --auth

There are a couple of things you need to understand.
A data container doesn't need to be, and shouldn't be, running. It's really just a namespace for a set of volumes that can then be referred to from other containers. In your case, every time you start up your application, the data container will start, run true, then shut down. It would be better if you just created the container once and never ran it again.
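For example, a minimal sketch of that create-once approach (the image and container names here are just placeholders matching the setup above):
# create the data container once; it never needs to run again
docker create -v /data/db --name mongo_data mongodb /bin/true
# application containers attach to its volumes with --volumes-from
docker run -d --name mongo_instance --volumes-from mongo_data -p 27017:27017 mongodb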
Docker Compose defines the running services that make up your application. It has a lot of logic for deciding when to recreate containers and when to reuse existing ones, and at some stage that logic decided to recreate your data container (I'm not sure why in this case). You should only define things in Compose that do not need to be persisted. Also note that Compose will attempt to copy volumes from old containers to new ones, which can cause confusion if you're not expecting it.
In your case, the solution is to define the data container outside Compose, e.g.:
docker run --name mongo_data mongodb echo "Data Container"
This will run the echo command then immediately exit. You can then remove the mongo_data entry from the Compose yaml. Note that I have intentionally used the mongodb image rather than tianon/true; as a data container isn't left running, it won't take up any extra space, and using the mongodb image ensures file permissions etc. are correct.

If you ever ran docker-compose rm (or, worse, docker-compose rm -f), that would have deleted all extant containers defined in your docker-compose.yml. Note that even if you only meant to start the mongo_instance container with docker-compose up mongo_instance, the mongo_data container would have been created as well, since mongo_instance depends on mongo_data, so running docker-compose rm while mongo_instance was around would delete both containers.
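If you want to check what Compose considers part of the project before removing anything, you can list the containers it manages by label (a sketch; the project name is usually the directory name, assumed here to be dockerlocal):
docker ps -a --filter "label=com.docker.compose.project=dockerlocal"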

The answer for me in the end is to forget using a docker data container and just set a normal volume on the mongo container.
mongo_instance:
  build: mongodb
  volumes:
    - /data/db:/data/db
  ports:
    - "27017:27017"
  environment:
    - MONGODB_USER=$S_USER_NAME
    - MONGODB_PASS=$S_USER_PASSWORD
Why?
The main reason to use Docker Compose, in my view, is its portability: you can run all your docker commands from one place, which means a simple installation. So if using a docker data container means moving the creation of that container out of the .yml file, I am complicating everything - I first have to create the data container and then run docker-compose. I would much prefer to just upload my .yml file and run it.
I found problems anyway with creating the data container on its own before calling docker-compose. You are advised to reuse your own images when creating data containers rather than small third-party images like tianon/true. But my own images only get created when I run docker-compose, which at that point I haven't run yet - a chicken-and-egg situation.
I tried creating a data container using tianon/true (and others) and I always ran into permission problems.
So I have removed the data container and used a volumes entry on mongo_instance. I hope this also solves my original problem ...
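A quick way to convince yourself the data now survives container recreation (a sketch, using the service and paths from the yaml above):
docker-compose up -d mongo_instance
docker-compose stop mongo_instance
docker-compose rm -f mongo_instance
docker-compose up -d mongo_instance
ls /data/db    # the database files on the host are still there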

docker-compose vs creating and running an image

I'm new to docker and trying to understand what's best for my project (a webapp).
So far, I understand that I can either:
use docker-compose up -d to start a container defined by a set of rules in a docker-compose.yaml
build an image from a Dockerfile and then create a container from this image
If I understand correctly, docker-compose up -d allows me (via volumes) to mount files (e.g. my application) into the container. If I build an image, however, I am able to embed my application natively in it (with a Dockerfile and a COPY instruction).
Is my understanding correct? How should I choose between these two options?
Docker Compose is simply a convenience wrapper around the docker command.
Everything you can do in Docker Compose, you can do with plain docker commands.
For example, these docker commands:
$ docker build -t temp .
$ docker run -i -p 3000:80 -v $PWD/public:/docroot/ temp
are similar to having this docker compose file:
version: '3'
services:
  web:
    build: .
    image: temp
    ports: ["3000:80"]
    volumes:
      - ./public:/docroot
and running:
$ docker-compose up web
Although Docker Compose's advantages are most obvious when using multiple containers, it can also be used to start a single container.
My advice to you is: Start without docker compose, to understand how to build a simple image, and how to run it using the docker command line. When you feel comfortable with it, take a look at docker compose.
As for the best practice in regards to copying files to the container, or mounting them - the answer is both, and here is why:
When you are in development mode, you do not want to build the image on every code change. This is where the volume mount comes into play. However, your final docker image should contain your code so it can be deployed anywhere else. After all, this is why we use containers, right? This is where COPY comes into play.
Finally, remember that when you mount a volume to the container, it will "shadow" the contents of that folder in the container - this is how using both mount and COPY actually works as you expect it to work.
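As a sketch of what that looks like in practice (using the image and paths from the example above):
# production-style run: the image already contains ./public, baked in by COPY
docker run -i -p 3000:80 temp
# development run: the bind mount shadows the copied files, so host edits show up immediately
docker run -i -p 3000:80 -v $PWD/public:/docroot temp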
Docker-compose is just a container orchestrator.
It just provides you with a simple way to create multiple related containers. The relationship between containers can be volumes, networks, start order, environment variables, etc.
In the background, docker-compose uses plain docker, so anything you can do using docker-compose (mounting volumes, using custom networks, scaling) can be done with docker commands (it is just harder).

Dockerized Neo4J maintaining Volume through restarts somehow?

I'm using Docker Compose with the standard Neo4j image. I'm not specifying any volumes to be mounted, but when I bring the deployment down and back up, it still has the old data. I can't find where or how it is storing this.
I'd like to override it so each new container has a fresh database. Any ideas?
docker-compose.yml:
neo4j:
  image: neo4j:3.0
  ports:
    - "7474:7474"
    - "7687:7687"
  environment:
    - NEO4J_AUTH=neo4j/password
  volumes: []
I'm trying to override any volumes with volumes: [], but that isn't working.
You can purge all the data when taking down your docker-compose deployment by appending the -v flag to the command, i.e.:
docker-compose down -v
This will stop your deployment, delete the related containers and remove the associated volumes.
EDIT
As Fryguy said in the comments, you can recreate your anonymous volumes when launching a new deployment with the -V flag (capital letter), so:
docker-compose up -V
will recreate anonymous volumes instead of reusing the data from the previous containers.
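Putting the two together, a typical reset cycle looks like this (sketch):
docker-compose down -v    # remove the containers and their anonymous volumes
docker-compose up -d -V   # start fresh, renewing anonymous volumes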
After you issue the docker-compose down command, it should remove all of the containers. Verify that they are removed.
docker ps -a
You can remove any unattached volumes with:
docker volume prune
Another thing that I came across was that I had installed Neo4j Desktop, and while I assumed I was using the docker instance, I was really using a local database. I finally realized this when I shut down my docker container but was still able to hit the database.
That's the expected behavior when starting and stopping a container. The file system is preserved.
If you want to clear out your data, start a new container with docker run. With Compose, you can use the --force-recreate flag.

How can I reuse a Docker container as a service?

I already have a running container for both postgres and redis in use for various things. However, I started those from the command line months ago. Now I'm trying to install a new application and the recipe for this involves writing out a docker compose file which includes both postgres and redis as services.
Can the compose file be modified in such a way as to specify the already-running containers? Postgres already does a fine job of siloing any of the data, and I can't imagine that it would be a problem to reuse the running redis.
Should I even reuse them? It occurs to me that I could run multiple containers for both, and I'm not sure there would be any disadvantage to that (other than a cluttered docker ps output).
When I set container_name to the names of the existing containers, I get what I assume is a rather typical error of:
cb7cb3e78dc50b527f71b71b7842e1a1c". You have to remove (or rename) that container to be able to reuse that name.
Followed by a few that complain that the ports are already in use (5432, 6579, etc.).
Other answers here on Stack Overflow suggest that if I had originally invoked these services from another compose file with the exact same details, I could do so here as well and it would reuse them. But the command I used to start them was somehow never written to my bash_history, so I'm not even sure of the details (other than the names, ports, and restart always).
Are you looking for docker-compose's external_links keyword?
external_links allows you to reuse already running containers.
According to the docker-compose specification:
This keyword links to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to the legacy option links when specifying both the container name and the link alias (CONTAINER:ALIAS).
And here's the syntax:
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
You can give your container a name. If there is no container with the given name, then it is the first time the image is run. If the named container is found, restart it.
This way, you can reuse the container. Here is my sample script:
containerName="IamContainer"
if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerName}\$"; then
  docker restart ${containerName}
else
  docker run --name ${containerName} -d hello-world
fi
You probably don't want to keep using a container that you don't know how to create. However, the good news is that you should be able to figure out how you can create your container again by inspecting it with the command
$ docker container inspect ID
This will display all settings, the docker-compose specific ones will be under Config.Labels. For container reuse across projects, you'd be interested in the values of com.docker.compose.project and com.docker.compose.service, so that you can pass them to docker-compose --project-name and use them as the service's name in your docker-compose.yaml.
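For example, a quick way to read those two labels (a sketch; postgres stands in for your actual container name or ID):
docker container inspect postgres --format '{{ index .Config.Labels "com.docker.compose.project" }} / {{ index .Config.Labels "com.docker.compose.service" }}'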

How to upgrade docker container after its image changed

Let's say I have pulled the official mysql:5.6.21 image.
I have deployed this image by creating several docker containers.
These containers had been running for some time until MySQL 5.6.22 was released. The official mysql:5.6 image gets updated with the new release, but my containers still run 5.6.21.
How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?
After evaluating the answers and studying the topic I'd like to summarize.
The Docker way to upgrade containers seems to be the following:
Application containers should not store application data. This way you can replace the app container with a newer version at any time by executing something like this:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s). Read more about it:
About volumes (Docker docs)
Tiny Docker Pieces, Loosely Joined (by Tom Offermann)
How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which should guarantee reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).
I'd like to thank everybody who gave their answers, so we could see all different approaches.
I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers with entirely docker-managed containers. Creating a new docker container with --volumes-from <container> will give the new container (with the updated image) shared ownership of the docker-managed volumes.
docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql
By not immediately removing the original my_mysql_container, you keep the ability to revert to the known working container if the upgraded container doesn't have the right data or fails a sanity test.
At this point, I'll usually run whatever backup scripts I have for the container, to give myself a safety net in case something goes wrong.
docker stop my_mysql_container
docker start my_mysql_container_tmp
Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container
The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can take over the name of the original to make everything as tidy as it was to begin with.
There are two major advantages to using this pattern for upgrading docker containers. Firstly, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Secondly, you are never in a position where there isn't a working docker container; so if the upgrade fails, you can easily revert to how it was working before by spinning up the original docker container again.
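For completeness, the rollback path that the second point relies on is just (a sketch, using the same container names as above):
docker stop my_mysql_container_tmp
docker rm my_mysql_container_tmp
docker start my_mysql_container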
Just to provide a more general (not MySQL-specific) answer...
In short
Synchronize with service image registry (https://docs.docker.com/compose/compose-file/#image):
docker-compose pull
Recreate container if docker-compose file or image have changed:
docker-compose up -d
Background
Container image management is one of the reasons for using docker-compose
(see https://docs.docker.com/compose/reference/up/)
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
The data management aspect is also covered by docker-compose through mounted external "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or data containers.
This leaves potential backward compatibility and data migration issues untouched, but these are application-level issues, not Docker-specific ones, which have to be checked against release notes and tests...
I would like to add that if you want to do this process automatically (download, stop and restart a new container with the same settings as described by #Yaroslav), you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
For this answer, consider:
The database name is app_schema
The container name is app_db
The root password is root123
How to update MySQL when storing application data inside the container
This is considered a bad practice, because if you lose the container, you will lose the data. Although it is a bad practice, here is a possible way to do it:
1) Do a database dump as SQL:
docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql
2) Update the image:
docker pull mysql:5.6
3) Update the container:
docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6
4) Restore the database dump:
docker exec app_db sh -c 'exec mysql -uroot -proot123' < database_dump.sql
How to update MySQL container using an external volume
Using an external volume is a better way of managing data, and it makes it easier to update MySQL. Losing the container will not lose any data. You can use docker-compose to facilitate managing multi-container Docker applications on a single host:
1) Create the docker-compose.yml file in order to manage your applications:
version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    image: mysql:5.6   # data-only container; any image works, it never stays running
    command: "true"
    volumes:
      - /my/data/dir:/var/lib/mysql
2) Update MySQL (from the same folder as the docker-compose.yml file):
docker-compose pull
docker-compose up -d
Note: the last command above will update the MySQL image, recreate and start the container with the new image.
Similar answer to above
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
Here's what it looks like using docker-compose when building a custom Dockerfile.
Build your custom Dockerfile first, appending the next version number to differentiate it. Ex: docker build -t imagename:version . This will store your new version locally.
Run docker-compose down
Edit your docker-compose.yml file to reflect the new image name you set at step 1.
Run docker-compose up -d. It will look locally for the image and use your upgraded one.
-EDIT-
My steps above are more verbose than they need to be. I've optimized my workflow by adding the build: . parameter to my docker-compose file. The steps look like this now:
Verify that my Dockerfile is what I want it to look like.
Set the version number of my image name in my docker-compose file.
If my image isn't built yet: run docker-compose build
Run docker-compose up -d
I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.
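In other words, the whole flow condenses to (a sketch, assuming build: . is set for the service):
docker-compose build    # only needed if the image hasn't been built yet
docker-compose up -d    # recreates the container on the new image in one step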
If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.
You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than the one you design yourself.
Taken from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/
You can update all your existing images using the following command pipeline:
docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull
Make sure you are using volumes for all the persistent data (configuration, logs, or application data) that relates to the state of the processes inside the container. Update your Dockerfile, rebuild the image with the changes you wanted, and restart the containers with your volumes mounted in their appropriate places.
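As a sketch of that flow (the image name myapp, the tag, the named volume myapp-data and its mount path are all placeholders):
docker build -t myapp:2 .                # rebuild the image with your Dockerfile changes
docker stop myapp && docker rm myapp     # remove the old container; the named volume stays
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:2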
Tried a bunch of things from here, but this worked out for me eventually.
If you have AutoRemove: On on the containers, you can't stop and edit them; or perhaps a service is running that can't be stopped even momentarily.
In that case, you must:
Pull the latest image: docker pull [image:latest]
Verify that the correct image was pulled; you will see the unused tag in the Portainer Images section.
Update the service using Portainer or the CLI and make sure you use the latest version of the image; Portainer will give you the option to do so.
This not only updates the container with the latest image, but also keeps the service running.
This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.
Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.
So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.
The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.
I'd be interested in hearing whether there is any prior art that addresses this scenario.
Update
This is mainly for querying containers, not updating them, as rebuilding the image is the proper way to update.
I had the same issue so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.
It uses docker-py to communicate with running docker containers and update packages or run any arbitrary single command.
Examples:
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec
by default this will run date command in all running containers and return results but you can issue any command e.g. docker-run exec "uname -a"
To update packages (currently only using apt-get):
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update
You can create an alias and use it as a regular command-line tool, e.g.:
alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'
