How to migrate a Docker volume between hosts?

Docker's documentation states that volumes can be "migrated" - which I'm assuming means that I should be able to move a volume from one host to another host. (More than happy to be corrected on this point.) However, the same documentation page doesn't provide information on how to do this.
Digging around on SO, I have found an older question (circa 2015-ish) that states that this is not possible, but given that it's 2 years on, I thought I'd ask again.
In case it helps, I'm developing a Flask app that uses TinyDB + local disk as its data storage - I have determined that I don't need anything fancier than that; this is a project done for learning at the moment, so I've decided to go extremely lightweight. The project is structured as follows:
/project_directory
|- /app
|  |- __init__.py
|  |- ...
|- run.py  # assumes `data/databases/` and `data/files/` are present
|- Dockerfile
|- data/
|  |- databases/
|  |  |- db1.json
|  |  |- db2.json
|  |- files/
|     |- file1.pdf
|     |- file2.pdf
I have data/* in my .dockerignore and .gitignore, so that the data files are not placed under version control and are ignored by Docker when building the images.
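For reference, the relevant entry is just the data directory itself; something like the following line in both .dockerignore and .gitignore:
data/*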
While developing the app, I am also trying to work with database entries and PDFs that are as close to real-world as possible, so I seeded the app with a very small subset of real data, that are stored on a volume that is mounted directly into data/ when the Docker container is instantiated.
What I want to do is deploy the container on a remote host, but have the remote host seeded with the starter data (ideally, this would be the volume that I've been using locally, for maximal convenience); later on as more data are added on the remote host, I'd like to be able to pull that back down so that during development I'm working with up-to-date data that my end users have entered.
Looking around, the "hacky" way I'm thinking of doing this is simply to use rsync, which might work out just fine. However, if there's a solution I'm missing, I'd greatly appreciate guidance!

The way I would approach this is to generate a Docker container that stores a copy of the data you want to seed your development environment with. You can then expose the data in that container as a volume, and finally mount that volume into your development containers. I'll demonstrate with an example:
Creating the Data Container
Firstly we're just going to create a Docker container that contains your seed data and nothing else. I'd create a Dockerfile at ~/data/Dockerfile and give it the following content:
# Minimal base image that exists only to carry the seed data
FROM alpine:3.4
# Copy everything alongside this Dockerfile into /data in the image
ADD . /data
# Expose /data as a volume so other containers can mount it
VOLUME /data
# No long-running process needed; exit immediately when run
CMD ["/bin/true"]
You could then build this with:
docker build -t myproject/my-seed-data .
This will create a Docker image tagged as myproject/my-seed-data:latest. The image simply contains all of the data you want to seed the environment with, stored at /data within the image. Whenever we create an instance of the image as a container, it will expose all of the files within /data as a volume.
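If you want to sanity-check the image before wiring it into anything, you can override the default command and list the baked-in files (a quick check, assuming the alpine base above, which ships with ls):
docker run --rm myproject/my-seed-data ls -R /data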
Mounting the volume into another Docker container
I imagine you're running your Docker container something like this:
docker run -d -v $(pwd)/data:/data your-container-image <start_up_command>
You could now extend that to do the following:
docker run -d --name seed-data myproject/my-seed-data
docker run -d --volumes-from seed-data your-container-image <start_up_command>
What we're doing here is first creating an instance of your seed data container. We're then creating an instance of the development container and mounting the volumes from the data container into it. This means that you'll get the seed data at /data within your development container.
This gets to be a little bit of a pain now that you need to run two commands, so we could orchestrate it a bit better with something like Docker Compose.
Simple Orchestration with Docker Compose
Docker Compose is a way of running more than one container at the same time. You can declare what your environment needs to look like and do things like define:
"My development container depends on an instance of my seed data container"
You create a docker-compose.yml file to lay out what you need. It would look something like this:
version: '2'
services:
  seed-data:
    image: myproject/my-seed-data:latest
  my_app:
    build: .
    volumes_from:
      - seed-data
    depends_on:
      - seed-data
You can then start all containers at once using docker-compose up -d my_app. Docker Compose is smart enough to start an instance of your data container first, and then your app container.
Sharing the Data Container between hosts
The easiest way to do this is to push your data container as an image to Docker Hub. Once you have built the image, it can be pushed to Docker Hub as follows:
docker push myproject/my-seed-data:latest
It's very similar in concept to pushing a Git commit to a remote repository, except in this case you're pushing a Docker image. What this means, however, is that any environment can now pull this image and use the data contained within it. You can re-generate the data image whenever you have new seed data, push it to Docker Hub under the :latest tag, and when you restart your dev environment it will have the latest data.
To me this is the "Docker" way of sharing data and it keeps things portable between Docker environments. You can also do things like have your data container generated on a regular basis by a job within a CI environment like Jenkins.
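As a rough sketch of that CI idea, the job could be as simple as a script like this (the script name and paths are hypothetical; adjust to wherever your seed data lives):
#!/bin/sh
# rebuild-seed-data.sh -- hypothetical CI job that refreshes the seed data image
set -e
cd ~/data    # directory containing the Dockerfile and seed data
docker build -t myproject/my-seed-data:latest .
docker push myproject/my-seed-data:latest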

According to the Docker docs, you could also create a backup and restore it:
Backup volume
docker run --rm --volumes-from CONTAINER \
  -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /MOUNT_POINT_OF_VOLUME
Restore volume from backup on another host
docker run --rm --volumes-from CONTAINER \
  -v $(pwd):/backup ubuntu bash -c "cd /MOUNT_POINT_OF_VOLUME && \
  tar xvf /backup/backup.tar --strip 1"
OR (what I prefer) just copy it to local storage
docker cp --archive CONTAINER:/MOUNT_POINT_OF_VOLUME ./LOCAL_FOLDER
then copy it to the other host and start the container with, e.g.
docker run -v $(pwd)/LOCAL_FOLDER:/MOUNT_POINT_OF_VOLUME some_image
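Putting those pieces together, the whole migration might look like this (host name and paths are placeholders):
# On the source host: copy the volume contents out of the container
docker cp --archive CONTAINER:/MOUNT_POINT_OF_VOLUME ./LOCAL_FOLDER
# Ship the folder to the target host (rsync works just as well)
scp -r ./LOCAL_FOLDER user@target-host:/srv/LOCAL_FOLDER
# On the target host: start the container with the copied data mounted in
docker run -v /srv/LOCAL_FOLDER:/MOUNT_POINT_OF_VOLUME some_image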

You can use this trick:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -cf - ." \
  | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf -"'
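To check that everything arrived, you can list the target volume's contents with the same placeholders:
ssh <TARGET_HOST> 'docker run --rm -v <TARGET_DATA_VOLUME_NAME>:/to alpine ls -la /to'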

Related

Docker NodeRed committed container does not maintain flows and modules

I'm working on a project using Node-RED deployed with Docker, and I would like to save the state of my deployment, including flows, settings and newly added modules, so that I can save the image and load it on another host, replicating exactly the same Node-RED instance.
I created the container using:
docker run -itd --name my-nodered node-red
After implementing the flows and installing some custom modules, with the container running, I used these commands:
docker commit my-nodered my-project-nodered/my-nodered:version1
docker save my-project-nodered/my-nodered:version1 > tar-archive.tar.gz
And on another machine I imported the image using:
docker load < tar-archive.tar.gz
And run it using:
docker run -itd my-project-nodered/my-nodered:version1
And I obtain a vanilla Node-RED container with a default /data directory; only the default files are maintained in the data directory.
What am I missing? Could it be that my /data directory is overwritten, as well as my settings.js file in the home directory? And in this case, what is the best practice to achieve my goal?
Thanks a lot in advance
commit will not work here: as you can see, there is a volume defined in the Dockerfile.
# User configuration directory volume
VOLUME ["/data"]
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
docker commit doesn't consider volumes at all, so you'll get an unchanged image with nothing preloaded in it.
You can see the official documentation:
Managing User Data
Once you have Node-RED running with Docker, we need to ensure any added nodes or flows are not lost if the container is destroyed. This user data can be persisted by mounting a data directory to a volume outside the container. This can either be done using a bind mount or a named data volume.
Node-RED uses the /data directory inside the container to store user configuration data.
nodered-user-data-in-docker
One way is to restore your config directory on another machine, for example backup-config, then:
docker run -it -p 1880:1880 -v $PWD/backup-config/:/data --name mynodered nodered/node-red-docker
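If you don't have a backup-config directory yet, one way to produce it from your existing container (assuming it is still present and named my-nodered, as in the question) is:
# Copy the user data (flows, settings.js, installed nodes) out of the container
docker cp my-nodered:/data ./backup-config
# Then ship backup-config to the other machine, e.g. with scp or rsync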
Or if you want to pull a flow from some repo, you can download it into the backup directory first and then mount that:
wget -P backup-config/ https://raw.githubusercontent.com/openenergymonitor/oem_node-red/master/flows_emonpi.json
docker run -it --rm -v "$PWD/backup-config":/data nodered/node-red-docker

How should I mount docker volumes in mlflow project?

I use mlflow in a docker environment as described in this example and I start my runs with mlflow run ..
I get output like this
2019/07/17 16:08:16 INFO mlflow.projects: === Building docker image mlflow-myproject-ab8e0e4 ===
2019/07/17 16:08:18 INFO mlflow.projects: === Created directory /var/folders/93/xt2vz36s7jd1fh9bkhkk9sgc0000gn/T/tmp1lxyqqw9 for downloading remote URIs passed to arguments of type 'path' ===
2019/07/17 16:08:18 INFO mlflow.projects: === Running command 'docker run
--rm -v /Users/foo/bar/mlruns:/mlflow/tmp/mlruns -e
MLFLOW_RUN_ID=ef21de61d8a6436b97b643e5cee64ae1 -e MLFLOW_TRACKING_URI=file:///mlflow/tmp/mlruns -e MLFLOW_EXPERIMENT_ID=0 mlflow-myproject-ab8e0e4 python train.py' in run with ID 'ef21de61d8a6436b97b643e5cee64ae1' ===
I would like to mount a docker volume named my_docker_volume to the container at the path /data. So instead of the docker run shown above, I would like to use
docker run --rm --mount source=my_docker_volume,target=/data -v /Users/foo/bar/mlruns:/mlflow/tmp/mlruns -e MLFLOW_RUN_ID=ef21de61d8a6436b97b643e5cee64ae1 -e MLFLOW_TRACKING_URI=file:///mlflow/tmp/mlruns -e MLFLOW_EXPERIMENT_ID=0 mlflow-myproject-ab8e0e4 python train.py
I see that I could in principle run it once without a mounted volume, then copy the docker run ... and add --mount source=my_volume,target=/data, but I'd rather use something like
mlflow run --mount source=my_docker_volume,target=/data .
but this obviously doesn't work because --mount is not a parameter of mlflow run.
What's the recommended way of mounting a docker volume then?
A similar issue has been brought up on the mlflow issue tracker, see "Access large data from within a Docker environment". An excerpt from it says:
However, MLFlow Docker environments currently only have access to data baked into the repository or image or must download a large dataset for each run.
...
A potential solution is to enable the user to mount a volume (e.g. local directory containing the data) into the Docker container.
Looks like this is a feature others would benefit from too. The best course of action here would be to contribute support for mounts, or keep track of the issue until someone else implements it.
Why do you need to mount the /data folder in the first place? There's another issue, with a PR containing a fix, related to storing artifacts in a custom location on the host machine; could that be what you're looking for?
Finally, to avoid the above problem and facilitate volume mounting, I now run my experiments using three interacting docker containers: one that runs the machine learning code, one that runs an mlflow server, and one that runs a postgresql server. I closely followed this walk-through article to set things up. It works nicely, and docker-compose makes volume mounting easy. Metrics, parameters and metadata are stored in a database that is mounted to a local persistent volume. Artifacts are logged in the directory /mlflow, or if you prefer, in a docker volume.
Note: There's a typo in the cited walk-through article
In docker-compose.yml it shouldn't be
volumes:
  - ./postgres-store:/var/lib/postgresql/data
which would bind-mount a local folder named postgres-store.
Instead, to mount the docker volume postgres-store, you should use
volumes:
  - postgres-store:/var/lib/postgresql/data
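Note that a named volume also has to be declared at the top level of docker-compose.yml, so the relevant part of the corrected file would look roughly like this (the service name and password are illustrative, not taken from the article):
version: '3'
services:
  postgres:                      # illustrative service name
    image: postgres
    environment:
      POSTGRES_PASSWORD: example # placeholder
    volumes:
      - postgres-store:/var/lib/postgresql/data
volumes:
  postgres-store:                # named volume managed by Docker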

Change volume configuration in docker-compose without losing the data

My docker-compose has a data container which isn't mapped to a local directory in the host machine, and I want to change it from:
volumes:
  - /var/www/html
to
volumes:
  - /html:/var/www/html
But when I restart the container, it will remove the current data container and replace it with a new one.
I know that the old container is actually still there, but is there an easy way to do this without creating a new data container?
My docker-compose version is 1.7.1 (under boot2docker).
Thanks.
Try at your own risk (a sketch of these steps as commands follows below):
1) Create your host directory /html as you wish
2) Run docker inspect {container_name} | grep Source and grab your volume path on the host system. It'll be something like /var/lib/docker/volumes/abdb15a2eff[...]/_data
3) Copy the content of that directory to your host directory
4) Recreate the container as you wish.
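Spelled out as commands, that might look like this (the container name and volume hash are placeholders; the Source path will differ on your system):
# 1) Create the target directory on the host
mkdir -p /html
# 2) Find where Docker stores the anonymous volume
docker inspect container_name | grep Source
#    e.g. /var/lib/docker/volumes/abdb15a2eff.../_data
# 3) Copy the volume contents into the new host directory, preserving attributes
sudo cp -a /var/lib/docker/volumes/abdb15a2eff.../_data/. /html/
# 4) Recreate the container against the new bind mount
docker-compose up -d --force-recreate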
One safe way to do this is to create a backup of the data from inside the Docker image. Then restore that backup to the directory on your host machine. The Docker Volumes Tutorial mentions a process like this near the bottom.
Here's how you'd do it:
First, mount a directory from your host machine into the container if you don't already have one mounted in. Maybe a volume like ./:/backup. Next, run a backup command like this:
docker-compose run service-name tar czvf /backup/html_data.tar.gz /var/www/html
Now you have html_data.tar.gz in your current directory. Extract it wherever you want and be on your way!
(I'm assuming, based on the way you indicated your volumes, that you're using docker-compose. The process is similar for vanilla Docker.)
Alternate approach, with --volumes-from
Get the name (or hash) of the container with the data you want to copy. You can do this with docker ps. For this example, let's call it container1. Now run this command to back up its data:
docker run --rm --volumes-from container1 -v $(pwd):/backup ubuntu:latest tar czvf /backup/html_data.tar.gz /var/www/html
Note that the image you use (ubuntu:latest) is not important as long as it can tar things.
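To restore that backup into the new bind-mounted layout on the host, something like this would work (assuming the archive was created from /var/www/html as above, so three leading path components need stripping):
mkdir -p /html
tar xzvf html_data.tar.gz -C /html --strip-components=3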

How to upgrade docker container after its image changed

Let's say I have pulled the official mysql:5.6.21 image.
I have deployed this image by creating several docker containers.
These containers have been running for some time until MySQL 5.6.22 is released. The official image of mysql:5.6 gets updated with the new release, but my containers still run 5.6.21.
How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?
After evaluating the answers and studying the topic I'd like to summarize.
The Docker way to upgrade containers seems to be the following:
Application containers should not store application data. This way you can replace the app container with a newer version at any time by executing something like this:
# Grab the newer image
docker pull mysql
# Replace the running container; the data lives outside it, so nothing is lost
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
  -e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
You can store data either on host (in directory mounted as volume) or in special data-only container(s). Read more about it:
About volumes (Docker docs)
Tiny Docker Pieces, Loosely Joined (by Tom Offermann)
How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which guarantees reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).
I'd like to thank everybody who gave their answers, so we could see all different approaches.
I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers with entirely Docker-managed containers. Creating a new docker container with --volumes-from <container> gives the new container, built from the updated image, shared ownership of the Docker-managed volumes.
docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql
By not removing the original my_mysql_container yet, you keep the ability to revert to the known working container if the upgraded container doesn't have the right data or fails a sanity test.
At this point, I'll usually run whatever backup scripts I have for the container, to give myself a safety net in case something goes wrong:
docker stop my_mysql_container
docker start my_mysql_container_tmp
Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container
The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can assume the namesake of the original to make everything as pretty as it was to begin with.
There are two major advantages to using this pattern for upgrading docker containers. Firstly, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Secondly, you are never in a position where there isn't a working docker container; if the upgrade fails, you can easily revert to how things were before by spinning up the original docker container again.
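For reference, here is the whole pattern condensed in one place (names are from the example above; fill in whatever extra run options and backup steps apply to your setup):
docker pull mysql
docker create --volumes-from my_mysql_container --name my_mysql_container_tmp mysql  # add any extra options your container needs
# ... run your backup scripts here ...
docker stop my_mysql_container
docker start my_mysql_container_tmp
# ... sanity-check the new container ...
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container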
Just for providing a more general (not mysql specific) answer...
In short
Synchronize with service image registry (https://docs.docker.com/compose/compose-file/#image):
docker-compose pull
Recreate container if docker-compose file or image have changed:
docker-compose up -d
Background
Container image management is one of the reasons for using docker-compose
(see https://docs.docker.com/compose/reference/up/)
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
The data management aspect is also covered by docker-compose through mounted external "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or a data container.
This leaves potential backward compatibility and data migration issues untouched, but these are "applicative" issues, not Docker-specific ones, which have to be checked against release notes and tests...
I would like to add that if you want to do this process automatically (download, stop, and restart a new container with the same settings as described by @Yaroslav), you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
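As a sketch based on that project's README, Watchtower itself runs as a container with access to the Docker socket:
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower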
Consider for this answer:
The database name is app_schema
The container name is app_db
The root password is root123
How to update MySQL when storing application data inside the container
This is considered a bad practice, because if you lose the container, you will lose the data. Although it is a bad practice, here is a possible way to do it:
1) Do a database dump as SQL:
docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql
2) Update the image:
docker pull mysql:5.6
3) Update the container:
docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6
4) Restore the database dump:
docker exec -i app_db sh -c 'exec mysql -uroot -proot123' < database_dump.sql
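A quick way to verify the restore worked, using the credentials assumed above:
docker exec app_db sh -c 'exec mysql -uroot -proot123 -e "SHOW DATABASES;"'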
How to update MySQL container using an external volume
Using an external volume is a better way of managing data, and it makes it easier to update MySQL. Losing the container will not lose any data. You can use docker-compose to facilitate managing multi-container Docker applications in a single host:
1) Create the docker-compose.yml file in order to manage your applications:
version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    image: busybox  # assumption: a data-only container needs some image; any small one will do
    volumes:
      - /my/data/dir:/var/lib/mysql
2) Update MySQL (from the same folder as the docker-compose.yml file):
docker-compose pull
docker-compose up -d
Note: the last command above will update the MySQL image, recreate and start the container with the new image.
Similar answer to the above:
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
Here's what it looks like using docker-compose when building a custom Dockerfile:
1) Build your custom Dockerfile first, appending the next version number to differentiate. Ex: docker build -t imagename:version . This will store your new version locally.
2) Run docker-compose down
3) Edit your docker-compose.yml file to reflect the new image name you set at step 1.
4) Run docker-compose up -d. It will look locally for the image and use your upgraded one.
-EDIT-
My steps above are more verbose than they need to be. I've optimized my workflow by including the build: . parameter in my docker-compose file. The steps look like this now:
Verify that my Dockerfile is what I want it to look like.
Set the version number of my image name in my docker-compose file.
If my image isn't built yet: run docker-compose build
Run docker-compose up -d
I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.
If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.
You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than the one you design yourself.
Taken from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/
You can update all your existing images using the following command pipeline:
docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull
Make sure you are using volumes for all the persistent data (configuration, logs, or application data) that relates to the state of the processes inside the container. Update your Dockerfile and rebuild the image with the changes you wanted, and restart the containers with your volumes mounted at their appropriate place.
Tried a bunch of things from here, but this worked out for me eventually.
If you have AutoRemove: On on the containers, you can't stop and edit the containers, or a service is running that can't be stopped even momentarily. You must:
1) Pull the latest image: docker pull [image:latest]
2) Verify that the correct image was pulled; you can see the UNUSED tag in the Portainer Images section
3) Update the service using Portainer or the CLI, and make sure you use the latest version of the image; Portainer will give you the option to do so.
This will not only update the container with the latest image, but also keep the service running.
This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.
Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.
So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.
The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.
I'd be interested in hearing whether there is any prior art that addresses this scenario.
Update
This is mainly for querying containers, not updating them, since building new images is the way updates should be done.
I had the same issue so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.
It uses docker-py to communicate with running docker containers and update packages, or run any arbitrary single command.
Examples:
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec
By default this will run the date command in all running containers and return the results, but you can issue any command, e.g. docker-run exec "uname -a"
To update packages (currently only using apt-get):
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update
You can create an alias and use it as a regular command-line tool, e.g.
alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'
