How to rebuild and update a container without downtime with docker-compose? - docker

I enjoy using docker-compose a lot.
E.g. on my server, when I want to update my app with minor changes, I only need to run git pull origin master && docker-compose restart, which works perfectly.
But sometimes I need to rebuild (e.g. I added an npm dependency and need to run npm install again).
In this case, I do docker-compose build --no-cache && docker-compose restart.
I would expect this to:
create a new instance of my container
stop the existing container (after the new one has finished building)
start the new one
optionally remove the old one, but this could be done manually
But in practice it seems to restart the former one again.
Is it the expected behavior?
How can I handle a rebuild and start the new one after it is built?
Maybe I missed a specific command? Or would it make sense to have it?

From the manual for docker-compose restart:
If you make changes to your docker-compose.yml configuration these changes will not be reflected after running this command.
You should be able to do:
docker-compose up -d --no-deps --build <service_name>
The --no-deps flag prevents Compose from also starting linked services.
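For context, here is a minimal sketch of a full minor-update round using that command, combining it with the git pull workflow from the question (the service name web is just a placeholder for whatever your compose file defines):
# pull the latest code, then rebuild and recreate only the one changed service
git pull origin master
docker-compose up -d --no-deps --build web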

The problem is that restart will restart your current containers, which is not what you want.
As an example, I just did this
change the docker file for one of the images
call docker-compose build to build the images
call docker-compose down [1] and docker-compose up
docker-compose restart will NOT work here
using docker-compose start instead also does not work
To be honest, I'm not completely sure you need to do a down first, but that should be easy to check. [1] The bottom line is that you need to call up. You will see the containers of unchanged images restarting, but for the changed image you'll see recreating.
The advantage of this over just calling up --build is that you can watch the build process first, before anything is restarted.
[1] From the comments: down is not needed, you can just call up --build. down has some "down"-sides, including possibly being destructive to your (volume) data.
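As a rough sketch of the sequence described above (no down needed; nothing here is specific to any one project):
docker-compose build   # watch the build output first
docker-compose up -d   # unchanged services restart, changed ones are recreated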

Use the --build flag to the up command, along with the -d flag to run your containers in the background:
docker-compose up -d --build
This will rebuild all images defined in your compose file, then restart any containers whose images have changed.
-d assumes that you don't want to keep everything running in your shell foreground. This makes it act more like restart, but it's not required.

Don't manage your application environment directly. Use a deployment tool like Rancher or Kubernetes. With one of those you will be able to upgrade your dockerized application without any downtime, and even downgrade it should you need to.
Running Rancher is as easy as running another Docker container, as the tool is available on Docker Hub.

You can use Swarm. Initialize swarm mode first with the docker swarm init command and add a healthcheck to your docker-compose.yml.
Then run the command below:
docker stack deploy -c docker-compose.yml project_name
instead of
docker-compose up -d.
When the docker-compose.yml file is updated, just run this command again:
docker stack deploy -c docker-compose.yml project_name
Docker Swarm will create the new version of the services and stop the old version afterwards.
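As a rough sketch, such a compose file could look like the following (the service name, image, and curl-based check are placeholders; order: start-first requires compose file format 3.4+, and the check assumes curl exists in the image):
version: "3.4"
services:
  web:
    image: myapp:latest                 # placeholder image
    healthcheck:                        # Swarm waits for the new task to become healthy
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        order: start-first              # start the new task before stopping the old one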

Though the accepted answer does work (it rebuilds the image before starting the new container as a replacement) and is fine for the simple use case, the container will still be down while the new container initializes. If that initialization takes a while, it can be an issue.
I managed to achieve rolling updates with docker-compose (along with an nginx reverse proxy), and detailed how I built that in this GitHub issue: https://github.com/docker/compose/issues/1786#issuecomment-579794865
Hope it can help!

Run the following commands:
docker-compose pull
docker-compose up -d --no-deps --build <service_name>
As the top-rated answer mentioned,
docker-compose up -d --no-deps --build <service_name>
will rebuild and restart a single service without taking down the whole compose project.
I just wanted to add the docker-compose pull step to the top answer, in case anyone is unsure how to fetch the updated image before recreating the container.

Another way:
docker-compose restart in your case could be replaced with docker-compose up -d --force-recreate, see https://docs.docker.com/compose/reference/up/

Running docker-compose up while the project is already running will recreate any containers whose configuration has changed.
That's the easiest way, and it only affects the containers whose configuration changed.
root@docker:~# docker-compose up
traefik is up-to-date
nginx is up-to-date
Recreating php ... done

Related

Google Jib with a docker-compose application: fast way to restart the application after rebuilding the image to the Docker daemon

I am using com.google.cloud.tools.jib version 3.2.1 in my Spring Boot Gradle build file.
The repo I am working with has to be run in a docker-compose application, as it will only work if there are other services sharing info with it.
I am updating the code to add authentication, but that's not the issue here.
Whenever I update anything, my process is as follows:
Update the code.
gradle jibDockerBuild # build the image to the local Docker daemon
docker-compose down # stop and remove the old application
docker-compose up # start the application with the updated services
Question:
Is there a smoother/faster way to do this, i.e. without taking the whole docker-compose application down and up with every single update?
I was thinking about something like taking just my service down and then up, as a single service, but I found nothing.
Have a look at docker-compose up --help:
If you want to force Compose to stop and recreate all containers, use the
--force-recreate flag.
So, docker-compose up --force-recreate will allow you to recreate the containers without doing a docker-compose down, which would also remove the networks.
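Put together, a hedged sketch of the shorter workflow (the service name app is a placeholder; --no-deps is optional and only matters if you want to leave the other services untouched):
gradle jibDockerBuild                                # rebuild the image into the local Docker daemon
docker-compose up -d --no-deps --force-recreate app  # recreate just that service, no full down/up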

Better Docker Compose production setting?

I have a really simple web application consisting of these containers:
Frontend website (Nuxt.js - node app)
Backend API (PHP, Symfony)
MySQL
Every container has its own Dockerfile and I can run them all together with Docker Compose. It's really nice and I like the simplicity.
There is a deploy script on my server. It clones the Git monorepo and runs docker-compose:
DIR=$(dirname $(readlink -f $0))
rm -rf $DIR/app
git clone git@bitbucket.org:adam/myproject.git $DIR/app
cd $DIR/app && \
docker-compose down --remove-orphans && \
docker-compose up --build -d
But this solution is really slow and causes ~3 minutes of downtime. For this project I can accept a few seconds of downtime, it's not fatal; I don't really need zero downtime. But 3 minutes is not acceptable.
The most time-consuming part is "npm build" inside the containers. It's something that must be run after every change.
What can I do better? Are Swarm or Kubernetes really the only solution? Can I build the containers while the old app is still running, and after the build just stop the old one and run the new one?
Thanks!
If you can structure things so that your images are self-contained, then you can get a fairly short downtime.
I would recommend using a unique tag for your images. A date stamp works well; you mention you have a monorepo, so you can use the commit ID in that repo for your image tag too. In your docker-compose.yml file, use an environment variable for your image names
version: '3'
services:
frontend:
image: myname/frontend:${TAG:-latest}
ports: [...]
et: cetera
Do not use volumes: to overwrite the code in your images. Do have your CI system test your images as built, running the exact image you're getting ready to deploy; no bind mounts or extra artificial test code. The question mentions "npm build inside containers"; run all of these build steps during the docker build phase and specify them in your Dockerfile, so you don't need to run these at deploy time.
When you have a new commit in your repo, build new images. This can happen on a separate system; it can happen in parallel with your running system. If you use a unique tag per image then it's more obvious that you're building a new image that's different from the running image. (In principle you can use a single ...:latest tag but I wouldn't recommend it.)
# Choose a tag; let's pick something based on a timestamp
export TAG=20200117.01
# Build the images
docker-compose build
# Push the images to a repository
# (Recommended; required if you're building somewhere
# other than the deployment system)
docker-compose push
Now you're at a point where you've built new images, but you're still running containers based on old images. You can tell Docker Compose to update things now. If you docker-compose pull images up front (or if you built them on the same system) then this just consists of stopping the existing containers and starting new ones. This is the only downtime point.
# Name the tag you want to deploy (same as above)
export TAG=20200117.01
# Pre-pull the images
docker-compose pull
# ==> During every step up to this point the existing system
# ==> is running undisturbed
# Ask Compose to replace the existing containers
# ==> This command is the only one that has any downtime
docker-compose up -d
(Why is the unique tag important? Say a mistake happens, and build 20200117.02 has a critical bug. It's very easy to set the tag back to the earlier 20200117.01 and re-run the deploy, so roll back the deployed system without doing a git revert and rebuilding the code. If you're looking at cluster managers like Kubernetes, the changed tag value is a signal to a Kubernetes Deployment object that something has updated, so this triggers an automatic redeployment.)
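A quick sketch of that rollback, reusing the commands above (the tag value is whichever earlier build is known to be good):
# roll back by redeploying the previous, known-good tag
export TAG=20200117.01
docker-compose pull
docker-compose up -d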
The only problem really was the docker-compose down before docker-compose build. I deleted the down command and the downtime is a few seconds now. I thought build automatically shut down the running containers before building; I don't know why. Thanks Noé for the idea! I'm an idiot.
While I do think that switching to Kubernetes (or maybe Docker Swarm, which I don't have experience with) would be the best option, YES, you can build your docker images and then restart.
You just need to run the docker-compose build command. See below:
DIR=$(dirname $(readlink -f $0))
rm -rf $DIR/app
git clone git@bitbucket.org:adam/myproject.git $DIR/app
cd $DIR/app && \
docker-compose build && \
docker-compose down --remove-orphans && \
docker-compose up -d
This long time can come from multiple things:
Your application ignores the stop signal, so docker-compose waits for it to terminate before killing it. Check that your containers exit promptly instead of waiting for the kill signal.
Your Dockerfile is wrongly ordered. Docker has a built-in cache for every step, but if an earlier step changes it has to redo every following step. I recommend looking carefully at where you copy files; it's often this that breaks the cache (see the Dockerfile sketch after this list).
Run docker-compose build before taking down the containers. Be careful about mounted volumes: if Docker can't get the context it will fail.
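On the Dockerfile-ordering point, here is a minimal sketch for a Node app like the one in the question (the base image and paths are hypothetical; copying package*.json before the rest of the source lets the npm install layer stay cached when only application code changes):
FROM node:14                 # hypothetical base image
WORKDIR /app
COPY package*.json ./        # copy only the dependency manifests first
RUN npm install              # cached unless package*.json changed
COPY . .                     # copy the source last so code changes don't bust the install cache
RUN npm run build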

How to get docker-compose to always re-create containers from fresh images?

My docker images are built on a Jenkins CI server and are pushed to our private Docker Registry. My goal is to provision environments with docker-compose which always start the originally built state of the images.
I am currently using docker-compose 1.3.2 as well as 1.4.0 on different machines but we also used older versions previously.
I always used the docker-compose pull && docker-compose up -d commands to fetch the fresh images from the registry and start them up. I believe my preferred behaviour was working as expected up to a certain point in time, but since then docker-compose up started to re-run previously stopped containers instead of starting the originally built images every time.
Is there a way to get rid of this behaviour? Ideally it would be wired into the docker-compose.yml configuration file, so that it doesn't depend on not forgetting something on the command line with every invocation.
P.S. Besides finding a way to achieve my goal, I would also love to know a bit more about the background of this behaviour. I think the basic idea of Docker is to build an immutable infrastructure. The current behaviour of docker-compose just seems to plainly clash with this approach... or am I missing some points here?
docker-compose up --force-recreate is one option, but if you're using it for CI, I would start the build with docker-compose rm -f to stop and remove the containers and volumes (then follow it with pull and up).
This is what I use:
docker-compose rm -f
docker-compose pull
docker-compose up --build -d
# Run some tests
./tests
docker-compose stop -t 1
The reason containers are recreated is to preserve any data volumes that might be used (and it also happens to make up a lot faster).
If you're doing CI you don't want that, so just removing everything should get you want you want.
Update: use up --build which was added in docker-compose 1.7
The only solution that worked for me was the --no-cache flag:
docker-compose build --no-cache
This will automatically pull a fresh image from the repo. It also won't use the cached version that is prebuilt with any parameters you've been using before.
Per the current official documentation, there is a shortcut that stops and removes containers, networks, volumes, and images created by up; if they are already stopped or partially removed, and so on, it will do the trick too:
docker-compose down
Then if you have new changes on your images or Dockerfiles use:
docker-compose build --no-cache
Finally: docker-compose up
In one command: docker-compose down && docker-compose build --no-cache && docker-compose up
docker-compose up --build # still uses the image cache
OR
docker-compose build --no-cache # never uses the cache
You can pass --force-recreate to docker compose up, which should use fresh containers.
I think the reasoning behind reusing containers is to preserve any changes during development. Note that Compose does something similar with volumes, which will also persist between container recreation (a recreated container will attach to its predecessor's volumes). This can be helpful, for example, if you have a Redis container used as a cache and you don't want to lose the cache each time you make a small change. At other times it's just confusing.
I don't believe there is any way you can force this from the Compose file.
Arguably it does clash with immutable infrastructure principles. The counter-argument is probably that you don't use Compose in production (yet). Also, I'm not sure I agree that immutable infra is the basic idea of Docker, although it's certainly a good use case/selling point.
docker-compose up --build --force-recreate
I reclaimed 3.5 GB of space on an Ubuntu AWS instance through this.
Clean docker:
docker stop $(docker ps -qa) && docker system prune -af --volumes
Build again:
docker build .
docker-compose build
docker-compose up
Also, if the compose project has several services and we only want to force-build one of them:
docker-compose build --no-cache <service>
Together with --force-recreate,
you might want to consider using this flag too:
-V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving data from the previous containers.
I'm not sure from which version this flag is available, so check docker-compose up --help to see whether you have it.
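For example, assuming your docker-compose version supports the flag:
docker-compose up -d --build --force-recreate -V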
docker-compose build
If there is something new, it will be rebuilt.

How to upgrade docker container after its image changed

Let's say I have pulled the official mysql:5.6.21 image.
I have deployed this image by creating several docker containers.
These containers have been running for some time until MySQL 5.6.22 is released. The official image of mysql:5.6 gets updated with the new release, but my containers still run 5.6.21.
How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?
After evaluating the answers and studying the topic I'd like to summarize.
The Docker way to upgrade containers seems to be the following:
Application containers should not store application data. This way you can replace the app container with a newer version at any time by executing something like this:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s). Read more about it:
About volumes (Docker docs)
Tiny Docker Pieces, Loosely Joined (by Tom Offermann)
How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which guarantees reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).
I'd like to thank everybody who gave their answers, so we could see all different approaches.
I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers with entirely docker-managed containers. Creating a new docker container with --volumes-from <container> will give the new container (with the updated image) shared ownership of the docker-managed volumes.
docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql
By not removing the original my_mysql_container immediately, you retain the ability to revert to the known working container if the upgraded container doesn't have the right data or fails a sanity test.
At this point, I'll usually run whatever backup scripts I have for the container to give myself a safety net in case something goes wrong:
docker stop my_mysql_container
docker start my_mysql_container_tmp
Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container
The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can take over the name of the original, making everything as tidy as it was to begin with.
There are two major advantages to using this pattern for upgrading docker containers. Firstly, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Secondly, you are never in a position where there isn't a working docker container, so if the upgrade fails you can easily revert to how it was working before by spinning up the original docker container again.
Just to provide a more general (not MySQL-specific) answer...
In short
Synchronize with service image registry (https://docs.docker.com/compose/compose-file/#image):
docker-compose pull
Recreate container if docker-compose file or image have changed:
docker-compose up -d
Background
Container image management is one of the reasons for using docker-compose
(see https://docs.docker.com/compose/reference/up/)
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
The data management aspect is also covered by docker-compose through mounted external "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or data containers.
This leaves potential backward-compatibility and data-migration issues untouched, but these are "applicative" issues, not Docker-specific ones, which have to be checked against release notes and tests...
I would like to add that if you want to do this process automatically (download, stop, and restart a new container with the same settings, as described by @Yaroslav), you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
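A minimal sketch of running it, based on that project's README at the time (the image name and the Docker socket mount are the usual ones from the linked repo; check the repo for the current image and options):
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower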
Assume the following for this answer:
The database name is app_schema
The container name is app_db
The root password is root123
How to update MySQL when storing application data inside the container
This is considered a bad practice, because if you lose the container, you will lose the data. Although it is a bad practice, here is a possible way to do it:
1) Do a database dump as SQL:
docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql
2) Update the image:
docker pull mysql:5.6
3) Update the container:
docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6
4) Restore the database dump:
docker exec app_db sh -c 'exec mysql -uroot -proot123' < database_dump.sql
How to update MySQL container using an external volume
Using an external volume is a better way of managing data, and it makes it easier to update MySQL. Losing the container will not lose any data. You can use docker-compose to facilitate managing multi-container Docker applications on a single host:
1) Create the docker-compose.yml file in order to manage your applications:
version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    volumes:
      - /my/data/dir:/var/lib/mysql
2) Update MySQL (from the same folder as the docker-compose.yml file):
docker-compose pull
docker-compose up -d
Note: the last command above will update the MySQL image, recreate and start the container with the new image.
A similar answer to the above:
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
Here's what it looks like using docker-compose when building a custom Dockerfile.
Build from your custom Dockerfile first, appending the next version number to differentiate it. E.g.: docker build -t imagename:version . This will store your new version locally.
Run docker-compose down
Edit your docker-compose.yml file to reflect the new image name you set at step 1.
Run docker-compose up -d. It will look locally for the image and use your upgraded one.
-EDIT-
My steps above are more verbose than they need to be. I've optimized my workflow by including the build: . parameter in my docker-compose file. The steps look like this now:
Verify that my Dockerfile is what I want it to look like.
Set the version number of my image name in my docker-compose file.
If my image isn't built yet: run docker-compose build
Run docker-compose up -d
I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.
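A rough sketch of what that optimized compose service can look like (the names are placeholders; when both build: and image: are set, docker-compose build tags the freshly built image with the name and version you chose):
version: '2'
services:
  app:
    build: .                  # build from the local Dockerfile
    image: imagename:version  # tag for the built image; bump the version per release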
If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.
You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than the one you design yourself.
Taken from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/
You can update all your existing images using the following command pipeline:
docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull
Make sure you are using volumes for all the persistent data (configuration, logs, or application data) that you store in the containers and that relates to the state of the processes inside those containers. Update your Dockerfile, rebuild the image with the changes you want, and restart the containers with your volumes mounted in their appropriate places.
Tried a bunch of things from here, but this worked out for me eventually.
If you have AutoRemove: On on the containers so you can't stop and edit them, or a service is running that can't be stopped even momentarily,
you must:
Pull the latest image: docker pull [image:latest]
Verify that the correct image was pulled; you can see the unused tag in the Portainer Images section.
Update the service using Portainer or the CLI, and make sure you use the latest version of the image; Portainer will give you the option to do so.
This will not only update the container with the latest image, but also keep the service running.
This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.
Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.
So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.
The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.
I'd be interested in hearing whether there is any prior art that addresses this scenario.
Update
This is mainly for querying containers, not for updating them, since rebuilding images is the way updates should be done.
I had the same issue so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.
It uses docker-py to communicate with running docker containers and update packages or run any arbitrary single command
Examples:
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec
By default this will run the date command in all running containers and return the results, but you can issue any command, e.g. docker-run exec "uname -a".
To update packages (currently only using apt-get):
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update
You can create an alias and use it as a regular command line, e.g.:
alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'
