How to move Docker containers between different hosts?

I cannot find a way of moving running docker containers from one host to another.
Is there any way I can push my containers to repositories as we do for images?
Currently, I am not using data volumes to store the data associated with the applications running inside containers. So some data resides inside the containers, which I want to persist before redesigning the setup.

Alternatively, if you do not wish to push to a repository:
Export the container to a tarball:
docker export <CONTAINER ID> > /home/export.tar
Move your tarball to the new machine
Import it back:
cat /home/export.tar | docker import - some-name:latest
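For example, a full transfer might look like this (the container ID, host name, and paths are placeholders):
# on the old host
docker export 1a2b3c4d5e6f > /home/export.tar
scp /home/export.tar user@newhost:/home/export.tar
# on the new host
cat /home/export.tar | docker import - some-name:latest
docker run -it some-name:latest /bin/bash
Note that an imported image keeps no CMD/ENTRYPOINT metadata, so you have to pass the command (here /bin/bash) explicitly.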

You cannot move a running docker container from one host to another.
You can commit the changes in your container to an image with docker commit, move the image onto the new host, and then start a new container with docker run. This will preserve any data that your application has created inside the container.
NB: It does not preserve data stored in volumes; you need to move data volumes to the new host manually.
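A minimal sketch of that workflow, assuming SSH access between the hosts (my_container, my_image, and newhost are hypothetical names):
# on the old host
docker commit my_container my_image:snapshot
docker save -o my_image.tar my_image:snapshot
scp my_image.tar user@newhost:
# on the new host
docker load -i my_image.tar
docker run -d --name my_container my_image:snapshot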

What eventually worked for me, after lots of confusing manuals and tutorials (Docker, at the time of my writing, is obviously at the peak of inflated expectations), is:
Save the docker image into an archive:
docker save image_name > image_name.tar
copy it to the other machine
on that other Docker machine, run docker load in the following way:
cat image_name.tar | docker load
Export and import, as proposed in other answers, do not preserve ports and variables, which might be required for your container to run, and you might end up with errors like "No command specified" when you try to load the image on another machine.
So, the difference between save and export is that the save command saves the whole image with history and metadata, while the export command exports only the file structure (without history or metadata).
Needless to say, if the ports the container uses are already taken on the Docker host where you do the import, by some other container, you will end up with a conflict and will have to reconfigure the exposed ports.
Note: in order to move data with Docker, you might have persistent storage somewhere, which should also be moved alongside the containers.
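If the two hosts can reach each other over SSH, you can also skip the intermediate file and stream the image directly (user@other-host is a placeholder):
docker save image_name | gzip | ssh user@other-host 'gunzip | docker load'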

Use this script:
https://github.com/ricardobranco777/docker-volumes.sh
This does preserve data in volumes.
Example usage:
# Stop the container
docker stop $CONTAINER
# Create a new image
docker commit $CONTAINER $CONTAINER
# Save image
docker save -o $CONTAINER.tar $CONTAINER
# Save the volumes (use ".tar.gz" if you want compression)
docker-volumes.sh $CONTAINER save $CONTAINER-volumes.tar
# Copy image and volumes to another host
scp $CONTAINER.tar $CONTAINER-volumes.tar $USER@$HOST:
# On the other host:
docker load -i $CONTAINER.tar
docker create --name $CONTAINER [<PREVIOUS CONTAINER OPTIONS>] $CONTAINER
# Load the volumes
docker-volumes.sh $CONTAINER load $CONTAINER-volumes.tar
# Start container
docker start $CONTAINER

From the Docker documentation:
docker export does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume. Refer to "Backup, restore, or migrate data volumes" in the user guide for examples on exporting data in a volume.
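The pattern that guide describes boils down to running a throwaway container that shares the volumes and tars them up; roughly (the container name and volume path are examples):
docker run --rm --volumes-from <container> -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata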

I tried many solutions for this, and this is the one that worked for me:
1. Commit/save the container to a new image:
++ commit the container:
# docker stop CONTAINER_NAME
# docker commit CONTAINER_NAME IMAGE_NAME:TAG
# docker save --output IMAGE_NAME.tar IMAGE_NAME:TAG
PS: "Our container CONTAINER_NAME has a mounted volume at '/var/home'" (you have to inspect your container to find its volume path: # docker inspect CONTAINER_NAME)
++ save its volume: we will use an ubuntu image to do the job.
# mkdir backup
# docker run --rm --volumes-from CONTAINER_NAME -v $(pwd)/backup:/backup ubuntu bash -c "cd /var/home && tar cvf /backup/volume_backup.tar ."
Now when you look at $(pwd)/backup, you will find our volume in tar format.
Up to now, we have our container's image 'IMAGE_NAME.tar' and its volume 'volume_backup.tar'.
Now you can recreate the same old container on a new host, as sketched below.
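On the new host, the reverse of the steps above might look like this (still using the placeholder names, and assuming the same volume path):
# docker load --input IMAGE_NAME.tar
# docker run -d --name CONTAINER_NAME -v /var/home IMAGE_NAME:TAG
# docker run --rm --volumes-from CONTAINER_NAME -v $(pwd)/backup:/backup ubuntu bash -c "cd /var/home && tar xvf /backup/volume_backup.tar"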

docker export <container> | gzip > <container>.tar.gz
# new host
gunzip < /mnt/usb/<container>.tar.gz | docker import - <image>
docker run -i -p 80:80 <image> /bin/bash

Related

Difference between Docker cp, docker export and docker save

I need to do some processing with the files from a docker container, even the OS files. I know that in order to achieve this I can use docker export. From an image, for example mysql, I can create a container with the command:
docker create -ti --name mysql_dummy mysql bash, and then export the container: docker export mysql_dummy > <path>/mysql.tar
I can do the same with the save command:
docker save mysql > path/mysql.tar
And with the docker cp, using the same container created above:
docker cp mysql_dummy:/. <path>
My question is: what is the difference between these three approaches? I noticed that the save command saves the image as a folder structure with different hashes, one folder per layer.
I'm curious whether we can consolidate all of these layers into just one copy of the image.
Which command is the most recommended, and what are the differences? Thanks!
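As an aside on the consolidation point: since docker export dumps only the flattened filesystem, exporting a container and re-importing it does collapse all layers into one, at the cost of history, CMD, ENV, and other metadata; a sketch (mysql_flat and the tag are hypothetical names):
docker create --name mysql_flat mysql
docker export mysql_flat | docker import - mysql:flattened
docker history mysql:flattened   # shows a single layer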

docker: get files from one container to another

I have 2 docker containers which I created from 2 Dockerfiles.
docker run container1 # it updates a txt file (update.txt) every minute and stores it in the same container
docker run container2 --link container1 # a web server which is intended to read the updated file from container1
Now I want to access the file update.txt in container2, but I can't do that. I don't want to just copy the file, since the copy would become static; I want to read the dynamically updated file to get the latest updates. Can anyone suggest a way out?
Use a named volume to store update.txt on the host.
Mount this volume in both containers.
All changes that container1 writes will then be accessible in container2.
First, create a docker volume using the command below:
$ docker volume create --name sharedVolume
sharedVolume
Then start the first container with the above-created volume mounted, and write data to the location where the volume is mounted:
$ docker run -it -v sharedVolume:/dataToWrite ubuntu
root@1021d9260d7b:/# echo "DATA Written" >> /dataToWrite/Example.txt
root@1021d9260d7b:/# cat /dataToWrite/Example.txt
DATA Written
Now, start the second container, mount the same volume you created above, and check whether the same file is present in the second container:
$ docker run -it -v sharedVolume:/dataToWrite alpine
/ # cat /dataToWrite/Example.txt
DATA Written
As you can see above, the first container is ubuntu and the second container is alpine. Content written in the first container is present in the second container.
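You can also check where Docker keeps that volume on the host:
$ docker volume inspect sharedVolume
The Mountpoint field in the output is the host directory backing the volume.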

docker persistent storage via volume

Everything I have read says that we can create Docker persistent storage via the VOLUME instruction, e.g., here and here.
However, I have the following in my Dockerfile:
VOLUME ["/home", "/root"]
but nothing was kept in there (I tried touch abc, and when I exited and got back in again, the file was not there).
I see that the official usage has no other special controls or treatments, yet it can provide persistent storage, even when the apt-cacher-ng service container is stopped.
How is that possible? I've checked out the following but still don't have a clue:
Docker volume persistent
Persistent Docker volume
Docker volume vs. persistent volume on official Docker beginner tutorial
In conclusion, as explained here:
The first mechanism will create an implicit storage sandbox for the container that requested host-based persistence. ... The key thing to understand is that the data stored in the sandbox is not available to other containers, except the one that requested it.
How can I make persistent storage work? Why are my changes wiped each time, while the apt-cacher-ng service container can maintain its persistent storage even when it is stopped and restarted?
What are the "other special controls or treatments" that I failed to see here?
Please explain with a plain simple Dockerfile (not docker-compose.yml).
The problem is that the VOLUME instruction creates external storage whenever docker run is used without a "mount" option for that specific folder. In other words, every time you create a container from that image, docker will create a new volume with a random label, and since you're not giving those volumes a name, new containers cannot re-use the previously generated volumes.
e.g.
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
Given this Dockerfile, if you simply build and run this docker image using the following commands:
echo "There are currently $(docker volume ls | wc -l) volumes"
docker build -t my_volume_test:latest .
docker run --name another_test my_volume_test:latest
echo "There are currently $(docker volume ls | wc -l) volumes"
you'll see that the number of volumes present on your machine has increased. That container is now using a volume to store the data, but that specific volume does not have a name or label, so it's only bound to that specific container. If you delete and recreate that container, docker will generate a new volume with a random name, unless you manage to mount the volume that was generated earlier.
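If you need to find out which anonymous volume a given container is using, docker inspect can show it (a sketch using the container from above):
docker inspect -f '{{ range .Mounts }}{{ .Name }}{{ end }}' another_test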
If you want to make it easy, I suggest creating the volume first and then mounting it. e.g.
docker rm -f another_test
docker volume create my-vol
docker run \
--name another_test \
--mount source=my-vol,target=/myvol \
my_volume_test:latest
# alternative
docker rm -f another_test
docker run \
--name another_test \
-v my-vol:/myvol \
my_volume_test:latest
In this case you can create and remove as many containers as you want, and they'll all use the same volume.
Check the VOLUME reference and "Use volumes" for more info.

How to stop/relaunch docker container without losing the changes?

I did the following and lost all the changed data in my Docker container.
docker build -t <name:tag> .
docker run -p 8080:80 --name <container_name> <name:tag>
docker exec (import and process some files, launch a server to host them)
Then I wanted to run it on a different port. docker stop & docker run did not work. Instead, I did:
docker stop
docker rm <container_name>
docker run (same parameters as before)
After the restart I saw that the changes made in the container during steps 1-3 had disappeared, and I had to re-run the import.
How do I do this correctly next time?
What you have to do is build an image from the container you just stopped after making the changes, because your old command still uses the old image, which doesn't have the new changes (you made the changes in the container you just stopped, not in the image).
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
Create a new image from a container's changes
docker commit -a me new_nginx myrepo/nginx:latest
Then you can start a container with the new image you just built.
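For example (web2 is a hypothetical name; reuse whatever options your original container had):
docker run -d -p 8080:80 --name web2 myrepo/nginx:latest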
But if you don't want to create an image with the changes you made (e.g., you don't want to bake a config containing a password into the image), you can use a volume mount:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
Manage data in containers
Every time you do a docker run, it will spin up a fresh container based on your image. And once a container is started, there are very few things that docker allows you to change with docker update. So instead, you should preserve your data in an external volume that needs to persist between instances of a container. E.g.:
docker run -p 8080:80 -v app-data:/data --name <container_name> <name:tag>
The volume name (app-data) and mount point in the container (/data) can be changed for your own requirements. Then when you destroy the container and start a new one, you can mount the same volume in the new container.
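Applied to the original question (relaunching on a different port), that flow might look like this, reusing the placeholders above:
docker stop <container_name>
docker rm <container_name>
docker run -p 8081:80 -v app-data:/data --name <container_name> <name:tag>
The new container gets a fresh filesystem, but /data still holds everything the previous container wrote there.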

What is the purpose of VOLUME in Dockerfile

I'm trying to go deeper in my understanding of Docker volumes, and I'm having a hard time figuring out the differences / use cases of:
The docker volume create command
The docker run -v /host_path:/container_path flag
The VOLUME entry in the Dockerfile
I particularly don't understand what happens if you combine the VOLUME entry with the -v flag.
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't pass any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
Combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume, persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile, build an image, and run a container. It is used to quickly allow other containers to mount said volume.
If you had persisted some content in a volume but have since deleted the container (which by default does not delete its associated volume, unless you use docker rm -v), you can re-attach said volume to a new container (declaring the same volume).
See "Docker - How to access a volume not attached to a container?".
With docker volume create, it is easy to re-attach a named volume to a container.
docker volume create --name aname
docker run -v aname:/apath --name acontainer <image>
...
# modify data in /apath
...
docker rm acontainer
# let's mount aname volume again
docker run -v aname:/apath --name acontainer <image>
ls /apath
# you find your data back!
The VOLUME instruction becomes interesting when you combine it with the --volumes-from runtime parameter.
Given the following Dockerfile:
FROM busybox
VOLUME /myvolume
Build an image with:
docker build -t my-busybox .
And spin up a container with:
docker run --rm -it --name my-busybox-1 my-busybox
The first thing to notice is that you will have a folder named /myvolume in this container. But it is not particularly interesting yet, since when we exit the container the volume will be removed as well (--rm also removes the container's anonymous volumes).
Create an empty file in this folder, so run the following in the container:
cd /myvolume
touch hello.txt
Now spin up a new container, but share the same volume with my-busybox-1:
docker run --rm -it --volumes-from my-busybox-1 --name my-busybox-2 my-busybox
You will see that my-busybox-2 contains the file hello.txt in its /myvolume folder.
Once you exit both containers, the volume will be removed as well.
Specifying VOLUME in a Dockerfile makes sure the folder is treated as a volume (i.e., stored outside the container) at runtime, as opposed to being a regular directory inside the container. Note the performance and accessibility implications.
If you forget to specify "-v" in the "docker run" command line, the above is still true; the volume name just becomes anonymous. But there are still ways to access or recover data from such anonymous volumes.
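One such way, as a sketch: list the volumes, then mount the anonymous one into a throwaway container (the hash is illustrative):
docker volume ls                                  # find the anonymous volume's hash
docker run --rm -it -v <volume_hash>:/recovered ubuntu ls /recovered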
Using MySQL from Docker Hub:
Running the below command as an example:
$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
Therefore, you get a directory that persists when the container is killed, and that can also provide higher performance for some operations, like database actions.
