Difference between docker cp, docker export and docker save

I need to do some processing with the files from a docker container, including the OS files. I know that in order to achieve this I can use docker export. From an image, for example mysql, I can create a container with the command:
docker create -ti --name mysql_dummy mysql bash
and then export the container:
docker export mysql_dummy > <path>/mysql.tar
I can do the same with the save command:
docker save mysql > path/mysql.tar
And with docker cp, using the same container created above:
docker cp mysql_dummy:/. <path>
My question is: what is the difference between these 3 ways? I noticed that the save command saves the image as a folder structure with different hashes, one directory per layer.
I'm curious whether we can consolidate all of these layers into just one copy of the image.
Which command is the most advisable, and what are the differences? Thanks!
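For illustration, a minimal sketch of the three approaches side by side, reusing the names from the question (the exact tar contents depend on your Docker version); the last line also shows the usual way to consolidate all layers into one:
docker save mysql > mysql-image.tar        # whole image: every layer plus metadata
tar -tf mysql-image.tar | head             # lists manifest.json and one directory per layer hash
docker export mysql_dummy > mysql-fs.tar   # one container's filesystem, flattened into a single tree
tar -tf mysql-fs.tar | head                # lists plain paths such as bin/, etc/, usr/
docker cp mysql_dummy:/. ./mysql-fs/       # the same flattened filesystem, copied to a host directory
docker export mysql_dummy | docker import - mysql:flat   # re-import as a single-layer image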

Related

How could I get files from a docker container running the official etcd image if there is no shell?

I have a docker container that is running the etcd docker image by CoreOS which can be found here: https://quay.io/repository/coreos/etcd. What I want to do is copy all the files that are saved in etcd's data directory locally. I tried to connect to the container using docker exec -it etcd /bin/sh but it seems like there is no shell (/bin/bash, /bin/sh) on there or at least it can't be found on the $PATH variable. How can I either get onto the image or get all the data files inside of etcd copied locally?
You can export the contents of a container easily:
docker export <CONTAINER ID> > /some_file.tar
Ideally you should use volumes so that all your data is stored outside the container. Then you can access those files like any other file.
Docker has the cp command for copying files between container and host:
docker cp <id>:/container/source /host/destination
You specify the container ID or name in the source, and you can flip the command round to copy from your host into the container:
docker cp /host/source <id>:/container/destination
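For the etcd question above, a sketch using docker cp (the data directory path here is an assumption; check the --data-dir flag your etcd container was started with):
docker cp etcd:/var/lib/etcd ./etcd-data   # copies the whole data directory to the host, no shell needed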

Equivalent of local host files for running Bluemix containers

When running a docker container locally you can run it with a command like this:
docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
This will use the file /some/nginx.conf in place of /etc/nginx/nginx.conf within your running docker container. This is very handy if you don't want to permanently enshrine your configuration files inside of an image.
However, when running Bluemix containers there is no local filesystem as everything is on a remote host. Is there an equivalent option available?
Without this it seems like the best options are either to build a dedicated image with your configuration or to put the entire configuration as a user provided service. Is this a correct assumption?
You can create a volume and add the configuration files you want to persist on it. The volume is not deleted when a container instance is removed and it can be used by multiple containers.
To create a volume you can use the following command:
$ cf ic volume create my_volume
Then you can create a new container and mount the volume to a path in the container, for example:
$ cf ic run -v my_volume:/path/to/mount --name my_container my_image
You can find more details in the following documentation link:
https://console.ng.bluemix.net/docs/containers/container_creating_ov.html#container_volumes_ov
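Applied to the nginx example from the question, a sketch (the volume name is just an example, and the volume may start out empty, so you would first need to place nginx.conf on it before the server will find it):
$ cf ic volume create nginx_config
$ cf ic run -v nginx_config:/etc/nginx/conf.d --name some-nginx -d nginx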

How to override default docker container command or revert to previous container state?

I have a docker image running a wordpress installation. The image executes the Apache server as its default command, so when you stop the Apache service the container exits.
The problem comes after messing up the Apache server config: the container cannot start, and I cannot recover the files inside it.
My options are either to override the command that the container runs or to revert the last filesystem changes to a previous state.
Is either of these things possible? Alternatives?
When you start a container with docker run, you can provide a command to run inside the container. This will override any command specified in the image. For example:
docker run -it some/container bash
If you have modified the configuration inside the container, it does not affect the content of the image. So you can "revert the filesystem changes" just by starting a new container from the original image. The only way changes inside a container can end up in an image is if you use the docker commit command to generate a new image containing them, in which case you still have the original image available as well.
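For example (the container and image names here are hypothetical):
docker commit some-wordpress wordpress-fixed   # snapshot the container's current filesystem as a new image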
If you just want to copy the contents out, you can use the command below with a more specific path.
sudo docker cp containername:/var/ /varbackup/
https://docs.docker.com/reference/commandline/cli/#cp
The file system is also accessible from the host. Run the command below; the volumes section near the bottom of the output shows the path where your filesystem modifications are stored. This is not a good permanent solution.
docker inspect containername
If you re-create the container later, you should look into keeping your data outside of the container and mounting it into the container as a volume when you create it. If you mount your Apache config file into the container this way, you can edit it while the container is not running; see the sketch after the link below.
Managing Data in Containers
http://docs.docker.com/userguide/dockervolumes/
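A sketch of that approach for this Apache case (the host path is hypothetical, and /etc/apache2/apache2.conf is the usual location in Apache-based images, but verify it in yours):
docker run -d --name some-wordpress -v /home/user/apache2.conf:/etc/apache2/apache2.conf my-wordpress-image
You can then edit the file on the host while the container is stopped, and the change takes effect on the next start.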
Edit 1: Not suggesting this as a best practice, but it should work.
This should display the path to the apache2.conf on the host.
Replace some-wordpress with your container name.
CONTAINER_ID=$(docker inspect -f '{{.Id}}' some-wordpress)
sudo find /var/lib/docker/ -name apache2.conf | grep $CONTAINER_ID
There are different ways of overriding the default command of a docker image. Here you have two:
If you have an image with a default CMD command, you can simply override it in docker run by giving, as the last argument, the command (with its arguments) you wish to run (usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...])
Create a wrapper image whose base image is the one whose CMD or ENTRYPOINT you want to override. Example:
FROM my_image
CMD ["my-new-cmd"]
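Then build and run the wrapper (the tag is just an example):
docker build -t my_image_wrapped .
docker run my_image_wrapped   # runs my-new-cmd instead of the base image's default CMD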
Also, you can try to revert the changes in different ways:
If you have the Dockerfile of the image you want to revert, simply undo the changes in the Dockerfile and run the docker build process again.
If you don't have the Dockerfile and you built the image by committing changes, you can use docker history <IMAGE_NAME>:tag, locate the IMAGE_ID of the commit you want, and run that commit or tag it with the name (and tag) you wish (using the -f option if you are overriding an existing tag name). Example:
$ docker history docker_io_package:latest
$ docker tag -f c7b38f258a80 docker_io_package:latest
If the command to override takes a set of arguments, for example
ls -al /bin
write it like this:
docker run --entrypoint ls -it debian /bin -al
where ls goes after --entrypoint and all of its arguments are placed after the image name.

How to move Docker containers between different hosts?

I cannot find a way of moving running docker containers from one host to another.
Is there any way I can push my containers to repositories like we do for images ?
Currently, I am not using data volumes to store the data associated with applications running inside containers. So some data resides inside containers, which I want to persist before redesigning the setup.
Alternatively, if you do not wish to push to a repository:
Export the container to a tarball
docker export <CONTAINER ID> > /home/export.tar
Move your tarball to the new machine
Import it back
cat /home/export.tar | docker import - some-name:latest
You cannot move a running docker container from one host to another.
You can commit the changes in your container to an image with docker commit, move the image onto a new host, and then start a new container with docker run. This will preserve any data that your application has created inside the container.
NB: this does not preserve data stored inside volumes; you need to move data volumes to the new host manually.
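A sketch of those steps (the names and host here are hypothetical):
docker commit my-app my-app-snapshot            # capture the container's filesystem as an image
docker save my-app-snapshot > my-app-snapshot.tar
scp my-app-snapshot.tar user@newhost:
# on the new host:
docker load < my-app-snapshot.tar
docker run -d --name my-app my-app-snapshot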
What eventually worked for me, after lots of confusing manuals and confusing tutorials (Docker is, at the time of my writing, obviously at the peak of inflated expectations), is:
Save the docker image into an archive:
docker save image_name > image_name.tar
Copy the archive to the other machine.
On that other docker machine, run docker load in the following way:
cat image_name.tar | docker load
Export and import, as proposed in other answers, do not carry over exposed ports and environment variables, which might be required for your container to run, and you might end up with errors like "No command specified" when you try to load the image on another machine.
So, difference between save and export is that save command saves whole image with history and metadata, while export command exports only files structure (without history or metadata).
Needless to say, if the ports your container needs are already taken on the docker host where you do the import, by some other container, you will end up with a conflict and will have to reconfigure the exposed ports.
Note: in order to move data with docker, you might have persistent storage somewhere, which should also be moved alongside the containers.
Use this script:
https://github.com/ricardobranco777/docker-volumes.sh
This does preserve data in volumes.
Example usage:
# Stop the container
docker stop $CONTAINER
# Create a new image
docker commit $CONTAINER $CONTAINER
# Save image
docker save -o $CONTAINER.tar $CONTAINER
# Save the volumes (use ".tar.gz" if you want compression)
docker-volumes.sh $CONTAINER save $CONTAINER-volumes.tar
# Copy image and volumes to another host
scp $CONTAINER.tar $CONTAINER-volumes.tar $USER@$HOST:
# On the other host:
docker load -i $CONTAINER.tar
docker create --name $CONTAINER [<PREVIOUS CONTAINER OPTIONS>] $CONTAINER
# Load the volumes
docker-volumes.sh $CONTAINER load $CONTAINER-volumes.tar
# Start container
docker start $CONTAINER
From the Docker documentation:
docker export does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume. Refer to "Backup, restore, or migrate data volumes" in the user guide for examples on exporting data in a volume.
I tried many solutions for this, and this is the one that worked for me:
1. Commit/save the container to a new image:
++ commit the container:
# docker stop CONTAINER_NAME
# docker commit CONTAINER_NAME IMAGE_NAME:TAG
# docker save --output IMAGE_NAME.tar IMAGE_NAME:TAG
PS: our container CONTAINER_NAME has a mounted volume at '/var/home' (you have to inspect your container to find its volume path: # docker inspect CONTAINER_NAME)
++ save its volume: we will use an ubuntu image to do it.
# mkdir backup
# docker run --rm --volumes-from CONTAINER_NAME -v $(pwd)/backup:/backup ubuntu bash -c "cd /var/home && tar cvf /backup/volume_backup.tar ."
Now when you look at $(pwd)/backup, you will find our volume in tar format.
Until now, we have our container's image 'IMAGE_NAME.tar' and its volume 'volume_backup.tar'.
Now you can recreate the same old container on a new host; a sketch of the matching restore is below.
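On the new host, the matching restore could look like this (a sketch that mirrors the backup steps above; '/var/home' is the volume path from this example):
# docker load --input IMAGE_NAME.tar
# docker run -d --name CONTAINER_NAME -v /var/home IMAGE_NAME:TAG
# docker run --rm --volumes-from CONTAINER_NAME -v $(pwd)/backup:/backup ubuntu bash -c "cd /var/home && tar xvf /backup/volume_backup.tar"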
Another variant: export the container on the old host, compress it, then import it on the new host:
docker export <container-name> | gzip > <container-name>.tar.gz
# new host
gunzip < /mnt/usb/<container-name>.tar.gz | docker import - <container-name>
docker run -i -p 80:80 <container-name> /bin/bash

Docker.IO Filesystem Consistency

I created a docker container, and then I created a file and exited the container.
When I restart the container with:
docker run -i -t ubuntu /bin/bash
the file is nowhere to be found. I checked /var/lib/docker/ and there is another folder created which has my file in it. I know it's something to do with Union FS.
How do I start the same container again with my file in it?
How do I export a container with file change?
I don't know if this will answer your question completely, but...
Doing
docker run -i -t ubuntu /bin/bash
will not restart any container. Instead, it will create and start a new container based on the ubuntu image.
If you started a container and stopped it, you can use docker start ${CONTAINER_ID}. If you did not stop it yet, you can use docker restart.
You can also commit the container to a new image: see http://docs.docker.io/en/latest/commandline/command/commit/ for the correct syntax. docker export is an option as well, but all that will do is archive your container's filesystem. By creating a new image using docker commit, you can create multiple instances (containers) of it afterwards, all having your file in them.
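A short sketch of both routes (the container ID is whatever docker ps -a shows for your exited container):
docker ps -a                                    # find the exited container that has your file
docker start -ai <container-id>                 # restart that same container; the file is still there
docker commit <container-id> ubuntu-with-file   # or bake the change into a new image
docker run -i -t ubuntu-with-file /bin/bash     # new containers from it all contain the file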
