I am trying to export a Docker container that uses a mounted local directory as its root. I run it with docker run --privileged -v /path/to/local/files:/root --name cse303dev -it cse303 to mount that local directory.
What is the best way of exporting the container AND all of the contents of that local mounted directory into a simple tar file of some sort? And then how would I easily re-import this container and run it so that I can see and use all of those files copied from the local machine that exported it? Is this possible?
You almost never "export" a container per se. Containers are generally intended to be freely destroyed and recreated.
In your case you already have the data you care about stored outside the container (in a bind-mounted folder, which is the easy case), so you can just copy that directory tree to wherever else and then run a new copy of the container there.
(cd /path/to/local/files; tar cvzf ~/local-files.tar.gz .)
scp local-files.tar.gz there:
ssh there
mkdir files
(cd files; tar xvzf ../local-files.tar.gz)
docker run -v $PWD/files:/root cse303
This is trickier if you're storing the data in a named volume. The Docker documentation describes how to back up the contents of a named volume and you'd have to go through that procedure.
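A minimal sketch of that procedure (assuming a named volume called mydata; adjust names and paths to your setup):
docker run --rm -v mydata:/data:ro -v "$PWD":/backup busybox \
    tar czf /backup/mydata-backup.tar.gz -C /data .
This mounts the volume read-only into a throwaway busybox container and writes the archive into the current directory on the host.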
If you want to export the entire contents of the container (including the mounted volumes, which might be a bad idea depending upon what you have mounted), then you want to run tar inside the container and pipe out the data to a file:
docker run --privileged -v /path/to/local/files:/root ${IMAGE_NAME} \
tar -cf - -C / --exclude=proc --exclude=sys . | gzip > myfile.tgz
I would highly recommend excluding /proc and /sys (as in the above example) or you will likely encounter issues.
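To bring the data back on another machine, a sketch (since the archive was taken from /, the old /root contents end up under ./export/root; cse303 is the image name from the question):
mkdir export
tar xzf myfile.tgz -C export
docker run --privileged -v "$PWD/export/root":/root -it cse303
Only the /root subtree is remounted here, matching the original bind mount.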
I tried to share data between the Docker container and the host, for example by adding the parameter -v /Users/name/Desktop/Tutorials:/cntk/Tutorials to the docker run command, but I noticed that it also deletes all the files the container had in /cntk/Tutorials.
My question is how to make the same link, but having instead all the files in /cntk/Tutorials copied to the host (at /Users/name/Desktop/Tutorials)
Thank you
Unfortunately, that is not possible: that is simply how mounting works in Linux.
It is not correct to say that the files were deleted. They are still present in the underlying image, but the act of mounting another directory at the same path has obscured them. They exist, but are not accessible in this condition.
One way you can accomplish this is by mounting a volume into your container at a different path, and then copying the container's files to that path. Something like this.
Mount a host volume using a different path than the one the container already has for the files you are interested in.
docker run -v /Users/name/Desktop/Tutorials:/cntk/Tutorials2 [...]
Now, execute a command that copies the files already in the docker image into the volume mounted from the host:
docker exec <container-id> cp -r /cntk/Tutorials/. /cntk/Tutorials2
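Alternatively, both steps can be collapsed into one disposable container (a sketch; <image> stands for your CNTK image name):
docker run --rm -v /Users/name/Desktop/Tutorials:/out <image> cp -a /cntk/Tutorials/. /out/
The container starts, copies the files from the image into the bind-mounted host folder, and removes itself.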
The docker cp command allows you to copy files/folders on demand between host and the container:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
docker cp ContainerName:/home/data.txt . <== copy from container to host
docker cp ./test.txt ContainerName:/test.txt <== copy from host to container
docker cp ContainerName:/test.txt ./test2.txt <== copy from container to host
For details run docker cp --help
I am wondering if I can map the volume in docker to another folder on my Linux host. If I don't misunderstand, the default mapping folder is in /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing it to a host folder I do have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image, or must this be done after creating the image or after creating the container?
I found this article which helps me to use a local directory as the volume in docker.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host. They are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible, but you'd need root access to run it. Note that with access to docker on the host, there are likely many ways to gain root privileges on the host anyway. Since you cannot hard link a directory, a bind mount of the target folder should be all that's needed if you want the data visible in both places.
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.
My question is related to this question on copying files from containers to hosts: I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by sbt dist in ../target/, but I think this question also applies to jars, binaries, etc.).
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../target/blah.zip /out
but I'm running boot2docker in OSX and I don't know how to directly write to my mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
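If you want the file itself rather than a tar stream, point docker cp at a real destination path instead of - (here /path/to/file is a placeholder for the artifact inside the image):
id=$(docker create image-name)
docker cp "$id":/path/to/file ./file
docker rm -v "$id"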
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
Edit: the container does not even have to be run for this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm so the container will be removed afterwards. I had never tried it, because it sounded illogical to copy something from somewhere that had already been removed by the previous command, but I tried it and it works.
Edit 3: due to the comments we found out that --rm is not working as expected; it does not remove the container because it never runs, so I added functionality to delete the created container afterwards (--name tc = temporary-container).
Edit 4: this error appeared; it seems like a bug in Docker, because t is in a-z and this did not happen a few months before:
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from running container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you pulled an image, the image is stored on your system and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using $ docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
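A rough sketch of locating a file this way (requires root, assumes the overlay2 driver, and the file name is just an example):
sudo find /var/lib/docker/overlay2 -name 'blah.zip' 2>/dev/null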
First, pull the docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the docker container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on MacOS. I can assure you that scripts based on "docker cp" are portable: every command is relayed to the boot2docker VM, but the binary stream is relayed back to the docker command-line client running on your Mac. So write operations from the docker client are executed inside the server and written back to the executing client instance!
I am sharing a backup script for docker volumes with any docker container I provide and my backup scripts are tested both on linux and MacOS with boot2docker. The backups can be easily exchanged between platforms. Basically I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my jenkins container named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
You could bind a local path on the host to a path on the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
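For example (a sketch; the artifact path inside the container is hypothetical), anything copied into /app immediately shows up in ./target on the host:
docker exec devtest cp /build/output/blah.zip /app/
ls target/blah.zip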
I am trying to build a backup and restore solution for the Docker containers that we work with.
I have Docker base image that I have created, ubuntu:base, and do not want have to rebuild it each time with a Docker file to add files to it.
I want to create a script that runs from the host machine and creates a new container using the ubuntu:base Docker image and then copies files into that container.
How can I copy files from the host to the container?
The docker cp command can be used to copy files.
One specific file can be copied TO the container like:
docker cp foo.txt container_id:/foo.txt
One specific file can be copied FROM the container like:
docker cp container_id:/foo.txt foo.txt
For emphasis, container_id is a container ID, not an image ID. (Use docker ps to view a listing that includes container_ids.)
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. container_id:/target
docker cp container_id:/src/. target
Reference: Docker CLI docs for cp
In Docker versions prior to 1.8 it was only possible to copy files from a container to the host. Not from the host to a container.
Get container name or short container id:
$ docker ps
Get full container id:
$ docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID-or-CONTAINER_NAME
Copy file:
$ sudo cp path-file-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-NEW-FILE
EXAMPLE:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8e703d7e303 solidleon/ssh:latest /usr/sbin/sshd -D cranky_pare
$ docker inspect -f '{{.Id}}' cranky_pare
or
$ docker inspect -f '{{.Id}}' d8e703d7e303
d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5
$ sudo cp file.txt /var/lib/docker/aufs/mnt/d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5/root/file.txt
The cleanest way is to mount a host directory on the container when starting the container:
{host} docker run -v /path/to/hostdir:/mnt --name my_container my_image
{host} docker exec -it my_container bash
{container} cp /mnt/sourcefile /path/to/destfile
Typically there are three cases:
From a container to the host
docker cp container_id:./bar/foo.txt .
The docker cp command works both ways, too.
From the host to a container
docker exec -i container_id sh -c 'cat > ./bar/foo.txt' < ./foo.txt
Second approach to copy from host to container:
docker cp foo.txt mycontainer:/foo.txt
From a container to a container: combine 1 and 2
docker cp container_id1:./bar/foo.txt .
docker exec -i container_id2 sh -c 'cat > ./bar/foo.txt' < ./foo.txt
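You can also stream directly between the two containers, with no temporary file on the host (a sketch using the same paths as above):
docker exec container_id1 cat ./bar/foo.txt | docker exec -i container_id2 sh -c 'cat > ./bar/foo.txt'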
The following is a fairly ugly way of doing it but it works.
docker run -i ubuntu /bin/bash -c 'cat > file' < file
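Note that the file lands in a container that exits immediately; to keep the result you could commit that container to a new image (a sketch, with a placeholder image name):
cid=$(docker ps -lq)
docker commit "$cid" ubuntu-with-file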
If you need to do this on a running container you can use docker exec (added in 1.3).
First, find the container's name or ID:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9b7400ddd8f ubuntu:latest "/bin/bash" 2 seconds ago Up 2 seconds elated_hodgkin
In the example above we can either use b9b7400ddd8f or elated_hodgkin.
If you wanted to copy everything in /tmp/somefiles on the host to /var/www in the container:
$ cd /tmp/somefiles
$ tar -cv * | docker exec -i elated_hodgkin tar x -C /var/www
We can then exec /bin/bash in the container and verify it worked:
$ docker exec -it elated_hodgkin /bin/bash
root@b9b7400ddd8f:/# ls /var/www
file1 file2
Create a new Dockerfile and use the existing image as your base.
FROM myName/myImage:latest
ADD myFile.py bin/myFile.py
Then build the image:
docker build .
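If you tag the build, the new image is easier to refer to later (the tag here is just an example):
docker build -t myname/myimage:with-file .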
The solution is given below,
From the Docker shell,
root@123abc:/root# <-- get the container ID
From the host
cp thefile.txt /var/lib/docker/devicemapper/mnt/123abc<bunch-o-hex>/rootfs/root
The file will be copied directly to the location where the container's filesystem sits on the host.
Another solution for copying files into a running container is using tar:
tar -c foo.sh | docker exec -i theDockerContainer /bin/tar -C /tmp -x
Copies the file foo.sh into /tmp of the container.
Edit: Removed redundant -f, thanks to Maarten's comment.
To copy a file from host to running container
docker exec -i $CONTAINER /bin/bash -c "cat > $CONTAINER_PATH" < $HOST_PATH
Based on Erik's answer and Mikl's and z0r's comments.
This is a direct answer to the question 'Copying files from host to Docker container' raised in the title of this question.
Try docker cp. It is the easiest way to do that and works even on my Mac. Usage:
docker cp /root/some-file.txt some-docker-container:/root
This will copy the file some-file.txt in the directory /root on your host machine into the Docker container named some-docker-container into the directory /root. It is very close to the secure copy syntax. And as shown in the previous post, you can use it vice versa. I.e., you also copy files from the container to the host.
And before you downvote this post, please enter docker cp --help. Reading the documentation can be very helpful, sometimes...
If you don't like that way and you want data volumes in your already created and running container, then recreation is your only option today. See also How can I add a volume to an existing Docker container?.
I tried most of the (upvoted) solutions here, but in Docker 17.09 (in 2018) there is no longer a /var/lib/docker/aufs folder.
A simple docker cp solved this task:
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
How to get container_name?
docker ps
There is a NAMES column; use the name from there, not the IMAGE.
With Docker 1.8, docker cp is able to copy files from host to container. See the Docker blog post Announcing Docker 1.8: Content Trust, Toolbox, and Updates to Registry and Orchestration.
To copy files/folders between a container and the local filesystem, type the command:
docker cp {SOURCE_FILE} {DESTINATION_CONTAINER_ID}:/{DESTINATION_PATH}
For example,
docker cp /home/foo container-id:/home/dir
To get the container id, type the given command:
docker ps
The above content is taken from docker.com.
Assuming the container is already running, type the given command:
# cat /path/to/host/file | docker exec -i <container_id> bash -c "/bin/cat > /path/to/container/file"
To share files using shared directory, run the container by typing the given command:
# docker run -v /path/to/host/dir:/path/to/container/dir ...
Note: Problems with permissions might arise as container's users are not the same as the host's users.
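One common mitigation (a sketch; adjust the paths to your setup) is to run the container with your own uid/gid so files created in the shared directory belong to you:
# docker run -u "$(id -u):$(id -g)" -v /path/to/host/dir:/path/to/container/dir ...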
This is the command to copy data from Docker to Host:
docker cp container_id:/path/filename /hostpath
docker cp a13fb9c9e674:/tmp/dgController.log /tmp/
Below is the command to copy data from host to docker:
docker cp a.txt ccfbeb35116b:/home/
Syntax for mounting a host folder when starting the container:
docker run -v /HOST/folder:/Container/folder
In a Dockerfile:
COPY hom* /myFolder/ # adds all files starting with "hom"
COPY hom?.txt /myFolder/ # ? is replaced with any single character, e.g., "home.txt"
In a docker environment (with the aufs storage driver), all containers' filesystems are found under the directory:
/var/lib/docker/aufs/required-docker-id/
To copy the source directory/file to any part of the container, type the given command:
sudo cp -r mydir/ /var/lib/docker/aufs/mnt/required-docker-id/mnt/
The docker cp command is a handy utility that allows you to copy files and folders between a container and the host system.
If you want to copy files from your host system to the container, use docker cp like this:
docker cp host_source_path container:destination_path
List your running containers first using the docker ps command:
abhishek@linuxhandbook:~$ sudo docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS   NAMES
8353c6f43fba   775349758637   "bash"    8 seconds ago   Up 7 seconds           ubu_container
You need to know either the container ID or the container name. In my case, the container name is ubu_container and the container ID is 8353c6f43fba.
If you want to verify that the files have been copied successfully, you can enter your container in the following manner and then use regular Linux commands:
docker exec -it ubu_container bash
Copy files from host system to docker container
Copying with docker cp is similar to the copy command in Linux.
I am going to copy a file named a.py to the /home/dir1 directory in the container.
docker cp a.py ubu_container:/home/dir1
If the file is successfully copied, you won’t see any output on the screen. If the destination path doesn’t exist, you would see an error:
abhishek@linuxhandbook:~$ sudo docker cp a.txt ubu_container:/home/dir2/subsub
Error: No such container:path: ubu_container:/home/dir2
If the destination file already exists, it will be overwritten without any warning.
You may also use container ID instead of the container name:
docker cp a.py 8353c6f43fba:/home/dir1
If the host is CentOS or Fedora, the container's filesystem is NOT under /var/lib/docker/aufs; instead, it can be reached under /proc:
cp -r /home/user/mydata/* /proc/$(docker inspect --format "{{.State.Pid}}" <containerid>)/root
This command will copy all the contents of the data directory into / of the container with id "containerid".
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The destination path must already exist.
tar and docker cp are a good combo for copying everything in a directory.
Create a data volume container
docker create --name dvc --volume /path/on/container cirros
To preserve the directory hierarchy
tar -c -C /path/on/local/machine . | docker cp - dvc:/path/on/container
Check your work
docker run --rm --volumes-from dvc cirros ls -al /path/on/container
Many that find this question may actually have the problem of copying files into a Docker image while it is being created (I did).
In that case, you can use the COPY command in the Dockerfile that you use to create the image.
See the documentation.
In case it is not clear to someone like me what mycontainer in @h3nrik's answer means: it is the container id. To copy a file WarpSquare.mp4 in /app/example_scenes/1440p60 from an exited docker container to the current folder I used this:
docker cp `docker ps -q -l`:/app/example_scenes/1440p60/WarpSquare.mp4 .
where docker ps -q -l pulls up the container id of the last exited instance. If it is not an exited container, you can get the id with docker container ls or docker ps.
docker cp SRC_PATH CONTAINER_ID:DEST_PATH
For example, I want to copy my file xxxx/download/jenkins.war to Tomcat. First I get the id of the Tomcat container:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
63686740b488 tomcat "catalina.sh run" 12 seconds ago Up 11 seconds 0.0.0.0:8080->8080/tcp peaceful_babbage
docker cp xxxx/download/jenkins.war 63686740b488:/usr/local/tomcat/webapps/
This is a one-liner for copying a single file while running a tomcat container.
docker run -v /PATH_TO_WAR/sample.war:/usr/local/tomcat/webapps/myapp.war -it -p 8080:8080 tomcat
This will copy the war file to webapps directory and get your app running in no time.
My favorite method:
Get the full container ID:
CONTAINER_ID=$(docker ps | grep <string> | awk '{ print $1 }' | xargs docker inspect -f '{{.Id}}')
Move the file (here file.txt) into the container's filesystem:
mv -f file.txt /var/lib/docker/devicemapper/mnt/$CONTAINER_ID/rootfs/root/file.txt
or
mv -f file.txt /var/lib/docker/aufs/mnt/$CONTAINER_ID/rootfs/root/file.txt
The best way I found to copy files to a container is to mount a host directory using the -v option of the docker run command.
There are good answers, but too specific. I found that docker ps is a good way to get the container id you're interested in. Then do
mount | grep <id>
to see where the volume is mounted. That's
/var/lib/docker/devicemapper/mnt/<id>/rootfs/
for me, but it might be a different path depending on the OS and configuration. Now simply copy files to that path.
Using -v is not always practical.
Try docker cp.
Usage:
docker cp CONTAINER:PATH HOSTPATH
It copies files/folders from PATH in the container to HOSTPATH on the host.