I am developing a Docker image. I started with a base image and was working inside it interactively, using bash. I installed a bunch of stuff, and the install (which included compiling a lot of code) took over 20 minutes, so to save my work, I used:
$ docker commit 0f08ac958391 myproject:wip
Now when I try to run the image:
$ docker run --rm -it myproject:wip
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/95aa9a9ea7cc0b1ba302adbd287e4d7059ee4fbe64183404df3bc65df332ee63/merged/run/host-services/ssh-auth.sock.
What is going on? How do I fix this?
Note about related/duplicate questions: while there are other questions about this error message, none of the answers directly explain why the error happens in this situation or what to do about it. In fact, most of the questions have no answers at all.
When I ran the base image, I included a mount for the SSH agent socket:
$ docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:dev /bin/bash
This bind mounts a file from the host (actually the Docker daemon VM) to a file in the Docker container. When I committed the running container, the resulting image contained the file /run/host-services/ssh-auth.sock. The image also contained an empty (anonymous) volume reference to /run/host-services/ssh-auth.sock. This means that when I ran
$ docker run --rm -it myproject:wip
it was equivalent to running
$ docker run -v /run/host-services/ssh-auth.sock --rm -it myproject:wip
Unfortunately, what that command does is create an anonymous volume and mount it at /run/host-services/ssh-auth.sock in the container. That works whether or not the container already has a directory at that path. It fails only when the target path is already occupied by a file: Docker will not mount a volume over an existing file.
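You can confirm that the committed image carries this anonymous volume definition by inspecting its config (a quick check, assuming the image tag from above), which should show something like {"/run/host-services/ssh-auth.sock":{}}:
$ docker inspect --format '{{ json .Config.Volumes }}' myproject:wip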
The solution is to explicitly provide a mapping from a host file to the target volume. Any host file will do, but in my case it is best to use the original. So this works:
docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:wip
I'm on CentOS 7 and have already installed vi and vim on the host, where I can use them. I run Docker on CentOS, and when I execute the line below:
docker exec -it mysolr /bin/bash
I cannot use vi/vim in the solr container:
bash: vim: command not found
Why is that, and how do I fix it so I can use vi/vim to edit files in the docker container?
A typical Docker image contains a minimal set of libraries and utilities to run one specific program. Additionally, Docker container filesystems are not long-lived: it is extremely routine to delete and recreate a container, for instance to use a newer version of a base image.
The upshot of this is that you never want to directly edit files in a Docker container, and most images aren't set up with "rich" editing tools. (BusyBox contains a minimal vi and so most Alpine-based images will too.) If you make some change, it will be lost as soon as you delete the container. (Similarly, you usually can install vim or emacs or whatever, but it will get lost as soon as the container is deleted: installing software in a running container isn't usually a best practice.)
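If you only need a quick one-off edit anyway, you can usually install an editor inside the running container; this is a sketch assuming a Debian-based image (the official solr images are), and the change disappears when the container is removed:
docker exec -u root -it mysolr bash
apt-get update && apt-get install -y vim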
There are two good ways to deal with this, depending on what kind of file it is.
If the file is part of the application, like a source file, edit, debug, and test it outside of Docker space. Once you're convinced it's right (by running unit tests and by running the program locally), docker build a new image with it, and docker run a new container with the new image.
ed config.py
pytest
docker build -t imagename .
docker run -d -p ... --name containername imagename
...
ed config.py
pytest
docker build -t imagename .
docker stop containername
docker rm containername
docker run -d -p ... --name containername imagename
If the file is configuration that needs to be injected when the application starts, the docker run -v option is a good way to push it in. You can directly edit the config file on your host, but you'll probably need to restart (or delete and recreate) the container for it to notice.
ed config.txt
docker run \
-v $PWD/config.txt:/etc/whatever/config.txt \
--name containername -p ... \
imagename
...
ed config.txt
docker stop containername
docker rm containername
docker run ... imagename
I've built a docker image with a python script that works with two different commands. The first one creates a file that is used when executing the second one.
As far as I know, I must use a Docker volume to store data between executions so I've created a volume with:
docker volume create myvol
To then use it when running the container
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
But then, when executing the other command, it seems that the directory /data in the container is empty...
$ docker run myimg predict -v myvol:/data
Error: /data/model.h5 not found
Is there any point that I'm missing?
The docker command line is order sensitive. The syntax is:
docker $args_to_docker run $args_to_run $image_name $override_to_cmd
In your command you pass the -v option after the image name, so it becomes the CMD value in your container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
That runs the cmd fit -v myvol:/data inside the container.
The solution is to change the order so that -v is an option to docker run and defines the volume mount:
$ docker run -v myvol:/data myimg fit
$ docker run -v myvol:/data myimg predict
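To verify that the file really persisted in the named volume, you can list its contents from a throwaway container (a quick check, assuming the alpine image is available or can be pulled):
$ docker run --rm -v myvol:/data alpine ls -l /data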
Make sure you pass the -v or --mount argument to docker run. This ensures the data is really stored outside of the container, so nothing is lost when the container is removed.
See: https://docs.docker.com/storage/volumes/ for details.
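For reference, the equivalent --mount form for a named volume would look something like this (a sketch reusing the image and volume names from the question):
$ docker run --mount source=myvol,target=/data myimg fit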
When running these commands, remember that the -v option always takes the form <path_to_directory_on_host>:<path_to_directory_in_container>, and the host path must be absolute (a value that does not start with / is treated as a named volume). So the first thing to do is create a directory on your host called data, move model.h5 into that directory, and then mount it with the -v switch.
So if I had my data directory at C:\data on a Windows host, I would use:
docker run -v C:\data:/data <img_name>
If I were on Linux and my data directory was /usr/data, and I wanted it mounted at /data in the container, I would use:
docker run -v /usr/data:/data <img_name>
I have the strangest situation using Docker on WSL (Windows Subsystem for Linux, Ubuntu 16.04). I'm trying to bind mount /home/username (or just $HOME for convenience) as a volume in a container, and instead of finding the content of my home directory in the container, I get some other volume entirely.
What's stranger is that this 'other volume' persists from one container to the other, whenever I try to bind mount $HOME or /home/username. If I touch a new file, it appears in all other containers I mount $HOME into. All other bind mounts to any other directory work correctly.
E.g. these all share the same mystery folder:
docker run -it --rm -v /home/username:/test alpine sh
docker run -it --rm -v $HOME:/test alpine sh
docker run -it --rm -v $HOME:/test -v $HOME:/test2 alpine sh
When I do a docker volume ls there's no volume called /home/username, so that rules out accidentally having a docker-hosted volume with the same name.
What is this mysterious volume I'm mounting, and why is docker not mounting my $HOME directory correctly?
I used the instructions in https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work to set up everything.
Then I had to explicitly export HOME=/c/Users/rfay so that the Docker daemon on Windows could access it, but that worked. The basic magic is that your path in WSL has to be something the Docker daemon can translate into a native Windows path.
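For reference, the key step in the linked article is remounting the Windows drives at / instead of /mnt inside WSL, roughly via an /etc/wsl.conf along these lines (a sketch; see the article for the full procedure and the restart it requires):
[automount]
root = /
options = "metadata"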
There is no need to change the mount point as suggested by @rfay. Rather, with a little command-line foo you can use pwd and sed to fix the value for you.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/myfolder -w "/var/myfolder" centos:7
pwd returns the current working folder, usually in the format '/mnt/c/code/myfolder'. Piping this to sed and replacing '/mnt' with nothing leaves you with a path such as '/c/code/myfolder', which is what Docker for Windows expects. Wrapping the whole thing in $() causes it to be executed in place.
I find this works really well.
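If you use this often, you could wrap the path translation in a small shell function (the name dockerpwd is just an example, not anything built in):
dockerpwd() { pwd | sed 's,^/mnt,,'; }
docker run -it -v "$(dockerpwd)":/var/myfolder -w /var/myfolder centos:7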
My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../blah.zip /out
but I'm running boot2docker on OS X and I don't know how to directly write to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
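If you only want the file itself rather than a tar stream, docker cp can also write straight to a local path (the container-side path here is just a placeholder):
id=$(docker create image-name)
docker cp $id:/path/in/image ./local-file
docker rm -v $id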
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
edit: the container does not even have to be run in this solution
edit 2: thanks to @Jonathan Dumaine for --rm so the container will be removed afterwards; I had never tried it because it sounded illogical to copy something from somewhere that had already been removed by the previous command, but I tried it and it works
edit 3: thanks to the comments we found out that --rm does not work as expected here; it does not remove the container because the container never runs, so I added a step to delete the created container afterwards (--name tc = temporary container)
edit 4: this error appeared; it seems like a bug in Docker, because t is in a-z and this did not happen a few months before.
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from the container into a bind-mounted host directory:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
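Depending on the tar build inside the image, you may need to tell it explicitly to write the archive to stdout; a slightly more defensive variant of the same idea is:
docker run --rm yourimage tar -cf - -C /my/directory subfolder | tar -xf -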
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update - as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you have pulled an image, the image is stored on your system and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
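As a rough sketch of what that looks like with overlay2 (assuming root access, and assuming the wanted file lives in the image's top layer; IMAGE_NAME:TAG and wanted-file are placeholders):
sudo find "$(docker image inspect IMAGE_NAME:TAG -f '{{ .GraphDriver.Data.UpperDir }}')" -name 'wanted-file'
Lower layers are listed in the LowerDir field of the same output.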
First, pull the docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on "docker cp" are portable, because any command is relayed inside boot2docker and the binary stream is then relayed back to the docker command-line client running on your Mac. So write operations from the docker client are executed inside the server and written back to the executing client instance!
I ship a backup script for docker volumes with every docker container I provide, and my backup scripts are tested both on Linux and on macOS with boot2docker. The backups can easily be exchanged between platforms. Basically, I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my jenkins container named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
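For completeness, a restore along the same lines might look like this (a sketch reusing the names from the example above; the archive was created from /, so it is unpacked back to /):
docker run --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar -C /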
You could bind a local path on the host to a path in the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.