How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume ./local:/remote myimage
I'm running a Docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to avoid that overhead for editing just one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? OR to somehow get the directory easier than my oneliner above?
EDIT: After comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.

$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. The first I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
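For illustration, a minimal sketch of that difference, using the stock nginx image only because it ships files at /usr/share/nginx/html (the volume and directory names are illustrative):
# a named volume is seeded from the image contents on first mount
docker volume create demo-vol
docker run --rm -v demo-vol:/usr/share/nginx/html nginx ls /usr/share/nginx/html
# an (empty) host directory is never seeded
mkdir -p /tmp/empty-html
docker run --rm -v /tmp/empty-html:/usr/share/nginx/html nginx ls /usr/share/nginx/html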
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.

Try to avoid modifying data inside a container from the host directly. It is much nicer to wrap your task into another container, which you then start with the --volumes-from option, when that is possible in your case.
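For example, a sketch of that pattern with hypothetical container names:
# the main container owns the volumes defined by its image
docker run -d --name runner gitlab/gitlab-runner
# a throwaway helper sees the same volumes at the same paths and can edit the file
docker run --rm -it --volumes-from runner busybox vi /etc/gitlab-runner/config.toml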

Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers. [...] The docker run command initializes the newly created volume with any data that exists at the specified location within the base image.
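For reference, the example Dockerfile that this part of the documentation refers to looks roughly like this:
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol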
So, following the example Dockerfile above, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.

Related

Docker inside docker: file missing

I am launching Docker inside another Docker container and I'm trying to make files visible inside the deepest container.
My first container is built on the python:3.8-slim image, its entrypoint is ["python"], and it is called test-client.
I launch it as docker run --rm -it -v /home/.../inputs:/inputs -v /var/run/docker.sock:/var/run/docker.sock --network ... test-client start_client.py ....
Now for the inner container.
Inside start_client.py I launch it with the docker==5.0.3 library.
import time
import docker
from docker.types import Mount

def check_docker():
    inputs = Mount('/inputs', 'inputs')
    client = docker.from_env()
    client.images.pull('alpine')
    time.sleep(30)  # I will explain this later
    output = client.containers.run(
        'alpine', 'ls inputs -al',
        mounts=[inputs]
    ).decode('utf-8')
    for line in output.split('\n'):
        print(line)
So: I used time.sleep to have time to dive into the first container and check whether the needed file is present. Yes, it is; my file is inside the first container. But the output of the deepest container shows no files inside the inputs directory.
What am I doing wrong?
You can't directly mount a directory from one container to another. In the mounts option you show (and in docker run -v and Compose volumes:) the host path is always a path on the system where the Docker daemon is running. If you're bind-mounting the host's Docker socket, these paths will be paths on the host; if $DOCKER_HOST points into a VM or at a remote machine, the paths will be paths on that system and not your local one.
But, in your specific example, the directory you're trying to remount is already a mount itself. If you mount the same host location into both containers, then you'll be able to see the files. I'd suggest specifying this in an environment variable
inputs = Mount('/inputs', os.getenv('INPUT_SOURCE', 'input'))
and when you run the container, pass that directory in as a variable
export INPUT_SOURCE="$PWD/inputs"
docker run --rm -it \
  -e INPUT_SOURCE \
  -v "$INPUT_SOURCE:/inputs" \
  --network ... \
  test-client \
  start_client.py ...
If you use a bare string input in the Mount object as you've done, it will mount (and automatically create) a named volume. You can use your container to inspect this
docker run --rm -v inputs:/inputs test-client \
  -c 'import os; print(os.listdir("/inputs"))'
(you can use a simpler shell syntax if you remove the ENTRYPOINT ["python"] line from your Dockerfile).
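Alternatively, a sketch that keeps the Dockerfile as-is and overrides the entrypoint just for a one-off inspection:
# replace the python entrypoint with a shell for this single run
docker run --rm --entrypoint sh -v inputs:/inputs test-client -c 'ls -al /inputs'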

How to mount volume inside child docker created by parent docker sharing docker.sock

I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The docker-compose file mounts several volumes, but when starting the compose stack from inside the wrapper container, the volumes are still mounted from the host, since the docker.sock is volume-mounted to be the host's docker.sock.
I would like to not have to use full docker-in-docker due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid --volumes-from, since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent docker's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
when run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside of the docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/user:/home/user -w /home/user/project parent_image ...
By mounting the volume as /home/user in the same location inside the image, a command like docker-compose up with relative bind mounts will use the container path names when talking to the docker socket, which will match the paths on the host.
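As a concrete sketch of the same idea (paths hypothetical): a command run from inside the wrapper container works because /home/user/project exists at the same location on the host, so the daemon can resolve the bind mount:
# run inside the wrapper container; the daemon (on the host) resolves
# /home/user/project/data against the host filesystem, where it also exists
docker run --rm -v /home/user/project/data:/data ubuntu:18.04 ls /data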

What is the right way to add data to an existing named volume in Docker?

I was using Docker in the old way, with a volume container:
docker run -d --name jenkins-data jenkins:tag echo "data-only container for Jenkins"
But now I changed to the new way by creating a named volume:
docker volume create --name my-jenkins-volume
I bound this new volume to a new Jenkins container.
The only thing I have left is a folder containing the /var/jenkins_home of my previous Jenkins container (obtained by using docker cp).
Now I want to fill my new named volume with the content of that folder.
Can I just copy the content of that folder to /var/lib/jenkins/volume/my-jenkins-volume/_data?
You can certainly copy data directly into /var/lib/docker/volumes/my-jenkins-volume/_data, but by doing this you are:
Relying on physical access to the docker host. This technique won't work if you're interacting with a remote docker api.
Relying on a particular aspect of the volume implementation that could change in the future, breaking any processes you have that rely on it.
I think you are better off relying on things you can accomplish using the docker api, via the command line client. The easiest solution is probably just to use a helper container, something like:
docker run -v my-jenkins-volume:/data --name helper busybox true
docker cp . helper:/data
docker rm helper
You don't need to start a container to add data to an already existing named volume; just create a container and copy data there:
docker container create --name temp -v my-jenkins-volume:/data busybox
docker cp . temp:/data
docker rm temp
You can reduce the accepted answer to one line, e.g.:
docker run --rm -v `pwd`:/src -v my-jenkins-volume:/data busybox cp -r /src /data
Here are the steps for copying the contents of ~/data to a Docker volume named my-vol:
Step 1. Attach the volume to a "temporary" container. To do that, run this command in a terminal:
docker run --rm -it --name alpine --mount type=volume,source=my-vol,target=/data alpine
Step 2. Copy the contents of ~/data into my-vol. To do that, run these commands in a new terminal window:
cd ~/data
docker cp . alpine:/data
This will copy the contents of ~/data into the my-vol volume. After copying, exit the temporary container.
You can add this BASH function to your .bashrc to copy files to an existing Docker volume without running a container:
# Usage: copy-to-docker-volume SRC_PATH DEST_VOLUME_NAME [DEST_PATH]
copy-to-docker-volume() {
    SRC_PATH="$1"
    DEST_VOLUME_NAME="$2"
    DEST_PATH="${3:-}"
    # create the smallest Docker image possible
    echo -e 'FROM scratch\nLABEL empty=""' | docker build -t empty -
    # create a temporary container to be able to mount the volume
    CONTAINER_ID=$(docker container create -v "${DEST_VOLUME_NAME}":/data empty cmd)
    # copy files to the volume
    docker cp "${SRC_PATH}" "${CONTAINER_ID}":"/data/${DEST_PATH}"
    # remove the temporary container
    docker rm "${CONTAINER_ID}"
}
Example
# create volume as destination
docker volume create my-volume
# create directory to copy
mkdir my-dir
echo "hello file1" > my-dir/my-file-1
# copy directory to volume
copy-to-docker-volume my-dir my-volume
# list directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# show file content on volume
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-1
# create another file to copy
echo "hello file2" > my-file-2
# copy file to directory on volume
copy-to-docker-volume my-file-2 my-volume my-dir
# list (updated) directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# check volume content
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-2
If you don't want to create a container and you can access the Docker host as a privileged user, simply do (on Linux systems):
docker volume create my_named_volume
sudo cp -rp . /var/lib/docker/volumes/my_named_volume/_data/
Furthermore, this also allows you to access the data while the container is running, and also when the container is stopped.
If you don't want to create a temporary helper container on Windows Docker Desktop (backed by WSL2), then copy the files to the location below:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\my-volume\_data
Here my-volume is the name of your named volume. Browse the above path from the address bar in your File Explorer. This is an internal network share created by WSL on Windows.
Note: it might be better to use the Docker API as mentioned by larsks, but I have not faced any issues on Windows.
Similarly, on Linux, files can be copied to
/var/lib/docker/volumes/my-volume/_data/

Docker volume initialization - copying data from image to container

https://docs.docker.com/engine/userguide/dockervolumes/ says:
"Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization."
However this is not exactly what I'm observing. Here's my scenario:
I create a container that contains some data in /opt/data
I commit this container and create an image out of it
I create another container using the image I've just prepared and create a volume that points /opt/data to a local directory.
According to the docs, I expected that files under /opt/data of the image would be copied to the locally created volume. That's not happening.
<local>:~$ docker run --name test -it ubuntu bash
root@76f42fce6ab7:/# mkdir /opt/data
root@76f42fce6ab7:/# echo "foo" > /opt/data/my-data
$ docker commit test test-with-data
<local>:~$ docker run -it -v /tmp/test-volume:/opt/data test-with-data bash
root@731b483527ad:/# ls /opt/data
root@731b483527ad:/#
root@731b483527ad:/# exit
Is there something I don't understand here?
It's because you've specified a host directory. If you don't specify a host directory and instead let Docker manage the volume, it works as you expect:
$ docker run --name test -it debian bash
root@ac99b805a689:/# mkdir /opt/data
root@ac99b805a689:/# echo "foo" > /opt/data/my-data
root@ac99b805a689:/# exit
exit
$ docker commit test test-with-data
a35463157fbee6180ed91c458288cf528da93a23bf340f44c3d2a7ff355fa2b1
$ docker run -it -v /opt/data/ test-with-data bash
root@73f70c3b5518:/# ls /opt/data
my-data
root@73f70c3b5518:/# cat /opt/data/my-data
foo
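The same initialization also happens for named volumes, so a sketch like this (volume name hypothetical) would show the file as well:
$ docker volume create datavol
$ docker run --rm -v datavol:/opt/data test-with-data cat /opt/data/my-data
foo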

How to copy files from host to Docker container?

I am trying to build a backup and restore solution for the Docker containers that we work with.
I have a Docker base image that I have created, ubuntu:base, and do not want to have to rebuild it each time with a Dockerfile to add files to it.
I want to create a script that runs from the host machine and creates a new container using the ubuntu:base Docker image and then copies files into that container.
How can I copy files from the host to the container?
The cp command can be used to copy files.
One specific file can be copied TO the container like:
docker cp foo.txt container_id:/foo.txt
One specific file can be copied FROM the container like:
docker cp container_id:/foo.txt foo.txt
For emphasis, container_id is a container ID, not an image ID. (Use docker ps to view a listing that includes container IDs.)
Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. container_id:/target
docker cp container_id:/src/. target
Reference: Docker CLI docs for cp
In Docker versions prior to 1.8 it was only possible to copy files from a container to the host. Not from the host to a container.
Get container name or short container id:
$ docker ps
Get full container id:
$ docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID-or-CONTAINER_NAME
Copy file:
$ sudo cp path-file-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-NEW-FILE
EXAMPLE:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND             CREATED   STATUS   PORTS   NAMES
d8e703d7e303   solidleon/ssh:latest   /usr/sbin/sshd -D                              cranky_pare
$ docker inspect -f '{{.Id}}' cranky_pare
or
$ docker inspect -f '{{.Id}}' d8e703d7e303
d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5
$ sudo cp file.txt /var/lib/docker/aufs/mnt/d8e703d7e3039a6df6d01bd7fb58d1882e592a85059eb16c4b83cf91847f88e5/root/file.txt
The cleanest way is to mount a host directory on the container when starting the container:
{host} docker run -v /path/to/hostdir:/mnt --name my_container my_image
{host} docker exec -it my_container bash
{container} cp /mnt/sourcefile /path/to/destfile
Typically there are three types:
From a container to the host
docker cp container_id:./bar/foo.txt .
The docker cp command works both ways, too.
From the host to a container
docker exec -i container_id sh -c 'cat > ./bar/foo.txt' < ./foo.txt
Second approach to copy from host to container:
docker cp foo.txt mycontainer:/foo.txt
From a container to another container (combining 1 and 2)
docker cp container_id1:./bar/foo.txt .
docker exec -i container_id2 sh -c 'cat > ./bar/foo.txt' < ./foo.txt
The following is a fairly ugly way of doing it but it works.
docker run -i ubuntu /bin/bash -c 'cat > file' < file
If you need to do this on a running container you can use docker exec (added in 1.3).
First, find the container's name or ID:
$ docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS         PORTS   NAMES
b9b7400ddd8f   ubuntu:latest   "/bin/bash"   2 seconds ago   Up 2 seconds           elated_hodgkin
In the example above we can either use b9b7400ddd8f or elated_hodgkin.
If you wanted to copy everything in /tmp/somefiles on the host to /var/www in the container:
$ cd /tmp/somefiles
$ tar -cv * | docker exec -i elated_hodgkin tar x -C /var/www
We can then exec /bin/bash in the container and verify it worked:
$ docker exec -it elated_hodgkin /bin/bash
root@b9b7400ddd8f:/# ls /var/www
file1 file2
Create a new Dockerfile and use the existing image as your base.
FROM myName/myImage:latest
ADD myFile.py bin/myFile.py
Then build the container:
docker build .
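Tagging the build makes the new image easier to run; the tag name here is hypothetical:
docker build -t myname/myimage:with-file .
docker run myname/myimage:with-file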
The solution is given below.
From the Docker shell:
root@123abc:/root# <-- get the container ID
From the host:
cp thefile.txt /var/lib/docker/devicemapper/mnt/123abc<bunch-o-hex>/rootfs/root
The file will be copied directly to the location where the container sits on the filesystem.
Another solution for copying files into a running container is using tar:
tar -c foo.sh | docker exec -i theDockerContainer /bin/tar -C /tmp -x
Copies the file foo.sh into /tmp of the container.
Edit: Removed the redundant -f, thanks to Maarten's comment.
To copy a file from host to running container
docker exec -i $CONTAINER /bin/bash -c "cat > $CONTAINER_PATH" < $HOST_PATH
Based on Erik's answer and Mikl's and z0r's comments.
This is a direct answer to the question 'Copying files from host to Docker container', raised in the title of this question.
Try docker cp. It is the easiest way to do that and works even on my Mac. Usage:
docker cp /root/some-file.txt some-docker-container:/root
This will copy the file some-file.txt from the directory /root on your host machine into the Docker container named some-docker-container, into the directory /root. It is very close to the secure copy syntax. And as shown in the previous post, you can use it vice versa, i.e., you can also copy files from the container to the host.
And before you downvote this post, please enter docker cp --help. Reading the documentation can be very helpful, sometimes...
If you don't like that way and you want data volumes in your already created and running container, then recreation is your only option today. See also How can I add a volume to an existing Docker container?.
I tried most of the (upvoted) solutions here, but in Docker 17.09 (in 2018) there is no longer a /var/lib/docker/aufs folder.
This simple docker cp command solved the task:
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
How to get container_name?
docker ps
There is a NAMES column. Don't use the IMAGE column.
With Docker 1.8, docker cp is able to copy files from host to container. See the Docker blog post Announcing Docker 1.8: Content Trust, Toolbox, and Updates to Registry and Orchestration.
To copy files/folders between a container and the local filesystem, type the command:
docker cp {SOURCE_FILE} {DESTINATION_CONTAINER_ID}:/{DESTINATION_PATH}
For example,
docker cp /home/foo container-id:/home/dir
To get the container ID, type the given command:
docker ps
The above content is taken from docker.com.
Assuming the container is already running, type the given command:
# cat /path/to/host/file | docker exec -i -t <container_id> bash -c "/bin/cat > /path/to/container/file"
To share files using shared directory, run the container by typing the given command:
# docker run -v /path/to/host/dir:/path/to/container/dir ...
Note: Problems with permissions might arise, as the container's users are not the same as the host's users.
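One common mitigation, sketched here under the assumption that the host-side files belong to your user, is to run the container with your own UID/GID so files created in the shared directory keep host-friendly ownership:
# docker run --user "$(id -u):$(id -g)" -v /path/to/host/dir:/path/to/container/dir ...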
This is the command to copy data from a Docker container to the host:
docker cp container_id:file_path/filename /host_path
docker cp a13fb9c9e674:/tmp/dgController.log /tmp/
Below is the command to copy data from the host into a container:
docker cp a.txt ccfbeb35116b:/home/
Syntax when bringing the container up:
docker run -v /HOST/folder:/Container/folder
In a Dockerfile:
COPY hom* /myFolder/ # adds all files starting with "hom"
COPY hom?.txt /myFolder/ # ? is replaced with any single character, e.g., "home.txt"
In a docker environment, all containers are found in the directory:
/var/lib/docker/aufs/required-docker-id/
To copy the source directory/file to any part of the container, type the given command:
sudo cp -r mydir/ /var/lib/docker/aufs/mnt/required-docker-id/mnt/
The docker cp command is a handy utility that allows you to copy files and folders between a container and the host system.
If you want to copy files from your host system to the container, you should use the docker cp command like this:
docker cp host_source_path container:destination_path
List your running containers first using the docker ps command:
abhishek@linuxhandbook:~$ sudo docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         PORTS   NAMES
8353c6f43fba   775349758637   "bash"    8 seconds ago   Up 7 seconds           ubu_container
You need to know either the container ID or the container name. In my case, the container name is ubu_container and the container ID is 8353c6f43fba.
If you want to verify that the files have been copied successfully, you can enter your container in the following manner and then use regular Linux commands:
docker exec -it ubu_container bash
Copy files from host system to docker container
Copying with docker cp is similar to the copy command in Linux.
I am going to copy a file named a.py to the /home/dir1 directory in the container.
docker cp a.py ubu_container:/home/dir1
If the file is successfully copied, you won’t see any output on the screen. If the destination path doesn’t exist, you would see an error:
abhishek@linuxhandbook:~$ sudo docker cp a.txt ubu_container:/home/dir2/subsub
Error: No such container:path: ubu_container:/home/dir2
If the destination file already exists, it will be overwritten without any warning.
You may also use the container ID instead of the container name:
docker cp a.py 8353c6f43fba:/home/dir1
If the host is CentOS or Fedora, there is no /var/lib/docker/aufs; instead, the container's filesystem can be reached under /proc:
cp -r /home/user/mydata/* /proc/$(docker inspect --format "{{.State.Pid}}" <containerid>)/root
This command will copy the entire contents of the data directory into / of the container with ID "containerid".
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The destination path must already exist.
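For example (container name and paths hypothetical), create the destination directory first, then copy into it:
docker exec mycontainer mkdir -p /opt/data
docker cp ./report.csv mycontainer:/opt/data/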
tar and docker cp are a good combo for copying everything in a directory.
Create a data volume container
docker create --name dvc --volume /path/on/container cirros
To preserve the directory hierarchy
tar -c -C /path/on/local/machine . | docker cp - dvc:/path/on/container
Check your work
docker run --rm --volumes-from dvc cirros ls -al /path/on/container
Many that find this question may actually have the problem of copying files into a Docker image while it is being created (I did).
In that case, you can use the COPY command in the Dockerfile that you use to create the image.
See the documentation.
In case it is not clear to someone (like me) what mycontainer in @h3nrik's answer means: it is actually the container ID. To copy a file WarpSquare.mp4 in /app/example_scenes/1440p60 from an exited docker container to the current folder, I used this:
docker cp `docker ps -q -l`:/app/example_scenes/1440p60/WarpSquare.mp4 .
where docker ps -q -l pulls up the container ID of the last exited instance. If it is not an exited container, you can get the ID with docker container ls or docker ps.
docker cp SRC_PATH CONTAINER_ID:DEST_PATH
For example, I want to copy my file xxxx/download/jenkins.war to Tomcat.
First, I get the ID of the Tomcat container:
docker ps
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS          PORTS                    NAMES
63686740b488   tomcat   "catalina.sh run"   12 seconds ago   Up 11 seconds   0.0.0.0:8080->8080/tcp   peaceful_babbage
docker cp xxxx/download/jenkins.war 63686740b488:/usr/local/tomcat/webapps/
This is a one-liner for copying a single file while running a Tomcat container.
docker run -v /PATH_TO_WAR/sample.war:/usr/local/tomcat/webapps/myapp.war -it -p 8080:8080 tomcat
This will copy the WAR file to the webapps directory and get your app running in no time.
My favorite method:
Get the full container ID:
CONTAINER_ID=$(docker ps | grep <string> | awk '{ print $1 }' | xargs docker inspect -f '{{.Id}}')
Then move the file (here file.txt) into the container's filesystem:
mv -f file.txt /var/lib/docker/devicemapper/mnt/$CONTAINER_ID/rootfs/root/file.txt
or
mv -f file.txt /var/lib/docker/aufs/mnt/$CONTAINER_ID/rootfs/root/file.txt
The best way I found for copying files into a container is to mount a directory on the host using the -v option of the docker run command.
There are good answers, but too specific. I found that docker ps is a good way to get the container ID you're interested in. Then do
mount | grep <id>
to see where the volume is mounted. That's
/var/lib/docker/devicemapper/mnt/<id>/rootfs/
for me, but it might be a different path depending on the OS and configuration. Now simply copy files to that path.
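A sketch of the whole lookup, assuming the container is named mycontainer and that mount prints lines in the usual 'device on path type ...' format:
# the short ID is a prefix of the full ID used in the mount path
CID=$(docker ps -q --filter name=mycontainer)
MNT=$(mount | grep "$CID" | awk '{print $3}' | head -n 1)
sudo cp file.txt "$MNT/root/"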
Using -v is not always practical.
Try docker cp.
Usage:
docker cp CONTAINER:PATH HOSTPATH
It copies files/folders from the container's PATH to the host's HOSTPATH.
