How to copy multiple files into a docker data volume

It may sound trivial, but I couldn't find an easy way to copy multiple files into the root folder of a docker volume. I am using Ubuntu Xenial 16.04 and Docker 1.12.1. For example, if I have an Ubuntu container with the volume /my_data:
docker run --name my_container -v /my_data -d ubuntu:latest
On my host machine I have a folder called /tmp/my_data/ with multiple files inside, and I would like to copy all of those files into the volume /my_data in my_container. I have tried the following approaches, but none of them work:
docker cp /tmp/my_data my_container:/
docker cp /tmp/my_data/* my_container:/my_data/
Does someone know a work around for this issue?

Actually it was easier than I thought: just add a dot to the end of the host path and it works as expected, copying all files and folders within the /my_data folder:
docker cp /tmp/my_data/. my_container:/my_data
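As a quick check (assuming my_container is still running), you can list the volume from inside the container:
docker exec my_container ls -la /my_data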

As a workaround you can use a loop (note that the glob already expands to full paths, so $i is passed to docker cp directly):
for i in /tmp/my_data/*; do docker cp "$i" my_container:/my_data/; done
*Note: this workaround won't copy hidden files or folders inside the my_data folder, since the * glob does not match them.
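If you need the hidden files too, one alternative is to pipe a tar archive into docker cp, which accepts - as the source and extracts the archive at the destination path. A minimal sketch:
tar -C /tmp/my_data -cf - . | docker cp - my_container:/my_data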

Related

file mounts as directory instead of file in docker-in-docker (dind)

When the command docker run --rm -v $(pwd)/api_tests.conf:/usr/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation is run, the api_tests.conf file is mounted into the container as a directory instead of a file.
I went through Single file volume mounted as directory in Docker and a few other similar questions on Stack Overflow, but was unable to find the right solution.
I have tested the same code on my local Mac laptop, and there the file from the local machine mounts into the container as a file, but locally I don't have a docker-in-docker setup.
I have Dockerfile as below.
FROM alpine:latest
MAINTAINER Basavaraj
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
ENTRYPOINT "some command"
and I have the build.sh file as below,
#!/bin/bash
docker pull local.artifactory.swg-devops.com/api-automation
# creating file with name "api_tests.conf" by adding configuration data
echo "configuration data" > api_tests.conf
# it displays all the configuration data written to api_tests.conf
cat $(pwd)/api_tests.conf
docker run --rm -v $(pwd)/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation
Now we are calling the build.sh file from a GoCD environment.
It looks like the docker run command is executed in docker-in-docker (dind), and as a result the docker container is spawned on a different host, where the file (api_tests.conf) that was created does not exist.
Because of this, the file (api_tests.conf) is mounted as an empty directory in the container.
What are the different solutions for mounting the file in a docker-in-docker environment?
Can we share the file (api_tests.conf) we created with the host where the docker container is spawned?
I think the problem you're having is most likely because of using dind, although it's worth pointing out that this issue would also occur if you had mounted the docker socket into another container.
This is because when you ask the docker daemon to mount a directory, your docker client (CLI) doesn't actually mount the file/directory itself; it just passes a request to the docker daemon to mount that location from the daemon's local file system. And this is where the problem is: if you're using dind or sharing docker.sock, that file system usually isn't where you think it is, and hence the file/directory doesn't exist from the daemon's point of view.
So in your case the $(pwd) is possibly being expanded to some well-known/existing directory path, and the docker daemon is then mounting that path as a directory, since the file doesn't exist on its file system. That's my guess at least, as I've seen similar behaviour before when using dind/docker.sock sharing in other set-ups.
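A quick way to check what the daemon actually sees (a throwaway diagnostic container; any small image works) is to mount the same path and list it:
docker run --rm -v "$(pwd)":/probe alpine ls -la /probe
If the listing doesn't match your working directory, the daemon is resolving the path on a different file system than your client's.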
One crazy solution to this would be to bind mount the files you want into the dind container at startup, and then subsequently bind mount those files from within the dind container into any later containers. However, bear in mind this is precisely the kind of file system usage that's warned against in the dind documentation because of instability and potential data loss, so be warned.
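A minimal sketch of that approach, assuming you control how the dind daemon is started (the /work path and the container name dind are illustrative):
# outer host: start dind with the file bind mounted in
docker run --privileged -d --name dind -v "$(pwd)/api_tests.conf":/work/api_tests.conf docker:dind
# inside dind: /work/api_tests.conf now exists on the inner daemon's file system
docker exec dind docker run --rm -v /work/api_tests.conf:/usr/config/api_tests.conf local.artifactory.swg-devops.com/api-automation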
Hope this helps.

Docker mount volume to reflect container files in host

The use case is that I want to download an image that contains python code files. Assume the image does not have any text editor installed. So I want to mount a drive on the host, so that files in the container show up in this host mount and I can use the different editors installed on my host to update the code. Saved changes should be reflected in the image.
If I run the following:
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
the /host/empty/dir is still empty, and browsing the container dir also shows it as empty. What I want is the file contents of /container/folder/with/code/files to show up in /host/empty/dir
Sébastien Helbert's answer is correct. But there is still a way to do this in 2 steps.
First run the container to extract the files:
docker run --rm -it myimage
In another terminal, type this command to copy what you want from the container.
docker cp <container_id>:/container/folder/with/code/files /host/empty/dir
Now stop the container. It will be deleted (--rm) when stopped.
Now if you run your original command, it will work as expected.
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
There is another way to access the files from within the container without copying them, but it's very cumbersome.
Your /host/empty/dir is always empty because the volume binding replaces (overrides) the container folder with your empty host folder. You cannot do the opposite, that is, have a container folder replace your host folder.
However, there is a workaround: manually copy the files from your container folder to your host folder before using them, as you have suggested.
For example:
run your docker image with a volume mapping between your host folder and a temp folder: docker run -v /host/empty/dir:/some-temp-folder -it myimage
copy your /container/folder/with/code/files content into /some-temp-folder to fill your host folder with your container folder's content
run your container with a volume mapping on /host/empty/dir, but now this folder is no longer empty: docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
Note that steps 1 & 2 may be replaced by: Copying files from Docker container to host
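Putting the two steps together, you can even avoid starting the container by using docker create, which sets a container up without running it. A sketch, reusing the paths from the question:
id=$(docker create myimage)
docker cp "$id":/container/folder/with/code/files/. /host/empty/dir
docker rm "$id"
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage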

docker mount data from container to host

I created my own Dockerfile; during the build I copied my log4j.xml into /opt/wildfly/log.
Now I need to create the volume /mnt/data/logs/application:/opt/wildfly/log
I run the command
sudo docker run --name=myapp -v /mnt/data/logs/application:/opt/wildfly/log -d -i -t application
But when I look in the docker container, the folder /opt/wildfly/log is empty. This folder should contain log4j.xml.
Thank you.
Maybe you should move it into another directory.
For example, move log4j.xml to /opt/wildfly/ and set the logging path to /opt/wildfly/log.
When you run the container, log4j.xml will not disappear.
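In the Dockerfile that might look like this (a sketch, assuming log4j.xml sits next to the Dockerfile):
COPY log4j.xml /opt/wildfly/log4j.xml
The bind mount then only covers /opt/wildfly/log, so the file is never shadowed.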
When you mount the data, the folder from your host "overrides" your mounted folder within the container.
Thus, there are a couple of options:
copy the log4j.xml into your local /mnt/data/logs/application folder and run the container as you did.
remove the -v /mnt/data/logs/application:/opt/wildfly/log and use the original log4j.xml that you added during the image build.
Please note that you can also mount only the file if you like (rather than the entire folder): -v /mnt/data/logs/application/log4j.xml:/opt/wildfly/log/log4j.xml. But it won't change the behavior: the file from your host will be mounted into the container, not in the opposite direction.
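For the first option, one way to seed the host folder with the file already baked into the image is docker create plus docker cp (a sketch, using the paths from the question):
id=$(docker create application)
docker cp "$id":/opt/wildfly/log/log4j.xml /mnt/data/logs/application/
docker rm "$id"
sudo docker run --name=myapp -v /mnt/data/logs/application:/opt/wildfly/log -d -i -t application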

Docker add files to VOLUME

I have a Dockerfile which copies some files into the container and after that creates a VOLUME.
...
ADD src/ /var/www/html/
VOLUME /var/www/html/files
...
In the src folder is a files folder, and in this files folder are some files I need to have copied to the VOLUME the first time the container gets started.
I thought the first time the container gets created it uses the content of the original dir specified in the volume but this is not the case.
So how can I get the files into this folder?
Do I need to create an extra folder and copy it with a runscript (I hope not)?
Whatever you put in your Dockerfile is just evaluated at build time (and not when you are creating a new container).
If you want to make files from the host available in your container, use a data volume:
docker run -v /host_dir:/container_dir ...
In case you just want to copy files from the host to a container as a one-off operation you can use:
docker cp /host_dir mycontainer:/container_dir
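If you want the volume populated automatically the first time the container starts, a common pattern is to bake the files into a separate seed directory and copy them from an entrypoint script. A sketch (the files-seed directory and script name are hypothetical):
#!/bin/sh
# entrypoint.sh: seed the volume if it is empty, then hand off to the main command
if [ -z "$(ls -A /var/www/html/files)" ]; then
  cp -a /var/www/html/files-seed/. /var/www/html/files/
fi
exec "$@"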
The issue is with your ADD statement. Also, you might be misunderstanding how volumes are accessed. Compare your efforts with the demo below:
# alpine, or your favorite tiny image
FROM alpine
ADD src/files /var/www/html/files
VOLUME /var/www/html/files
Build an image called 'dataimg':
docker build -t dataimg .
Use the dataimg image to create a data container named 'datacon':
docker run --name datacon dataimg /bin/cat
Mount the volume from datacon in your nginx container:
docker run --volumes-from datacon nginx ls -la /var/www/html/files
And you'll see that the listing of /var/www/html/files reflects the contents of src/files.
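Note that named volumes behave the way the question expected: when an empty named volume is first mounted over a path that already has content in the image, Docker copies that content into the volume. A sketch (the volume name webfiles is illustrative):
docker volume create webfiles
docker run --rm -v webfiles:/var/www/html/files dataimg ls -la /var/www/html/files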

How to mount current directory as read-only but still allow changes inside the container?

I have a situation where:
I want to mount a directory ~/tmp/mycode to /mycode read-only
I want to be able to edit the files in the directory, so I can't just run -v /my/local/path/tmp/mycode:/mycode
I want it to not persist changes on the host filesystem though, so I can't mount it read/write
~/tmp/mycode is rather large
Basically I want to be able to edit the files in the mounted volume but not have those changes persisted.
My current workflow is to create a dummy container using a dockerfile:
ADD . /mycode
and then execute that container.
However, as the repository grows, this step takes longer and longer to perform, because the only way I can think of is to make a complete copy of ~/tmp/mycode in order to be able to manipulate the files in the container.
I've also thought about mounting the directory and copying it inside the container and committing that container, but that has the same issue.
Is there a way to run a docker container to allow file edits without persisting them on the host short of copying the whole directory?
I am using the latest docker for mac, currently Version 17.03.1-ce-mac5 (16048).
This is fairly trivial to do with docker and overlay:
docker run --name myenv --privileged -v /my/local/path/tmp/mycode:/mnt/rocode:ro -it ubuntu /bin/bash
docker exec -d myenv /sbin/mount -t overlay overlay -o lowerdir=/mnt/rocode,upperdir=/mycode,workdir=/mnt/code-workdir /mycode
This should mount the code from your directory read only and create the overlay inside the container so that /mnt/rocode is read only, but /mycode is writable.
Make sure that your kernel is 3.18+ and that you have overlay in your /proc/filesystems.
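A fuller sketch of the same idea, with the writable layer kept in its own directory (the names /mnt/upper and /mnt/code-workdir are illustrative; upperdir and workdir must exist and live on the same file system):
docker run --name myenv --privileged -d -v /my/local/path/tmp/mycode:/mnt/rocode:ro -it ubuntu /bin/bash
docker exec myenv mkdir -p /mycode /mnt/upper /mnt/code-workdir
docker exec myenv mount -t overlay overlay -o lowerdir=/mnt/rocode,upperdir=/mnt/upper,workdir=/mnt/code-workdir /mycode
Writes to /mycode land in /mnt/upper inside the container and never touch the host directory.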
