Similar to Copying files from host to Docker container, except that docker cp doesn't seem to work for multiple files:
$ docker cp data/a.txt sandbox_web_1:/usr/src/app/data/
works fine, but
$ docker cp data/*txt sandbox_web_1:/usr/src/app/data/
docker: "cp" requires 2 arguments.
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
Use '-' as the source to read a tar archive from stdin
and extract it to a directory destination in a container.
Use '-' as the destination to stream a tar archive of a
container source to stdout.
Using Docker 1.11.1 on Ubuntu 14.04 x64.
There is a proposal for docker cp to support wildcards (issue 7710), but it is not implemented yet.
So that leaves you with bash scripting, using docker cp for each file:
for f in data/*txt; do docker cp "$f" sandbox_web_1:/usr/src/app/data/; done
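Alternatively, the usage text above notes that docker cp reads a tar archive from stdin when - is given as the source. A sketch using that (same paths as the question; tar stores the data/ prefix, so extracting under /usr/src/app/ puts the files in /usr/src/app/data/):
# stream the matching files as a tar archive into the container
tar -cf - data/*txt | docker cp - sandbox_web_1:/usr/src/app/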
The following command copies the whole data directory, including its contents, to your desired destination:
docker cp data/ sandbox_web_1:/usr/src/app/
Tested on Docker version 1.12.1; I haven't found any changes to the cp command in that release.
I am on Docker version 18.09 and found that I was able to copy all the files from my current local directory to the container's root by running:
docker cp ./ container_name:
docker cp works perfectly fine when we are inside the directory whose contents we need to copy in bulk into the container. Apparently, the asterisk wildcard (*) is not supported for copying multiple files with the docker cp command.
Note that this copies the directory's contents, not the directory itself.
The Docker version is given below for reference; there have not been many changes to docker cp since this post.
[root@stnm001 hadoop-configs]# docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
[root@stnm001 hadoop-configs]#
[root@stnm001 hadoop-configs]# docker cp ./ "xyz-downloader:/etc/hadoop/conf"
[root@stnm001 hadoop-configs]#
[root@stnm001 hadoop-configs]# ls
capacity-scheduler.xml container-executor.cfg dfs.exclude hadoop-metrics2.properties hadoop-policy.xml hdfs-site.xml slaves ssl-server.xml yarn.exclude
configuration.xsl core-site.xml hadoop-env.sh hadoop-metrics.properties hbase-site.xml log4j.properties ssl-client.xml yarn-env.sh yarn-site.xml
[root@stnm001 hadoop-configs]#
[root@stnm001 hadoop-configs]#
[root@stnm001 hadoop-configs]# docker exec -it xyz-downloader bash
[root@xyz-downloader /]#
[root@xyz-downloader /]#
[root@xyz-downloader /]# cd /etc/hadoop/conf/
[root@xyz-downloader conf]# ls -ltrh
total 100K
-rw-r--r-- 1 root root 318 May 20 06:57 container-executor.cfg
-rw-r--r-- 1 root root 1.4K May 20 06:57 configuration.xsl
-rw-r--r-- 1 root root 3.6K May 20 06:57 capacity-scheduler.xml
-rw-r--r-- 1 root root 1.8K May 20 06:57 hadoop-metrics2.properties
-rw-r--r-- 1 root root 3.6K May 20 06:57 hadoop-env.sh
-rw-r--r-- 1 root root 0 May 20 06:57 dfs.exclude
-rw-r--r-- 1 root root 1.4K May 20 06:57 core-site.xml
-rw-r--r-- 1 root root 5.2K May 20 06:57 hdfs-site.xml
-rw-r--r-- 1 root root 9.1K May 20 06:57 hadoop-policy.xml
-rw-r--r-- 1 root root 2.5K May 20 06:57 hadoop-metrics.properties
-rw-r--r-- 1 root root 891 May 20 06:57 ssl-client.xml
-rw-r--r-- 1 root root 102 May 20 06:57 slaves
-rw-r--r-- 1 root root 15K May 20 06:57 log4j.properties
-rw-r--r-- 1 root root 4.7K May 20 06:57 yarn-env.sh
-rw-r--r-- 1 root root 891 May 20 06:57 ssl-server.xml
-rw-r--r-- 1 root root 11K May 20 06:57 yarn-site.xml
-rw-r--r-- 1 root root 0 May 20 06:57 yarn.exclude
-rw-r--r-- 1 root root 2.7K May 20 06:58 hbase-site.xml
[root@xyz-downloader conf]#
Mixing @Aaron's answer with the fact that () gives a subshell in bash, I was able to accomplish this via:
(cd data && docker cp ./ sandbox_web_1:/usr/src/app/)
It takes advantage of the fact that ./ is about the only source argument that can pass a directory's contents to docker cp.
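Another variant avoids the subshell by using docker cp's path rules, under which a source path ending in /. means "the contents of the directory" (a small sketch, same container and paths as above):
# copy the contents of data/, not the directory itself
docker cp data/. sandbox_web_1:/usr/src/app/data/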
Related
I have a problem, that I cannot grasp at all. I'm running my Jenkins pipeline in a Docker container on the master node. Now I added another node and want to run the pipeline there as well.
However, using the same image I get different file permissions in the container:
### master
> docker image ls node:10.20.1-stretch
REPOSITORY TAG IMAGE ID CREATED SIZE
node 10.20.1-stretch c5f1efe092a0 13 days ago 912MB
> docker run --rm -ti -u 1000:1000 node:10.20.1-stretch ls -la /home/node
total 20
drwxr-xr-x 2 1000 1000 4096 May 15 20:31 .
drwxr-xr-x 3 0 0 4096 May 15 20:31 ..
-rw-r--r-- 1 1000 1000 220 May 15 2017 .bash_logout
-rw-r--r-- 1 1000 1000 3526 May 15 2017 .bashrc
-rw-r--r-- 1 1000 1000 675 May 15 2017 .profile
### node 1
> docker image ls node:10.20.1-stretch
REPOSITORY TAG IMAGE ID CREATED SIZE
node 10.20.1-stretch c5f1efe092a0 13 days ago 912MB
> docker run --rm -ti -u 1000:1000 node:10.20.1-stretch ls -la /home/node
total 20
drwxr-xr-x 2 0 0 4096 May 26 05:42 .
drwxr-xr-x 1 0 0 4096 May 26 05:42 ..
-rw-r--r-- 1 0 0 220 May 26 05:42 .bash_logout
-rw-r--r-- 1 0 0 3526 May 26 05:42 .bashrc
-rw-r--r-- 1 0 0 675 May 26 05:42 .profile
I observed a similar behavior for the /tmp directory, which has chmod 1777 on master and 1755 on node 1.
# master
> docker -v
Docker version 19.03.9, build 9d988398e7
> dockerd -v
Docker version 19.03.9, build 9d988398e7
# node 1
> docker -v
Docker version 19.03.10, build 9424aeaee9
> dockerd -v
Docker version 19.03.10, build 9424aeaee9
I assume the wrong behavior is on node 1, as the /home/node directory and all of its children are owned by root:root there, while the same directory is owned by node:node on the master. However, I have already upgraded the Docker version on node 1 from 19.03.8 to 19.03.10 and nothing changed.
Is there anything I don't understand about Docker containers? I have been working with them for a while, but have never observed such behavior.
I changed the storage driver from overlay2 to aufs. Now I have the correct permissions.
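For anyone comparing nodes: a quick way to check which storage driver each daemon uses, and a sketch of switching it. The daemon.json path below is the common default and may differ on your distro; note that changing the driver hides existing images and containers until you switch back.
# show the storage driver the daemon is using
docker info --format '{{.Driver}}'
# set it in /etc/docker/daemon.json, e.g. { "storage-driver": "aufs" },
# then restart the daemon
sudo systemctl restart docker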
My Docker image has several files inside its /tmp directory.
Example
/tmp # ls -al
total 4684
drwxrwxrwt 1 root root 4096 May 19 07:09 .
drwxr-xr-x 1 root root 4096 May 19 08:13 ..
-rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg
-rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg
-rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg
-rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg
-rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg
-rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg
-rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.png
When I run this image as a container:
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
The expected behavior is that if I run ls -al /home/media/simple_dir2,
the files show up.
The actual behavior is that nothing exists in /home/media/simple_dir2.
On the other hand, if I run the same image without the volume option such as:
sudo docker run -d simple_backup
And enter that container using:
sudo docker exec -it <simple_backup container id> /bin/sh
ls -al /tmp
Then the files exist.
TL;DR
I want to mount a volume (directory) on the host and have it filled with the files that are inside the Docker image.
My env
Ubuntu 18.04
Docker 19.03.6
From: https://docs.docker.com/storage/bind-mounts/
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
"So, if host os's directory is empty, then container's directory will override is that right?"
Nope, it doesn't compare them for which one has files; it just overrides the folder on the container with the one on the host no matter what.
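If the goal is to end up with the image's files on the host, two common workarounds (sketches, reusing the names from the question): a named volume, which unlike a bind mount is initialized from the image's content when the volume is empty, or docker cp out of a running container.
# named volume: populated from the image's /tmp on first mount
sudo docker run -v simple_vol:/tmp -d simple_backup
sudo docker volume inspect simple_vol
# or copy the files out into the host directory
sudo docker cp <simple_backup container id>:/tmp/. /home/media/simple_dir2/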
I have built a Docker image using this Dockerfile:
--Dockerfile
FROM scratch
ADD archlinux.tar /
ENV LANG=en_US.UTF-8
CMD ["/usr/bin/bash"]
--building the docker image:
docker build -t archlinux/base .
then checking the images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
Then I go into the overlay2 folder and check what happens:
root# cd /var/lib/docker/overlay2
# ls -al
0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
Now I try to run:
$ docker run -it archlinux/base
Now check the /var/lib/docker/overlay2 folder:
# ls -al
total 24
drwx------ 6 root root 4096 Mar 3 15:58 .
drwx--x--x 15 simha users 4096 Mar 3 07:25 ..
drwx------ 3 root root 4096 Mar 3 16:01 0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
drwx------ 4 root root 4096 Mar 3 16:01 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3
drwx------ 4 root root 4096 Mar 3 15:58 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3-init
drwx------ 2 root root 4096 Mar 3 15:58 l
Now I see more folders.
Why was there only one folder before the run, and several more in overlay2 afterwards?
Checking the images with the docker command shows the same as before:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
How can I understand images and their layers in overlay2?
First note that the contents of the /var/lib/docker/ directory are part of the internal soup of Docker and should not be messed with.
In that directory, the overlay2 directory stores the layers that constitute your Docker images and containers. What is important to remember is that overlay2 is a filesystem using union mounts: in short, it merges several folders into one. So when using the filesystem you might see one folder, but under the hood there are more. This is how Docker makes layers.
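To see which overlay2 directories back an image without digging through /var/lib/docker by hand, you can ask Docker directly (a sketch; on overlay2 setups the inspect output exposes these fields):
# prints LowerDir/UpperDir/MergedDir/WorkDir paths under /var/lib/docker/overlay2/
docker image inspect --format '{{json .GraphDriver.Data}}' archlinux/base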
I prepared a Docker image containing libraries needed for building my other projects. I want to have a directory /myLibs with libraries from different projects, e.g.:
/myLibs:
projectA
projectB
projectC
Dockerfile:
FROM my-base:1.0
VOLUME /myLibs
COPY projectA/bin/*.so* /myLibs/projectA/bin/
CMD /bin/bash
Built:
docker build -t my-libs:1.0 .
Then I want to update the libs in this image every time I rebuild projectA, so I prepared this Dockerfile:
# Parent image changed, because /myLibs/projectB and /myLibs/projectC should remain
FROM my-libs:1.0
VOLUME /myLibs
RUN ls -al /myLibs && rm -rf /myLibs/projectA && ls -al /myLibs
RUN ls -al /myLibs
COPY projectA/bin/*.so* /myLibs/projectA/bin/
CMD /bin/bash
As a result, I still have the old projectA libs in my volume:
Step 4 : RUN ls -al /myLibs && rm -rf /myLibs/projectA && ls -al /myLibs
---> Running in 1e3e25084e69
total 12
drwxr-xr-x 3 root root 4096 Jul 16 13:52 .
drwxr-xr-x 75 root root 4096 Jul 16 13:52 ..
drwxr-xr-x 4 root root 4096 Jul 16 13:51 projectA
total 8
drwxr-xr-x 2 root root 4096 Jul 16 13:52 .
drwxr-xr-x 75 root root 4096 Jul 16 13:52 ..
---> d5973da5965c
Removing intermediate container 1e3e25084e69
Step 5 : RUN ls -al /myLibs
---> Running in 1d93575b50c2
total 12
drwxr-xr-x 3 root root 4096 Jul 16 13:52 .
drwxr-xr-x 75 root root 4096 Jul 16 13:52 ..
drwxr-xr-x 4 root root 4096 Jul 16 13:51 projectA
---> 6d2a48a5b67b
How can I remove files from the volume?
If you want to change the files on rebuild, you probably don't want to do it in a volume. Volumes are generally for data you want to persist. Remember that volume mounting occurs when the container runs, after the image build, so what's probably happening is that the volume with the old data is mounted over any changes you make in the image (re)build.
What are you using /myLibs for? If they are read-only files you want to set up at build time, you might be better off not using a volume and making them part of the image. If you want to modify them, it's probably better to manage that after the build; there is no real reason to rebuild the image if you are just changing files in a networked volume.
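A minimal sketch of the build-time approach, assuming all three projects' binaries are available in the build context: drop the VOLUME instruction so the libs are ordinary image content that each rebuild replaces cleanly.
FROM my-base:1.0
# plain image content: replaced on every rebuild, no volume involved
COPY projectA/bin/*.so* /myLibs/projectA/bin/
COPY projectB/bin/*.so* /myLibs/projectB/bin/
COPY projectC/bin/*.so* /myLibs/projectC/bin/
CMD ["/bin/bash"]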
First-time Docker user here; I'm using this image: https://github.com/dgraziotin/osx-docker-lamp
I want to make the Apache in that container use a configuration file from the host system. How do I do that?
I know I can use nsenter, but I think my changes will get deleted when the container is turned off.
Thank you
The best solution is to use a volume.
docker pull dgraziotin/lamp
You need to copy /etc/apache2/ from the container to the current directory on the host computer. Then you can do this:
cd ~
mkdir conf
docker run -i -t --rm -v ~/conf:/tmp/conf dgraziotin/lamp:latest /bin/bash
In the container, do:
ls /tmp/conf
cd /etc/apache2/
tar -cf /tmp/conf/apache-conf.tar *
exit
On the host computer:
cd conf
tar -xf apache-conf.tar
cd ..
# alter your configuration in this file and save
vi conf/apache2.conf
# run your container : daemon mode
docker run -d -p 9180:80 --name web-01 -v ~/conf:/etc/apache2 dgraziotin/lamp:latest
docker ps
To list the conf content in the container, use:
docker exec web-01 ls -lAt /etc/apache2/
total 72
-rw-r--r-- 1 root root 1779 Jul 17 20:24 envvars
drwxr-xr-x 2 root root 4096 Apr 10 11:46 mods-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:45 sites-available
-rw-r--r-- 1 root root 7136 Apr 10 11:45 apache2.conf
drwxr-xr-x 2 root root 4096 Apr 10 11:45 mods-available
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 sites-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-available
-rw-r--r-- 1 root root 320 Jan 7 2014 ports.conf
-rw-r--r-- 1 root root 31063 Jan 3 2014 magic
Use docker exec web-01 cat /etc/apache2/apache2.conf to view the file content inside the container.
Open the web page to test your environment.
I hope this helps you.
You should use a Dockerfile to generate a new image containing your desired configuration. For example:
FROM dgraziotin/lamp
COPY my-config-file /some/configuration/file
This assumes that there is a file my-config-file located in the same directory as the Dockerfile. Then run:
docker build -t myimage .
And once the build completes you will have an image named myimage available locally.
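You can then run it in place of the original image, for example (port mapping borrowed from the earlier answer):
docker run -d -p 9180:80 --name web-01 myimage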