My Docker image has several files in its /tmp directory.
Example
/tmp # ls -al
total 4684
drwxrwxrwt 1 root root 4096 May 19 07:09 .
drwxr-xr-x 1 root root 4096 May 19 08:13 ..
-rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg
-rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg
-rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg
-rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg
-rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg
-rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg
-rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.png
I run the image as a container with the following command:
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
The expected behavior is that if I run ls -al /home/media/simple_dir2,
the files show up.
But the actual behavior is that nothing exists in /home/media/simple_dir2.
On the other hand, if I run the same image without the volume option:
sudo docker run -d simple_backup
And enter that container using:
sudo docker exec -it <simple_backup container id> /bin/sh
ls -al /tmp
Then the files exist.
TL;DR
I want to mount a volume (directory) on the host and have it filled with the files that are inside the Docker image.
My env
Ubuntu 18.04
Docker 19.03.6
From: https://docs.docker.com/storage/bind-mounts/
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
"So, if host os's directory is empty, then container's directory will override is that right?"
Nope. It doesn't compare the two to see which one has files; the host directory always overrides the one in the container, no matter what.
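If the goal from the TL;DR is to end up with the image's files visible on the host, one workaround is a named volume instead of a bind mount: when an empty named volume is mounted over a non-empty container directory, Docker pre-populates the volume with that directory's contents. A minimal sketch (the volume name simple_vol is arbitrary):
sudo docker run -v simple_vol:/tmp -d simple_backup
# the files now live under Docker's volume storage on the host:
sudo ls -al "$(sudo docker volume inspect -f '{{ .Mountpoint }}' simple_vol)"
Note the files end up under /var/lib/docker/volumes/ rather than in /home/media/simple_dir2; to copy them into an arbitrary host directory, docker cp from a running container is the usual route.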
Related
I have been working with a Docker container for a few months now and was unaware of the fact that everything I was creating (folders, files) was created under the root user of my container. Now I want to reclaim ownership of all of these files so that I have the permissions to move or write to them while I am outside of the container.
To make it a bit more concrete/clear: I have a local user named johndoe and a local folder at /home/johndoe/pythoncodes, owned by johndoe. I mount this local folder into my Docker container when I run the command
docker run -v /home/johndoe/pythoncodes:/home/johndoe/pythoncodes ...
Then, from inside my container, I created a folder at /home/johndoe/pythoncodes/ProjectRepo. ProjectRepo is now owned by the container's root user, so when I leave the container and go back to being johndoe, I no longer have the permissions to do anything with this folder (e.g. if I try to run git init I get a permission error that prevents creating the .git folder).
I have seen answers on how to create a container that logs me in as my local user, and have gotten this to work as well by using the adduser flag, but this only seems helpful for creating new files and doesn't help me with all of the files that have already been created as root.
but this only seems helpful for creating new files and doesn't help me with all of these files that have already been created as root
You could directly use chown from within the Docker container to change the ownership of these files. For the user and group names to resolve, you will need to mount the two files which contain that information, /etc/passwd and /etc/group (below, :ro means 'read-only').
$ docker run -idt -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro --name try ubuntu:16.04 /bin/bash
$ docker exec -it try mkdir -p /tmp/abc/newfolder
$ cd abc
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 root root 4.0K Jul 7 16:43 newfolder
$ sudo chown -R atg:atg .
[sudo] password for atg:
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 atg atg 4.0K Jul 7 16:43 newfolder
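Alternatively, the chown can be run from inside the container, where the default user is root, so no sudo is needed on the host. A sketch reusing the try container from above (the numeric form needs no /etc/passwd or /etc/group mounts at all; 1000 is an assumed UID/GID, use whatever id -u prints on the host):
docker exec try chown -R atg:atg /tmp/abc
# or by numeric UID:GID, with no name lookup needed:
docker exec try chown -R 1000:1000 /tmp/abc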
I have built a Docker image using this Dockerfile:
--Dockerfile
FROM scratch
ADD archlinux.tar /
ENV LANG=en_US.UTF-8
CMD ["/usr/bin/bash"]
--building the docker image:
docker build -t archlinux/base .
then checking the images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
then go into the overlay2 folder and check what is there:
root# cd /var/lib/docker/overlay2
# ls -al
0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
Now I try to run:
$ docker run -it archlinux/base
Now check the /var/lib/docker/overlay2 folder:
# ls -al
total 24
drwx------ 6 root root 4096 Mar 3 15:58 .
drwx--x--x 15 simha users 4096 Mar 3 07:25 ..
drwx------ 3 root root 4096 Mar 3 16:01 0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
drwx------ 4 root root 4096 Mar 3 16:01 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3
drwx------ 4 root root 4096 Mar 3 15:58 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3-init
drwx------ 2 root root 4096 Mar 3 15:58 l
Now I see more folders.
Why was there only one folder before the run, and several more in overlay2 afterwards?
If I check the images using the docker command, it shows the same as before:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
How can I understand images and their layers in overlay2?
First, note that the contents of the /var/lib/docker/ directory are Docker's internal soup and should not be messed with.
In that directory, the overlay2 directory stores the layers that make up your Docker images and containers. What is important to remember is that overlay2 is a filesystem using union mounts. In short, it merges several directories into one. So when using the filesystem you might see one folder, but under the hood there are more. This is how Docker makes layers.
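You can see which directories are merged for a given container with docker inspect; the GraphDriver section lists the read-only image layers (LowerDir), the container's writable layer (UpperDir), and the combined view (MergedDir). A sketch, run against any running container ID:
docker inspect -f '{{ json .GraphDriver.Data }}' <container-id>
# typical fields in the output:
#   LowerDir:  the <id>-init/diff directory plus the image layers' diff directories
#   UpperDir:  <id>/diff   (the container's writable layer)
#   MergedDir: <id>/merged (the unified view the container sees)
#   WorkDir:   <id>/work   (overlayfs scratch space)
This also accounts for the extra folders that appeared after docker run: one for the container's writable layer, one -init layer for it, and the l directory holding shortened symlinks to the layer IDs.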
Using Docker, I want to run an app as a daemon:
docker run -v $(pwd)/:/src -dit --name DOCKER_NAME my-app
And then execute a Python script from the mounted drive:
docker exec -w /src DOCKER_NAME python my_script.py
This Python script generates some files and figures that I would later want to use. However, I have an issue: the files generated from within the Docker app have different ownership than my outer environment.
[2D] drwxrwxr-x 5 jenkins_slave jenkins_slave 4096 Mar 21 10:47 .
[2D] drwxrwxr-x 24 jenkins_slave jenkins_slave 4096 Mar 21 10:46 ..
[2D] drwxrwxr-x 2 jenkins_slave jenkins_slave 4096 Mar 21 10:46 my_script.py
[2D] -rw-r--r-- 1 root root 268607 Mar 21 10:46 spaider_2d_0_000.png
[2D] -rw-r--r-- 1 root root 271945 Mar 21 10:46 spaider_2d_0_001.png
[2D] -rw-r--r-- 1 root root 283299 Mar 21 10:46 spaider_2d_0_010.png
In the above example, the last three files were generated from within the Docker container.
Can I in any way specify that the Docker app should run with the same credentials as the outer environment, and/or that the generated files should have certain permissions?
Use Docker's -u/--user option to set the user and group the container runs as.
For example, if I would like to run the container not by root but by myself, I can do the following:
user=$(id -u)
group=$(cut -d: -f3 < <(getent group $(whoami)))
docker run -it -u "$user:$group" <IMAGE_NAME> <COMMAND>
Inside the container you will find that the user ID has changed to the one from the host.
$ whoami
whoami: unknown uid 1000
Yes, the username shows as unknown, but I guess you will not bother with it. You are doing this to set the correct permissions, not to get a nicely displayed name, right?
P.S., Docs here: https://docs.docker.com/engine/reference/run/#user
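Applied to the commands in the question, it would look roughly like this (a sketch reusing the my-app image and DOCKER_NAME from above; id -g is a shorter way to get the primary group ID):
docker run -v $(pwd)/:/src -dit -u "$(id -u):$(id -g)" --name DOCKER_NAME my-app
docker exec -w /src DOCKER_NAME python my_script.py
Files the script writes under /src should now show your own user and group in ls -al on the host instead of root.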
Similar to Copying files from host to Docker container, except docker cp doesn't seem to work for multiple files
$ docker cp data/a.txt sandbox_web_1:/usr/src/app/data/
works fine, but
$ docker cp data/*txt sandbox_web_1:/usr/src/app/data/
docker: "cp" requires 2 arguments.
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
Use '-' as the source to read a tar archive from stdin
and extract it to a directory destination in a container.
Use '-' as the destination to stream a tar archive of a
container source to stdout.
Using docker 1.11.1 on Ubuntu 14.04x64
There is a proposal for docker cp to support wildcards (7710), but it is not implemented yet.
So that leaves you with bash scripting, using docker cp for each file:
for f in data/*txt; do docker cp "$f" sandbox_web_1:/usr/src/app/data/; done
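Another option that avoids the loop is the tar-over-stdin form shown in the usage text above: pack the matching files into a tar stream and let docker cp extract it inside the container (a sketch; the destination directory must already exist in the container):
tar -cf - data/*txt | docker cp - sandbox_web_1:/usr/src/app/
Because the archive members keep their data/ prefix, the files end up in /usr/src/app/data/.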
The following command copies the whole data directory, including its contents, to your desired destination:
docker cp data/ sandbox_web_1:/usr/src/app/
Tested on Docker version 1.12.1. I haven't found any changes to the cp command in that release.
I am on Docker version 18.09 and found that I was able to copy all the files from my current local directory to the container's root by running:
docker cp ./ container_name:/
docker cp works perfectly fine when we are inside the directory whose contents we need to copy in bulk into the container. Apparently, the asterisk wildcard (*) is not supported for copying multiple files with the docker cp command.
This copied the contents, not the directory itself.
The Docker version is given below for reference, although docker cp has not changed much since this was posted.
[root@stnm001 hadoop-configs]# docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
[root@stnm001 hadoop-configs]# docker cp ./ "xyz-downloader:/etc/hadoop/conf"
[root@stnm001 hadoop-configs]# ls
capacity-scheduler.xml container-executor.cfg dfs.exclude hadoop-metrics2.properties hadoop-policy.xml hdfs-site.xml slaves ssl-server.xml yarn.exclude
configuration.xsl core-site.xml hadoop-env.sh hadoop-metrics.properties hbase-site.xml log4j.properties ssl-client.xml yarn-env.sh yarn-site.xml
[root@stnm001 hadoop-configs]# docker exec -it xyz-downloader bash
[root@xyz-downloader /]# cd /etc/hadoop/conf/
[root@xyz-downloader conf]# ls -ltrh
total 100K
-rw-r--r-- 1 root root 318 May 20 06:57 container-executor.cfg
-rw-r--r-- 1 root root 1.4K May 20 06:57 configuration.xsl
-rw-r--r-- 1 root root 3.6K May 20 06:57 capacity-scheduler.xml
-rw-r--r-- 1 root root 1.8K May 20 06:57 hadoop-metrics2.properties
-rw-r--r-- 1 root root 3.6K May 20 06:57 hadoop-env.sh
-rw-r--r-- 1 root root 0 May 20 06:57 dfs.exclude
-rw-r--r-- 1 root root 1.4K May 20 06:57 core-site.xml
-rw-r--r-- 1 root root 5.2K May 20 06:57 hdfs-site.xml
-rw-r--r-- 1 root root 9.1K May 20 06:57 hadoop-policy.xml
-rw-r--r-- 1 root root 2.5K May 20 06:57 hadoop-metrics.properties
-rw-r--r-- 1 root root 891 May 20 06:57 ssl-client.xml
-rw-r--r-- 1 root root 102 May 20 06:57 slaves
-rw-r--r-- 1 root root 15K May 20 06:57 log4j.properties
-rw-r--r-- 1 root root 4.7K May 20 06:57 yarn-env.sh
-rw-r--r-- 1 root root 891 May 20 06:57 ssl-server.xml
-rw-r--r-- 1 root root 11K May 20 06:57 yarn-site.xml
-rw-r--r-- 1 root root 0 May 20 06:57 yarn.exclude
-rw-r--r-- 1 root root 2.7K May 20 06:58 hbase-site.xml
[root@xyz-downloader conf]#
Mixing @Aaron's answer with the fact that () creates a subshell in bash, I was able to accomplish this via:
(cd data && docker cp ./ sandbox_web_1:/usr/src/app/)
It takes advantage of the fact that ./ is about the only argument that makes docker cp copy multiple files.
First time docker user here, I'm using this image: https://github.com/dgraziotin/osx-docker-lamp
I want to make Apache in that container use a configuration file from the host system. How do I do that?
I know I can use nsenter, but I think my changes will get deleted when the container is turned off.
Thank you
The best solution is to use a volume.
docker pull dgraziotin/lamp
You first need to copy /etc/apache2/ from the container to the current directory on the host. Then you can do this:
cd ~
mkdir conf
docker run -i -t --rm -v ~/conf:/tmp/conf dgraziotin/lamp:latest /bin/bash
Inside the container:
ls /tmp/conf
cd /etc/apache2/
tar -cf /tmp/conf/apache-conf.tar *
exit
Back on the host:
cd conf
tar -xf apache-conf.tar
cd ..
# alter your configuration in this file and save
vi conf/apache2.conf
# run your container in daemon mode
docker run -d -p 9180:80 --name web-01 -v ~/conf:/etc/apache2 dgraziotin/lamp:latest
docker ps
To list the conf content in the container, use:
docker exec web-01 ls -lAt /etc/apache2/
total 72
-rw-r--r-- 1 root root 1779 Jul 17 20:24 envvars
drwxr-xr-x 2 root root 4096 Apr 10 11:46 mods-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:45 sites-available
-rw-r--r-- 1 root root 7136 Apr 10 11:45 apache2.conf
drwxr-xr-x 2 root root 4096 Apr 10 11:45 mods-available
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 sites-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-available
-rw-r--r-- 1 root root 320 Jan 7 2014 ports.conf
-rw-r--r-- 1 root root 31063 Jan 3 2014 magic
Use docker exec web-01 cat /etc/apache2/apache2.conf to view a file's content inside the container.
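Because ~/conf is bind-mounted over /etc/apache2, later edits made on the host take effect after a container restart. A sketch (whether apache2ctl is present in this image is an assumption):
# edit conf/apache2.conf on the host, then:
docker restart web-01
# optionally check the configuration syntax inside the container:
docker exec web-01 apache2ctl configtest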
Open the web page (host port 9180) to test your environment.
I hope this helps.
You should use a Dockerfile to generate a new image containing your desired configuration. For example:
FROM dgraziotin/lamp
COPY my-config-file /some/configuration/file
This assumes that there is a file my-config-file located in the same directory as the Dockerfile. Then run:
docker build -t myimage .
And once the build completes you will have an image named myimage available locally.
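You can then run a container from it the same way as the base image, for example (the port mapping and container name are just examples, reusing the 9180:80 convention from the earlier answer):
docker run -d -p 9180:80 --name web-02 myimage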