Pass along user credentials in Docker

Using Docker, I want to run an app as a daemon:
docker run -v $(pwd)/:/src -dit --name DOCKER_NAME my-app
And then execute a Python script from the mounted drive:
docker exec -w /src DOCKER_NAME python my_script.py
This Python script generates some files and figures that I later want to use. However, the files generated from within the container have different permissions than those in my outer environment:
drwxrwxr-x 5 jenkins_slave jenkins_slave 4096 Mar 21 10:47 .
drwxrwxr-x 24 jenkins_slave jenkins_slave 4096 Mar 21 10:46 ..
drwxrwxr-x 2 jenkins_slave jenkins_slave 4096 Mar 21 10:46 my_script.py
-rw-r--r-- 1 root root 268607 Mar 21 10:46 spaider_2d_0_000.png
-rw-r--r-- 1 root root 271945 Mar 21 10:46 spaider_2d_0_001.png
-rw-r--r-- 1 root root 283299 Mar 21 10:46 spaider_2d_0_010.png
In the listing above, the last three files were generated from within the Docker mount.
Can I somehow specify that the container should run with the same credentials as the host environment, and/or that the generated files should get certain permissions?

Use Docker's -u/--user option to set the user and group that the container runs as.
For example, to run the container as myself rather than as root:
user=$(id -u)
group=$(id -g)   # primary group ID of the current user
docker run -it -u "$user:$group" <IMAGE_NAME> <COMMAND>
Inside the container you will find that the user ID has changed to match the one on the host.
$ whoami
whoami: unknown uid 1000
Yes, the username shows as unknown, but I guess you will not bother with it: you are doing this to set the correct permissions, not to get a nicely displayed name, right?
P.S., Docs here: https://docs.docker.com/engine/reference/run/#user
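Applied to the commands from the question (a sketch; DOCKER_NAME and my-app are taken from above), the files generated by the script will then be owned by your host user:
docker run -v "$(pwd)":/src -dit -u "$(id -u):$(id -g)" --name DOCKER_NAME my-app
docker exec -w /src DOCKER_NAME python my_script.py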

Related

Docker: Change permission of all files previously created as root within the container to local user

I have been working with a Docker container for a few months now and was unaware that everything I was creating (folders, files) was being created under the root user of my container. Now I want to reclaim ownership over all of these files so that I have the permissions to move or write to them while I am outside of the container.
To make it a bit more concrete: I have a local user named johndoe and a local folder at /home/johndoe/pythoncodes, owned by johndoe. I mount this local folder into my Docker container when I run the command
docker run -v /home/johndoe/pythoncodes:/home/johndoe/pythoncodes ...
Then, inside the container, I created a folder at /home/johndoe/pythoncodes/ProjectRepo. ProjectRepo is now owned by the container's root, so when I leave the container and go back to being the johndoe user, I no longer have permission to do anything with this folder (e.g. if I try to run git init I get a permission error that prevents the creation of the .git folder).
I have seen answers on how to create a container that logs me in as my local user, and have gotten this to work as well by using the adduser flag, but this only seems helpful for creating new files and doesn't help me with all of the files that have already been created as root.
"but this only seems helpful for creating new files and doesn't help me with all of these files that have been already created as root"
You can use chown directly from within the Docker container to change the ownership of files on these bind mounts. For the user and group names to resolve inside the container, you will need to mount the two files that contain that information for your user, /etc/passwd and /etc/group (below, :ro means 'read-only'); the directory being fixed must of course be bind-mounted as well (here ./abc on the host is mounted at /tmp/abc).
$ docker run -idt -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -v "$(pwd)/abc":/tmp/abc --name try ubuntu:16.04 /bin/bash
$ docker exec -it try mkdir -p /tmp/abc/newfolder
$ cd abc    # back on the host, in the directory bind-mounted above
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 root root 4.0K Jul 7 16:43 newfolder
$ sudo chown -R atg:atg .
[sudo] password for atg:
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 atg atg 4.0K Jul 7 16:43 newfolder
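For files already created as root, this also works without sudo on the host: run chown inside the container as root instead (a sketch, reusing the try container from above; the name atg resolves because the host's /etc/passwd and /etc/group are mounted):
$ docker exec -u root try chown -R atg:atg /tmp/abc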

Files inside a docker image disappear when mounting a volume

My Docker image has several files in its /tmp directory.
Example
/tmp # ls -al
total 4684
drwxrwxrwt 1 root root 4096 May 19 07:09 .
drwxr-xr-x 1 root root 4096 May 19 08:13 ..
-rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg
-rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg
-rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg
-rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg
-rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg
-rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg
-rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.png
I run this image as a container with:
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
I expected that running ls -al /home/media/simple_dir2 would then show the files, but in fact nothing exists in /home/media/simple_dir2.
On the other hand, if I run the same image without the volume option such as:
sudo docker run -d simple_backup
And enter that container using:
sudo docker exec -it <simple_backup container id> /bin/sh
ls -al /tmp
Then the files exist.
TL;DR
I want to mount a volume (directory) on the host, and have it filled with the files which are inside of the docker image.
My env
Ubuntu 18.04
Docker 19.03.6
From: https://docs.docker.com/storage/bind-mounts/
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
"So, if host os's directory is empty, then container's directory will override is that right?"
Nope. It doesn't compare them to see which one has files; the folder on the container is always overridden by the one on the host, no matter what.
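If the goal is to end up with the image's /tmp files in the host directory, one workaround is to copy them out of a container first and then bind-mount the populated directory (a sketch; the container name tmp_copy is made up here, paths and image name are from the question):
sudo docker run -d --name tmp_copy simple_backup
sudo docker cp tmp_copy:/tmp/. /home/media/simple_dir2/
sudo docker rm -f tmp_copy
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
Alternatively, use a named volume instead of a bind mount: an empty named volume is pre-populated with the contents of the container directory it shadows on first use, which bind mounts never are.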

Docker - Can mount an NFS share into a container but not a sub-directory of it

I have an NFS share with the following properties:
Mounted on my host on /nfs/external_disk
Owner user is test_user with UID 1234
Group is test_group with GID 2222
Permissions are 750
I have a small Dockerfile with the following content:
ARG tag=lts
FROM jenkins/jenkins:${tag}

USER root

# Create a new user and new group that match what is on the host.
ARG username=test_user
ARG groupname=test_group
ARG uid=1234
ARG gid=2222

RUN groupadd -g ${gid} ${groupname} && \
    mkdir -p /users && \
    useradd -l -m -u ${uid} -g ${groupname} -s /bin/bash -d /users/${username} ${username}

USER ${username}
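For reference, the image can be built with the defaults above, or the IDs can be overridden at build time to match a different host user (a sketch; custom_jenkins is the image name used in the question):
docker build -t custom_jenkins .
docker build -t custom_jenkins --build-arg uid=$(id -u) --build-arg gid=$(id -g) .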
After building the image (named custom_jenkins), when I run the following command the container starts properly and I see the original Jenkins home content copied to the share.
docker run -td --rm -v /nfs/external_disk:/var/jenkins_home custom_jenkins
However if I want to mount a sub-directory of the NFS share, say ${NFS_SHARE}/jenkins_home, then I get an error:
docker run -td --rm -v /nfs/external_disk/jenkins_home:/var/jenkins_home custom_jenkins
docker: Error response from daemon: error while creating mount source path '/nfs/external_disk/jenkins_home': mkdir /nfs/external_disk/jenkins_home: permission denied.
Now even if I create the sub-directory myself before starting the container, I still get the same error, even when I set the permissions of the sub-directory to 777.
Note that I am running as test_user, which has the same UID/GID as in the container and actually owns the NFS share.
I have a feeling that when Docker attempts to create the sub-directory, it does so as some different user (e.g. a "docker" user), which causes it to fail because that user has no access inside the share.
Can anyone help? Thanks in advance.
I tried to reproduce this and it works just fine for me, so perhaps I am missing some constraint; hope this helps anyway. Note, at step 6, the owner and the group of the file that I created from the container. This might answer one of your questions.
Step 1: I created an NFS share somewhere on my LAN
Step 2: I mounted the share on the machine that's running the docker engine
sudo mount 192.168.0.xxx:/i-data/b4024d5b/nfs/NFS /mnt/nsa320/
neo@neo-desktop:nsa320$ mount | grep NFS
192.168.0.xxx:/i-data/b4024d5b/nfs/NFS on /mnt/nsa320 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.xxx,mountvers=3,mountport=3775,mountproto=udp,local_lock=none,addr=192.168.0.xxx)
Step 3: I created some sample files and a sub-directory:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/
total 12
drwxrwxrwx 3 root root 4096 Jul 21 22:54 .
drwxr-xr-x 3 root root 4096 Jul 21 22:41 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:45 dummyFile
-rw-r--r-- 1 root root 0 Jul 21 22:53 fileCreatedFromContainer << THIS WAS CREATED FROM A CONTAINER THAT WAS NOT LAUNCHED WITH THE --user OPTION
drwxr-xr-x 2 neo neo 4096 Jul 21 22:54 subfolder
Step 4: Launched a dummy container and mounted the sub-directory (1000 is the UID of the user neo on my OS):
docker run -d -v /mnt/nsa320/subfolder:/var/externalMount --user 1000 alpine tail -f /dev/null
Step 5: Connected to the container to check the mount (I can read and write in the subfolder located on the NFS):
neo@neo-desktop:nsa320$ docker exec -ti ded1dc79773e sh
/ $ ls /var/externalMount/
fileInSubfolder
/ $ touch /var/externalMount/fileInSubfolderCreatedFromContainer
Step 6: Back on the host, checking to whom the file that I created from the container belongs:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/subfolder/
total 8
drwxr-xr-x 2 neo neo 4096 Jul 21 23:23 .
drwxrwxrwx 3 root root 4096 Jul 21 22:54 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:54 fileInSubfolder
-rw-r--r-- 1 neo root 0 Jul 21 23:23 fileInSubfolderCreatedFromContainer
Maybe off-topic: whoami executed in the container returns just the UID:
$ whoami
whoami: unknown uid 1000
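One possible explanation for the original permission error (an assumption, not confirmed in this thread): the Docker daemon itself runs as root, and with the default root_squash option on an NFS export root is mapped to nobody, so the daemon cannot traverse a 750 directory while validating the mount source path. Exporting the share with no_root_squash, or relaxing the directory permissions, would then avoid the mkdir error. A sketch of an /etc/exports line on the NFS server (the path and subnet here are made up):
/srv/nfs/external_disk 192.168.0.0/24(rw,sync,no_root_squash)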

Docker run with "-v" creates another shared directory

I have a strange problem running a Docker container.
It works fine if I run:
docker run -it -v /home/drleo/pythonCourses:/home/pythonCurses /redpmorg/python-courses
But if I run the container with the publish option, Docker will create a new folder in my /home/drleo directory with the SAME name, pythonCourses, owned by root but obviously empty:
docker run -it -p 127.0.0.1:8080:8080 -v /home/drleo/pythonCourses:/home/pythonCurses /redpmorg/python-courses
-rw-r--r-- 1 drleo drleo 675 May 6 2016 .profile
drwxr-xr-x 2 drleo drleo 4096 May 6 2016 Public
drwxr-xr-x 2 root root 4096 Feb 16 13:08 pyhtonCourses
drwxrwxr-x 2 drleo drleo 4096 Feb 16 13:08 pythonCourses
-rwxrwxr-x 1 drleo drleo 71 Jan 20 22:35 reset-network
The question is: why? Thanks!
You seem to have a typo somewhere: python != pyhton.
Double-check your command history.
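Some background on why this happens: with the classic -v syntax, Docker silently creates a missing host source path as a root-owned directory, which is exactly how a misspelled path turns into a new, empty folder owned by root. The newer --mount syntax refuses to start the container and reports an error when the source path does not exist, so typos surface immediately. A sketch using the paths from the question (image name as in the question, minus the stray leading slash):
docker run -it -p 127.0.0.1:8080:8080 --mount type=bind,source=/home/drleo/pythonCourses,target=/home/pythonCurses redpmorg/python-courses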

Edit apache configuration in docker

First time docker user here, I'm using this image: https://github.com/dgraziotin/osx-docker-lamp
I want to make the Apache in that container use a configuration file from the host system. How do I do that?
I know I can use nsenter, but I think my changes will get deleted when the container is turned off.
Thank you
The best solution is to use a volume.
docker pull dgraziotin/lamp
You need to copy /etc/apache2/ from the container to a directory on the host. Then you can do this:
cd ~
mkdir conf
docker run -i -t --rm -v ~/conf:/tmp/conf dgraziotin/lamp:latest /bin/bash
Inside the container, run:
ls /tmp/conf
cd /etc/apache2/
tar -cf /tmp/conf/apache-conf.tar *
exit
On the host computer:
cd conf
tar -xf apache-conf.tar
cd ..
# alter your configuration in this file and save
vi conf/apache2.conf
# run your container : daemon mode
docker run -d -p 9180:80 --name web-01 -v ~/conf:/etc/apache2 dgraziotin/lamp:latest
docker ps
To list the configuration contents inside the container, use:
docker exec web-01 ls -lAt /etc/apache2/
total 72
-rw-r--r-- 1 root root 1779 Jul 17 20:24 envvars
drwxr-xr-x 2 root root 4096 Apr 10 11:46 mods-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:45 sites-available
-rw-r--r-- 1 root root 7136 Apr 10 11:45 apache2.conf
drwxr-xr-x 2 root root 4096 Apr 10 11:45 mods-available
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 sites-enabled
drwxr-xr-x 2 root root 4096 Apr 10 11:44 conf-available
-rw-r--r-- 1 root root 320 Jan 7 2014 ports.conf
-rw-r--r-- 1 root root 31063 Jan 3 2014 magic
Use docker exec web-01 cat /etc/apache2/apache2.conf to view a file's content inside the container.
Open the web page (http://localhost:9180/, given the -p 9180:80 mapping above) to test your environment.
I hope this helps.
You should use a Dockerfile to generate a new image containing your desired configuration. For example:
FROM dgraziotin/lamp
COPY my-config-file /some/configuration/file
This assumes that there is a file my-config-file located in the same directory as the Dockerfile. Then run:
docker build -t myimage .
And once the build completes you will have an image named myimage available locally.
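A usage sketch (the port mapping mirrors the earlier answer; myimage is the tag from the build step, web-02 is a made-up container name):
docker run -d -p 9180:80 --name web-02 myimage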
