File ownership after docker cp - docker

How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I create files owned by user IDs 1005 and 1006, and those owners are carried into the container's user namespace unchanged. The -a option seems to make no difference when I copy a file into a container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$

You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>

In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case where you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file which is owned by the current user and has default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container sets the owner and group to my user's UID and preserves the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, such that docker exec is not an option (which can only operate on running containers).
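For example, the same pipeline works against a container that has merely been created and not yet started, because docker cp does not require a running container. A minimal sketch, assuming a hypothetical container named web based on the same image:
docker create --name web nginx:alpine
# Repack foo.bar with the desired mode and ownership and stream it in before the container starts
tar -cf - foo.bar --mode u=rw,go=r --owner root --group root | docker cp - web:/
docker start web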

Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile

In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile


def main():
    # Stream the container's /outputs directory as a tar archive to stdout.
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)


def exclude_root(tarinfos):
    """Skip the archive's root entry and force user read/write on everything else."""
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo


main()
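If you only need the extracted files to end up owned by the invoking user, a plain shell pipeline is a rough equivalent of the copy-out direction (a sketch, reusing the same sandbox1 container; running tar as an unprivileged user creates the files as you, and --no-same-permissions applies your umask instead of the archived modes):
sudo docker cp sandbox1:/outputs/. - | tar -xf - --no-same-owner --no-same-permissions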

Related

Entrypoint can't execute command

I don't understand why my entrypoint can't execute my command. My entrypoint looks like this:
#!/bin/bash
...
exec "$@"
My script exists, and I can run it when I go inside my container:
drwxrwxrwx 1 root root 512 mars 25 09:07 .
drwxrwxrwx 1 root root 512 mars 25 09:07 ..
-rwxrwxrwx 1 root root 128 mars 25 10:05 entrypoint.sh
-rwxrwxrwx 1 root root 481 mars 25 09:07 init-dev.sh
-rwxrwxrwx 1 root root 419 mars 25 10:02 migration.sh
root@0c0062fbf916:/app/scripts# pwd
/app/scripts
And when I run my container: docker run my_container "scripts/migration.sh"
I got this error:
scripts/entrypoint.sh: line 8: /app/scripts/migration.sh: No such file or directory
I have the same error if I just run ls -all
docker run my_container "ls -all"
exec: ls -all: not found
I switch between Linux and Windows, so I also checked the line endings (LF vs. CRLF), but changing them made no difference.
Your first command doesn't work because your scripts are in /app/scripts (note the plural), but you're trying to run script/migration.sh. Additionally, it's not clear what the current working directory is in your container: even if you wrote scripts/migration.sh, that would only work if either (a) your Dockerfile contains WORKDIR /app or (b) your docker run command line includes -w /app. You would be better off using a fully qualified path:
docker run my_container /app/scripts/migration.sh
Your second example (docker run my_container "ls -all") is over-quoted and would never work. You need to write docker run my_container ls -all, except that -all isn't actually an option that ls accepts, although it will work by virtue of being the combination of the -a and -l options.
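Putting the pieces together, a minimal sketch of an entrypoint that simply forwards its arguments, plus working invocations (assuming the scripts live under /app/scripts as shown above):
#!/bin/bash
# entrypoint.sh: hand whatever command was passed to `docker run` straight to exec
set -e
exec "$@"
With that in place, docker run my_container /app/scripts/migration.sh and docker run my_container ls -al both work without extra quoting.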

docker volume permission issue

I am trying to launch an app, deployed using WildFly 18 in a Docker container, which internally connects to my host's PostgreSQL database installation. During the container creation process, I am also mapping my container's WildFly log directory to a local ("host") directory via a named volume, created using the docker volume create command.
The issue is, I get a "permission denied" error when the app runs and the container tries to create log files inside the mapped volume.
My Dockerfile contents are as below:
FROM jboss/base-jdk:8
ENV WILDFLY_VERSION 18.0.1.Final
ENV WILDFLY_SHA1=ef0372589a0f08c36b15360fe7291721a7e3f7d9
ENV JBOSS_HOME /opt/jboss/wildfly
USER root
RUN cd $HOME \
&& curl -O https://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz \
&& sha1sum wildfly-$WILDFLY_VERSION.tar.gz | grep $WILDFLY_SHA1 \
&& tar xf wildfly-$WILDFLY_VERSION.tar.gz \
&& mv $HOME/wildfly-$WILDFLY_VERSION $JBOSS_HOME \
&& rm wildfly-$WILDFLY_VERSION.tar.gz
COPY ./bin $JBOSS_HOME/bin
COPY ./standalone/configuration/* $JBOSS_HOME/standalone/configuration/
COPY ./modules/com $JBOSS_HOME/modules/com
COPY ./modules/system/layers/base/org/ $JBOSS_HOME/modules/system/layers/base/org/
COPY ./standalone/waffle_resource $JBOSS_HOME/standalone/waffle_resource
COPY ./standalone/waffle_resource/waffle.ear $JBOSS_HOME/standalone/deployments/
COPY ./standalone/waffle_resource/waffle-war.ear $JBOSS_HOME/standalone/deployments/
RUN chown -R jboss:jboss ${JBOSS_HOME} && chmod -R g+rw ${JBOSS_HOME}
ENV LAUNCH_JBOSS_IN_BACKGROUND true
USER jboss
EXPOSE 8989 9990
WORKDIR $JBOSS_HOME/bin
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
As you can see above, I am using the jboss user inside the container to kick off WildFly.
The commands used to create an image and run a container and also to create a volume are as below:
docker image build -t viaduct/wildfly .
docker volume create viaduct-wildfly-logs
docker run -d -v viaduct-wildfly-logs:/opt/jboss/wildfly/standalone/log --network=host \
-e "DB_DBNAME=dbname" \
-e "DB_PORT=5432" \
-e "DB_USER=xyz" \
-e "DB_PASS=" \
-e "DB_HOST=127.0.0.1" \
--name petes viaduct/wildfly
I verified the permissions within the container and in the local "host" directory created by the docker volume create command. Also, it's worth noting that I am running WildFly as the jboss user.
The container's permissions are as below:
[jboss@localhost /]$ ll /opt/jboss/wildfly/standalone/
total 4
drwxrwxr-x 1 jboss jboss 62 Sep 18 00:24 configuration
drwxr-xr-x 6 jboss jboss 84 Sep 18 00:23 data
drwxrwxr-x 1 jboss jboss 64 Sep 18 00:24 deployments
drwxrwxr-x 1 jboss jboss 17 Nov 15 2019 lib
*drwxr-xr-x 2 root root 6 Sep 17 23:48 log*
drwxrwxr-x 1 jboss jboss 4096 Sep 18 00:24 tmp
drwxrwxr-x 1 jboss jboss 98 Sep 18 00:23 waffle_resource
[jboss@localhost /]$ exit
and the local volume permissions are as below:
[root@localhost xyz]# cd /var/lib/docker/volumes/
[root@localhost volumes]# ll
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
The docker volume create command creates a directory on my local machine as below:
/var/lib/docker/volumes/viaduct-wildfly-logs/_data
and the default permissions for each subdirectory are as follows, which is definitely maintained for security reasons:
drwx--x--x 14 root root 182 Sep 14 09:32 docker
drwx------ 7 root root 285 Sep 18 11:48 volumes
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
To start with, please let me know whether my strategy is correct. Secondly, what is the best way to fix the permission issue?
You need to create a user with the same UID/GID, and grant that user permission on the host directory backing this volume.
The server runs as the jboss user, which has its UID/GID set to 1000 (see the image documentation).
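For instance, a minimal sketch of the host-side part (the volume path is the one shown in the question, and 1000 is the jboss UID/GID mentioned above):
docker volume create viaduct-wildfly-logs
# Make the volume's backing directory writable by UID/GID 1000 (the container's jboss user)
sudo chown -R 1000:1000 /var/lib/docker/volumes/viaduct-wildfly-logs/_data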

Docker - Can mount an NFS share into a container but not a sub-directory of it

I have an NFS share with the following properties:
Mounted on my host on /nfs/external_disk
Owner user is test_user with UID 1234
Group is test_group with GID 2222
Permissions are 750
I have a small Dockerfile with the following content
ARG tag=lts
from jenkins/jenkins:${tag}
user root
# Create a new user and new group that matches what is on the host.
ARG username=test_user
ARG groupname=test_group
ARG uid=1234
ARG gid=2222
RUN groupadd -g ${gid} ${groupname} && \
mkdir -p /users && \
useradd -l -m -u ${uid} -g ${groupname} -s /bin/bash -d /users/${username} ${username}
user ${username}
After building the image (named custom_jenkins) and running the following command, the container starts properly and I see the original Jenkins home content copied to the share.
docker run -td --rm -v /nfs/external_disk:/var/jenkins_home custom_jenkins
However if I want to mount a sub-directory of the NFS share, say ${NFS_SHARE}/jenkins_home, then I get an error:
docker run -td --rm -v /nfs/external_disk/jenkins_home:/var/jenkins_home custom_jenkins
docker: Error response from daemon: error while creating mount source path '/nfs/external_disk/jenkins_home': mkdir /nfs/external_disk/jenkins_home: permission denied.
Even if I create the sub-directory myself before starting the container, I still get the same error, and even when I set the permissions of the sub-directory to 777.
Note that I am running as test_user which has the same UID/GID as in the container and it actually owns the NFS share.
I have a feeling that when Docker attempts to create the sub-directory, it does so as some different user (e.g. the "docker" user), which causes the mkdir to fail since that user has no access inside the share.
Can anyone help? thanks in advance.
I tried to reproduce. Works just fine for me. Perhaps I am missing some constraint. Hope this helps anyway. Note at step 6 the owner and the group for the file that I created from the container. This might answer one of your questions.
Step 1: I created an NFS share somewhere in my LAN
Step 2: I mounted the share on the machine that's running the docker engine
sudo mount 192.168.0.xxx:/i-data/b4024d5b/nfs/NFS /mnt/nsa320/
neo@neo-desktop:nsa320$ mount | grep NFS
192.168.0.xxx:/i-data/b4024d5b/nfs/NFS on /mnt/nsa320 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.xxx,mountvers=3,mountport=3775,mountproto=udp,local_lock=none,addr=192.168.0.xxx)
Step 3: I created some sample files and a sub-directory:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/
total 12
drwxrwxrwx 3 root root 4096 Jul 21 22:54 .
drwxr-xr-x 3 root root 4096 Jul 21 22:41 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:45 dummyFile
-rw-r--r-- 1 root root 0 Jul 21 22:53 fileCreatedFromContainer << THIS WAS CREATED FROM A CONTAINER THAT WAS NOT LAUNCHED WITH THE --user OPTION
drwxr-xr-x 2 neo neo 4096 Jul 21 22:54 subfolder
Step 4: Launched a dummy container and mounted the sub-directory (1000 is the UID of the user neo in my OS):
docker run -d -v /mnt/nsa320/subfolder:/var/externalMount --user 1000 alpine tail -f /dev/null
Step 5: Connected to the container to check the mount (I can read and write in the subfolder located on the NFS):
neo@neo-desktop:nsa320$ docker exec -ti ded1dc79773e sh
/ $ ls /var/externalMount/
fileInSubfolder
/ $ touch /var/externalMount/fileInSubfolderCreatedFromContainer
Step 6: Back on the host, who does the file that I created from the container belong to?
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/subfolder/
total 8
drwxr-xr-x 2 neo neo 4096 Jul 21 23:23 .
drwxrwxrwx 3 root root 4096 Jul 21 22:54 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:54 fileInSubfolder
-rw-r--r-- 1 neo root 0 Jul 21 23:23 fileInSubfolderCreatedFromContainer
Maybe off-topic: whoami executed in the container cannot resolve the UID to a name:
$ whoami
whoami: unknown uid 1000

docker ADD --chown bug or feature?

I am having a problem adding a file to an image and setting ownership via --chown flag. Specifically, here is a dockerfile adding a simple text file:
FROM fedora:24
ARG user_name=slave
ARG user_uid=1000
ARG user_home=/home/$user_name/
RUN useradd -l -u ${user_uid} -ms /bin/bash $user_name
WORKDIR ${user_home}
USER ${user_name}
ADD --chown=1397765041:1397765041 test.txt ./
CMD ls -l
This results in the expected ownership of test.txt, as can be seen:
$ docker run --rm -it bm/tmp:latest
total 4
-rw-r--r-- 1 some_user 1397765041 6 Oct 21 20:00 test.txt
Cool. Now if I change test.txt to a tar file (for example boost_1_57_0.tar.bz2), and rebuild, this is what I get:
$ docker run --rm -it bm/tmp:latest
total 4
drwx------ 8 501 root 4096 Oct 31 2014 boost_1_57_0
Here is how I am building (probably doesn't matter, though):
docker build -t bm/tmp --build-arg user_name=some_user --build-arg user_uid=1397765041 .
As we can see, ownership is NOT as expected in this case. It seems the behavior of --chown differs from the two cases shown above. I know that ADD automatically extracts tars. I don't know how the ownership is being set in the case where the file is a tar file. Anyone?
Unfortunately, ADD --chown only works for regular files. ADD with a tarball uses the ownership and permissions listed inside the tarball.
Workarounds:
Run tar yourself with --owner/--owner-map/--group/--group-map (see the sketch below).
chown -R after the ADD.
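A minimal sketch of the first workaround, run on the build host before the ADD; it assumes a GNU tar recent enough (1.30+) to accept the NAME:UID form for --owner/--group, and reuses the UID from the question:
# Repack the sources so every entry carries the target owner; ADD then extracts with that ownership.
tar --create --bzip2 --file boost_1_57_0.tar.bz2 \
    --owner=some_user:1397765041 --group=some_user:1397765041 boost_1_57_0/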

Creating a docker container for Jupyter

I want to give a Docker container to my students such that they are able to conduct experiments. I thought I'd use the following Dockerfile:
FROM jupyter/datascience-notebook:latest
ADD ./config.json /home/jovyan/.jupyter/jupyter_notebook_config.json
ADD ./books /home/jovyan/work
So, the standard container will include a few notebooks I have created and stored in the books folder. I then build and run this container locally with
#!/usr/bin/env bash
docker build -t aaa .
docker run --rm -p "8888:8888" -v $(pwd)/books:/home/joyvan/work aaa
I build the container aaa and share the books folder with it again (although books has already been copied into the image at build time). I now open the container on port 8888. I can edit the files in the /home/joyvan/work folder, but this stuff is not getting transported back to the host. Something goes terribly wrong. Is it because I add the files during the docker build and then share them again via -v ...?
I have played with various options. I have added the local user to the users group. I have run chown on all files in books. All my files show up as root:root in the container. I am then jovyan in the container and do not have write access to those files. How would I make sure the files are owned by jovyan?
EDIT:
Some other elements :
tom@thomas-ThinkPad-T450s:~/babynames$ docker exec -it cranky_poincare /bin/bash
jovyan@5607ac2bcaae:~$ id
jovyan uid=1000(jovyan) gid=100(users) groups=100(users)
jovyan@5607ac2bcaae:~$ cd work/
jovyan@5607ac2bcaae:~/work$ ls
test.txt text2.txt
jovyan@5607ac2bcaae:~/work$ ls -ltr
total 4
-rw-rw-r-- 1 root root 5 Dec 12 19:05 test.txt
-rw-rw-r-- 1 root root 0 Dec 12 19:22 text2.txt
on the host:
tom@thomas-ThinkPad-T450s:~/babynames/books$ ls -ltr
total 4
-rw-rw-r-- 1 tom users 5 Dez 12 20:05 test.txt
-rw-rw-r-- 1 tom users 0 Dez 12 20:22 text2.txt
tom@thomas-ThinkPad-T450s:~/babynames/books$ id tom
uid=1001(tom) gid=1001(tom) groups=1001(tom),27(sudo),100(users),129(docker)
You can try:
FROM jupyter/datascience-notebook:latest
ADD ./config.json /home/jovyan/.jupyter/jupyter_notebook_config.json
ADD ./books /home/jovyan/work
RUN chown -R jovyan /home/jovyan/work
if that user already exists; with RUN you can execute arbitrary commands in your Dockerfile.
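Note that RUN chown only affects files baked into the image; anything arriving through the -v bind mount keeps its host ownership. A minimal sketch of fixing that on the host side instead (1000:100 are jovyan's UID and GID as shown in the id output above, and the path is the one from the question):
# Give jovyan (uid 1000, gid 100 "users") ownership of the bind-mounted folder
sudo chown -R 1000:100 ~/babynames/books
docker run --rm -p "8888:8888" -v "$(pwd)/books:/home/jovyan/work" aaa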
