I'm trying to understand some weird behavior I'm seeing with Docker. I want to run a container that writes cached data to a volume mounted from the host, which I can reuse in future runs of the container and also read from the host.
On the host, I see that my user has the user id 1000:
# This is on the host
$ id
uid=1000(juan-munoz) gid=1000(juan-munoz) groups=1000(juan-munoz)...
I'm running the container without any special flags for the user so it runs as root:
# This is on the container
$ id
uid=0(root) gid=0(root) groups=0(root)
In the container, there is also already a user with id=1000:
# The image is provided by AWS and apparently it includes a user with this ID
$ id -nu 1000
ec2-user
I have mounted a directory with -v /some/local/directory:/var/mounted. Locally, this directory is owned by my user (id=1000):
# On the host
ls -ld /some/local/directory
drwx------ 2 juan-munoz juan-munoz 4.0K Jun 21 16:21 /some/local/directory
In the container, if I check the directory, I see that it's owned by root. This is the first part I don't understand.
# ls -ld /var/mounted
drwx------ 2 root root 4096 Jun 22 03:36 /var/mounted
Why root? I would have expected that with the mount the user id would be respected.
If I then try to change the owner to 1000, this happens:
# Inside the container
$ chown -R 1000:1000 /var/mounted
ls -ld /var/mounted
drwx------ 2 ec2-user 1000 4096 Jun 22 03:36 /var/mounted
That looks good to me, but when I then look at what happened on the host, I see the following:
# On the host
ls -ld /some/local/directory
drwx------ 2 100999 100999 4.0K Jun 21 20:36 /some/local/directory
So either the host or the container has a messed-up owner: if I chown 1000:1000 on the host, the container sees the directory as owned by root; if I chown 1000:1000 in the container, the host sees it as owned by 100999.
What am I doing wrong here?
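One check that might narrow this down (a guess on my part, not verified on this machine): comparing the container's UID map with the host's subordinate UID ranges should reveal whether some kind of user-namespace remapping is in play.
# Inside the container: columns are container-UID, host-UID, range
$ cat /proc/self/uid_map
# On the host: subordinate UID ranges a remapping daemon would draw from
$ cat /etc/subuid
# On the host: security options should mention userns if remapping is enabled
$ docker info --format '{{.SecurityOptions}}'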
Repro steps
$ mkdir $HOME/testing
$ docker run -it --name=ubuntu-test --entrypoint="/bin/bash" --rm -v $HOME/testing:$HOME/testing ubuntu:latest
# Inside the container
$ cd /home/juan-munoz/testing
$ touch file.txt
root@b98b7a5445e3:/home/juan-munoz/testing# ls -l
total 0
-rw-r--r-- 1 root root 0 Jun 30 23:52 file.txt
# Outside the container
$ ls -l $HOME/testing
total 0
-rw-r--r-- 1 juan-munoz juan-munoz 0 Jun 30 16:52 file.txt
Observed behavior:
On some computers, from outside the container, the file is owned by the local user. On others, it's owned by root.
Expected behavior:
To be consistent across computers
I encountered the same issue on Ubuntu 22.04 with Docker Desktop installed. After uninstalling Docker Desktop and reinstalling Docker Engine, things work the way I want them to: the ownership of mounted directories is consistent between the host and the container.
Here is my use case: I want to start an Ubuntu 20.04 container and use it as the compiling environment for my application. The Dockerfile is:
FROM ubuntu:20.04
ARG USER=docker
ARG UID=1000
ARG GID=1000
# create a new user with the same UID & GID but no password
RUN groupadd --gid ${GID} ${USER} && \
useradd --create-home ${USER} --uid=${UID} --gid=${GID} --groups root && \
passwd --delete ${USER}
# add the user to the sudo group and set the sudo group to no password
RUN apt update && \
apt install -y sudo && \
adduser ${USER} sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# prevent tzdata from asking for configuration
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt -y install tzdata
RUN apt install -y build-essential git cmake
# set the default user when entering the container
USER ${UID}:${GID}
WORKDIR /home/${USER}
The building script:
#!/bin/bash
set -e
# UID is read-only in bash, so use differently named shell variables
USER_ID=$(id -u)
GROUP_ID=$(id -g)
docker build --build-arg USER="$USER" \
--build-arg UID="$USER_ID" \
--build-arg GID="$GROUP_ID" \
--tag "ubuntu-env" \
--file ./Dockerfile \
--progress=plain \
.
Since I use it as the compiling environment, the whole home directory is mounted for convenience:
#!/bin/bash
set -e
docker run -it \
--name "build-env" \
--user "${USER}" \
--workdir "${PWD}" \
--env TERM=xterm-256color \
--volume="$HOME":"$HOME" \
--detach \
"ubuntu-env" \
/bin/bash
Somehow, with Docker Desktop installed, the owner of the mounted home directory is always root rather than the host user. After it is uninstalled, the mounted directory gets the expected owner in the container, i.e. the host user.
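In case it helps: if you are unsure which engine the CLI is actually talking to, listing the Docker contexts makes it obvious, since Docker Desktop registers its own context next to the classic Engine socket. The desktop-linux name mentioned below is what Docker Desktop typically creates, so treat it as an assumption for your install:
# Docker Desktop usually shows a desktop-linux context; default is the system Engine
docker context ls
# switch back to the system-wide Docker Engine socket if needed
docker context use default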
Related
I'm building a Docker container with apache2 inside, but I'm having issues with permissions and I don't know how to solve them...
If I run the container with no --user specification, it runs fine, but I want to be able to externally assign it to a user and limit that user to reading and writing only a particular directory (the one I map in with -v).
However, when I run the container with --user set to that user, the external permissions seem correct, but internally apache2 then goes bang, saying it can't bind to port 80, and other internal writes fail as well.
How do I map users, the way I map ports or volumes? What I want to achieve is that, externally, the container only has the permissions of user X in the outer system, but internally it's root, running with the root user id and so on.
One possible way to achieve "externally it only has the permissions of user X in the outer system, but internally it's root" is to use sudo.
Create an 'internal' user whose UID is the same as the 'external' user's:
FROM alpine
RUN apk add --no-cache apache2 sudo \
&& adduser -S newuser -s /bin/ash -D -H -u 1000 \
&& echo "newuser ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/newuser \
&& chmod 0440 /etc/sudoers.d/newuser
USER newuser
EXPOSE 80
ENTRYPOINT ["bin/ash","-c","sudo httpd -DFOREGROUND"]
Example: on the host, there's a directory dedicated to a user:
drwx------ 2 ec2-user ec2-user 24 Aug 29 11:53 html
The directory can be mounted as usual:
docker build -t myapache2 .
docker run -it -v ${PWD}/html:/html -p 8080:80 myapache2
docker exec -t 31c6cc627813 ls -l /html
total 4
-rwx------ 1 ec2-user 1000 58 Aug 29 03:53 index.html
curl localhost:8080
<html><body><h1>It works!</h1></body></html>
Alpine is used here, but you can of course use any distro that suits your needs.
NOTE: httpd site & folder permissions are another topic and are not covered here.
I'm trying to start a new container from the Ubuntu 18.04 Docker image. I do the following:
pull the docker image
docker pull ubuntu:18.04
create a new container
docker run -ti -v $(pwd):/home/shared --name ubuntu_test ubuntu:18.04
and then log out.
start the created container
docker start ubuntu_test
log in as the root user, update the OS, and install vim
docker exec -ti ubuntu_test /bin/bash, then apt update && apt install -y vim
then log out.
log in as non-root user
docker exec -ti -u daemon ubuntu_test /bin/bash
Then I found that I have no permission to create new files or new folders.
I do not want to log in as root user since there could be some problems with mpirun.
Is there any solution for this problem? Thank you for any help.
It is not that you (the non-root user) don't have permission to read or write. It is that everything on the system (files/folders) belongs to the root user, and no other user can modify anything by default.
You can create a new user, as well as a home folder for that user, when you are building the image, and then the user will be able to modify things in its home directory (standard Linux behavior).
Example Dockerfile
FROM ubuntu
RUN groupadd --gid 1000 someuser \
&& useradd --uid 1000 --gid someuser --shell /bin/bash --create-home someuser
test with
docker build -t utest .
docker container run -it -u someuser utest /bin/bash
cd /home/someuser
touch myfile
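To double-check the ownership, listing with numeric IDs avoids any name/UID ambiguity; roughly the following output is expected (date and size will differ):
ls -ln /home/someuser/myfile
# -rw-r--r-- 1 1000 1000 0 ...   (uid/gid 1000 = someuser)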
If you need to put some other folders under that user's control besides its home, you can use chown -R someuser:someuser <folder>, which recursively changes ownership of the specified folder and everything in it to the new user.
Example: changing ownership of /etc folder
FROM ubuntu
RUN groupadd --gid 1000 someuser \
&& useradd --uid 1000 --gid someuser --shell /bin/bash --create-home someuser
RUN chown -R someuser:someuser /etc
Is there any way to mount a named volume as a non-root user? I am trying to avoid having to run a chown in each Dockerfile, but I need the mount to be writable by a non-root user so it can hold the artifacts created by a build in the image.
This is what I'm trying
docker run --rm -it -v /home/bob/dev/:/src/dev -v builds:/mnt/build --name build hilikus/build /bin/bash
but for the second mount I get
[user#42f237282128 ~]$ ll /mnt
total 4
drwxr-xr-x 2 root root 4096 Sep 18 19:29 build
My other mount (/src/dev/) is owned by user, not by root, so it gives me what I need; however, I haven't been able to do the same with the named volume.
The named volume initializes to the contents of your image at that location, so you need to set the permissions inside your Dockerfile:
$ cat df.vf-uid
FROM busybox
RUN mkdir -p /data && echo "hello world" > /data/hello && chown -R 1000 /data
$ docker build -t test-vf -f df.vf-uid .
Sending build context to Docker daemon 23.06 MB
Step 1 : FROM busybox
---> 2b8fd9751c4c
Step 2 : RUN mkdir -p /data && echo "hello world" > /data/hello && chown -R 1000 /data
---> Using cache
---> 41390b132940
Successfully built 41390b132940
$ docker run -v test-vol:/data --rm -it test-vf ls -alR /data
/data:
total 12
drwxr-xr-x 2 1000 root 4096 Sep 19 15:26 .
drwxr-xr-x 19 root root 4096 Sep 19 15:26 ..
-rw-r--r-- 1 1000 root 12 Aug 22 11:43 hello
If you use the new --mount syntax instead of the old -v/--volume syntax, it is supposedly possible to assign a uid to the volume's contents via docker volume create somename --opt o=uid=1000 or something similar.
See https://docs.docker.com/engine/reference/commandline/volume_create/#driver-specific-options
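As a rough, untested sketch of what that could look like with the local driver (the option names mirror mount(8); the tmpfs type/device is used here because the o=uid option only takes effect when an explicit filesystem type is given):
docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=uid=1000,gid=1000 \
  builds
docker run --rm -it -v builds:/mnt/build hilikus/build ls -ld /mnt/build
Keep in mind that a tmpfs-backed volume lives in memory and is emptied once nothing has it mounted, so it may not suit build artifacts that have to survive between runs.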
I haven't fully tested this running as non-root or with the dockremap dynamic user via the userns-remap option, but hope to soon.
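For completeness, user namespace remapping is a daemon-wide setting rather than a per-volume one; on a stock install it would be enabled roughly like this (followed by a daemon restart), with the subordinate ranges taken from /etc/subuid and /etc/subgid:
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}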
I have tried this post and it did NOT help.
I have created a jenkins user and added it to the docker group.
I have also switched the user in the Dockerfile (see below).
I started the container as following
docker run -u jenkins -d -t -p 8080:8080 -v /var/jenkins:/jenkins -P docker-registry:5000/bar/helloworld:001
The container starts fine, but when I look at the processes, this is what I have:
root 13575 1 1 09:34 ? 00:05:56 /usr/bin/docker daemon -H fd://
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
The first one is the daemon, so I guess it's OK for it to be root.
But the second one is showing root, even though I switched to the jenkins user by issuing sudo su jenkins. I started the container as the jenkins user. Why does this process belong to root?
Here is my dockerfile
#copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]
EDIT2
I am certain the container is running. I can attach to the container.
I can also browse the Jenkins web UI, which is only possible if the container started without errors (Jenkins runs inside the container).
Here is my command inside the container
ps -ef | grep java
jenkins 1 0 7 19:29 ? 00:00:28 java -jar /opt/jenkins.war
ls -l /jenkins
drwxr-xr-x 2 jenkins jenkins 4096 Jan 11 18:54 jobs
But from the host file system, I see that the newly created "jobs" directory shows up as owned by user "admin":
ls -l /var/jenkins/
drwxr-xr-x 2 admin admin 4096 Jan 11 10:54 jobs
Inside the container, the jenkins process (the war) is started by the "jenkins" user. Once Jenkins starts, it writes to the host file system as the "admin" user.
Here is my entire Dockerfile (NOTE: I don't use the one from here)
FROM centos:7
RUN yum install -y sudo
RUN yum install -y -q unzip
RUN yum install -y -q telnet
RUN yum install -y -q wget
RUN yum install -y -q git
ENV mvn_version 3.2.2
# get maven
RUN wget --no-verbose -O /tmp/apache-maven-$mvn_version.tar.gz http://archive.apache.org/dist/maven/maven-3/$mvn_version/binaries/apache-maven-$mvn_version-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-$mvn_version.tar.gz" | md5sum -c
# install maven
RUN tar xzf /tmp/apache-maven-$mvn_version.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-$mvn_version /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-$mvn_version.tar.gz
ENV MAVEN_HOME /opt/maven
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
#RUN useradd jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
#RUN chmod -R 700 /home/jenkins
#USER jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
USER root
RUN chown -R jenkins:jenkins .m2
USER jenkins
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]
The second process
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
is NOT the process for your jenkins container but an internal process of the Docker engine to manage the network.
If, using the ps command, you cannot find the process which is supposed to run in your docker container, that means your docker container isn't running.
To ease figuring this out, start your container with the following command (adding --name test):
docker run --name test -u jenkins -d -t -p 8080:8080 -v /var/foo:/foo -P docker-registry:5000/bar/helloworld:001
Then type docker ps, you should see your container running. If not, type docker ps -a and you should see with which exit code it crashed.
If you need to know why it crashed, display its logs with docker logs test.
To look for the Jenkins process that runs from the official Jenkins docker image, use the following command:
ps aux | grep java
EDIT
why do the files seem to be owned by admin from the docker host's point of view?
In your docker image, the jenkins user has UID 1000. You can easily verify this with the following command: docker run --rm -u jenkins --entrypoint /bin/id docker-registry:5000/bar/helloworld:001
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
On your docker host, UID 1000 is for the admin user. You can verify this with id admin which in your case shows:
uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)
The users which are available in a Docker container are not the ones from the docker host. However it might happen by coincidence that they have the same UID. This is why the ls -l command run on the docker host will tell you the files are owned by the admin user.
In fact the files are owned by the user of UID 1000 which happens to be named admin on the docker host and jenkins on your docker image.
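If in doubt, listing with numeric IDs on the host removes the naming ambiguity entirely; given the paths from the question, something like the following is expected:
ls -ln /var/jenkins/
# drwxr-xr-x 2 1000 1000 4096 Jan 11 10:54 jobs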
I have a Docker container with a PHP application in it
I have a share volume, for example
/home/me/dev/site <=> /var/www/site
If I write something on my host, it is synced with the container
if I launch
sudo docker exec test touch /var/www/site/test.txt
It works
But if my server tries to create a file as www-data, it doesn't work because of the permissions.
Is there a way to give www-data access to my shared volumes?
I am using boot2docker
(Bind-mounted) volumes in Docker will maintain the permissions that are set on
the Docker host itself. You can use this to set the permissions on those
files and directories before using them in the container.
Some background;
Permissions in Linux are based on user and group ids ('uid' / 'gid'). Even
though you see a user- and group name as owner, those names aren't actually
important in Linux, they are only there to make it easier for you to see who's the owner of a file (they are looked up from the /etc/passwd file).
You can set any uid/gid on a file; a user doesn't have to exist when setting those permissions. For example;
touch foobar && sudo chown 1234:5678 foobar && ls -la foobar
# UID and GID are set to 1234 / 5678, even though there's no such user
-rw-rw-r-- 1 1234 5678 0 Mar 25 19:14 foobar
Checking permissions (inside and outside a container)
As mentioned above, Docker maintains ownership of the host when using
a volume. This example shows that permissions and ownership in the volume are the
same outside and inside a container;
# (First create a dummy site)
mkdir -p volume-example/site && cd volume-example
echo "<html><body>hello world</body></html>" > site/index.html
# Permissions on the Docker host;
ls -n site
# total 4
# -rw-rw-r-- 1 1002 1002 38 Mar 25 19:15 index.html
# And, permissions inside a nginx container, using it as volume;
sudo docker run --rm -v $(pwd)/site:/var/www nginx ls -n /var/www
# total 4
# -rw-rw-r-- 1 1002 1002 38 Mar 25 19:15 index.html
Setting the permissions
As explained, a user doesn't have to exist in order to use them, so even if
we don't have a www-data user on the Docker host, we can still set the correct
permissions if we know the "uid" and "gid" of that user inside the container;
Let's see what the uid and gid of the www-data user is inside the container;
sudo docker run --rm nginx id www-data
# uid=33(www-data) gid=33(www-data) groups=33(www-data)
First check the state before changing the permissions. This time we
run the nginx container as user www-data;
sudo docker run \
--rm \
--volume $(pwd)/site:/var/www \
--user www-data nginx touch /var/www/can-i-write.txt
# touch: cannot touch `/var/www/can-i-write.txt': Permission denied
Next, set the permissions on the local directory, and see if we are able to write;
sudo chown -R 33:33 site
sudo docker run \
--rm \
--volume $(pwd)/site:/var/www \
--user www-data nginx touch /var/www/can-i-write.txt
Success!
Add the following lines to your Dockerfile and rebuild your image:
RUN usermod -u 1000 www-data
RUN usermod -G staff www-data
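Depending on the image, directories that were populated for the old UID 33 may also need their ownership updated to match the new UID; a hedged extra line along the same lines (the /var/www path is an assumption based on the question):
RUN chown -R www-data:www-data /var/www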