File permission in docker container with volume mount - docker

I'm trying to let a docker container access a letsencrypt certificate from the host file system.
I do not want to run the docker container as root, but rather as a user with very specific access rights.
Neither do I want to change the permissions of the certificate.
All I want, is for the given user, to have access to read the certificate inside the docker container.
The certificate has the following setup:
-rw-r----- 1 root cert-group
The user who's going to run the docker container, is in the cert-group:
uid=113(myuser) gid=117(myuser) groups=117(myuser),999(cert-group),998(docker)
This works as long as we're on the host - I am able to read the file as expected with the user "myuser".
Now I want to do this within a docker container with the certificate mounted as a volume.
I have done multiple test cases, but none with any luck.
A simple docker-compose file for testing:
version: '3.7'
services:
  test:
    image: alpine:latest
    volumes:
      - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro
    command: >
      sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
    user: "113:117"
    restart: "no"
This outputs a lot, but the most important lines are:
test_1 | -rw-r----- 1 root ping 3998 Jul 15 09:51 cert.pem
test_1 | cat: can't open '/cert.pem': Permission denied
test_1 | ping:x:999:
Here I assume that "ping" is a built-in group in the alpine image; however, I'm getting some mixed information about how this interacts with the host.
From this article https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf my takeaway is that there's a single kernel handling all permissions (the host), and therefore, if the same uid and gid are used, the permissions would carry over from the host. However, even though the running user is 113:117, which on the host is part of group 999, it still doesn't give me access to read the file.
Next I found this article https://medium.com/@nielssj/docker-volumes-and-file-system-permissions-772c1aee23ca, where especially this bullet point caught my attention:
The container OS enforces file permissions on all operations made in
the container runtime according to its own configuration. For example,
if a user A exists in both host and container, adding user A to group
B on the host will not allow user A to write to a directory owned by
group B inside the container unless group B is created inside the
container as well and user A is added to it.
This made me think that maybe a custom Dockerfile was needed to add the user inside docker and make the user part of group 999 (which is known as ping, as stated earlier):
FROM alpine:latest
RUN adduser -S --uid 113 -G ping myuser
USER myuser
Running this gives me the exact same result, now with myuser appended to passwd though:
test_1 | myuser:x:113:999:Linux User,,,:/home/myuser:/sbin/nologin
These are just a couple of the things I've tried. Another was syncing /etc/passwd and /etc/group into the container with volumes, as suggested in some other blog:
volumes:
  - /etc/passwd:/etc/passwd
  - /etc/group:/etc/group
This makes it visually look correct inside the container, but it doesn't change the end result - still permission denied.
Any help or pointers in the right direction would be really appreciated since I'm running out of ideas.

Docker containers do not know the uid/gid of the user running the container on the host. All requests to run containers go through the docker socket, and then to the docker engine that is often running as root, and no uid/gid's are passed in those API calls. The docker engine is just running the container as the user specified in the Dockerfile or as part of the container create command (in this case, from the docker-compose.yml).
Once inside the container, the mapping from uid/gid to names is done with the /etc/passwd and /etc/group file that is inside the container. Importantly, at the filesystem level, uid/gid values are not being mapped between the container and the host (with the exception of user namespaces, but if implemented properly, that would only make this problem worse). And all filesystem operations happen at the uid/gid level, not based on names. So when you do a host volume mount, the uid/gid's are passed directly through.
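You can see that pass-through directly by comparing numeric ids on both sides of the mount; a quick check along these lines (a sketch using the certificate path from the question):
# on the host: numeric uid/gid of the certificate
ls -n /etc/ssl/letsencrypt/cert.pem
# in a throwaway container: same numeric uid (0) and gid (999)
docker run --rm -v /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro alpine:latest ls -n /cert.pem
Only the names differ between the two, because ls -l resolves them against each side's own /etc/passwd and /etc/group.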
The issue you are encountering here is how you are telling the container to pick the uid/gid to run the container processes. By specifying user: "113:117" you have told the container to not only specify the uid (113), but also the gid (117) of the process. When that's done, none of the secondary groups from /etc/group are assigned to the user. To get those secondary groups assigned, you want to only specify the uid, user: "113", which will then lookup the group assignments from the /etc/passwd and /etc/group file inside the container. E.g.:
user: "113"
Unfortunately, the lookup for group membership is done by docker before any volumes are mounted, so you have the following scenario.
First, create an image with an example user assigned to a few groups:
$ cat df.users
FROM alpine:latest
RUN addgroup -g 4242 group1 \
 && addgroup -g 8888 group2 \
 && adduser -u 1000 -D -H test \
 && addgroup test group1 \
 && addgroup test group2
$ docker build -t test-users -f df.users .
...
Next, run that image, comparing the id on the host to the id inside the container:
$ id
uid=1000(bmitch) gid=1000(bmitch) groups=1000(bmitch),24(cdrom),25(floppy),...
$ docker run -it --rm -u bmitch -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
docker: Error response from daemon: unable to find user bmitch: no matching entries in passwd file.
Whoops, docker doesn't see the entry from /etc/passwd; let's try the test user we created in the image:
$ docker run -it --rm -u test -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
That works, and assigns the groups from the /etc/group file in the image, not the one we mounted. We can also see that uid works too:
$ docker run -it --rm -u 1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
As soon as we specify the gid, the secondary groups are gone:
$ docker run -it --rm -u 1000:1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch)
And if we run without overriding the /etc/passwd and /etc/group file, we can see the correct permissions:
$ docker run -it --rm -u test test-users:latest id
uid=1000(test) gid=1000(test) groups=4242(group1),8888(group2)
Likely the best option is to add a container user with the group membership matching the uid/gid values from the host. For host volumes, I've also solved this problem with a base image that dynamically adjusts the user or group inside the container to match the uid/gid of the file mounted in a volume. This is done as root, and then gosu is used to drop permissions back to the user. You can see that at sudo-bmitch/docker-base on github, specifically the fix-perms script that I would run as part of an entrypoint.
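A minimal sketch of that entrypoint pattern (not the actual fix-perms script; the user name app_user and the mount point /data are assumptions for illustration, and gosu must be installed in the image):
#!/bin/sh
# entrypoint.sh - starts as root, realigns the container user with the mounted volume, then drops privileges
set -e
VOL_UID=$(stat -c '%u' /data)        # uid that owns the mounted host directory
CUR_UID=$(id -u app_user)
if [ "$VOL_UID" != "$CUR_UID" ]; then
  usermod -u "$VOL_UID" -o app_user  # usermod requires the shadow package on alpine
fi
exec gosu app_user "$@"              # hand pid 1 to the real command as app_user
The real fix-perms script linked above also chowns files inside the image that still belong to the old uid; this sketch skips that step for brevity.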
Also, be aware that mounting /etc/passwd and /etc/group can break the permissions of other files within the container filesystem, and this user may end up with access inside the container that is not appropriate (e.g. membership in the "ping" group may grant access to files or commands that a normal user shouldn't have). This is why I tend to adjust the container user/group rather than completely replace these files.

Actually, your solution is not wrong. I did the same with a few differences.
This is my Dockerfile:
FROM alpine:latest
RUN addgroup -S cert-group -g 117 \
 && adduser -S --uid 113 -G cert-group myuser
USER myuser
And my docker-compose.yml:
version: '3.7'
services:
  test:
    build:
      dockerfile: ./Dockerfile
      context: .
    command: >
      sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
    volumes:
      - "/tmp/test.txt:/cert.pem:ro"
    restart: "no"
My '/tmp/test.txt' is assigned to 113:117.
IMHO, the problem is that your docker-compose.yml doesn't use your image. You should remove the image: line and add a build: section instead.

I have gone through the same issue today and luckily, the below solution helped me.
"Add :Z to your volumes mounts"
Reference: https://github.com/moby/moby/issues/41202
Note: Unfortunately, this is an issue only on CentOS; I didn't face any problem with Ubuntu.
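With docker run the flag is just appended to the other mount options, e.g. (a sketch using the certificate path from the first question; :Z asks docker to relabel the content for the container's SELinux context, and the lowercase :z variant is for mounts shared between containers):
docker run --rm -v /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro,Z alpine:latest cat /cert.pem
In a compose file the flag goes in the same place, at the end of the short volume syntax: - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro,Z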


Disable root login into the docker container [duplicate]

I am working on hardening our docker images, which I already have a bit of a weak understanding of. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script that is run needs to be as root since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems to be relevant, as the startup command differs from let's say tomcat. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged user before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
    chmod +x /scripts/*.sh && \
    chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
    useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
    chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /opt/scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su since su was not designed for containers and will leave a process running as the root pid. Make sure that you exec the call to gosu and that will eliminate anything running as root. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon, require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers. The user namespace offsets the uids used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities. (A daemon.json sketch follows this list.)
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket, since the socket doesn't include any user details in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
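For the user namespace option above, a minimal sketch of enabling it daemon-wide (this overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker
With "default", docker creates and uses a dockremap user for the uid/gid offset; remember the warning above about destroying containers and re-pulling images.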
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When docker is normally run from a single host, you can take some steps to harden the setup.
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start the docker container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the docker container with a script that you made and hardened, something like startserver:
#!/bin/bash
settings() {
  # Add mount dirs. The homedir in the docker will be different from the one on the host.
  mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
  usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
  usroptions="${usroptions} -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable like --env HOSTSERVER=${host} won't help with hardening; on another server one can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the docker container started. After the call to startserver, add exit to the .bashrc.
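The tail of such a user's ~/.bashrc could then look like this (a sketch; the path to startserver is an assumption):
# launch the hardened wrapper, then terminate the host session when the container exits
/usr/local/bin/startserver
exit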
Not sure if this works, but you can try it. Allow sudo access for the user/group with a limited execution command, where the sudo configuration only allows executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach command. Also validate that the user doesn't pass -u root; a sketch of such a wrapper follows the examples below. E.g.
sudo docker-cli exec -it containerid sh (failed)
sudo docker-cli exec -u root ... (failed)
sudo docker-cli exec -u mysql ... (Passed)
You can even limit which docker commands a user can run inside this shell script.
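A sketch of such a docker-cli wrapper (deliberately simple; the argument checks would need hardening before real use):
#!/bin/bash
# docker-cli: only allow exec/attach when an explicit non-root user is supplied
set -e
if [ "$1" = "exec" ] || [ "$1" = "attach" ]; then
  case "$*" in
    *"-u root"*|*"--user root"*|*"-u 0"*)
      echo "running as root inside the container is not allowed" >&2
      exit 1 ;;
  esac
  case "$*" in
    *"-u "*|*"--user"*) ;;   # a user was supplied, fall through
    *) echo "please specify a non-root user with -u" >&2; exit 1 ;;
  esac
fi
exec docker "$@"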

Bind mounts created using rootless docker have a weird uid on the host machine. How can I delete these folders?

I have the following docker-compose.yml file which creates a bind mount located in $HOME/test on the host system:
version: '3.8'
services:
  pg:
    image: postgres:13
    volumes:
      - $HOME/test:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pass
      - PGUSER=postgres
I bring up the container and inspect the permissions of the bind mount directory:
$ docker-compose up -d
$ ls -l ~
drwx------ 19 4688518 usertest 4096 Mar 11 17:06 test
The folder ~/test is created with a different uid in order to prevent accidental manipulation of this folder outside of the container. But what if I really do want to manipulate it? For example, if I try to delete the folder, I get a permission denied error as expected:
$ rm ~/test -rf
rm: cannot remove '/home/usertest/test': Permission denied
I suspect that I need to change uids using the newuidmap command somehow, but I'm not sure how to go about that.
How can I delete these folders?
But what if I really do want to manipulate it?
Using Docker, you can:
Run a command in the container as a specific user using the same UID (such as rm or sh), for example:
# Run shell session using your user with docker-compose
# You can then easily manipulate data
docker-compose exec -u 4688518 pg sh
# Run command directly with docker
# Docker container name may vary depending on your situation
# Use docker ps to see real container name
docker exec -it -u 4688518 stack_pg_1 rm -rf /var/lib/postgresql/data
Similar to the previous one, you can run a new container with:
# Will run sh by default
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox
# You can directly delete data with
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox rm -rf /tmp/test/*
This may be suitable if your pg container is stopped or deleted. The Docker image itself does not need to be the same as the one run by Docker Compose; you only need to specify the proper user UID.
Note: you may not be able to delete the folder using rm -rf /tmp/test, as user 4688518 may not have write permission on the /tmp folder to do so, hence the use of /tmp/test/*.
Use any of the above, but with the root user, such as -u 0 or -u root.
Without using Docker, you can effectively run the sudo command as suggested by the other answer, or even temporarily change the permissions of said folder and then change them back. However, from experience, when manipulating Docker-related data it's easier and less error-prone to use Docker itself.
Dealing with user ids in docker is tricky business because docker containers share the same kernel with the host operating system (at least on linux). Consequently, any files that the container creates in the bind mount with a given uid will have the same uid on the host system.
Whenever the uid used by the container (let's say it's 2222) is different from your own uid (or you don't have write access to files owned by 2222), you won't be able to delete the folder. The easy workaround is to run sudo rm -rf ~/test.
Edit: If the user does not have admin rights, you can still give them rights to modify the generated files like so.
# Create a directory that the users can write in.
mkdir workspace
# Change the owner to the group of users that should have access (3333).
sudo chown -R 2222:3333 workspace
# Give group write access.
sudo chmod -R g+w workspace
# Make sure that all users that should have write access are in group 3333.
Then you can run the container using
docker run --rm -u `id -u`:3333 -v `pwd`/workspace:/workspace \
-w /workspace alpine:latest touch myfile
which creates myfile in the workspace folder with the right permissions so your users can delete the file again.

how to correctly use system user in docker container

I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
what am I doing wrong?
Note that I need to run containers based on this image on multiple systems and cannot guarantee that the uid:gid of the user will be the same across systems which is why I need to specify it on the command line rather than in the Dockerfile.
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
NEW_UID=$(stat -c "%u" "$1")
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
The above is from a fix-perms script that I include in my base image. What's happening there is the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those id's do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container with the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$@"
Which is a bit like an su command to run the "CMD" value as the app_user, but with some exec logic that replaces pid 1 with the "CMD" process to better handle signals. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
  -e RUN_AS=app_user --entrypoint /entrypoint.sh \
  my-image:latest bash
Have a look at the base image repo I've linked to, including the example with nginx that shows how these pieces fit together, and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes an E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running an apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
1) Make sure that the user 999 has the right privileges on the current directory; you need to try something like this in your Dockerfile:
FROM
RUN mkdir /home/999-user-dir && \
    chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up the container using this image without the user argument and see if that works.
2) Another reason could be a permission issue on the files below; make sure your group 998 has read permission on these files:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior; the host and the container are completely separate as long as you don't mount the /etc/passwd file inside the container (and you shouldn't do that, from a security perspective).
Now, if you specified a default user inside your Dockerfile, the --user flag overrides the USER instruction, so you're left without a username inside your container; but note that specifying the uid:gid option means the container has the permissions of the user with the same uid on the host.
As for your request not to specify a user in the Dockerfile - that shouldn't be a problem. You can set it at runtime like you did, as long as that uid matches an existing user's uid on the host.
If you have to run some of the containers in privileged mode - please consider using user namespace.

Jenkins wrong volume permissions

I have a virtual machine hosting Oracle Linux where I've installed Docker and created containers using a docker-compose file. I placed the jenkins volume under a shared folder, but when running docker-compose up I got the following error for Jenkins:
jenkins | touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Here's the volumes declaration
volumes:
  - "/media/sf_devops-workspaces/dev-tools/continuous-integration/jenkins:/var/jenkins_home"
The easy fix is to use the -u parameter. Keep in mind this will run as the root user (uid=0):
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
As haschibaschi stated, your user in the container has a different userid:groupid than the user on the host.
The way to get around this is to start the container without the (problematic) volume mapping, then run bash in the container:
docker run -p 8080:8080 -p 50000:50000 -it jenkins bin/bash
Once inside the container's shell run the id command and you'll get results like:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
Exit the container, go to the folder you are trying to map and run:
chown -R 1000:1000 .
With the permissions now matching, you should be able to run the original docker command with the volume mapping.
The problem is that your user in the container has a different userid:groupid than the user on the host.
You have two possibilities:
You can ensure that the user in the container has the same userid:groupid as the user on the host who has access to the mounted volume. For this you have to adjust the user in the Dockerfile: create a user in the Dockerfile with the same userid:groupid and then switch to this user (https://docs.docker.com/engine/reference/builder/#user).
You can ensure that the user on the host has the same userid:groupid as the user in the container. For this, enter the container with docker exec -it <container-name> bash and show the user id (id -u <username>) and group id (id -G <username>). Then change the ownership of the mounted volume to this userid:groupid, as sketched below.
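For the second possibility, with the Jenkins image that boils down to something like this (a sketch; replace jenkins with the real container name from docker ps, and the host path is the one from the question):
# inside the running container: find the jenkins user's uid and gid
docker exec -it jenkins id -u jenkins
docker exec -it jenkins id -g jenkins
# on the host: make the mounted folder match (here assuming 1000:1000, as in the answer above)
sudo chown -R 1000:1000 /media/sf_devops-workspaces/dev-tools/continuous-integration/jenkins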
You may be under SELinux. Running the container as privileged solved the issue for me:
sudo docker run --privileged -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
From https://docs.docker.com/engine/reference/commandline/run/#full-container-capabilities---privileged:
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
As an update to @Kiem's response, using $UID to ensure the container uses the same user id as the host, you can do this:
docker run -u $UID -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
I had a similar issue with Minikube/Kubernetes; I just added
securityContext:
  fsGroup: 1000
  runAsUser: 0
under deployment -> spec -> template -> spec
This error can be solved using the following commands.
Go to your Jenkins data mount path: /media
Run the following commands:
cd /media
sudo chown -R ubuntu:ubuntu sf_devops-workspaces
Restart the Jenkins docker container:
docker-compose restart jenkins
Had a similar issue on macOS. I had installed Jenkins using helm on Minikube/Kubernetes; after many attempts I fixed it by adding runAsUser: 0 (as root) in the values.yaml I use to deploy Jenkins:
master:
  usePodSecurityContext: true
  runAsUser: 0
  fsGroup: 0
Just be careful because that means that you will run all your commands as root.
use this command
$ chmod +757 /home/your-user/your-jenkins-data
First of all, you can verify your current user using the echo $USER command,
and after that you can specify who the user is in the Dockerfile, as below (in my case the user is root):
(screenshot)
I had the same issue; it got resolved after disabling SELinux.
It's not recommended to disable SELinux, so install a custom semodule and enable it instead.
That works; only changing the permissions won't work on CentOS 7.

How can I fix the permissions using docker on a bluemix volume?

In a container, I am trying to start mysqld.
I was able to create an image and push it to the registry, but when I want to start it, the /var/lib/mysql volume can't be initialized: I try to do a chown mysql on it and it is not allowed.
I checked docker-specific solutions, but so far I couldn't make any of them work.
Is there a way to set the right permissions on a bind-mounted folder from bluemix? Or is the option --volumes-from supported? I can't seem to make it work.
The only solution I can see right now is running mysqld as root, but I would rather not.
Try with bind-mount
I created a volume on bluemix using cf ic volume create database
and tried to run mysql_install_db on my db container to initialize its content:
docker run --name init_vol -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
mysql_install_db is supposed to populate the /var/lib/mysql and set the rights to the owner set in the --user option, but I get:
chown: changing ownership of '/var/lib/mysql': Permission denied.
I also tried the above in different ways, using sudo or a script. I tried with mysql_install_db --user=root, which does set up my folder correctly, except it is owned by the root user, and I would rather keep mysql running as the mysql user.
Try with volumes-from data container
I created a data container with a volume /var/lib/mysql:
docker run --name db_data -v /var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
I ran my db container with the option --volumes-from:
docker run --name db_srv --volumes-from=db_data registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'
docker inspect db_srv shows:
[{ "BluemixApp": null, "Config": {
...,
"WorkingDir": "",
... } ... }]
cf ic logs db_srv shows:
150731 15:25:11 mysqld_safe Starting mysqld daemon with databases from
/var/lib/mysql 150731 15:25:11 [Note] /usr/sbin/mysqld (mysqld
5.5.44-0ubuntu0.14.04.1-log) starting as process 377 .. /usr/sbin/mysqld: File './mysql-bin.index' not found (Errcode: 13)
150731 15:25:11 [ERROR] Aborting
which is due to --volumes-from not being supported, and to data created in the first run not persisting into the second.
In IBM Containers, the user namespace is enabled for the docker engine. The "Permission denied" issue appears to be because the NFS is not allowing the mapped user from the container to perform the operation.
On my local setup, on the docker host, I mounted an NFS share (exported with the no_root_squash option) and attached the volume to a container using the -v option. When the container is spawned from docker with the user namespace disabled, I am able to change the ownership of the bind mount inside the container. But with user-namespace-enabled docker, I am getting
chown: changing ownership of ‘/mnt/volmnt’: Operation not permitted
The volume created by cf (cf ic volume create ...) is an NFS volume; to verify, just try mount -t nfs4 from the container.
When the user namespace is enabled for the docker engine, the effective root inside the container is a non-root user outside the container process, and NFS does not allow the mapped non-root user to perform the chown operation on the volume inside the container.
Here is the work-around you may want to try.
In the Dockerfile
1.1 Create user mysql with UID 1010, or any free ID, before the MySQL installation.
Other or new containers can then access the mysql data files on the volume via UID 1010:
RUN groupadd --gid 1010 mysql
RUN useradd --uid 1010 --gid 1010 -m --shell /bin/bash mysql
1.2 Install MySQL but do not initialize the database:
RUN apt-get update && apt-get install -y mysql-server && rm -rf /var/lib/mysql && rm -rf /var/lib/apt/lists/*
In the entry point script
2.1 Create the mysql data directory under the bind mount as user mysql and then link it to /var/lib/mysql.
Suppose the volume is mounted at /mnt/db inside the container (ice run -v <volume name>:/mnt/db --publish 3306... or cf ic run --volume <volume name>:/mnt/db ...).
Define mountpath env var
MOUNTPATH="/mnt/db"
Add mysql to group "root"
adduser mysql root
Set permission for mounted volume so that root group members can create directory and files
chmod 775 $MOUNTPATH
Create mysql directory under Volume
su -c "mkdir -p /mnt/db/mysql" mysql
su -c "chmod 700 /mnt/db/mysql" mysql
Link the directory to /var/lib/mysql
ln -sf /mnt/db/mysql /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql
Remove mysql from group root
deluser mysql root
chmod 755 $MOUNTPATH
2.2 For the first run, initialize the database as user mysql:
su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
2.3 Start the mysql server as user mysql
su -c "/usr/bin/mysqld_safe" mysql
You have multiple questions here. I will try to address some. Perhaps that will get you a step further in the right direction.
--volumes-from is not supported yet in IBM Containers. You can get around that by using the same --volume (-v) option on the first and subsequent containers, instead of using -v on the first container creation command and --volumes-from on the subsequent ones.
The --user option is also not supported by IBM Containers.
I see your syntax for using --user (I suppose on localhost docker) is not correct. All options for the docker run command must come before the image name. Anything after the image name is considered a command to run inside the container. In this case "--user=mysql" will be considered as a command that the system will attempt to run and fail.
The last error message you shared shows that some file is not found in the working directory, which causes the app to abort. You may work around that by using a script, as the command to run in the container, that changes to the right directory first.
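For example, a small wrapper used as the container command could look like this (a hypothetical start-db.sh, mirroring the command from the --volumes-from attempt):
#!/bin/sh
# change to the datadir so relative paths like ./mysql-bin.index resolve, then start the server
cd /var/lib/mysql
mysqld_safe &
tail -f /var/log/mysql.err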
