Changing the user's uid in a pre-built docker container (jenkins) - jenkins

I am new to docker, so if this is a fairly obvious process that I am missing, I do apologize for the dumb question up front.
I am setting up a continuous integration server using the jenkins docker image. I did a docker pull jenkins, and created a user jenkins to allow me to mount the /var/jenkins_home in the container to my host's /var/jenkins_home (also owned by jenkins:jenkins user).
The problem is that the container defines the jenkins user with uid 102, but my host's jenkins user has uid 1002, so when I run it I get:
docker run --name jenkins -u jenkins -p 8080 -v /var/jenkins_home:/var/jenkins_home jenkins
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
I would simply make the uid for the host's jenkins user be 102 in /etc/passwd, but that uid is already taken by sshd. I think the solution is to change the container to use uid 1002 instead, but I am not sure how.
Edit
Actually, user 102 on the host is messagebus, not sshd.

Please take a look at the Dockerfile I just uploaded:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/Dockerfile
Here the UID is extracted from a mounted volume (host directory), with
stat -c '%u' <VOLUME-PATH>
Then the UID of the container user is changed to the same value with
usermod -u <UID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real UID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the UID, there might be some other files that are no longer accessible to the process in the container, so you might need a
chown -R <USERNAME> <SOME-PATH>
before the gosu command.
You can also change the GID, see my answer here
Jenkins in docker with access to host docker
and maybe you want to change both to increase security.
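A minimal sketch of such an entrypoint, assuming gosu is installed in the image, using the jenkins user and /var/jenkins_home from this question (other paths are illustrative):
#!/bin/sh
# Runs as root; aligns the container's jenkins user with the owner of the mounted volume.
set -e
VOLUME_PATH=/var/jenkins_home
TARGET_UID=$(stat -c '%u' "$VOLUME_PATH")
TARGET_GID=$(stat -c '%g' "$VOLUME_PATH")
# Change both GID and UID of the container user to match the host directory.
groupmod -g "$TARGET_GID" jenkins
usermod -u "$TARGET_UID" jenkins
# Re-own any other paths the process still needs after the uid change (example path).
chown -R jenkins:jenkins /usr/share/jenkins/ref
# Drop root privileges and run the real command.
exec gosu jenkins "$@"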

You can simply change the UID in /etc/passwd, assuming that no other user has UID 1002.
You will then need to change the ownership of /var/jenkins_home on your host to UID 1002:
chown -R jenkins /var/jenkins_home
In fact, you don't even need a jenkins user on the host to do this; you can simply run:
chown -R 1002 /var/jenkins_home
This will work even if there is no user with UID 1002 available locally.
Another solution is to build your own docker image, based on the Jenkins image, that has an ENTRYPOINT script that looks something like:
#!/bin/sh
chown -R jenkins /var/jenkins_home
exec "$#"
This will (recursively) chown /var/jenkins_home inside the container to whatever UID is used by the jenkins user (this assumes that your Docker container is starting as root, which is true unless there was a USER directive in the history of the image).
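One way to wire that script into an image (a sketch; the official jenkins image does set a USER and starts /usr/local/bin/jenkins.sh, so the sketch switches back to root and passes jenkins.sh back in as the command; entrypoint.sh is the script above, saved next to the Dockerfile):
FROM jenkins
USER root
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/local/bin/jenkins.sh"]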
Update
You can create a new image, based on (FROM ...) the jenkins image, with a Dockerfile that performs the necessary edits to the /etc/passwd file. But that seems like a lot of work for not much gain. It's not clear why you're creating a jenkins user on the host, or whether you actually need access to the jenkins home directory on the host.
If all you're doing is providing data persistence, consider using a data volume container and --volumes-from rather than a host volume, because this will isolate the data volume from your host so that UID conflicts don't cause confusion.
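That approach looks roughly like this (container and volume names are illustrative):
# data-only container that owns the Jenkins home volume
docker create -v /var/jenkins_home --name jenkins-data jenkins /bin/true
# run Jenkins with the data container's volume mounted
docker run -d --name jenkins --volumes-from jenkins-data -p 8080:8080 jenkins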

I had the same error. I turned SELinux off (on CentOS) and it works.
Otherwise, it would be better to tune SELinux with semanage commands.
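If disabling SELinux is not an option, the usual alternatives are to let docker relabel the volume with the :Z flag or to relabel the directory yourself (a sketch; the SELinux type is container_file_t on newer policies, svirt_sandbox_file_t on older ones):
# let docker relabel the volume automatically
docker run --name jenkins -u jenkins -p 8080 -v /var/jenkins_home:/var/jenkins_home:Z jenkins
# or relabel it persistently yourself
semanage fcontext -a -t container_file_t "/var/jenkins_home(/.*)?"
restorecon -Rv /var/jenkins_home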

The ideal is to change the user's UID in the Dockerfile used by jenkins to the same UID used on the host (remember that this must be done for a non-root user; if the service runs as root, create a new user and configure the service inside the container to run as that user).
Assuming the user's UID on the host is 1003 and the user is called jenkins (use id to get the user and group ids), add to your Dockerfile:
# Modifies the user's UID and GID
RUN groupmod -g 1003 jenkins && usermod -u 1003 -g 1003 jenkins
# I use a group (docker) on my host to organize privileges;
# if that's your case, add the user to this group inside the container as well.
RUN groupadd -g 998 docker && usermod -aG docker jenkins
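To double-check the ids after rebuilding (my-jenkins is just an example tag; --entrypoint bypasses the image's normal startup script):
docker build -t my-jenkins .
docker run --rm --entrypoint id my-jenkins jenkins
# expected output along the lines of: uid=1003(jenkins) gid=1003(jenkins) groups=1003(jenkins),998(docker)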

Related

www-data user cannot access a Virtualbox shared folder through Docker container

I have an Ubuntu server VM running on a Windows 10 host, where a folder on an external hard drive is passed through to the VM at the mountpoint /NextCloudStorage. I can access this folder from my normal user, and from root, by adding my normal user to the vboxsf user group.
My Docker Compose file includes this to pass through the /NextCloudStorage to the container:
volumes:
- nextcloud:/var/www/html
- .:/code
- /NextCloudStorage:/NextCloudStorage
When using docker exec -it nextcloud-app-1 bash, I can interact with the shared folder live using commands such as cd /NextCloudStorage, mkdir test1, etc.
My problem is that the application running in the container cannot access this folder, because it runs as www-data. ls commands list the folder as empty, and when trying to create an item in the folder I get the error "permission denied".
Does anyone know how to give the www-data user access to this shared folder?
Sorry for the long post but I had to spiel it out!
Thanks!
You could try to add the www-data user to the vboxsf group.
To achieve this we need to figure out the group id. In the virtual machine run:
cat /etc/group
You will see the vboxsf group listed there along with its gid.
Then in your container, if this group doesn't exist, create it.
In this case the gid we need to use is 115:
groupadd --gid 115 vboxsf
Next add this group to your www-data user
usermod -aG vboxsf www-data
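Changes made with docker exec are lost when the container is recreated, so it may be cleaner to bake them into a small image of your own (a sketch; the nextcloud:apache base image and the gid 115 from above are assumptions to adjust to your setup):
FROM nextcloud:apache
# the gid must match the vboxsf gid found in the VM's /etc/group
RUN groupadd --gid 115 vboxsf \
 && usermod -aG vboxsf www-data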
Update:
Try to run the following command as root in the container
chmod -R 777 /NextCloudStorage

Docker(containers) cgroup/namespace setup vs running Dockerfile commands as root?

From my understanding, docker sets up the required cgroups and namespaces so that containers (i.e. container processes) run in isolation (an isolated environment on the host system) and have limited permissions and access to the host system. So, even if a process is running as root in the container, it will not have root access on the host system.
But from this article: processes-in-containers-should-not-run-as-root, I see that it is still possible for a container process running as root to access host files which are only accessible to root on the host system.
On host system:
root@srv:/root# ls -l
total 4
-rw------- 1 root root 17 Sep 26 20:29 secrets.txt
Dockerfile -
FROM debian:stretch
CMD ["cat", "/tmp/secrets.txt"]
On running the corresponding image of the above Dockerfile:
marc@srv:~$ docker run -v /root/secrets.txt:/tmp/secrets.txt <img>
top secret stuff
If top secret stuff is readable, how is that possible? What, then, is the point of container isolation? What am I missing? It seems there is something more here that I don't understand.
(Does it have to do with how I use docker run? By default, are all permissions/capabilities given to the container based on the user running the docker run command?)
A container can only access the host filesystem if the operator explicitly gives it access. For example, try without any docker run -v options:
# --rm: clean up the container when done
# -u root: explicitly request the root user
# busybox: the image to run
# cat /etc/shadow: dumps the _container's_ password file
docker run \
  --rm \
  -u root \
  busybox \
  cat /etc/shadow
More generally, the rule (on native Linux without user namespace remapping) is that, if files are bind-mounted from the host into a container, they are accessible if the container's numeric user or group IDs match the file's ownership and permissions. If a file is owned by uid 1000 on the host with mode 0600, it can be read by uids 0 or 1000 in the container, regardless of the corresponding container and host users' names.
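A quick way to see that rule in action (a sketch; private.txt is a hypothetical file, and this assumes native Linux without user-namespace remapping):
# suppose ./private.txt on the host is owned by uid 1000 with mode 0600
docker run --rm -u 1000 -v "$PWD/private.txt:/data/private.txt:ro" busybox cat /data/private.txt   # readable
docker run --rm -u 1001 -v "$PWD/private.txt:/data/private.txt:ro" busybox cat /data/private.txt   # Permission denied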
The corollary to this is that anyone who can run any docker run command at all can pretty trivially root the entire host.
# -v /:/host: bind-mount the host filesystem into the container
# cat /host/etc/shadow: dumps the _host's_ encrypted password file
docker run \
  --rm \
  -u root \
  -v /:/host \
  busybox \
  cat /host/etc/shadow
The root user in a container is further limited by Linux capabilities: without giving special additional Docker options, even running as root, a container can't change filesystem mounts, modify the network configuration, load kernel modules, reboot the host, or do several other extra-privileged things. (And it's usually better to do these things outside a container than to give extra permission to Docker; don't casually run containers --privileged.)
It's still generally better practice to run containers as non-root users. The user ID doesn't need to match any particular user ID; it just needs to not be 0 (matching a specific host uid isn't portable across hosts and isn't recommended). The files in the container should generally be owned by root, so they can't be accidentally overwritten.
FROM debian
# Create the non-root user
RUN adduser --system --no-create-home nonroot
# Do the normal installation, as root
COPY ... # no --chown option
RUN ... # does not run chown either
# Specify the non-root user only for the final container
EXPOSE 12345
USER nonroot
CMD the main container command
If the container does need to read or (especially) write host files, bind-mount the host directory into some data-specific directory in the container (do not overwrite the application code with this mount) and use the docker run -u option to specify the host uid that the container needs to run as. The user does not specifically need to exist in the container's /etc/passwd file.
# -v "$PWD:/app/data": bind-mount the current directory as the data directory
# -u "$(id -u)": run as the host user's uid
docker run \
  -v "$PWD:/app/data" \
  -u "$(id -u)" \
  ...

How to launch container with user namespace configuration?

In the Dockerfile below, the base image (jenkins/jenkins) provides a user jenkins with UID 1000 and GID 1000 inside the container.
FROM jenkins/jenkins
# Install some base packages
# Use the non-privileged user provided by the base image (uid 1000, gid 1000)
USER jenkins
# Copy plugins and other stuff
On the docker host (EC2 instance), we also have a similar UID & GID created,
$ groupadd -g 1000 jenkins
$ useradd -u 1000 -g jenkins jenkins
$ mkdir -p /abc/home_folder_for_jenkins
$ chown -R jenkins:jenkins /abc/home_folder_for_jenkins
to make sure the container can write files to /abc/home_folder_for_jenkins on the EC2 instance.
Another aspect we need to take care of on the same EC2 instance is that other containers (besides the one above) should run in non-privileged mode.
So the following configuration is applied on the docker host (EC2):
$ echo dockremap:165536:65536 > /etc/subuid
$ echo dockremap:165536:65536 > /etc/subgid
$ echo '{"debug":true, "userns-remap":"default"}' > /etc/docker/daemon.json
This dockremap configuration does not allow jenkins to start, and the docker container goes into the Exited state:
$ ls -l /abc/home_folder_for_jenkins
total 0
After removing the docker remap configuration, everything works fine.
Why does the dockremap configuration not allow the jenkins container to run as the jenkins user?
I'm actually fighting with this myself because it seems not very portable, but this is the best I've found. As said above, on your docker host the UID/GID are the ones from the container plus the offset in /etc/subuid & /etc/subgid.
So your "container root" is 165536 on your host and your jenkins user is 166536 (165536 + 1000).
To come back to your example, what you need to do is:
$ mkdir -p /abc/home_folder_for_jenkins
$ chown -R 166536:166536 /abc/home_folder_for_jenkins
User namespaces offset the UID/GID of the user inside the container, and of any files inside the container. There is no mapping from the UID/GID inside the container to the external host UID/GID (that would defeat the purpose). Therefore, you would need to offset the UID/GID of the directory being created, or just use a named volume and let docker handle this for you. I believe the UID/GID on the host would be 166536 (165536 + 1000) (I may have an off-by-one in there, so try opening up the directory permissions if this still fails and see what gets created).
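The named-volume route looks like this (jenkins_home is an arbitrary volume name); docker seeds the volume from the image's /var/jenkins_home and keeps the ownership consistent with the remapping:
docker volume create jenkins_home
docker run -d --name jenkins -v jenkins_home:/var/jenkins_home jenkins/jenkins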

Docker volume and host permissions

When I run a docker image for example like
docker run -v /home/n1/workspace:/root/workspace -it rust:latest bash
and I create a directory in the container like
mkdir /root/workspace/test
It's owned by root on my host machine, which means I have to change the permissions every time after I stop the container in order to work with that directory.
Is there a way to tell Docker to handle directories and files from my (host) machine's point of view, under a certain user?
You need to run your application as the same uid inside the container as you do on the host to get file ownership to match. My own solution for this is to start the container as root, adjust the uid of the user inside the container to match the volume mount, and then su to the user to run the app. Scripts for this can be found in this repo: https://github.com/sudo-bmitch/docker-base
In that repo, the fix-perms script handles the change of uid/gid inside the container, and the entrypoint script has an exec gosu $username "$@" that runs the app as the selected user.
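If all that matters is that files in the bind mount end up owned by your host account, the quickest variant of "run as the same uid as on the host" is to pass your uid/gid at run time (a sketch; the /workspace mount target is arbitrary, and the uid won't correspond to a named user inside the container):
docker run -it -u "$(id -u):$(id -g)" -v /home/n1/workspace:/workspace rust:latest bash
# files created under /workspace now belong to your host user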
Sure, because Docker uses root as the default user. You should create a user in your docker container, switch to that user, and then create the folder; then you will get it without root permissions on your host machine.
Dockerfile
FROM rust:latest
...
RUN useradd -ms /bin/bash myuser
USER myuser
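Note that useradd will typically give myuser uid 1000 here, which only matches the host when your host account is also uid 1000; otherwise pin the uid explicitly (1002 below is just an example; check with id -u on the host):
RUN useradd -ms /bin/bash -u 1002 myuser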

how to create a docker image/container with same file rights as host user

When I start a docker container as a user named 'username1' in group 'usergroup1', and that container writes files/folders to the local file system through a volume, e.g.
$username1> docker run -v /homes/username1/output:/output outputter
The files are created with root as owner.
What do I need to do in the Dockerfile or the startup options to make sure the file rights in the output folder are the same as the local user:group, in this case username1:usergroup1?
As explained in this project:
By default, our docker containers run as the root user. Files created or modified by the container will thus become owned by the root user, even after quitting the container.
To avoid this problem, it is necessary to run the container using a non-root user.
If the host machine user has a UID other than 1000 (or 0, for root), the user should specify their UID when running docker, e.g.
docker run -d -p 8787:8787 -v $(pwd):/home/$USER/foo \
-e USER=$USER -e USERID=$UID rocker/rstudio
to avoid changing the permissions in the linked volume on the host
That works because that project's image, when starting the container, creates a user with the same uid (the name is not important):
## Configure user with a different USERID if requested.
## (Docker cares only about the uid, not the username; different users with the same uid cause confusion.)
if [ "$USERID" -ne 1000 ]
then
  echo "creating new $USER with UID $USERID"
  useradd -m $USER -u $USERID
  mkdir -p /home/$USER
  chown -R $USER /home/$USER
fi
You are going to have to wait for user namespace support, hopefully later this year.
