Add user in Docker container, UID mismatch when running Jenkins job

I am running a Jenkins pipeline in a Docker container. The Docker container creates an unprivileged user to run as:
RUN useradd jenkins --shell /bin/bash --create-home
RUN mkdir -p /home/jenkins/src && chown -R jenkins:jenkins /home/jenkins
USER jenkins
WORKDIR /home/jenkins/src
Jenkins runs this as:
docker run -t -d -u 1000:1000 [-v and -e flags etc.]
This works when I run Jenkins manually under my personal account (uid 1000) on the host. But now I have changed it so that Jenkins is started automatically by systemd, using a dedicated jenkins user with uid 1006, gid 1009:
docker run -t -d -u 1006:1009 [-v and -e flags etc.]
This mismatch causes my build to fail. I also get all kinds of problems, like this prompt in the container:
I have no name!@6d3b27a803e4:/$
Creating a jenkins user in the container seems like something there should be a recipe for. How do I get the UIDs on the host and in the container to match? What is the best practice?
Add something like usermod --uid $HOST_UID jenkins to the Dockerfile?
There seems to be no way to tell Docker to map host uid 1006 to container uid 1000, is there?
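For context, one recipe in the spirit of the usermod idea above is to make the UID/GID build arguments, so the image is rebuilt to match whichever host runs it. A minimal sketch (untested; the argument names and image tag are my own):
ARG UID=1000
ARG GID=1000
RUN groupadd -g ${GID} jenkins \
 && useradd jenkins --uid ${UID} --gid ${GID} --shell /bin/bash --create-home
USER jenkins
built with something like:
docker build --build-arg UID=$(id -u jenkins) --build-arg GID=$(id -g jenkins) -t my-jenkins-agent .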

I faced the same problem when running a Jenkins job and found a solution that works for me.
As you wrote above, when you run a job in a container, Jenkins automatically starts Docker like this:
docker run -t -d -u 113:117 [-v and -e flags etc.]
It takes its local UID and GID and uses them in the docker run command. But when the step commands start executing in the container, there is no user with those values inside it, which is the cause of this trouble:
I have no name!@6d3b27a803e4:/$
The solution for me was to mount the passwd and group files inside the container, like this:
pipeline {
    agent {
        docker {
            image 'your_image'
            args '-v /etc/passwd:/etc/passwd -v /etc/group:/etc/group'
        }
    }
    stages {
        stage('stage_name') {
            steps {
                sh '''
                some_commands
                '''
            }
        }
    }
}
And create the user in the Dockerfile like this:
RUN groupadd jenkins && useradd -m -d /var/lib/jenkins -g jenkins -G root jenkins
Hope this helps someone)

This worked for me:
docker {
    image 'your_image'
    args '-e HOME=/tmp/home'
}
/tmp is writable for everyone, so there is no need to create a user.

Related

How to invoke docker container within docker without root?

I know there are solutions using docker-in-docker (docker:dind), but it seems there are reasons not to use it, and to instead expose the host's socket to the first docker container by adding the option:
--volume /var/run/docker.sock:/var/run/docker.sock
I am a docker user on a server (in the docker user group), and I am able to run a docker container with the above option. But once I am inside that container and try to invoke another container with docker run image_name, I get the error:
dial unix /var/run/docker.sock: connect: permission denied
I know this error is expected, as the user in my invoked docker container is not in the docker group of the host. I have seen people solve this by adding USER root to the Dockerfile. Since I don't have sudo access to the server, I wonder: is there a way to invoke docker in docker without root?
Many thanks!
On the docker host, do not change the file permissions on docker.sock to anything like 777. Doing so would create a security risk: anyone on the host, including every untrusted user, could root the host with a command like:
docker run -it --rm -v /:/host busybox sh
To access the docker socket from inside of a container, you'll want to either run your container as root, e.g.:
docker run -it --rm -u "0:0" -v /var/run/docker.sock:/var/run/docker.sock docker docker version
Or you can run your container with the docker gid inside the container:
docker run -it --rm -u "1000:$(getent group docker | cut -f3 -d:)" -v /var/run/docker.sock:/var/run/docker.sock docker docker version
In production, you would give all the docker hosts a predictable GID for the docker group, and add your container user inside the image with that GID. That would be part of your Dockerfile, with something like:
ARG DOCKER_GID=999
RUN groupadd -g $DOCKER_GID docker \
 && useradd -u 5000 -g $DOCKER_GID app
USER app
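At build time you could then pass in the host's actual docker GID (a usage sketch; the image tag is a placeholder):
docker build --build-arg DOCKER_GID=$(getent group docker | cut -f3 -d:) -t my/app .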
My preferred solution for portable environments, particularly development environments, is to start the container as root, and dynamically adjust the group id inside the container to match the file gid of a volume mount. For an example of this, there's a fix-perms script in my docker-base repo that can be run in an entrypoint. The fix-perms script contains code like:
# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi
And then an entrypoint would check for being root before fixing the permissions, and then drop to running as a non-root user, e.g.:
if [ "$(id -u)" = "0" -a -e /var/run/docker.sock ]; then
fix-perms -r -g docker /var/run/docker.sock
fi
# run process as the container user "app" if currently root
if [ "$(id -u)" = "0" ]; then
exec gosu app "$#"
else
exec "$#"
fi
By doing this check, the same image can be locked down in the production environment, using a predictable GID on the docker hosts that matches what's inside the image build. And on every other host, where the docker GID isn't controlled, it can start the container as root, fix the permissions, and then drop to the app user inside the container.

How to get GID of group on host in Dockerfile

I'm building a docker image which will serve as a Jenkins slave.
The image needs Java and SSHD.
At the moment I have a docker container which can serve as a Jenkins slave. The user inside my slave is a jenkins user which I've created inside my Dockerfile.
FROM java:8-jdk
ENV JENKINS_HOME /var/jenkins_home
ARG user=jenkins
ARG group=jenkins
ARG uid=999
ARG gid=999
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
VOLUME /var/jenkins_home
WORKDIR /var/jenkins_home
Now I want my jenkins-slave to be able to build docker images, so every docker command my Jenkins needs to run will be executed on this slave. For this I had to mount the docker socket into my slave container.
I start my slave-container with docker-compose. Here you see how I start my slave:
jenkins-slave:
  build: ./slave
  image: jenkins-slave:1.0
  container_name: jenkins-slave
  volumes:
    - slave-volume:/var/jenkins_home
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
  ...
So then I had to change my Dockerfile, because by default only root users are able to use docker. I want my jenkins user to be able to execute docker commands, so I changed my Dockerfile and added:
RUN groupadd -g 983 docker \
&& usermod -a -G docker jenkins
Now I was able to ssh jenkins@172.19.0.2 and execute docker commands as the jenkins user.
But this only works because the GID of the docker group on my host is also 983 (CentOS 7). On my Ubuntu machine the GID is 1001, so there my whole setup will not work. So now my question:
Is there a way to get a GID of the host inside your Dockerfile?
The Dockerfile is used at build time on a build host. The host that eventually runs the built image as a container is unknown at that stage, so information about a particular host is not easy to look up. The same image would normally be used across all hosts, so configuring a GID at build time is hard too.
BMitch's suggestion of using consistent GIDs (and UIDs) across an organisation is the best solution. This is a good idea generally, not only for docker: it helps with centralised user management, NFS likes it, and it makes a move to LDAP easier.
If consistent GIDs are too hard to set up, then there are a couple of ways to work around the issue...
Multiple images
If you have a limited number of GIDs to support you could create multiple images from your jenkins base image.
Tag: my/jenkins-centos
FROM my/jenkins
RUN groupadd -g 983 docker \
&& usermod -a -G docker jenkins
Tag: my/jenkins-ubuntu
FROM my/jenkins
RUN groupadd -g 1001 docker \
&& usermod -a -G docker jenkins
Then choose which image you run on which host.
Runtime
If you had to support variable docker GIDs, the groupadd logic could run at container startup in a launcher script that does the group setup and then launches Jenkins, as in the sketch below. You would probably need to mount the host's /etc/group somewhere in the container to be able to look that host information up as well.
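A minimal launcher sketch along those lines (my own sketch, not a tested recipe: it assumes the host's /etc/group is mounted at /host/group, the container starts as root, and gosu is installed):
#!/bin/sh
# look up the docker GID on the host from the mounted group file
DOCKER_GID=$(grep '^docker:' /host/group | cut -d: -f3)
# create the docker group with that GID, or move the existing group onto it
groupadd -g "$DOCKER_GID" docker 2>/dev/null || groupmod -g "$DOCKER_GID" docker
usermod -a -G docker jenkins
# drop root and start Jenkins
exec gosu jenkins /usr/local/bin/jenkins.sh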

Connect to docker container as user other than root

By default when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach to or execute in an existing process, they use that process's current user directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a particular user, then either:
start the container as that user (docker run --user <user>, or the USER instruction in your Dockerfile), or
change the user inside the container using su <user>.
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user / -u option takes a username or UID (format: <name|uid>[:<group|gid>]). It works for me like this:
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
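A minimal sketch of that pattern (the nobody user exists in most base images; the image and command here are placeholders):
FROM ubuntu:14.04
# build steps above this line still run as root
USER nobody
CMD ["/bin/bash"]
A container started from this image runs bash as nobody instead of root.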
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both pass the $USER environment variable and bind-mount the /etc/passwd file. This way I can compile in the /siem folder and retain ownership of the files there, instead of them being owned by root.
My solution:
#!/bin/bash
user_cmds="$*"

# UID is read-only in bash, so look the ids up under different names
user_id=$(id -u)
group_id=$(id -g)

RUN_SCRIPT=$(mktemp -p "$(pwd)")
trap "rm -rf $RUN_SCRIPT" EXIT

# recreate the invoking user inside the container, then run the commands as them
cat << EOF > "$RUN_SCRIPT"
addgroup --gid $group_id $USER
useradd --no-create-home --home /cmd --gid $group_id --uid $user_id $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF

docker run -v "$(pwd)":/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume-mounted to /cmd inside the container.
I am using this workflow to let my dev team cross-compile C/C++ code for the arm64 target, whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc.). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
where cross_compile.sh is the script shown above. The addgroup/useradd machinery gives the user ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations ...
For docker-compose, in docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
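One caveat worth noting: bash defines UID but does not export it, so if you skip the .env file you have to export it yourself before bringing the stack up, e.g.:
export UID
docker-compose up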
Execute command as www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which was: "Compile webpack stuff in a nodejs container on Windows running Docker Desktop with WSL2, and have the built assets owned by your currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.

Jenkins in docker with access to host docker

I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
1. Push changes in my app to a private gitlab (running in docker). The app includes a Dockerfile and docker-compose.yml.
2. Gitlab triggers a jenkins build (jenkins is also running in docker), which does some normal build stuff (e.g. running tests).
3. Jenkins then needs to build a new docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the jenkins container has access to the host docker, so that running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. The only other option I've found is to use the docker plugin for jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. That is quite a lot more moving parts, and the docker-jenkins plugin requires the docker host to be on an open port, which I don't really want to expose.
The jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to docker. However, the GID (997) is the GID on the host machine, and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way around this approach, but it would be nice if there was a way to run docker commands inside the jenkins container without having to hard-code the GID in the Dockerfile. Does anyone have any suggestions?
My previous answer was more generic, describing how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a jenkins instance that can do docker development, so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both jenkins_home and docker.sock:
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
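For reference, the passwordless sudo setup amounts to a sudoers entry along these lines (a sketch; the jenkins username and restricting it to the docker binary are my assumptions, not the blog's exact config):
jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker
with build scripts then calling sudo docker ... instead of docker ...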
Please take a look at this docker file I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the GID, there might be other files in the container that are no longer accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
You can also change the UID; see my answer here: Changing the user's uid in a pre-build docker container (jenkins). And maybe you want to change both, to increase security.
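Putting those pieces together, a minimal entrypoint sketch (the user, group, and paths are placeholders, not the actual script from the linked repo):
#!/bin/sh
# take the GID from the mounted host directory
GID=$(stat -c '%g' /data)
# point the container group at that GID
groupmod -g "$GID" appgroup
# re-own container files the process still needs after the GID change
chgrp -R appgroup /home/appuser
# drop root and run the real command
exec gosu appuser "$@"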
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in a docker container on the host. Jenkins must build and run containers with web applications on the host.
The Jenkins master connects to the docker host using the REST API, so we need to enable the remote API for our docker host.
Log in to the host and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart docker service
sudo systemctl daemon-reload
sudo service docker restart
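A quick sanity check that the remote API is now reachable (run on the host itself; the port matches the ExecStart line above):
curl http://localhost:4243/version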
Dockerfile for Jenkins:
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
        apt-utils ca-certificates curl gnupg2 software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
    apt-get clean && \
    usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build jenkins docker image
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
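To confirm the container actually reaches the host daemon through DOCKER_HOST, a quick check (the image above installs docker-ce-cli, so the client is available inside):
docker exec -it you-jenkins-name docker version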
Your web application's repository has a Jenkinsfile and a Dockerfile at its root.
Jenkinsfile for web app:
pipeline {
    agent any
    environment {
        PRODUCT = 'web-app'
        HTTP_PORT = 8082
        DEVICE_CONF_HOST_PATH = '/var/web-app'
    }
    options {
        ansiColor('xterm')
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    //BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
                    deleteDir()
                    //git url: "git@<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
                }
                checkout scm
            }
        }
        stage('Stop and remove old') {
            steps {
                script {
                    try {
                        sh "docker stop ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker image rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                }
            }
        }
        stage('Build') {
            steps {
                sh "docker build . -t ${env.PRODUCT}"
            }
        }
        // Run the new container using the built docker image
        stage('Run new') {
            steps {
                script {
                    sh """docker run \
                        --detach \
                        --name ${env.PRODUCT} \
                        --publish ${env.HTTP_PORT}:8080 \
                        --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
                        ${env.PRODUCT}"""
                }
            }
        }
    }
}

How to run docker image as a non-root user?

I'm new to docker. When I run a docker image, like the ubuntu image, using the command
sudo docker run -i -t ubuntu:14.04
it enters the container as root by default.
I searched regarding this, but I couldn't find anything on how to start a docker image as a non-root user, as I'm a complete beginner on this topic.
It would be great if someone could explain, with an example, how to run a docker image as a non-root user.
The docker run command has the -u parameter to allow you to specify a different user. In your case, and assuming you have a user named foo in your docker image, you could run:
sudo docker run -i -t -u foo ubuntu:14.04 /bin/bash
NOTE: The -u parameter is the equivalent of the USER instruction for Dockerfile.
This is admittedly hacky, but good for those quick little containers you start just to test something quickly:
#!/bin/bash
set -eu
NAME=$1
IMG=$2
# UID is predefined (and read-only) in bash; look the rest up
USER=$(id -un)
GID=$(id -g)
GROUP=$(id -gn)
# -t keeps the detached bash alive so we can exec into the container
docker run -d -t -v /tmp:/tmp -v "/home/$USER:/home/$USER" -h "$NAME" --name "$NAME" "$IMG" /bin/bash
docker exec "$NAME" /bin/bash -c "groupadd -g $GID $GROUP && useradd -M -s /bin/bash -g $GID -u $UID $USER"
Full version of the script I use here:
https://github.com/ericcurtin/staging/blob/master/d-run
udocker is a basic variant of docker which runs in user space:
udocker is a basic user tool to execute simple docker containers in user space without requiring root privileges. Enables download and execution of docker containers by non-privileged users in Linux systems where docker is not available. It can be used to pull and execute docker containers in Linux batch systems and interactive clusters that are managed by other entities such as grid infrastructures or externally managed batch or interactive systems.
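Typical usage looks like this (a sketch based on udocker's documented pull/create/run commands; the container name is a placeholder):
udocker pull ubuntu:14.04
udocker create --name=myubuntu ubuntu:14.04
udocker run myubuntu /bin/bash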
It is not advisable to allow running docker without sudo as Docker has no auditing or logging built in, while sudo does.
If you want to give docker access to non-root users Red Hat recommends setting up sudo.
Add an entry like the following to /etc/sudoers.
dwalsh ALL=(ALL) NOPASSWD: /usr/bin/docker
Now, set up an alias in ~/.bashrc for running the docker command:
alias docker="sudo /usr/bin/docker"
Now when the user executes the docker command as non-root, it will be allowed and properly logged.
docker run -ti --privileged -v /:/host fedora chroot /host
Look at the journal or /var/log/messages.
journalctl -b | grep docker.*privileged
Aug 04 09:02:56 dhcp-10-19-62-196.boston.devel.redhat.com sudo[23422]: dwalsh : TTY=pts/3 ; PWD=/home/dwalsh/docker/src/github.com/docker/docker ; USER=root ; COMMAND=/usr/bin/docker run -ti --privileged -v /:/host fedora chroot /host
