I'm building a Docker image which will serve as a Jenkins slave.
The image needs Java and SSHD.
At the moment I have a Docker container which can serve as a Jenkins slave. The user inside my slave is a user jenkins which I've created inside my Dockerfile:
FROM java:8-jdk
ENV JENKINS_HOME /var/jenkins_home
ARG user=jenkins
ARG group=jenkins
ARG uid=999
ARG gid=999
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
VOLUME /var/jenkins_home
WORKDIR /var/jenkins_home
Now I want my jenkins-slave to be able to build Docker images, so every docker command my Jenkins needs to run will be executed on this slave. For this I had to mount the Docker socket into my slave container.
I start my slave container with docker-compose. Here is how I start my slave:
jenkins-slave:
  build: ./slave
  image: jenkins-slave:1.0
  container_name: jenkins-slave
  volumes:
    - slave-volume:/var/jenkins_home
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
  ...
So now I had to change my Dockerfile, because by default only root users are able to use Docker. I want my jenkins user to be able to execute docker commands, so I changed my Dockerfile and added:
RUN groupadd -g 983 docker \
&& usermod -a -G docker jenkins
Now I was able to perform ssh jenkins@172.19.0.2 and execute docker commands with the jenkins user.
But this only works because the GID of the docker group on my host is also 983 (CentOS 7). On my Ubuntu machine the GID is 1001, so there my whole setup will not work. So now my question:
Is there a way to get a GID of your host inside your Dockerfile?
The Dockerfile is used at build time on a build host. The host that eventually runs your built image as a container is unknown at this stage, so information about a host is not easy to look up. The same image would normally be used across all hosts, so configuring a GID at build time is hard too.
BMitch's suggestion of using consistent GIDs (and UIDs) across an organisation is the best solution. This is a good idea generally, not only for Docker: it helps with centralised user management, NFS likes it, and LDAP is easier to move to.
If consistent GIDs are too hard to set up then there are a couple of ways to work around the issue...
Multiple images
If you have a limited number of GIDs to support you could create multiple images from your jenkins base image.
Tag: my/jenkins-centos
FROM my/jenkins
RUN groupadd -g 983 docker \
&& usermod -a -G docker jenkins
Tag: my/jenkins-ubuntu
FROM my/jenkins
RUN groupadd -g 1001 docker \
&& usermod -a -G docker jenkins
Then choose which image you run on which host.
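For example (Dockerfile.centos and Dockerfile.ubuntu are hypothetical file names, one per GID variant):

docker build -t my/jenkins-centos -f Dockerfile.centos .
docker build -t my/jenkins-ubuntu -f Dockerfile.ubuntu .

# on the CentOS 7 host (docker GID 983):
docker run -d my/jenkins-centos

# on the Ubuntu host (docker GID 1001):
docker run -d my/jenkins-ubuntu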
Runtime
If you had to support variable docker GIDs then the groupadd logic could run at container startup, in a launcher script that does the group setup and then launches Jenkins. You would probably need to mount /etc/group somewhere in the container to be able to look that host information up as well.
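A minimal sketch of such a launcher script, assuming the host's /etc/group is bind-mounted read-only at /host/group, the container starts as root, and gosu is available to drop privileges (all of these are assumptions, not part of the setup above):

#!/bin/bash
# launcher.sh - create the docker group with the host's GID, then run Jenkins.
set -e

# Read the docker GID from the host's group file; fall back to the group
# owning the mounted socket if the group is named differently.
DOCKER_GID=$(awk -F: '$1 == "docker" {print $3}' /host/group)
[ -n "$DOCKER_GID" ] || DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

# Create the group (assumes the GID is free inside the container) and
# add the jenkins user to it.
groupadd -g "$DOCKER_GID" docker
usermod -a -G docker jenkins

# Drop to the jenkins user and start the real process.
exec gosu jenkins "$@"

The container would then be started with something like -v /etc/group:/host/group:ro and this script as the entrypoint.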
I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name'
ARG group='docker'
# create the group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create user if not exists
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add the user to the group (in case the user already existed and was not created by the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
I build it this way:
docker build --tag my-gulp:latest .
and finally run via this script:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the docker container properly, but when I want to list images
docker images
or try to pull image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How can I create a docker image from chekote/gulp:latest so that I can use docker inside it without the error?
Or maybe the error is because of a wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
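To double-check (the gpasswd change only applies to new logins, while the ACL takes effect immediately):

docker ps

If that lists your containers without sudo, you're set.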
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile: it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell. Creating a non-root user, though, is often considered a best practice. You could reduce the Dockerfile to:
FROM node:8
# Trying to use the host's `docker` binary may not work well
RUN apt-get update \
 && apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
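Combining that image with the --group-add option above, a run command might look like this (a sketch reusing the my-gulp:latest tag and paths from the question):

docker run -it --rm \
  -v "$(pwd)":/home/gulp/project \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  my-gulp:latest /bin/bash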
Open a terminal and run this command:
sudo chmod 666 /var/run/docker.sock
Note that this gives every user on the host read/write access to the Docker socket.
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
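For example, starting the official docker-in-docker image (a sketch; the container name is arbitrary):

docker run --privileged --name some-dind -d docker:dind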
The error has nothing to do with the docker pull or docker image subcommands; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
I'm using NVIDIA Docker on a Linux machine (Ubuntu 20.04). I've created a container named user1 from the nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin@my_desktop:~$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND       CREATED         STATUS         PORTS     NAMES
a365362840de   nvidia/cuda:11.0-base   "/bin/bash"   3 seconds ago   Up 2 seconds             user1
I want to access that container via ssh, using its own unique IP address, from a totally different machine (other than my_desktop, which is the host). First of all, is it possible to grant each container a unique IP address? If so, how can I do it? Thanks in advance.
In case you want to access your container with ssh from an external VM, you need to do the following:
Install the ssh daemon for your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds from nvidia/cuda:11.0-base and creates an image with the ssh daemon inside.
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:${root_password}" | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps to see the container port
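For example, docker port prints the host port that -P assigned to the container's port 22 (the name ssh matches the container created above; the port shown is just an illustration, yours will differ):

docker port ssh 22
0.0.0.0:49157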
Finally, access the container
ssh -p 49157 root@<VM_IP>
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history. Also, this way overwrites the original container process.
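You can verify this yourself: build arguments used by a RUN instruction are recorded in the image metadata, so (with the tag built above) something like the following will show the root_password value in clear text in the layer that ran chpasswd:

docker history --no-trunc nvidia/cuda:11.0-base-ssh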
If this process is to be adopted, it needs to be modified for production use. It serves as a starting point for someone who wishes to add ssh to their container.
In the Dockerfile below, the base image (jenkins/jenkins) provides a user jenkins with UID 1000 and GID 1000 inside the container.
FROM jenkins/jenkins
# Install some base packages
# Use the non-privileged user provided by the base image (UID 1000, GID 1000)
USER jenkins
# Copy plugins and other stuff
On the Docker host (EC2 instance), we have also created a matching UID and GID,
$ groupadd -g 1000 jenkins
$ useradd -u 1000 -g jenkins jenkins
$ mkdir -p /abc/home_folder_for_jenkins
$ chown -R jenkins:jenkins /abc/home_folder_for_jenkins
to make sure the container can write files to /abc/home_folder_for_jenkins on the EC2 instance.
Another aspect that we need to take care of on the same EC2 instance is running containers (other than the one above) in non-privileged mode.
So, the below configuration is performed on the Docker host (EC2):
$ echo dockremap:165536:65536 > /etc/subuid
$ echo dockremap:165536:65536 > /etc/subgid
$ echo '{"debug":true, "userns-remap":"default"}' > /etc/docker/daemon.json
This dockremap configuration does not allow Jenkins to start, and the Docker container goes into the Exited state:
$ ls -l /abc/home_folder_for_jenkins
total 0
After removing the dockremap configuration, everything works fine.
Why does the dockremap configuration not allow the Jenkins container to run as the jenkins user?
I'm actually fighting with this because it seems not very portable, but this is the best I've found. As said above, on your Docker host the UID/GID are the ones from the container plus the offset in /etc/subuid & /etc/subgid.
So your "container root" is 165536 on your host and your user jenkins is 166536 (165536 + 1000).
To come back to your example, what you need to do is:
$ mkdir -p /abc/home_folder_for_jenkins
$ chown -R 166536:166536 /abc/home_folder_for_jenkins
User namespaces offset the UID/GID of the user inside the container, and of any files inside the container. There is no mapping from the UID/GID inside the container to the external host UID/GID (that would defeat the purpose). Therefore, you would need to offset the UID/GID of the directory being created, or just use a named volume and let docker handle this for you. I believe the UID/GID on the host would be 166536 (165536 + 1000) (I may have an off-by-one in there, so try opening the directory permissions if this still fails and see what gets created).
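The named-volume variant might look like this (a sketch; jenkins_home is an arbitrary volume name):

docker volume create jenkins_home
docker run -d -v jenkins_home:/var/jenkins_home jenkins/jenkins

Docker initializes the volume from the image content, so the ownership should end up remapped correctly without any manual chown.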
So I'm working with a slightly strange infrastructure: I have an OpenShift Container Platform that has a Jenkins image from Docker running inside it, using the image openshift3/jenkins-2-rhel7.
I'm trying to run docker build . commands within a Jenkins pipeline and I'm getting a "Cannot connect to the Docker daemon" error. I don't understand why Docker is installed on the machine yet not running, and I currently have no access to the OpenShift server other than the CLI and the console. Does anyone have recommendations on how to get the docker build . command to run successfully for Jenkins, either with or without utilizing slaves?
node("master"){
withEnv(["PATH=${tool 'docker'}/bin:${env.PATH}"]) {
docker.withRegistry( 'dockertest') {
git url: "https://github.com/mydockertag/example.git", credentialsId: 'dockertest'
stage "build"
sh "docker build -t mydockertag/example -f ./Dockerfile ."
stage "publish"
}
}
After running the build command I get the following error:
+ docker build -t mydockertag/example -f ./Dockerfile .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
There can be two reasons for the error "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?".
Docker is running, but the user executing the docker command does not have the privileges to talk to /var/run/docker.sock. Try using sudo docker build. If you do not wish to use sudo every time, you can add your user to the docker group by following the post-installation steps here (https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user).
The docker daemon is not up and running at all. You will have to start the docker daemon manually.
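For the second case, on a systemd-based host you can check and start the daemon like this:

sudo systemctl status docker
sudo systemctl start docker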
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN useradd -g root -G sudo -u 1001 user && \
chown -R user:root /some/directory && \
chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
# Specify the user with its UID
USER 1001
Refer to the section "Support Arbitrary User IDs" in the guidelines from OpenShift.
I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker on my Airflow container? And what is the next step after that? I have a yaml file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and using a modified puckel/airflow Docker image that is running Airflow.
Things you will need to change in the Dockerfile
Add USER root at the top of the Dockerfile
USER root
Mounting the docker binary was not working for me, so I had to install the docker binary in my docker container:
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the script directory in the folder where the Dockerfile is located. This starts the Docker daemon inside the Airflow container:
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow as a user to the docker group so that airflow can run docker jobs:
RUN usermod -aG docker airflow
Switch to the airflow user:
USER airflow
In the docker-compose file or the command-line arguments to docker run, mount the host's Docker socket into the Airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by docker; this depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.