Give permission to jenkins to access unix:///var/run/docker.sock - docker

I have installed the docker plugin into jenkins and I am trying to configure a docker cloud.
My jenkins installation is running inside a docker container and I have bound to the docker socket on the host like so:
version: '3.3'
services:
  jenkins:
    container_name: jenkins
    ports:
      - '7345:8080'
      - '50000:50000'
    volumes:
      - /docker/jenkins/data/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    image: 'jenkins/jenkins:lts'
This method works fine with docker-ce-cli: if I install the CLI and bind to the host's socket, it works. However, when setting up Jenkins I am getting an error:
Inside the jenkins container everything is run under user "jenkins" with a UID of 1000. On my host, UID 1000 is a user called "ubuntu".
I have added this user to the docker group
usermod -aG docker ubuntu
And checked the socket permissions:
# ls -lisa /var/run/docker.sock
833 0 srw-rw---- 1 root docker 0 Jul 22 22:02 /var/run/docker.sock
But jenkins still complains it doesn't have permissions.
What is the right way to give Jenkins permission to access this socket?

None of the customizations in the other thread worked for me, but I tweaked them a bit and got it working with the Dockerfile below:
FROM jenkins/jenkins
USER 0
ARG DOCKERGID=998
# Docker
RUN apt-get update \
&& apt-get install software-properties-common apt-transport-https ca-certificates gnupg-agent dialog apt-utils -y \
&& curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable" \
&& apt-get update \
&& apt-get install docker-ce-cli -y
# Setup users and groups
RUN addgroup --gid ${DOCKERGID} docker
RUN usermod -aG docker jenkins
USER 1000
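Since the docker group inside the image has to share the GID of the socket on the host, it helps to look that GID up and pass it in at build time. A small sketch (the jenkins-docker tag is just an example name):
# GID of the docker group on the host (998 is only the default in the Dockerfile above)
getent group docker | cut -d: -f3
# Build with that GID so the in-container docker group matches the mounted socket
docker build --build-arg DOCKERGID=$(getent group docker | cut -d: -f3) -t jenkins-docker .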

To be able to use Docker from Jenkins, just add the jenkins user to the docker group, not the ubuntu one.
usermod -aG docker jenkins
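If the container from the compose file above is already running, a rough sketch of doing this without rebuilding the image (it assumes the host's docker group GID is 998, as in the Dockerfile above; substitute your own, and note the container needs a restart for the new group membership to take effect):
docker exec -u root jenkins groupadd -g 998 docker
docker exec -u root jenkins usermod -aG docker jenkins
docker restart jenkins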

Related

mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permission on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather try to avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be since my jenkins image can build correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services", it is a wrapper around only a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
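As a quick sanity check that the bind-mounted socket alone is enough, you can exec into the running container and query the daemon. A sketch, where airflow-worker stands in for whatever your compose service is actually named:
# Both commands talk to the host's daemon through the mounted socket
docker compose exec airflow-worker docker version
docker compose exec airflow-worker docker ps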

Got permission denied while trying to connect to the Docker daemon socket: without chmod

This question is related to this but I am trying to avoid solutions which make use of chmod. I can't change the permissions of /var/run/docker.sock inside the Dockerfile because it is a volume and I am looking to not have to manually interfere with the environment. I am running on MacOS.
I have a Dockerfile which installs the docker engine into a debian based container, and adds a user xyz to the group docker.
FROM debian
USER root
# https://docs.docker.com/engine/install/debian/
RUN apt-get update
RUN apt-get --yes install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | \
gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get --yes install docker-ce docker-ce-cli containerd.io
RUN useradd xyz
RUN usermod -a -G docker xyz
RUN newgrp docker
USER xyz
This is my docker-compose.yml:
services:
  my_service:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: tail -f /dev/null
The user xyz gets created and added to the docker group, which according to Docker's instructions here should be enough to give xyz access to the Docker socket, yet I still run into permission issues.
> docker compose exec my_service whoami
xyz
> docker compose exec my_service groups
xyz docker
> docker compose exec my_service docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Hopefully this is reproducible for others - it would be good to know whether others experience the same issue.
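One thing worth comparing when reproducing this (a diagnostic sketch, not a fix) is the numeric group of the mounted socket versus the GID of the docker group inside the container; on Docker Desktop for macOS the daemon runs in a VM, so the socket's ownership inside the container may not line up with the docker group the image creates:
# Compare the socket's numeric group with the container's docker group
docker compose exec my_service stat -c '%g (%U:%G)' /var/run/docker.sock
docker compose exec my_service getent group docker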

Running Docker commands inside Jenkins pipeline

Is there a proper way to run Docker commands through a Jenkins containerized service?
I see there are many plugins to support Docker commands in the Jenkins ecosystem, although all of them raise errors because Docker isn't installed in the Jenkins container.
I have a Dockerfile that provides a Jenkins image with a working Docker installation, but to work I have to mount the host's Docker socket:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && \
apt-get -y install sudo \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get -y update && \
apt-get -y install --allow-unauthenticated \
docker-ce \
docker-ce-cli \
containerd.io
RUN echo "jenkins:jenkins" | chpasswd && adduser jenkins sudo
RUN echo jenkins ALL= NOPASSWD: ALL >> /etc/sudoers
USER jenkins
It can be run like this:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock
This way it's possible to run Docker commands inside the Jenkins container. However, I am concerned about security: the Jenkins container can access all the containers running on the host machine, and Jenkins effectively has root privileges, which I wouldn't want in production.
I seek to run a Jenkins instance within a Kubernetes cluster to support CI and CD pipelines within that cluster, therefore I'm guessing Jenkins must be containerized.
Am I missing something?

/var/run/docker.sock: connect: permission denied Jenkins slave on ecs cluster

I'm using the AWS EC2 plugin for Jenkins to spawn Jenkins slaves when tasks are generated. I'm running into permission issues when trying to build Docker images inside the Docker container. I've looked at dozens of other posts, and people frequently provide this as the answer:
create docker group
add jenkins user to docker group
restart
everything magically works
The thing is that I can't restart, because the Jenkins slave gets spawned by the plugin, and I'm not sure how to restart it properly so that it handles the build correctly afterwards. Also, that would mean running the restart on the host from inside a container, which sounds like a bad idea.
I've tried:
Adding jenkins to sudo users in the Dockerfile: RUN adduser jenkins sudo followed by RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
Changing the docker socket file owner: RUN chown root:jenkins /var/run/docker.sock
Changing the docker socket permissions: chmod 777 /var/run/docker.sock
Using newgrp so I don't have to restart docker from outside the container
Basically, how do I avoid restarting the Docker service while still providing the permissions needed to build Dockerfiles inside the Jenkins slave container? Or, if I actually can restart while still using the EC2 plugin, how would I best go about that?
Current dockerfile:
FROM jenkins/jnlp-slave
USER root
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce && \
apt-get -y install sudo
VOLUME /var/run/docker.sock
RUN adduser jenkins sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN usermod -aG docker jenkins
RUN chmod 777 /var/run/docker.sock
RUN chown root:jenkins /var/run/docker.sock
USER jenkins
Thank you!
I also faced this issue and had to do the following steps to make it work:
Update the Dockerfile to install the docker client.
Update the ECS Cloud Jenkins configuration to do the volume mount, i.e. /var/run/docker.sock.
The most essential part is being able to run the docker commands inside the container. There are two options:
i. Run the containerUser as the root user.
This is the most convenient option if running the container as root is fine for you.
ii. Add the jenkins user to the docker group on the host:
usermod -a -G docker jenkins
Then get the GID of the docker group on the host and map that GID inside the container, i.e. update the GID of the container's docker group to match the host's. This way you won't get the permission denied error and don't need chmod 777 /var/run/docker.sock.
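A sketch of option ii, assuming the Dockerfile above is modified to accept the GID as a build argument (ARG DOCKER_GID is hypothetical here; the original Dockerfile relies on the group created by the docker-ce package):
# On the host / ECS instance: find the docker group's GID (for example, 497 on some AWS ECS instances)
getent group docker | cut -d: -f3
# Rebuild the slave image so its docker group uses that GID; this assumes the Dockerfile
# declares ARG DOCKER_GID and runs: groupadd -g ${DOCKER_GID} docker
docker build --build-arg DOCKER_GID=$(getent group docker | cut -d: -f3) -t my-jnlp-slave .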

Cannot execute ansible playbook via docker container

I'm executing a pipeline on Jenkins, which runs inside a Docker container. This pipeline calls another docker-compose file that executes an Ansible playbook. The service that executes the playbook is called agent and is defined as follows:
agent:
  image: pjestrada/ansible
  links:
    - db
  environment:
    PROBE_HOST: "db"
    PROBE_PORT: "3306"
  command: ["probe.yml"]
This is the image it uses:
FROM ubuntu:trusty
MAINTAINER Pablo Estrada <pjestradac#gmail.com>
# Prevent dpkg errors
ENV TERM=xterm-256color
RUN sed -i "s/http:\/\/archive./http:\/\/nz.archive./g" /etc/apt/sources.list
#Install ansible
RUN apt-get update -qy && \
apt-get install -qy software-properties-common && \
apt-add-repository -y ppa:ansible/ansible && \
apt-get update -qy && \
apt-get install -qy ansible
# Copy baked in playbooks
COPY ansible /ansible
# Add volume for Ansible playbooks
VOLUME /ansible
WORKDIR /ansible
RUN chmod +x /
#Entrypoint
ENTRYPOINT ["ansible-playbook"]
CMD ["site.yml"]
My local machine is Ubuntu 16.04, and when I run docker-compose up agent the playbook executes successfully. However, when I'm inside the Jenkins container I get this error on the same command:
Attaching to todobackend9dev_agent_1
agent_1 | ERROR! the playbook: site.yml does not appear to be a file
These are the image and compose files for my Jenkins container:
FROM jenkins:1.642.1
MAINTAINER Pablo Estrada <pjestradac#gmail.com>
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the group ID used by AWS Linux ECS Instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker Compose
RUN groupadd -g ${DOCKER_GID:-497} docker
# Used to control Docker and Docker Compose versions installed
# NOTE: As of February 2016, AWS Linux ECS only supports Docker 1.9.1
ARG DOCKER_ENGINE=1.10.2
ARG DOCKER_COMPOSE=1.6.2
# Install base packages
RUN apt-get update -y && \
apt-get install apt-transport-https curl python-dev python-setuptools gcc make libssl-dev -y && \
easy_install pip
# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
apt-get update -y && \
apt-get purge lxc-docker* -y && \
apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
usermod -aG docker jenkins && \
usermod -aG users jenkins
# Install Docker Compose
RUN pip install docker-compose==${DOCKER_COMPOSE:-1.6.2} && \
pip install ansible boto boto3
# Change to jenkins user
USER jenkins
# Add Jenkins plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Compose File:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
I added a volume in order to access the Docker socket from my Jenkins container. However, for some reason I'm not able to access the site.yml file I need for the playbook, even though the file is available outside the container.
Can anyone help me solve this issue?
How sure are you about that volume mount point and your paths?
- jenkins_home:/var/jenkins_home
Have you tried debugging via echo? If it can't find site.yml, then paths are the most likely cause. You can use Jenkins replay on a job to iterate quickly and modify parts of the Jenkins code. That will let you run things like
sh "pwd; ls -la"
I recommend adding the equivalent within your docker container so you can check the paths. My guess is that the workspace isn't where you think it is and you'll want to run docker with:
-v ${env.WORKSPACE}:/jenkins-workspace
and then within the container:
pushd /jenkins-workspace
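The same idea in plain shell, assuming WORKSPACE is the standard Jenkins environment variable and using the pjestrada/ansible image from the compose file above: mount the workspace into the agent container and run the playbook from a path that exists inside the container:
# Mount the Jenkins workspace and run the playbook from there
docker run --rm \
  -v "$WORKSPACE:/jenkins-workspace" \
  -w /jenkins-workspace \
  pjestrada/ansible site.yml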
