I have a rootless Docker host, Jenkins running in Docker, and a FastAPI app inside a container as well.
Jenkins Dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure -v jenkins_home:/var/jenkins_home -v /run/user/1000/docker.sock:/var/run/docker.sock -p 8080:8080 -p 5000:5000 jenkins-docker-image
Here, -v /run/user/1000/docker.sock:/var/run/docker.sock is used so that jenkins-docker can use the host's Docker engine.
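As a quick sanity check (a sketch of my own, assuming the container from the run command above is up), you can confirm that the docker CLI inside the container really talks to the host's rootless engine through the mounted socket:

```shell
# The host's rootless socket is mounted at the default in-container path,
# so the docker CLI inside the container needs no extra configuration.
docker exec jenkins-docker docker version

# If the mount works, this lists the host's containers,
# including jenkins-docker itself.
docker exec jenkins-docker docker ps
```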
Then, for the tests I have a docker compose file:
services:
  app:
    volumes:
      - /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
    depends_on:
      - testdb
    ...
  testdb:
    image: postgres:14-alpine
    ...
volumes:
  test-result:
Here I am using the volume created on the host when I ran the jenkins-docker image. After running the Jenkins 'test' stage, I can see that a report.xml file was created in both the host and jenkins-docker volumes.
Inside jenkins-docker:
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside the host:
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps in my Jenkinsfile:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
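One detail worth noting (my assumption, not stated in the original post): docker compose up -d returns as soon as the containers start, so the junit step can run before the report has been written. A minimal sketch of a variant that blocks until the tests finish, assuming the tests run in the app service:

```groovy
steps {
    // Run in the foreground and propagate the app container's exit
    // code; this step then blocks until the test run has finished.
    sh 'docker compose -p testing -f docker/testing.yml up --exit-code-from app'
    // junit resolves patterns relative to the workspace root.
    junit 'test-result/report.xml'
}
```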
I also tried using the host path in the junit step, but either way the Jenkins log shows:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?
Related
I have Jenkins deployed on Kubernetes (AWS EKS), and a node designated for the Jenkins pipeline tasks.
I have a pipeline in which I want to build a Docker image, so here is how my pipeline looks:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    illumex.ai/noderole: jenkins-worker
  containers:
    - name: docker
      image: docker:latest
      imagePullPolicy: Always
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('build') {
            steps {
                container('system') {
                    sh """
                    docker system prune -f
                    """
                }
            }
        }
    }
}
However, my job fails with:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I think it's a permissions issue. However, since the containers are created as part of the pipeline, which user should I give permissions to?
On your Jenkins machine, ensure Docker is installed and that the jenkins user is in the docker group.
Installing the Docker plugin is not enough.
The same goes for kubectl: it must be installed on the Jenkins machine.
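On a classic (non-container) Jenkins install, that typically boils down to something like the following sketch (group, user, and service names assumed to be the distro defaults):

```shell
# Add the jenkins user to the docker group so it can use the socket
# at /var/run/docker.sock without sudo.
sudo usermod -aG docker jenkins

# Group membership is read at login, so restart the Jenkins service
# for the change to take effect.
sudo systemctl restart jenkins

# Verify: the docker group should now list jenkins as a member.
getent group docker
```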
FROM jenkins/jenkins AS base
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt-get install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
FROM base
RUN kubectl version --client
USER jenkins
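The HOST_UID/HOST_GID build args above only help if they actually match the host. A sketch of a matching build command (the stat flag is the GNU coreutils form, and the image tag is just an example):

```shell
# Match the jenkins uid to the current host user, and the docker gid
# to the group owning the docker socket on the host, then build.
docker build \
  --build-arg HOST_UID="$(id -u)" \
  --build-arg HOST_GID="$(stat -c '%g' /var/run/docker.sock)" \
  -t my-jenkins-docker .
```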
In Jenkins, I want to understand why "docker ps" does not run inside my container, even though my Jenkinsfile looks like this:
podTemplate(serviceAccount: 'jenkins', containers: [
    containerTemplate(
        name: 'docker',
        image: 'docker',
        resourceRequestCpu: '100m',
        resourceLimitCpu: '300m',
        resourceRequestMemory: '300Mi',
        resourceLimitMemory: '500Mi',
        command: 'cat',
        privileged: true,
        ttyEnabled: true
    )
],
volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]
) {
    node(POD_LABEL) {
        stage('Check running containers') {
            container('docker') {
                sh 'hostname'
                sh 'hostname -i'
                sh 'docker --version'
                sh 'docker ps'
            }
        }
    }
}
I always get this kind of message after I start my pipeline:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
It should instead show something like the output of docker ps. Thanks.
Even if you run Jenkins inside a Docker container, you must install Docker inside it.
The best way is to create a new Dockerfile and install Docker in it:
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt-get install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
For the sake of clarity: if you deploy to Kubernetes or use docker-compose, you will also need kubectl and docker-compose inside your container. If not, you can remove their installation from the Dockerfile above.
I am using Docker version 20.10.17, build 100c701, on Ubuntu 20.04.4.
I have successfully created a jenkins/jenkins:lts container with its own volume, using the following command:
docker run -p 8082:8080 -p 50001:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
But after installing many plugins and running many jobs on Jenkins, I kept getting a notification in the Jenkins GUI that the storage is almost full (it was almost 388 MB).
1- What is the default size of a Docker volume? I couldn't find an answer anywhere.
2- I tried to specify the size of the volume (after deleting everything: images, containers, volumes) using driver_opts in a docker compose file.
The docker-compose.yml file.
version: '3'
services:
  jenkins:
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home
    ports:
      - 8082:8080
      - 50001:50000
volumes:
  jenkins_home:
    driver_opts:
      o: "size=900m"
The Dockerfile.
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && apt-get install -y lsb-release && \
    apt-get -y install apt-transport-https ca-certificates
RUN apt-get -y install curl && \
    apt-get -y update && \
    apt-get -y install python3.10
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
    https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
    https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get -y update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
USER jenkins
I got an error that the required device option is not specified.
I don't want temporary tmpfs storage, so I tried to specify a path on my machine instead; then I got the error that there is no such device.
What am I doing wrong? How should I proceed?
My final target is to create a Jenkins container with a large volume size.
You could create a volume with a specified size using this command:
docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=100m,uid=1000 \
  foo
And use it when creating the container.
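For example (a sketch using the port mappings from the question; note that type=tmpfs is memory-backed, so the data will not survive a host reboot, which may not fit the "no temporary storage" requirement above):

```shell
# Create a 900 MB size-capped, tmpfs-backed volume ...
docker volume create --driver local \
  --opt type=tmpfs --opt device=tmpfs \
  --opt o=size=900m,uid=1000 \
  jenkins_home_sized

# ... and mount it as the Jenkins home when starting the container.
docker run -d -p 8082:8080 -p 50001:50000 \
  -v jenkins_home_sized:/var/jenkins_home jenkins/jenkins:lts
```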
So I'm trying to start a serverless sibling container from the main container. I've bind-mounted /var/run/docker.sock to /var/run/docker.sock:
$ docker inspect -f '{{ .Mounts }}' webapp
Output: [{bind /var/run/docker.sock /var/run/docker.sock true rprivate}]
And in my Dockerfile (FROM ubuntu:bionic) I've installed Docker with the same bit of code that I used to install Docker on my local machine (also Ubuntu Bionic).
RUN apt -y install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" && \
    apt-get -y install docker-ce && \
    getent group docker || groupadd docker
Regular docker commands are working fine.
However, serverless invoke local --docker -f function --data '{MY_DATA}' isn't working, and I'm given the response:
'Please start the Docker daemon to use the invoke local Docker integration.'
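When that error appears, a couple of quick checks from inside the container can narrow it down (a sketch, using the paths from the bind mount above):

```shell
# Is the socket actually present inside the container, and which
# user/group owns it?
ls -l /var/run/docker.sock

# Can the docker CLI reach the daemon through the mounted socket?
docker info >/dev/null 2>&1 \
  && echo "daemon reachable" \
  || echo "daemon NOT reachable (check socket mount and permissions)"
```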
I just tried to build my test image for a Jenkins course and got this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace#tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace#tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the Docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
  jenkins:
    image: "jenkins/jenkins:lts"
    ports:
      - "8080:8080"
    restart: "always"
    volumes:
      - "/var/jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also see "docker: not found" when I type the docker command...
I've also changed the socket's permissions to 777.
What can be wrong?
Thanks!
You are trying to achieve a Docker-in-Docker kind of setup. Mounting just the Docker socket will not make it work as you expect: you need to install the Docker binary inside the container as well. You can do this either by extending your Jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the Docker binary into a running container, and then using that image for your CI/CD. Try integrating the RUN statement below into the extended Dockerfile, or run it in the container to be committed (it should work on an Ubuntu-based Docker image):
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce
Ref - https://github.com/jpetazzo/dind
PS - It isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
Adding to that, you shouldn't mount the host's docker binary inside the container:
⚠️ Former versions of this post advised to bind-mount the docker
binary from the host to the container. This is not reliable anymore,
because the Docker Engine is no longer distributed as (almost) static
libraries.