How to build a Docker image in Jenkins running inside Docker?

I just tried to build my test image for a Jenkins course and ran into this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the Docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
jenkins:
image: "jenkins/jenkins:lts"
ports:
- "8080:8080"
restart: "always"
volumes:
- "/var/jenkins_home:/var/jenkins_home"
- "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also see "docker: not found" when I type the "docker" command...
And I've changed the permissions on the socket to 777.
What can be wrong?
Thanks!

You are trying to achieve a Docker-in-Docker kind of setup. Mounting just the Docker socket will not make it work as you expect; you need to install the Docker binary inside the container as well. You can do this either by extending your Jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the Docker binary into it, and then using that image for your CI/CD. Try integrating the RUN statement below into the extended Dockerfile or the container to be committed (it should work on an Ubuntu-based Docker image):
RUN apt-get update && \
    apt-get -y install apt-transport-https \
                       ca-certificates \
                       curl \
                       gnupg2 \
                       software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce
Ref - https://github.com/jpetazzo/dind
PS - It isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
Adding to that, you shouldn't mount the host's docker binary inside the container:
⚠️ Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
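With the Docker CLI baked into the Jenkins image and the socket bind-mounted as in your compose file, the original pipeline step should then work. A minimal sketch, reusing the build command from the question:
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // Talks to the host daemon through the mounted /var/run/docker.sock
                sh 'docker build -t nginx_lamp_app .'
            }
        }
    }
}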

Related

Jenkins on docker not finding xml pytest report

I have a rootless Docker host, Jenkins in Docker, and a FastAPI app inside a container as well.
Jenkins Dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
                       ca-certificates \
                       curl \
                       gnupg2 \
                       software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure -v jenkins_home:/var/jenkins_home -v /run/user/1000/docker.sock:/var/run/docker.sock -p 8080:8080 -p 5000:5000 jenkins-docker-image
Where -v /run/user/1000/docker.sock:/var/run/docker.sock is used so jenkins-docker can use the host's docker engine.
Then, for the tests I have a docker compose file:
services:
  app:
    volumes:
      - /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
    depends_on:
      - testdb
    ...
  testdb:
    image: postgres:14-alpine
    ...
volumes:
  test-result:
Here I am using the volume created on the host when I ran the jenkins-docker image. After running the Jenkins 'test' stage I can see that a report.xml file was created inside both the host and jenkins-docker volumes.
Inside jenkins-docker
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside host
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps in my Jenkinsfile:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
I also tried using the host path for the junit step, but either way I get the following in the Jenkins logs:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?
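For what it's worth, the Jenkins junit step interprets its argument as an Ant-style pattern relative to the job workspace rather than as an absolute filesystem path, so a workspace-relative pattern is the usual form. A sketch based on the layout shown above:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    // Pattern is resolved relative to /var/jenkins_home/workspace/vlep-pipeline_main
    junit 'test-result/report.xml'
}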

How to specify the size of a Docker volume?

I am using Docker version 20.10.17, build 100c701, on Ubuntu 20.04.4.
I have successfully created a jenkins/jenkins:lts image with its own volume, using the following command:
docker run -p 8082:8080 -p 50001:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
But after installing many plugins and running many jobs on Jenkins, I kept getting a notification in the Jenkins GUI that the storage is almost full (it was almost 388 MB).
1- What is the default size of a Docker volume? I couldn't find an answer anywhere.
2- I tried to specify the size of the volume (after deleting everything: image/container/volume) using driver_opts in a docker compose file.
The docker-compose.yml file:
version: '3'
services:
  jenkins:
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home
    ports:
      - 8082:8080
      - 50001:50000
volumes:
  jenkins_home:
    driver_opts:
      o: "size=900m"
The Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && apt-get install -y lsb-release && \
    apt-get -y install apt-transport-https ca-certificates
RUN apt-get -y install curl && \
    apt-get -y update && \
    apt-get -y install python3.10
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
    https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
    https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get -y update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
USER jenkins
I got an error that the required device option is not specified.
I don't want temporary tmpfs storage, so I tried to specify a path on my machine. I got the error that there is no such device.
What am I doing wrong? How should I proceed?
My final target is to create a Jenkins container that has a large volume size.
You could create a volume with a specified size using this command:
docker volume create --driver local \
    --opt type=tmpfs \
    --opt device=tmpfs \
    --opt o=size=100m,uid=1000 \
    foo
And use it when creating the container.
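For example, a run command along the lines of the one from the question, attaching the foo volume created above (a sketch, not tested verbatim):
docker run -p 8082:8080 -p 50001:50000 -d -v foo:/var/jenkins_home jenkins/jenkins:lts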

Running Docker commands inside Jenkins pipeline

Is there a proper way to run Docker commands from a containerized Jenkins service?
I see there are many plugins to support Docker commands in the Jenkins ecosystem, although all of them raise errors because Docker isn't installed in the Jenkins container.
I have a Dockerfile that provides a Jenkins image with a working Docker installation, but to work I have to mount the host's Docker socket:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && \
    apt-get -y install sudo \
                       apt-transport-https \
                       ca-certificates \
                       curl \
                       gnupg-agent \
                       software-properties-common
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
RUN apt-get -y update && \
    apt-get -y install --allow-unauthenticated \
        docker-ce \
        docker-ce-cli \
        containerd.io
RUN echo "jenkins:jenkins" | chpasswd && adduser jenkins sudo
RUN echo jenkins ALL= NOPASSWD: ALL >> /etc/sudoers
USER jenkins
It can be run like this:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock
This way it's possible to run Docker commands inside the Jenkins container. However, I am concerned about security: this way the Jenkins container can access all the containers running on the host machine, and moreover Jenkins runs as the root user, which I wouldn't like in production.
I want to run a Jenkins instance within a Kubernetes cluster to support CI and CD pipelines within that cluster, so I'm guessing Jenkins must be containerized.
Am I missing something?

Run docker in docker on a dedicated server?

I am going to run Docker on a dedicated server from a service provider. It is not possible to install Docker on this server; Apache, Git and a lot more are already installed. So I am trying to run Docker in a container. I will pull a Docker image from the GitLab registry and run it on a subdomain. I wrote a .gitlab-ci.yml, but I get an error message.
I found this answer:
You can't (*) run Docker inside Docker containers or images. You can't (*) start background services inside a Dockerfile. As you say, commands like systemctl and service don't (*) work inside Docker anywhere. And in any case you can't use any host-system resources, including the host's Docker socket, from anywhere in a Dockerfile.
How do I solve this problem?
$ sudo docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
ERROR: Job failed: exit code 1
.gitlab-ci.yml
image: ubuntu:latest
before_script:
  - apt-get update
  - apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add
  - apt-key fingerprint 0EBFCD88
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
stages:
  - test
test:
  stage: test
  script:
    - apt-get update
    - apt-cache search docker-ce
    - apt-get install -y docker-ce
    - docker run -d hello-world
This .gitlab-ci.yml works for me.
image: ubuntu:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/
before_script:
  - apt-get update
  - apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add
  - apt-key fingerprint 0EBFCD88
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update
stages:
  - test
test:
  stage: test
  script:
    - apt-get install -y docker-ce
    - docker info
    - docker pull ...
    - docker run -p 8000:8000 -d --name ...
The answer that you found is a little old. There are options to run systemd in a container, and it is also possible to use a systemctl replacement script.
However, I am not sure what application you really want to install.
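As a rough sketch of the systemd-in-a-container option (the image name here is a placeholder for any systemd-enabled image, not a specific recommendation), the usual pattern is to run the container privileged, mount the cgroup filesystem read-only, and let systemd run as PID 1:
docker run -d --name ci-systemd --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    some-systemd-enabled-image /sbin/init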

Docker in docker fails to start if container restarted

We are running a docker build agent inside a docker container.
It's based on Debian Jessie and gets Docker directly from Docker as documented here.
The Docker daemon runs fine the first time you start the container, but not the second time (if you don't delete the container).
Dockerfile:
FROM debian:jessie
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
    && apt-get -y install -q \
        apt-transport-https \
        ca-certificates \
        software-properties-common \
        curl \
    && curl -fsSL https://yum.dockerproject.org/gpg | apt-key add - \
    && add-apt-repository \
        "deb https://apt.dockerproject.org/repo/ \
        debian-$(lsb_release -cs) \
        main" \
    && apt-get update \
    && apt-get install -y \
        docker-engine
CMD []
docker-compose.yml:
services:
  dockerTest:
    container_name: dockerTest
    privileged: true
    image: tomeinc/intel-docker-node:latest
    command: bash -c "service docker start && sleep 2 && docker ps"
To reproduce: build the Dockerfile with docker build -t test . and then run docker-compose up twice. The second time, docker ps will fail with:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Weirdly, if the container keeps running, you can manually start docker by running docker exec -it test /bin/bash and then executing service docker start and docker ps.
I'm not really sure how to approach debugging this; any suggestions are welcome.
It turns out that Docker thought that it and/or containerd was still running (which it wasn't, but the PID files didn't get cleaned up).
Recommended starting approach to debugging issues: look at the log files. I am shocked by this revelation.
Anyway, adding rm /var/run/docker/libcontainerd/docker-containerd.pid /var/run/docker.pid to the start command before service docker start fixes it.
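In the docker-compose.yml above, that amounts to a command line along these lines (using rm -f so a missing PID file on the first run doesn't abort the chain; a sketch of the fix, not tested verbatim):
command: bash -c "rm -f /var/run/docker/libcontainerd/docker-containerd.pid /var/run/docker.pid && service docker start && sleep 2 && docker ps"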
