Dockerized Jenkins not able to find docker - docker

I'm trying to establish a Jenkins Pipeline that's able to build Docker images, but I ran into the error docker: not found after executing the pipeline. The Jenkinsfile has the following content:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'docker --version'
            }
        }
    }
}
It's a simple script to get things started, but it seems that the dockerized Jenkins installation can't find a suitable Docker installation to use.
The required plugins (Docker and Docker Pipeline) are installed and a global Docker installation is configured, but the error persists.
Jenkins setup is done by using this docker-compose:
version: '3.1'

networks:
  docker:

volumes:
  jenkins-data:
  jenkins-docker-certs:

services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: always
    networks:
      - docker
    ports:
      - 8090:8080
      - 50000:50000
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
      - $HOME:/home
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1

  dind:
    image: docker:dind
    privileged: true
    restart: always
    networks:
      docker:
        aliases:
          - docker
    ports:
      - 2376:2376
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client
      - $HOME:/home
    environment:
      - DOCKER_TLS_CERTDIR=/certs
After reading some more posts about this issue and following the official Jenkins docs, I thought docker:dind was meant for exactly this purpose. Maybe I'm missing some important configuration here? When launching the docker:dind container, the log shows the following warning: could not change group /var/run/docker.sock to docker: group docker not found, but the group exists and I'm able to run docker commands without sudo (I followed the official Docker post-installation steps).
Another problem right now is that Jenkins can't persist configuration data or pipeline-related state in general. After restarting the machine I have to go through the setup wizard every single time, and I don't know why.
Has anyone run into similar problems?
Many thanks in advance!

Your docker-compose file is correct; you just need to add a volume in the jenkins container:
- /usr/bin/docker:/usr/bin/docker
You also have a lot of configuration that isn't required; you can check this link to see other possible configurations. You are actually using Solution 3, and you can switch to this docker-compose file.
As for volumes, they should be persisted since they are declared in the volumes section. You can try using external volumes if needed.
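A minimal sketch of the change, in the context of the compose file from the question (only the jenkins service's volumes section is shown):
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
      # make the host's docker CLI binary available inside the container;
      # it still talks to the dind daemon via DOCKER_HOST=tcp://docker:2376
      - /usr/bin/docker:/usr/bin/docker
With the CLI binary mounted, the Jenkins container only needs a client; no Docker engine has to run inside the Jenkins container itself.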

Fast forward one year and I've run into an analogous problem, only with mismatched GLIBC versions, as described here.
I solved it by upgrading the GLIBC version in the Jenkins container to 2.35 (as shipped with Ubuntu Jammy on the host). To achieve this I had to build my own Jenkins container based on ubuntu:jammy and JDK 17, using a template from the official Debian-based one (sourced from here). Now the GLIBC versions agree, and Docker-in-Docker Jenkins builds can be made using the Docker installed on a host running Ubuntu Jammy:
$ ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
# vs.
$ docker run --rm -it mirekphd/jenkins-jdk17-on-ubuntu-2204:2.374 ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
Feel free to use this container (best served with the latest tag), as I will have to maintain it for our own in-house use, setting its builds as one of... Jenkins pipelines (bootstrap problem notwithstanding). It will be a Docker-in-Docker Jenkins-in-Jenkins pipeline:)
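Roughly, the idea behind such an image looks like the hypothetical Dockerfile sketch below. This is not the actual Dockerfile behind that image; the package list, Jenkins download URL, and version are placeholders:
# Hypothetical sketch: Jenkins on Ubuntu Jammy (GLIBC 2.35) with JDK 17
FROM ubuntu:jammy
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-17-jre-headless curl ca-certificates git && \
    rm -rf /var/lib/apt/lists/*
# Fetch the Jenkins WAR (pin a concrete version instead of "latest" in practice)
RUN curl -fsSL -o /usr/share/jenkins.war https://get.jenkins.io/war-stable/latest/jenkins.war
EXPOSE 8080 50000
ENTRYPOINT ["java", "-jar", "/usr/share/jenkins.war"]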

Related

Docker-in-Docker doesn't work on self hosted linux runner

I'm having trouble utilizing docker commands in a self-hosted Linux runner.
Reading the docs, it should work more or less out of the box, just like when using Atlassian's own runners.
However, when running a docker command I get an error:
+ docker version
bash: docker: command not found
The relevant part of the pipelines yml file:
pipelines:
  branches:
    'master':
      - step:
          name: 'step1'
          script:
            - docker version  # this works
          services:
            - docker
      - step:
          name: 'step2'
          runs-on:
            - self.hosted
            - linux
          script:
            - docker version  # this fails
          services:
            - docker
The only self-hosted-runner-specific mention of docker commands is the new addition of using custom images to run the docker daemon inside a runner, but as I understand it, running the default should also work on self-hosted runners.
https://support.atlassian.com/bitbucket-cloud/docs/configure-your-runner-in-bitbucket-pipelines-yml#Custom-docker-in-docker-image
Am I missing something that should be done when starting the runner, or is this not supported (yet)?
I've asked the same question on Atlassian's community: https://community.atlassian.com/t5/Bitbucket-questions/Selfhosted-runner-cannot-use-docker-commands/qaq-p/2186491#M87567
I will answer this question here if I get an answer there.
My question was answered on Atlassian's community, and the solution was to use docker:dind as the Docker image.
You can add the "definitions:" configuration below to your YAML file above the "pipelines:" config.
definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  branches:
    'master':
      - step:
          ...
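For completeness, here is a sketch of how the self-hosted step from the question then looks; the step itself doesn't change, only the docker service it references is now backed by docker:dind:
      - step:
          name: 'step2'
          runs-on:
            - self.hosted
            - linux
          script:
            - docker version
          services:
            - docker   # now resolved via the definitions block above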

Push image built with docker-compose to dockerhub

I have a golang script which interacts with postgres. I created a service in docker-compose.yml for both golang and postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run". What is the correct way of doing this?
The image created by "docker-compose up --build" launches with no error under "docker run", but immediately stops.
docker-compose.yml:
version: '3.6'

services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"

  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine docker images into one; it runs (with up) or builds and then runs (with up --build) docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - based on the go configuration
2 - based on the db configuration
To see which containers are actually running, use the command:
docker ps -a
For more info, see the docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around, take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile; please refer to:
- the postgres official Dockerfile
- the golang official Dockerfile
Combine them to get both.
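To give a feel for the second variant, here is a hypothetical sketch only (the Go version, paths, and the start.sh script are placeholders, and it assumes a go.mod at the project root):
# Hypothetical sketch: golang base image with postgres installed alongside it
FROM golang:1.21
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . .
RUN go build -o /usr/local/bin/workflow ./src/main.go
# start.sh would need to start postgres first and then launch the app
COPY start.sh /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
In practice, keeping the two services in separate containers (as the compose file already does) is the cleaner option.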
Edit: (digital ocean deployment)
Well, if you copy everything (docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.

docker stack ignoring unsupported options

I am running Docker (Server Version: 18.06.0-ce) on CentOS 7.5.
I have a docker-compose file running a db2 server with the following sample definition:
version: "3.7"
services:
db2exp:
image: db2
ports:
- "50000:50000"
networks:
- lmnet
ipc: host
cap_add:
- IPC_LOCK
- IPC_OWNER
environment:
- DB2INSTANCE=db2inst1
- DB2PASSWD=db2inst1
- LICENSE=accept
volumes:
- db2data:/home
When using docker-compose up, I do not have problems starting the db2 service. However, when I try to use docker stack, I get the following message:
docker stack deploy test --compose-file docker-compose.yml
Ignoring unsupported options: cap_add, ipc
This causes db2start to return SQL1042C An unexpected system error occurred.
It would be ideal if what runs in compose also ran in stack. What, if anything, can be done so that the db2 container can be used in a docker stack environment and not just with docker-compose?
If it matters, I have docker-compose version 1.23.0-rc1, build 320e4819.
Thanks in advance.
This is not supported by swarm mode currently, as the error message you've shown and the documentation identify. Personally, I'd question whether you really want to have your database running in swarm mode. Docker does not migrate the volume for you, so you wouldn't see your data if the service were rescheduled on another node.
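As an aside the answer doesn't cover: if you do keep the database in swarm mode, a common workaround for the volume issue is to pin the service to the node that holds the data with a placement constraint; a sketch, with the hostname as a placeholder:
    deploy:
      placement:
        constraints:
          - node.hostname == db2-host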
You can follow the progress of getting this added to Swarm Mode in the github issues, there are several, including:
https://github.com/moby/moby/issues/24862
https://github.com/moby/moby/issues/25885
The hacky solution I've seen, if you really need this run from swarm mode, is to schedule a container with the docker socket mounted and the docker binaries in the image, which then executes a docker run command directly against the local engine. E.g.:
version: "3.7"
services:
db2exp-wrapper:
image: docker:stable
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command: docker run --rm --cap-add IPC_LOCK --cap-add IPC_OWNER -p 50000:50000 ... db2
I don't really recommend the above solution; sticking with docker-compose would likely be a better fit for your use case. Downsides of this solution include only publishing the port on a single host, and the potential security risk of anyone else having access to that docker socket.

Docker on Windows10 home - inside docker container connect to the docker engine

When creating a Jenkins Docker container, it is very useful to be able to connect to the Docker daemon. In that way, I can start docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to 'docker exec -it container-id bash' and start 'docker ps'.
On Linux you can use bind mounts on /var/run/docker.sock. On Windows this doesn't seem possible; the solution is to use 'named pipes'. So, in my docker-compose.yml file, I tried to mount the named pipe.
version: '2'

services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
      # - /var/run/docker.sock:/var/run/docker.sock
      # - /path/to/postgresql/data:/var/run/postgresql/data
      # - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I set up the docker-compose file so that I can use the docker.sock (or Docker) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock, but this does not work in a Windows environment. When you add this folder (/var) to Oracle VM VirtualBox, it never gets an IP. And on many posts
You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
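A rough sketch of what that could look like for the Jenkins service from the compose file above, assuming Docker Desktop's host.docker.internal hostname is available (adjust the address to your setup):
services:
  jenkins:
    image: jenkins-docker
    environment:
      # points Docker clients inside the container at the daemon exposed
      # on tcp://localhost:2375 on the Windows host
      - DOCKER_HOST=tcp://host.docker.internal:2375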
There are also several Docker plugins for Jenkins that allow this connection. You can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
connectTimeout: 3,
containerCapStr: '4',
credentialsId: '',
dockerHostname: '',
name: 'docker.local',
readTimeout: 60,
serverUrl: 'unix:///var/run/docker.sock', // <-- Replace here by the tcp address
version: ''
]
EDIT 1:
I don't know if it is useful, but the Docker daemon on Windows is located at C:\ProgramData\docker according to the Docker Daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network, because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
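In docker-compose terms that is a one-line change on the service; a minimal sketch (note that host networking makes the explicit port mappings redundant):
services:
  jenkins:
    image: jenkins-docker
    network_mode: host   # replaces the ports/networks sections of the service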
My first try was to start a Docker environment using the "Docker Quickstart Terminal". This is a good solution when running Docker commands within that environment.
But installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need to access the Docker daemon. After trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to get an Ubuntu virtual machine and install it via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker via this official description.
Before installing Docker, of course, you need to install curl, Git, etc.

Docker - issue command from one linked container to another

I'm trying to set up a primitive CI/CD pipeline using 2 Docker containers -- I'll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that's done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
The jenkins container is using the latest image (Dockerfile).
The node-app container's Dockerfile is:
FROM node:latest
EXPOSE 80
WORKDIR /usr/src/final-exercise
ADD . /usr/src/final-exercise
RUN apt-get update -y
RUN apt-get install -y nodejs npm
RUN cd /usr/src/final-exercise; npm install
CMD ["node", "/usr/src/final-exercise/app.js"]
jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to @alkis):
node-app:
  container_name: node-app
  build: .
  ports:
    - 80:80
  links:
    - jenkins

jenkins:
  container_name: jenkins
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69e52b216d48 finalexercise_node-app "node /usr/src/final-" 3 hours ago Up 3 hours 0.0.0.0:80->80/tcp node-app
5f7e779e5fbd jenkins "/bin/tini -- /usr/lo" 3 hours ago Up 3 hours 0.0.0.0:8080->8080/tcp, 50000/tcp jenkins
I can ping jenkins from node-app and vice versa.
Is this even possible? If not, am I making an architectural mistake here?
Thank you very much in advance, I appreciate it!
EDIT:
I've stumbled upon nsenter and can easily enter a container's shell using this and this. However, both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.
node-app:
  build: .
  ports:
    - 80:80
  links:
    - finalexercise_jenkins_1

jenkins:
  image: jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Try the above. You are linking by the image name, but you must use the container name.
In your case, since you don't explicitly specify the container name, it gets auto-generated like this:
finalexercise : the folder where your docker-compose.yml is located
node-app : the service key in the compose file
1 : you only have one container with the prefix finalexercise_node-app; if you started a second one, its name would be finalexercise_node-app_2
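If in doubt, you can check which names compose actually generated, for example:
docker ps --format '{{.Names}}'
The output should list names following the folder_service_index pattern described above.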
The setup of the yml files:
node-app:
  build: .
  container_name: my-node-app
  ports:
    - 80:80
  links:
    - my-jenkins

jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
Of course you can specify a container name for the node-app as well, so you can use something constant for the communication.
Update
In order to test, log in to a bash terminal of the jenkins container:
docker exec -it my-jenkins bash
Then try to ping my-node-app, or even telnet to the specific port.
ping my-node-app
Or you could
telnet my-node-app 80
Update
What you want to do is easily accomplished by the exec command.
From your host you can execute this (try it so you are sure it's working)
docker exec -i <container_name> ./deploy.sh
If the above works, then your problem reduces to executing the same command from a container. As it is, you can't do that, since the container issuing the command (jenkins) doesn't have access to your host's docker installation (which not only recognises the command, but also controls the container you need access to).
I haven't used either of them, but I know of two solutions:
Use this official guide to gain access to your host's docker daemon and issue docker commands from your containers as if you were doing it from your host.
Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it's shared); a rough compose sketch of this option follows below.
This thread from SO gives some more insight about this issue.
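A rough sketch of the second option, based on the compose file above (the paths are the usual Linux defaults and may differ on your host):
jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
    - 8080:8080
  volumes:
    - /home/ec2-user/final-exercise:/var/jenkins
    # share the host's docker client binary and daemon socket with the container
    - /usr/bin/docker:/usr/bin/docker
    - /var/run/docker.sock:/var/run/docker.sock
With that in place, a Jenkins job can run docker exec -i my-node-app ./deploy.sh and the command is carried out by the host's Docker daemon.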
