Docker-in-Docker doesn't work on a self-hosted Linux runner

I'm having trouble using docker commands in a self-hosted Linux runner.
Reading the docs, it should work more or less out of the box, just like when using Atlassian's own runners.
However, when running a docker command I get an error:
+ docker version
bash: docker: command not found
The relevant part of the bitbucket-pipelines.yml file:
pipelines:
  branches:
    'master':
      - step:
          name: 'step1'
          script:
            - docker version  # this works
          services:
            - docker
      - step:
          name: 'step2'
          runs-on:
            - self.hosted
            - linux
          script:
            - docker version  # this fails
          services:
            - docker
The only self-hosted-runner-specific mention of docker commands in the docs is the recent addition of using custom images to run the Docker daemon inside a runner, but as I understand it, running the default should also work on self-hosted runners:
https://support.atlassian.com/bitbucket-cloud/docs/configure-your-runner-in-bitbucket-pipelines-yml#Custom-docker-in-docker-image
Am I missing something that should be done when starting the runner, or is this not supported (yet)?
I've asked the same question on Atlassian's community: https://community.atlassian.com/t5/Bitbucket-questions/Selfhosted-runner-cannot-use-docker-commands/qaq-p/2186491#M87567
I will answer this question here if I get an answer there.

My question was answered on Atlassian's community, and the solution was to use docker:dind as the image for the Docker service.
You can add the definitions: configuration below to your YAML file, above the pipelines: config.
definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  branches:
    'master':
      - step:
          ...
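Putting the two fragments together, the full file from the question would look roughly like this (the steps are unchanged; only the definitions block is new):

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  branches:
    'master':
      - step:
          name: 'step1'
          script:
            - docker version
          services:
            - docker
      - step:
          name: 'step2'
          runs-on:
            - self.hosted
            - linux
          script:
            - docker version  # now runs against the docker:dind service
          services:
            - docker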

Related

Dockerized Jenkins not able to find docker

I'm trying to set up a Jenkins pipeline that is able to build Docker images, but I ran into the error docker: not found after executing the pipeline. The Jenkinsfile has the following content:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'docker --version'
            }
        }
    }
}
It's a simple script to get things started, but it seems the dockerized Jenkins installation can't find a suitable Docker installation to use.
The required plugins (Docker and Docker Pipeline) are installed and a global Docker installation is configured, but the error persists.
The Jenkins setup is done using this docker-compose file:
version: '3.1'

networks:
  docker:

volumes:
  jenkins-data:
  jenkins-docker-certs:

services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: always
    networks:
      - docker
    ports:
      - 8090:8080
      - 50000:50000
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
      - $HOME:/home
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1

  dind:
    image: docker:dind
    privileged: true
    restart: always
    networks:
      docker:
        aliases:
          - docker
    ports:
      - 2376:2376
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client
      - $HOME:/home
    environment:
      - DOCKER_TLS_CERTDIR=/certs
After reading some more posts about the issue and following the official Jenkins docs, I thought that docker:dind is used for this purpose. Maybe I'm missing some important configuration here? When launching the docker:dind container, the log shows the warning could not change group /var/run/docker.sock to docker: group docker not found, but the group exists and I'm able to run docker commands without sudo (I followed the official Docker post-installation steps).
Another problematic point right now is that Jenkins can't persist configuration data in general, or anything pipeline-related. After restarting the machine I have to go through the setup wizard every single time, and I don't know why.
Has someone run into similar problems?
Many thanks in advance!
Your docker-compose file is correct; you just need to add a volume to the jenkins container:
- /usr/bin/docker:/usr/bin/docker
You also have a lot of configuration that isn't required; you can check this link to see other possible configurations. You are currently using Solution 3, and you could switch to this docker-compose file.
As for the volumes, they should be persisted since they are declared in the volumes section. You can try external volumes if needed.
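For clarity, a minimal sketch of where that bind mount goes in the jenkins service from the compose file above (everything else unchanged):

  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
      - $HOME:/home
      - /usr/bin/docker:/usr/bin/docker  # mount the host's docker client binary

Note this mounts only the client binary from the host, so it has to be compatible with the libraries inside the Jenkins container (see the follow-up below).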
Fast forward one year and I've run into an analogous problem, only with mismatched GLIBC versions, as described here.
I solved it by upgrading the GLIBC version in the Jenkins container to 2.35 (as shipped with Ubuntu Jammy on the host). To achieve this I had to build my own Jenkins container based on ubuntu:jammy and JDK 17, using a template from the official Debian-based one (sourced from here). Now the GLIBC versions agree, and Docker-in-Docker Jenkins builds can be made using the Docker installed on a host running Ubuntu Jammy:
$ ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
# vs.
$ docker run --rm -it mirekphd/jenkins-jdk17-on-ubuntu-2204:2.374 ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
Feel free to use this container (best served with the latest tag), as I will have to maintain it for our own in-house use, setting up its builds as one of... Jenkins pipelines (bootstrap problem notwithstanding). It will be a Docker-in-Docker Jenkins-in-Jenkins pipeline :)

Is it possible to run a Docker Compose command before a job executes in GitLab CI?

I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I was trying to run a MariaDB container before running my tests. In GitHub Actions this is very easy: just run docker-compose up -d before my mvn command.
When it came to GitLab CI, I tried to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work: docker-compose is not found.
You can make use of docker:dind and run the docker commands inside another Docker container.
But there is a limitation: docker-compose is not available there by default. It is recommended to build a custom image on top of dind that adds docker-compose, push it to the GitLab image registry, and use it across your jobs.
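As an alternative to baking a custom image, a minimal sketch that installs docker-compose at job start. Assumptions: the job uses the Alpine-based docker:latest image with a dind service, and the docker-compose package is available via apk; note this replaces the maven image, so mvn would have to run in a separate job or be installed as well:

test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  before_script:
    # assumption: apk can install docker-compose in this Alpine-based image
    - apk add --no-cache docker-compose
  script:
    - docker-compose up -d
    - sleep 10
    - docker-compose ps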

GitLab CI cannot find the docker buildx command with the shell executor

I am having trouble getting my gitlab-runner to execute the docker buildx command.
I have a gitlab-runner which is configured like this:
[[runners]]
  name = "Name"
  url = "https://gitlab.mypage.com/"
  token = "token"
  executor = "shell"
  shell = "powershell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
And the pipeline which is triggered:
stages:
  - test
  - build

test_backend:
  stage: test
  script:
    - exit 0
  only:
    - merge_request
    - master

build:
  stage: build
  script:
    - docker login someregistry -u xxxx -p yyyy
    - docker buildx ls
  only:
    - merge_request
    - master
    - dev
I obfuscated the code a bit.
The problem I have is that the docker login command is executed correctly, but the docker buildx command is not.
I already tested the command manually on the machine and it was successful.
Can somebody help me here?
In my experience with Docker runners, the most likely situation here is that the runner's Docker does not have the experimental features enabled just because the host's Docker has them. I have experienced things like that in the past: the Docker inside the runner IS NOT the Docker on the host you are running the runner on!
You probably have to add the dind (Docker-in-Docker) service for that, because as far as I understand these runner setups, only then is the Docker from your host connected to the Docker within the runner.
We did it like that:
# gitlab-runner
gitlab-runner:
container_name: vivavis.gitlab-runner
image: gitlab/gitlab-runner:latest
restart: always
volumes:
- gitlab-runner:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock // <<<<<< THIS IS THE IMPORTANT LINE
networks:
- swp-infra-code
A word of warning here: when activating that, we ran into a bug / cleanup issue.
Because the GitLab Runner is now connected to the host's Docker, the images created while running CI/CD are not deleted properly. GitLab never implemented that; it just assumes that when the gitlab-runner container dies, all data dies with it. With this connection the data does not die, because it is not created within the container; it is created in the host's Docker.
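If you hit that, one hedged workaround is to prune unused images on the host periodically, for example from cron (assumption: anything unused and older than a week is safe to delete in your setup):

# run on the host, e.g. via cron
docker image prune -af --filter "until=168h"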
I found a solution for my problem.
To use the experimental features you can set an environment variable:

$env:DOCKER_CLI_EXPERIMENTAL = "enabled"

This command can be used in the CI pipeline.
It looks like the docker CLI in the shell executor is not the same as the docker CLI you get when you try it on the system directly.
Very confusing.
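A sketch of how this could look in the pipeline from the question, setting the variable per job rather than inside the PowerShell script (GitLab passes variables: entries to the job as environment variables):

build:
  stage: build
  variables:
    DOCKER_CLI_EXPERIMENTAL: "enabled"  # lets older docker CLIs expose buildx
  script:
    - docker login someregistry -u xxxx -p yyyy
    - docker buildx ls
  only:
    - merge_request
    - master
    - dev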

Testing Node server (docker) with GitLab CI

So I wrote a simple one-page server with Node and Express. I wrote a Dockerfile for it and ran it locally. Then I made a Postman collection and tested the endpoints.
I want to do this in GitLab CI using newman, so I came up with the following .gitlab-ci.yml:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker build -t test_img .
  - docker run -d -p 3039:3039 test_img

stages:
  - test

# test
api-test:
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  stage: test
  script:
    - newman run pdfapitest.postman_collection.json
It fails saying:
docker build -t test_img .
/bin/sh: eval: line 86: docker: not found
ERROR: Job failed: exit code 127
full output: https://pastebin.com/raw/C3mmUXKa
What am I doing wrong here? This seems to me like a very common use case, but I haven't found anything useful about it.
The issue is that your api-test job uses the image postman/newman:alpine to run the script.
This means that when GitLab tries to run the before_script section, there is no docker command available.
What you should do is provide the docker command in the image you're using to run the job. You can do that either by installing docker as the first step of your script, or by starting from a custom image which contains the software you're using inside the job plus the docker client itself.
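A sketch of the first option (installing the docker client at job start). Assumptions: the postman/newman:alpine image can install the docker-cli package via apk, and the dind service is run without TLS so it is reachable at tcp://docker:2375:

image:
  name: postman/newman:alpine
  entrypoint: [""]

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375  # assumption: dind without TLS
  DOCKER_TLS_CERTDIR: ""          # disable TLS certificate generation in dind

stages:
  - test

api-test:
  stage: test
  before_script:
    - apk add --no-cache docker-cli  # assumption: package available in this image's repos
    - docker build -t test_img .
    - docker run -d -p 3039:3039 test_img
  script:
    # note: the server runs inside the dind service, so the collection must
    # target host "docker" (e.g. http://docker:3039), not localhost
    - newman run pdfapitest.postman_collection.json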

Gitlab CI - docker: command not found

I am trying to build my Docker image within the GitLab CI pipeline.
However, it is not able to find the docker command:
/bin/bash: line 69: docker: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
.gitlab-ci.yml
stages:
  - quality
  - test
  - build
  - deploy

image: node:8.11.3

services:
  - mongo
  - docker:dind

before_script:
  - npm install

quality:
  stage: quality
  script:
    - npm run-script lint

test:
  stage: test
  script:
    - npm run-script test

build:
  stage: build
  script:
    - docker build -t server .

deploy:
  stage: deploy
  script:
    - echo "TODO deploy push docker image"
You need to choose an image that includes the Docker binaries:

image: gitlab/dind

services:
  - docker:dind
You have two options to fix this. Either way you will need to edit your config.toml file (located wherever you installed your GitLab runner).
OPTION 1
In config.toml:

privileged = true

In .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:18.09.7-dind  # older version that does not demand TLS (see below)
OPTION 2
In config.toml:

privileged = true
volumes = ["/certs/client", "/cache"]

In .gitlab-ci.yml:

myjob:
  stage: myjob
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2  # not sure if this is needed
    DOCKER_TLS_CERTDIR: "/certs"
IMPORTANT: once you have made the changes to config.toml, you will probably need to restart the GitLab runner (how to do this varies by OS). I did restart mine; I'm not sure what would happen if you did not.
Instructions for restarting the runner are here: https://docs.gitlab.com/runner/commands/ (basically gitlab-runner restart, but on Windows I had to use Windows "Services" to restart it).
Why this problem?
privileged = true gets rid of the docker: command not found problem.
However, docker:dind now requires TLS certs (whatever they are). If you are happy with an older Docker version, you can use OPTION 1. If you want the latest, you need to set up the GitLab CI config to use the certs, which is OPTION 2. J.E.S.U.S loves you :)
For more info ... https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03
The problem here is that the node Docker image does not include the Docker binaries.
Two possibilities (a sketch of the first follows below):
split the stages into two jobs: one using the node image for quality and test, one using a docker image for building and deploying. See the jobs documentation.
build a custom Docker image that embeds both node and docker, and use that image to build your repo.
Note that in both cases you will have to enable Docker inside your runner. See the documentation.
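A sketch of the first possibility, with one image per job (job names and commands taken from the question; the dind TLS variable follows the earlier answer, and the runner still needs privileged = true in config.toml):

stages:
  - quality
  - build

quality:
  stage: quality
  image: node:8.11.3
  before_script:
    - npm install
  script:
    - npm run-script lint

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t server .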
