GitLab Docker-in-Docker: how does the Docker client in the job container discover the Docker daemon in the `dind` service container?

I have a GitLab CI/CD pipeline that is being run on GKE.
One of the jobs in the pipeline uses a Docker-in-Docker service container so that Docker commands can be run inside the job container:
my_job:
  image: docker:20.10.7
  services:
    - docker:dind
  script:
    - docker login -u $USER -p $PASSWORD $REGISTRY
    - docker pull ${REGISTRY}:$TAG
    # ...more Docker commands
It all works fine, but I would like to know why. How does the Docker client in the my_job container know that it needs to communicate with the Docker daemon running inside the Docker-in-Docker service container, and how does it know the host and port of this daemon?

There is no 'discovery' process, really. The docker client must be told about the daemon host through configuration (e.g., DOCKER_HOST). Otherwise, the official docker image's entrypoint picks a default:
- If DOCKER_HOST is already set, that value is used. Otherwise:
- If the default socket (unix:///var/run/docker.sock) is present, the default socket is used.
- If the default socket is NOT present AND a TLS configuration is NOT detected, tcp://docker:2375 is used.
- If the default socket is NOT present AND a TLS configuration IS present, tcp://docker:2376 is used.
The hostname docker in those URLs works because GitLab gives the docker:dind service container a network alias of docker, so the client resolves it to the service container. You can see this logic explicitly in the official docker image's entrypoint script.
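For illustration, the decision boils down to something like this (a simplified sketch of what docker-entrypoint.sh does, not a verbatim copy; the exact TLS detection in the real script differs slightly):
# Only pick a default if the user hasn't set one and no local socket exists
if [ -z "${DOCKER_HOST:-}" ] && [ ! -S /var/run/docker.sock ]; then
  # The official image looks for client TLS material (e.g. under $DOCKER_TLS_CERTDIR)
  if [ -n "${DOCKER_TLS_CERTDIR:-}" ] && [ -s "${DOCKER_TLS_CERTDIR}/client/ca.pem" ]; then
    export DOCKER_HOST='tcp://docker:2376'   # TLS configured -> TLS port
  else
    export DOCKER_HOST='tcp://docker:2375'   # no TLS -> plain TCP port
  fi
fi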
The docker client can be configured in a couple of ways, but the most common way in GitLab CI and with the official docker image is through the DOCKER_HOST environment variable. If you don't see this variable in your YAML, it may be set as a project or group CI/CD variable, set in the runner configuration, or the job may simply be relying on the default behavior described above.
It's also possible, depending on your runner configuration (config.toml), that your job is not using the docker:dind service daemon at all. For example, if your runner has a volumes specification mounting the docker socket (/var/run/docker.sock) into the job container and there is no DOCKER_HOST (or equivalent) configuration, then your job is probably not even using the service: it will use the mounted socket instead (per the configuration logic above). You can run docker info in your job to confirm which daemon you are actually talking to.
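For example, a few lines you could drop into the job script to see which daemon you ended up with (a sketch; if the daemon is the dind service, the reported name is typically the service container's hostname):
# Show how the client is configured and where the daemon actually lives
echo "DOCKER_HOST=${DOCKER_HOST:-<unset>}"
ls -l /var/run/docker.sock 2>/dev/null || echo "no local docker socket mounted"
docker info --format 'daemon: {{.Name}} (v{{.ServerVersion}}), containers: {{.Containers}}'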
Additional references:
official docker image entrypoint logic
Securing the daemon socket
GitLab docker in docker docs
Docker "context"

Related

gitlab runner docker networks missing

I'm trying to set up CI/CD with GitLab and Docker, but when the GitLab runner executes docker network ls, the nginxproxymanager_default network is missing, while on my VPS the same command shows it.
Output from the GitLab CI job (.gitlab-ci.yml):
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
3f277ce4da1a   bridge    bridge    local
e3c2cfc360d0   host      host      local
e0d83076a0f3   none      null      local
Over SSH on the VPS:
$ docker network ls
NETWORK ID     NAME                        DRIVER    SCOPE
fec1b6465ccd   bridge                      bridge    local
d2f1618cf9a9   host                        host      local
b879f034d44a   nginxproxymanager_default   bridge    local
54cfc9978bc1   none                        null      local
Can someone help me? :/
When you utilize docker-in-docker in GitLab using the docker:dind service, GitLab jobs do not share the same docker daemon as the host. The service is the daemon for your job. Each instance of the docker:dind service has its own set of networks, containers, etc. This is beneficial because it means that jobs won't accidentally interfere with one another. For example, if two jobs run concurrently and run docker run then docker ps -- each job will only see their own respective containers in the output of docker ps.
If you want GitLab jobs to use the host docker daemon then you have to mount the docker socket (/var/run/docker.sock) in the volumes configuration in your gitlab-runner. However, this is not recommended, partly for the reasons stated above.
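If you do want jobs to use the host daemon, the registration would look roughly like this (a sketch; the URL and token are placeholders, and --docker-volumes is what populates the volumes list in config.toml):
# Register a runner whose jobs reuse the host daemon via the mounted socket
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image docker:20.10.7 \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock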

Does --network=host still connect to the host network when running a Gitlab CI job with the FF_NETWORK_PER_BUILD flag?

I am trying to run a test using docker-in-docker within a GitLab CI job. My understanding is that enabling the FF_NETWORK_PER_BUILD flag will automatically create a user-defined bridge network that the job runner and all of the containers created within that job will connect to... but looking at the GitLab documentation I am slightly confused...
This page: https://docs.gitlab.com/ee/ci/services/
Gives an example of using the docker:dind service with FF_NETWORK_PER_BUILD: "true"
But then when using docker run they still include the --network=host flag.
Here is the given example:
stage: build
image: docker:19.03.1
services:
  - docker:dind # necessary for docker run
  - tutum/wordpress:latest
variables:
  FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking
script: |
  docker run --rm --name curl \
    --volume "$(pwd)":"$(pwd)" \
    --workdir "$(pwd)" \
    --network=host \
    curlimages/curl:7.74.0 curl "http://tutum-wordpress"
I am trying to ensure that all of the containers within this job are on their own separate network,
so does using the --network=host flag in this instance connect the new container to the host server that the actual job runner is on? Or to the per-job network that was just created? In what case would you want to create a per-job network and still connect a new container to the host network?
Would appreciate any advice!
does using the --network=host flag in this instance connect the new docker to the host server that the actual job runner is on? Or the per-job network that was just created?
This is probably confusing because the "host" in --network=host does not mean host as in the underlying runner host / 'baremetal' system. To understand what is happening here, we must first understand how the docker:dind service works.
When you use the service docker:dind to power docker commands from your build job, you are running containers 'on' the docker:dind service; it is the docker daemon.
When you pass --network=host to docker run, it refers to the host network of the daemon, i.e. the docker:dind container, not the underlying system host.
When you specify FF_NETWORK_PER_BUILD, you are telling the runner to create a per-job docker network that encapsulates all of your job's containers: the build job itself and its service containers.
So, in order, the relevant activities happen as follows:
1. The GitLab runner creates a new docker network for the build.
2. The runner creates the docker:dind and tutum/wordpress:latest service containers, connected to the network created in step (1).
3. Your job container starts, also connected to the network from step (1).
4. Your job contacts the docker:dind daemon and asks it to start a new curl container on the daemon's host network. Because the docker:dind container's own network namespace is attached to the per-build network from step (1), the curl container can reach the service containers.
Without the --network=host flag, the created container would be on a separate bridge network inside the dind daemon and would be unable to reach the network created in step (1).
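You can convince yourself of this from inside a job with a quick experiment (a sketch, not from the linked docs; it assumes the same tutum/wordpress service as the example above):
# The dind daemon only knows about its own, freshly created networks
docker network ls
# On the daemon's "host" network the container shares the dind container's
# network namespace, so the per-build service alias resolves
docker run --rm --network=host curlimages/curl:7.74.0 curl -s "http://tutum-wordpress" > /dev/null && echo reachable
# From the dind daemon's default bridge network the alias does not resolve
docker run --rm curlimages/curl:7.74.0 curl -s --max-time 5 "http://tutum-wordpress" || echo not reachable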

GitLab - Docker inside gitlab/gitlab-ce gets errors

I'm running a gitlab/gitlab-ce container on Docker. Inside it, I want to run a gitlab-runner service, using Docker as the executor. But for every Docker command I run (e.g. docker ps, docker container ...), I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
P.S.: I've tried service docker restart and reinstalling docker and gitlab-runner.
By default it is not possible to run docker-in-docker (as a security measure).
You can run your GitLab container in privileged mode, mount the socket (-v /var/run/docker.sock:/var/run/docker.sock) and try again.
Also, there is a Docker image that has been modified for docker-in-docker usage. You can read up on it here and use it to create your own custom gitlab/gitlab-ce image.
In both cases, the end result will be the same, as this "docker-in-docker" isn't really docker-in-docker but rather lets you manage the host's docker engine from within a docker container. So just running the gitlab-runner docker image on the same host has the same result and is a lot easier.
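That simpler option is essentially this (a sketch; the config path is the one GitLab's install docs commonly use, and the runner still needs to be registered afterwards):
# Run the runner as a sibling container that reuses the host's docker engine
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest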
By default the docker container running gitlab does not have access to your docker daemon on your host. The docker client uses a socket connection to communicate to the docker daemon. This socket is not available in your container.
You can use a docker volume to make the socket of your host available in the container:
docker run -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-ce
Afterwards you will be able to use the docker client in your container to communicate with the docker daemon on the host.
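A quick way to confirm that worked (a sketch; it assumes the container is named gitlab, and the second command only works if a docker CLI is installed inside the container):
# From the host: is the socket visible inside the running container?
docker exec gitlab ls -l /var/run/docker.sock
# If the docker client is installed inside the container, this should now succeed
docker exec gitlab docker version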

Build/push image from jenkins running in docker

I have two docker containers - one running jenkins and one running docker registry. I want to build/push images from jenkins to docker registry. How do I achieve this in an easy and secure way (meaning no hacks)?
The easiest would be to make sure the jenkins container and registry container are on the same host. Then you can mount the docker socket onto the jenkins container and use the dockerd from the host machine to push the image to the registry. /var/run/docker.sock is the unix socket the dockerd is listening to.
By mounting the docker socket any docker command you run from that container executes as if it was the host.
$ docker run -dti --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins:latest
If you use pipelines, you can install this Docker plugin (https://plugins.jenkins.io/docker-workflow), create a credentials resource in Jenkins to access the Docker registry, and do this in your pipeline:
stage("Build Docker image") {
steps {
script {
docker_image = docker.build("myregistry/mynode:latest")
}
}
}
stage("Push images") {
steps {
script {
withDockerRegistry(credentialsId: 'registrycredentials', url: "https://myregistry") {
docker_image.push("latest")
}
}
}
}
Full example at: https://pillsfromtheweb.blogspot.com/2020/06/build-and-push-docker-images-with.html
I use this type of workflow in a Jenkins docker container, and the good news is that it doesn't require any hackery to accomplish. Some people use "docker in docker" to accomplish this, but I can't help you if that is the route you want to go as I don't have experience doing that. What I will outline here is how to use the existing docker service (the one that is running the jenkins container) to do the builds.
I will make some assumptions since you didn't specify what your setup looks like:
you are running both containers on the same host
you are not using docker-compose
you are not running docker swarm (or swarm mode)
you are using docker on Linux
This can easily be modified if any of the above conditions are not true, but I needed a baseline to start with.
You will need the following:
access from the Jenkins container to docker running on the host
access from the Jenkins container to the registry container
Prerequisites/Setup
Setting that up is pretty straightforward. In the case of getting Jenkins access to the running docker service on the host, you can do it one of two ways: 1) over TCP or 2) via the docker unix socket. If you already have docker listening on TCP, you would simply take note of the host's IP address and the default docker TCP port number (2375 or 2376 depending on whether or not you use TLS), along with any TLS configuration you may have.
If you prefer not to enable the docker TCP service it's slightly more involved, but you can use the UNIX socket at /var/run/docker.sock. This requires you to bind mount the socket to the Jenkins container. You do this by adding the following to your run command when you run jenkins:
-v /var/run/docker.sock:/var/run/docker.sock
You will also need to create a jenkins user on the host system with the same UID as the jenkins user in the container and then add that user to the docker group.
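A sketch of that host-side setup (the UID 1000 below is just what the official Jenkins image happens to use; check the real value first):
# Find the UID the jenkins user runs as inside the container (commonly 1000)
docker exec jenkins id -u jenkins
# Create a matching user on the host and allow it to use the docker socket
sudo useradd --uid 1000 --no-create-home jenkins
sudo usermod -aG docker jenkins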
Jenkins
You'll now need a Docker build/publish plugin like the CloudBees Docker Build and Publish plugin or some other plugin depending on your needs. You'll want to note the following configuration items:
Docker URI/URL will be something like tcp://<HOST_IP>:2375 or unix:///var/run/docker.sock depending on how we did the above setup. If you use TCP and TLS for the docker service you will need to upload the TLS client certificates for your Jenkins instance as "Docker Host Certificate Authentication" to your usual credentials section in Jenkins.
Docker Registry URL will be the URL to the registry container, NOT localhost. It might be something like http://<HOST_IP>:32768 or similar depending on your configuration. You could also link the containers, but that doesn't easily scale if you move the containers to separate hosts later. You'll also want to add the credentials for logging in to your registry as a username/password pair in the appropriate credentials section.
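Before wiring up the plugin, it can be worth testing both endpoints by hand from inside the Jenkins container (a sketch; the host IP and registry port are placeholders, and TLS flags are omitted):
# Talk to the host daemon over TCP...
docker -H tcp://<HOST_IP>:2375 info
# ...or over the mounted unix socket
docker -H unix:///var/run/docker.sock info
# Check the registry container answers the v2 API
curl -s http://<HOST_IP>:32768/v2/_catalog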
I've done this exact setup, so I'll give you a "tl;dr" version of it, as getting into depth here is way outside the scope of a Stack Overflow answer:
Install a PID 1 handler in the container (e.g. tini). You need this to handle signaling and process reaping. This will be your entrypoint.
Install a process control service (e.g. supervisord). Generally, running multiple services in containers is not recommended, but in this particular case your options are very limited.
Install Java/Jenkins package or base your image from their DockerHub image.
Add a dind (Docker-in-Docker) wrapper script. This is the one I based my config on.
Create the configuration for the process control service to start Jenkins (as jenkins user) and the dind wrapper (as root).
Add jenkins user to docker group in Dockerfile
Run the docker container with the --privileged flag (DinD requires it); a minimal run command is sketched below.
You're done!
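The final build and run then look something like this (a sketch; the image tag is whatever you name your custom image, and --privileged is the part DinD actually requires):
# Build the custom Jenkins+DinD image and run it privileged, exposing the usual ports
docker build -t my/jenkins-dind .
docker run -d --privileged -p 8080:8080 -p 50000:50000 --name jenkins my/jenkins-dind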
Thanks for your input! I came up with this after some experimentation.
docker run -d \
  -p 8080:8080 \
  -p 50000:50000 \
  --name jenkins \
  -v "$(pwd)"/data/jenkins:/var/jenkins_home \
  -v /Users/.../.docker/machine/machines/docker:/Users/.../.docker/machine/machines/docker \
  -e DOCKER_TLS_VERIFY="1" \
  -e DOCKER_HOST="tcp://192.168.99.100:2376" \
  -e DOCKER_CERT_PATH="/Users/.../.docker/machine/machines/docker" \
  -e DOCKER_MACHINE_NAME="docker" \
  johannesw/jenkins-docker-cli

Connecting to the Docker Daemon inside the CDK on RHEL-based docker images

I want to use the docker command line tool as in "docker ps", "docker build" and "docker run". How can I connect "docker" to the Docker Daemon inside the CDK, so I can create RHEL-based Docker images?
Use the vagrant-service-manager plugin to set up your host environment for connecting your client Docker binary (docker) to the Docker service running inside CDK. In the directory with the Vagrantfile you used to launch CDK, run:
eval "$(vagrant service-manager env docker)"
This will export environment variables that instruct the docker binary to connect to CDK.
To display info about the services running inside CDK and about the settings necessary to connect to them from your host (i.e. to see what the first command does), run:
vagrant service-manager env
See documentation for details: Using the vagrant-service-manager Plugin.
If you don't already have the docker client binary installed on your host system, vagrant-service-manager can do it for you:
vagrant service-manager install-cli docker
More details in documentation: Preparing Host System for Using Docker from the Command Line.
Just like using the docker binary to connect to the Docker daemon inside CDK, you can use the oc binary to connect to the OpenShift service running in CDK. Installation and set up is analogous to the docker client.
