Role of the docker-in-docker (dind) service in GitLab CI

According to the official GitLab documentation, one way to enable docker builds within CI pipelines is to use the dind service (in the gitlab-ci sense of services).
However, as is always the case with CI jobs running on docker executors, the docker:latest image is also needed.
Could someone explain:
what is the difference between the docker:dind and the docker:latest images?
(most importantly): why are both the service and the docker image needed (e.g. as indicated in this example, linked to from the GitLab documentation) to perform e.g. a docker build within a CI job? Doesn't the docker:latest image (within which the job will be executed!) incorporate the docker daemon (and, I think, docker-compose as well), i.e. the tools necessary for the commands we need (docker build, docker push, etc.)?
Unless I am wrong, the question more or less becomes:
Why can't a docker client and a docker daemon reside in the same (docker-enabled) container?

what is the difference between the docker:dind and the docker:latest images?
docker:latest contains everything necessary to connect to a docker daemon, i.e., to run docker build, docker run and such. It also contains the docker daemon, but its entrypoint does not start it.
docker:dind builds on docker:latest and starts a docker daemon as its entrypoint.
So their contents are almost the same, but through their entrypoints one is configured to connect to tcp://docker:2375 as a client, while the other is meant to run the daemon.
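Putting the two together, the usual setup is a minimal .gitlab-ci.yml sketch along these lines (job and image names are illustrative; clearing DOCKER_TLS_CERTDIR disables TLS and is done here only for brevity):

```yaml
build-image:
  image: docker:latest          # client: provides the docker CLI
  services:
    - docker:dind               # daemon: its entrypoint starts dockerd
  variables:
    DOCKER_HOST: tcp://docker:2375   # point the client at the service container
    DOCKER_TLS_CERTDIR: ""           # assumption: TLS disabled for brevity
  script:
    - docker build -t my-app .
```

The service container is reachable under the hostname `docker`, which is why the client's default of tcp://docker:2375 works.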
why are both the service and the docker image needed […]?
You don't need both. You can use either of the two, start dockerd as a first step, and then run your docker build and docker run commands as usual, like I did here; apparently this was the original approach in GitLab at some point. But I find it cleaner to just write services: docker:dind instead of having a before_script to set up dockerd. You also don't have to figure out how to install and start dockerd properly in your base image (if you are not using docker:latest).
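That single-image variant, starting dockerd as a first step, might look roughly like this (a sketch assuming a privileged runner; the flags and job name are illustrative):

```yaml
build-image:
  image: docker:dind            # contains dockerd as well as the docker CLI
  before_script:
    # start the daemon ourselves instead of declaring a dind service
    - dockerd --host=unix:///var/run/docker.sock &
    # wait until the daemon answers before using the client
    - until docker info >/dev/null 2>&1; do sleep 1; done
  script:
    - docker build -t my-app .
```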
Declaring the service in your .gitlab-ci.yml also lets you swap out the docker-in-docker setup easily if you know that your runner mounts its /var/run/docker.sock into your image. You can set the protected variable DOCKER_HOST to unix:///var/run/docker.sock to get faster builds. Others who don't have access to such a runner can still fork your repository and fall back to the dind service without modifying your .gitlab-ci.yml.
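The fallback logic described above can be sketched as a small shell helper (pick_docker_host is a made-up name; in a real pipeline you would simply set the DOCKER_HOST CI variable rather than compute it):

```shell
# pick_docker_host: prefer a mounted host socket, otherwise fall back
# to the dind service. The function name is illustrative.
pick_docker_host() {
  socket="$1"                   # candidate path to a mounted docker.sock
  if [ -S "$socket" ]; then
    echo "unix://$socket"       # runner mounts the host daemon's socket
  else
    echo "tcp://docker:2375"    # fall back to the docker:dind service
  fi
}

pick_docker_host /var/run/docker.sock
```

The output of the last line depends on whether the machine actually has a docker socket mounted at that path.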

A container will contain only the things defined in its docker image. You know you can install anything, starting from a base image.
But you can also install Docker (daemon and client) inside a container, which is to say Docker in Docker (dind). The container will then be able to run other containers. That's why GitLab needs this.
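As a rough sketch of such an image (the docker.io package name is Debian/Ubuntu-specific; other distributions package the client and daemon differently):

```dockerfile
FROM ubuntu:22.04
# docker.io ships both the docker CLI (client) and dockerd (daemon)
RUN apt-get update \
 && apt-get install -y docker.io \
 && rm -rf /var/lib/apt/lists/*
# Note: the daemon still has to be started at runtime (e.g. from an
# entrypoint script), and the container must run privileged for
# dockerd to work.
```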

Related

How to start docker daemon in my custom Docker image?

I am trying to create a custom docker image which I will use in my GitLab build pipeline. (I am following this guide, as I would like to configure my GitLab runners over AWS Fargate: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/.)
One of the prerequisites is to create your own custom docker image that has everything that's needed for the build pipeline to execute.
I would need to add docker to my docker image.
I am able to install docker; however, I do not understand how to start the docker service, as I get the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
each time a docker command is used.
I tried to make my startup.sh script, used as the docker entrypoint, start docker using rc-service (alpine-based image) or systemctl (Amazon Linux 2), but without any luck.
Any help is appreciated. Thanks.
For running docker in docker you need to configure your docker image with the docker:dind service. But that approach is limited and requires elevated privileges, so I recommend using kaniko instead: it is very easy to configure and does not require anything more than the kaniko executor image.
https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
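A minimal kaniko job, adapted from that page (the stage name and the destination tag are illustrative):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]            # override the image's entrypoint so
                                # GitLab can run the script directly
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

Note that no dind service and no privileged runner are needed: kaniko builds the image entirely in userspace.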
If you really need to use DinD (docker in docker), see:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
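If you do go the DinD route with a custom image, the missing piece is usually an entrypoint that launches dockerd directly, since there is no systemd or OpenRC init inside a container. A rough sketch of such a startup.sh (flags illustrative, privileged runner assumed):

```shell
#!/bin/sh
# startup.sh (sketch): start the daemon in the background, then wait
# for it to answer before handing control to the actual command.
dockerd --host=unix:///var/run/docker.sock &
until docker info >/dev/null 2>&1; do
  sleep 1
done
exec "$@"    # run the container's command (e.g. the CI job)
```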
Kaniko is the simplest and safest way to run a docker build.

GitLab CI - image: needed if runner runs on a VM with pre-installed docker-compose?

I'm wondering why most tutorials for the configuration of .gitlab-ci.yml use image: docker or image: docker/compose.
In my case, we have docker and docker-compose pre-installed on our virtual machine (Linux).
So is it necessary to use an image definition?
In other cases they often use the dind (Docker-in-Docker) functionality; is that necessary in my case?
If not, when do I use it / when is it useful?
So is it necessary to use an image definition?
No, as mentioned in "Using Docker images"
GitLab CI/CD in conjunction with GitLab Runner can use Docker Engine to test and build any application.
When used with GitLab CI/CD, Docker runs each job in a separate and isolated container using the predefined image that’s set up in .gitlab-ci.yml.
So you can use any image you need for your job to run.
In other cases they often use the dind (Docker-in-Docker) functionality; is that necessary in my case?
If not, when do I use it / when is it useful?
As documented in "Building Docker images with GitLab CI/CD", this is needed if your job is to build a docker image (as opposed to using an existing docker image).
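For the asker's situation, a runner with a shell executor on such a VM needs neither an image: key nor a dind service; the script runs directly on the host and uses its pre-installed tools. A sketch (job name illustrative):

```yaml
# .gitlab-ci.yml for a shell-executor runner: no image:, no services:,
# the commands use the docker-compose already installed on the VM
deploy:
  script:
    - docker-compose build
    - docker-compose up -d
```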

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project, with sources in GitLab.
I created a GitLab CI/CD pipeline to build a docker image and push the image to the GitLab docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the docker containers on my production system after pushing the image to the GitLab docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; many open-source ones are available, or you can write your own in shell. There is one here. We use swarm, and we use this hook concept to be triggered from our CI/CD pipeline. Once our build stage is done, we hit the hook URL over HTTP, and the host pulls the updated image. One disadvantage of this is that you need a daemon to watch your hook task so that it doesn't crash or go down. So my suggestion is to run this hook task as a docker container with its restart policy set to always.
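An alternative without an extra watcher daemon is to add a deploy job to the pipeline itself that reaches the production host over SSH (the user, host, and path below are placeholders; this assumes an SSH key is provisioned as a CI variable):

```yaml
deploy:
  stage: deploy
  script:
    # pull the image the build stage just pushed, then recreate only the
    # containers whose image changed; no full `docker-compose down` needed
    - ssh deploy@prod.example.com
      "cd /srv/app && docker-compose pull && docker-compose up -d"
```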

Docker in Docker: building docker agents in a docker-contained Jenkins server

I am currently running Jenkins in Docker. When trying to build docker apps, I am unsure whether I should use Docker in Docker (DinD) by binding the /var/run/docker.sock file, or by installing another instance of docker inside my Jenkins Docker container. I actually saw that previously it was discouraged to use anything other than docker.sock.
I don't actually understand why we should use anything other than the docker daemon from the host, apart from not polluting it.
sources : https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a docker container needs docker" case is to add your host as a node (agent) in Jenkins. This will make every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the docker socket in the Jenkins container: you will lose context. The files you want to COPY into the image are located inside the workspace in the Jenkins container, while your docker daemon runs on the host. COPY fails for sure.
Install a docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity. And you will lose the context too.
Add your host as a Jenkins node: perfect. You keep the context and don't alter the official image.
Without completely understanding why you would need to use Docker in Docker (I suspect you need to meet some special requirements for the environment in which you build the actual image), may I suggest multi-stage builds of docker images? You might find them useful, as they let you first build the build environment and then build the actual image (hence the name "multi-stage build"). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
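A minimal multi-stage sketch (a Go program here purely as an example; the same pattern works for any toolchain):

```dockerfile
# stage 1: full toolchain, used only for compiling
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# stage 2: ship only the compiled binary in a small runtime image
FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
```

The build stage never appears in the final image, so the build environment does not need docker-in-docker at run time.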

Docker inside docker with gitlab-ci.yml

I have created a gitlab runner.
I have chosen the docker executor and the default ubuntu image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that gitlab-ci would load the ubuntu image by default if there is no image directive in the .gitlab-ci.yml file.
But there is something strange: I am wondering now whether gitlab-ci is creating an ubuntu container and then creating a dotnet container inside the ubuntu container.
Here is a very ugly test I did on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange, because jobs still work and there is nothing in my log file...
Thanks
The ubuntu image is indeed used if you don't specify an image, but you did specify one, so your jobs run in the dotnet container without ever spinning up ubuntu.
Your test behaves the way it does because docker is the client, while dockerd is the daemon that the gitlab runner actually calls.
If you want to check what's going on, you should rather call docker ps to get a list of running containers.
