How to start the Docker daemon in my custom Docker image?

I am trying to create a custom Docker image which I will use in my GitLab build pipeline. (I am following this guide, as I would like to run my GitLab runners on AWS Fargate: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/.)
One of the prerequisites is to create your own custom Docker image that has everything needed for the build pipeline to execute.
I need to add Docker to my Docker image.
I am able to install Docker; however, I do not understand how to start the Docker service, since every docker command fails with
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
In my startup.sh script, which is used as the Docker entrypoint, I tried to start Docker with rc-service (Alpine-based image) or systemctl (Amazon Linux 2), but without any luck.
Any help is appreciated. Thanks.

To run Docker in Docker you need to configure your image with the docker:dind service in order to run docker build. But that approach is limited and requires a privileged runner, so I recommend using kaniko instead: it is very easy to configure and requires nothing more than the kaniko executor image.
https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
If you really need to use DinD (Docker in Docker), see:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
kaniko is the simplest and safest way to run docker build.
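For illustration, a minimal .gitlab-ci.yml job along the lines of the linked kaniko documentation might look like this (the stage name and destination tag are placeholders):

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Registry credentials for kaniko; the CI_* variables are provided by GitLab
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    # Build and push without any Docker daemon at all
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"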

Related

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project with its sources in GitLab.
I created a GitLab CI/CD pipeline to build a Docker image and push it to the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after pushing the image to the GitLab Docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; plenty of open-source ones are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we make an HTTP request to the hook URL, and the host pulls the updated image. One disadvantage with this is that you need a daemon to watch your hook task, so that it doesn't crash or go down. So my suggestion is to run the hook task as a Docker container with its restart policy set to always.
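For illustration only, the pull-and-restart step that such a hook triggers might boil down to a small shell script like this (the image name and compose directory are made-up placeholders):

#!/bin/sh
# Hypothetical deploy hook: pull the new image and restart the stack.
IMAGE="registry.gitlab.com/mygroup/myapi:latest"   # placeholder
COMPOSE_DIR="/srv/myapi"                           # placeholder

cd "$COMPOSE_DIR" || exit 1
docker pull "$IMAGE"
docker-compose down
docker-compose up -d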

Cannot connect to dockerhost from docker image

I am trying to use the docker (dind) image for building an image. When I run docker info in the Dockerfile, it complains that the Docker host cannot be found. Is there any way to use the Docker host in a build step when building a Docker image?
I don't think there is an effective way to run Docker in DinD while building an image via a Dockerfile. I ran a container from the DinD image, added all the configuration there, and committed that to an image, which solved my problem of using dind.
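A rough sketch of that commit-based workaround, with made-up container and image names:

# Start a throwaway DinD container, configure it, then snapshot it as an image
docker run --privileged -d --name dind-tmp docker:dind
docker exec dind-tmp sh -c 'mkdir -p /etc/docker && echo "{}" > /etc/docker/daemon.json'  # your config here
docker commit dind-tmp my-configured-dind:latest
docker rm -f dind-tmp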

Installing and Running docker in a Docker container running in Openshift

I am currently working on the following scenario
I am trying to set up a container in OpenShift that runs a Jenkins which is itself able to run Docker, in order to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I have been working on it for quite some time now and checked dozens of posts and threads online, but I have not been able to accomplish it. Basically I got this far:
I can install Docker in my container (from the base image openshift/jenkins-2-centos7:latest)
I can't get Docker to run, as this makes use of systemctl.
I have read that systemctl does not work inside Docker containers, or at least is highly discouraged, as it interferes with PID 1 in the system. Without
systemctl start docker
Docker is unable to connect to the daemon (as expected), and I get the error message
Can't connect to docker daemon. Is 'docker -d' running on this host?
So I tried to set up the daemon myself, using the following in my Dockerfile:
RUN usermod -aG docker $(whoami)
RUN dockerd -H unix:///var/run/docker.sock
which also does not work, telling me that cgroups cannot be mounted. After some more research I found that this could be handled with the cgroupfs-mount script from
https://github.com/tianon/cgroupfs-mount/tree/master
But here too I had no luck, leaving me with the following error:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.4.21: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
Now, after hours, I am out of ideas. Does anyone have an idea how to make Docker work inside OpenShift? I would be really grateful.
I am trying to set up a container in OpenShift that runs a Jenkins which is itself able to run Docker, in order to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I don't think your conclusion here is the only possibility, and what I'll describe below is an easier approach to get what (I think) you want! :) If you have any use cases other than the three I describe, let me know and I'll try to update to cover them:
Pipelines running in their own containers
Running additional containers from Pipelines
Building container images from Pipelines
Pipelines running in their own containers
For this case, there's the excellent Kubernetes plugin.
With this plugin, you add a Kubernetes/OpenShift cloud to the Jenkins global config. This can either be the one in which Jenkins is running (if you use the Jenkins image provided by OpenShift, this gets added by default at least), or an external cluster.
Inside that configuration you can define PodTemplates (again, a couple of examples are provided in the Jenkins image shipped by OpenShift), or, I think, you can also specify them directly in your pipeline. When your pipeline requests a node/agent with a label that matches one of these (and there are no long-running agents that match), a pod will be created from that template, and your pipeline execution will happen inside a container in it. Once it's no longer needed, it will be deprovisioned again.
Here are the pipeline steps exposed by this plugin: https://jenkins.io/doc/pipeline/steps/kubernetes/
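Purely as an illustration (not taken from the plugin docs), the pod that such a PodTemplate describes might look roughly like this; the label and build image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins-agent: maven   # the label your pipeline's node/agent request matches
spec:
  containers:
    - name: maven
      image: maven:3-jdk-11   # placeholder build image
      command: ["sleep"]
      args: ["infinity"]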
Running additional containers from Pipelines
As part of your pipeline, you may want to run some tests, and those may expect to be able to interact with e.g. a database. You can create resources for that in your OpenShift project (e.g. a Deployment, exposed with a Service) and tear them down afterwards; a rough sketch follows below. The openshift-client plugin is very useful here and has docs on how to interact with OpenShift.
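A rough sketch of that provision/tear-down flow with the oc CLI (resource names are placeholders, and exact flags may vary by oc version):

# Provision a throwaway database for the tests
oc create deployment test-db --image=postgres:13
oc set env deployment/test-db POSTGRES_PASSWORD=test
oc expose deployment/test-db --port=5432
# ... run the tests against test-db:5432 ...
# Tear everything down again
oc delete deployment,service test-db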
Building container images from Pipelines
If your goal is to build container images from pipelines, remember that OpenShift also exposes this capability (depending on the security configuration) through Builds. Just like in the previous section, you can use the openshift-client plugin to create and trigger builds.
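For example, a binary build triggered from a pipeline step could look roughly like this (the build name is a placeholder):

# One-time setup: a Docker-strategy binary build
oc new-build --binary --name=myapp --strategy=docker
# From the pipeline: upload the workspace and build an image from it
oc start-build myapp --from-dir=. --follow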
For more information on the Jenkins image that's maintained by OpenShift (and generally how to do useful things in Jenkins on OpenShift), there's this dedicated page in the OpenShift docs.
There is this article by @jpetazzo, from the Docker team, about Docker-in-Docker (DinD).
From the article:
The primary purpose of Docker-in-Docker was to help with the development of Docker itself. Many people use it to run CI (e.g. with Jenkins), which seems fine at first, but they run into many “interesting” problems that can be avoided by bind-mounting the Docker socket into your Jenkins container instead.
From the DinD repo:
This work is now obsolete, thanks to the combined efforts of some amazing people like @jfrazelle and @tianon, who also are black belts in the art of putting IKEA furniture together.
If you want to run Docker-in-Docker today, all you need to do is:
docker run --privileged -d docker:dind
And here is an article using the other approach, building Docker containers with Jenkins running inside a Docker container (by bind-mounting the Docker socket):
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
So you may want to adapt one of these solutions to your OpenShift scenario. I hope it solves your issue.
You'll need a privileged pod running Jenkins which mounts the OpenShift node's Docker socket. This is a bad idea, as Jenkins will launch containers outside of Kubernetes' semantics and control.
Why not use the S2I (Source-to-Image) build service shipped with OpenShift?
Regards.

Docker inside docker with gitlab-ci.yml

I have created a GitLab runner.
I have chosen the Docker executor and a default Ubuntu image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that gitlab-ci would load the ubuntu image by default if there were no image directive in the .gitlab-ci.yml file.
But there is something strange: I am now wondering whether gitlab-ci creates an ubuntu container and then creates a dotnet container inside that ubuntu container.
Here is a very ugly test I have done on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script which logs its arguments.
This is very strange, because jobs are still working and there is nothing in my log file...
Thanks
Thanks
The Ubuntu image is indeed used if you don't specify an image, but you did, so your jobs are run in the dotnet container without ever spinning up Ubuntu.
Your test behaves the way it does because docker is only the client; dockerd is the daemon that the GitLab runner actually talks to.
If you want to check what's going on, you should instead call docker ps to get a list of running containers.
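For example, on the runner host while a job is running, something like this shows which containers were actually spun up:

docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'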

Role of docker-in-docker (dind) service in gitlab ci

According to the official GitLab documentation, one way to enable docker build within CI pipelines is to make use of the dind service (in terms of gitlab-ci services).
However, as is always the case with CI jobs running on Docker executors, the docker:latest image is also needed.
Could someone explain:
what is the difference between the docker:dind and the docker:latest images?
(most importantly) why are both the service and the Docker image needed (e.g., as indicated in this example, linked from the GitLab documentation) to perform e.g. a docker build within a CI job? Doesn't the docker:latest image (within which the job is executed!) incorporate the Docker daemon (and, I think, docker-compose as well), i.e., the tools necessary for the commands we need (docker build, docker push, etc.)?
Unless I am wrong, the question more or less becomes:
Why can't a Docker client and a Docker daemon reside in the same (Docker-enabled) container?
what is the difference between the docker:dind and the docker:latest images?
docker:latest contains everything necessary to connect to a docker daemon, i.e., to run docker build, docker run and such. It also contains the docker daemon but it's not started as its entrypoint.
docker:dind builds on docker:latest and starts a docker daemon as its entrypoint.
So, their content is almost the same, but through their entrypoints one is configured to connect to tcp://docker:2375 as a client, while the other is meant to be used for a daemon.
why are both the service and the docker image needed […]?
You don't need both. You can just use either of the two: start dockerd as a first step, and then run your docker build and docker run commands as usual, like I did here; apparently this was the original approach in GitLab at some point. But I find it cleaner to just write services: docker:dind instead of having a before_script to set up dockerd. Also, you don't have to figure out how to install and start dockerd properly in your base image (if you are not using docker:latest).
Declaring the service in your .gitlab-ci.yml also lets you swap out the docker-in-docker easily if you know that your runner is mounting its /var/run/docker.sock into your image. You can set the protected variable DOCKER_HOST to unix:///var/run/docker.sock to get faster builds. Others who don't have access to such a runner can still fork your repository and fall back to the dind service without modifying your .gitlab-ci.yml.
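Put together, a typical dind-based job might look roughly like this (the job name and image tags are placeholders):

build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # disable TLS so the plain tcp port works on newer dind images
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .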
A container will contain only the things defined in its Docker image, and you can install anything, starting from a base image.
So you can also install Docker (daemon and client) in a container, that is to say, Docker in Docker (dind). The container will then be able to run other containers. That's why GitLab needs this.
