Docker inside Docker with gitlab-ci.yml

I have created a GitLab runner.
I have chosen the Docker executor and an Ubuntu default image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that GitLab CI would load the Ubuntu image by default if there is no image directive in the .gitlab-ci.yml file.
But there is something strange: I am now wondering whether GitLab CI is creating an Ubuntu container and then creating a dotnet container inside that Ubuntu container.
Here is a very ugly test I did on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange because the jobs are still working and there is nothing in my log file...
Thanks

The Ubuntu image would indeed be used if you hadn't specified an image, but you did, so your jobs run in the dotnet container without ever spinning up Ubuntu.
Your test behaves the way it does because docker is only the client, while dockerd is the daemon that gitlab-runner actually talks to.
If you want to check what's going on, run docker ps on the runner host instead to get a list of running containers.
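As a quick sanity check, a minimal .gitlab-ci.yml along these lines (the job name and commands are only illustrative) shows that the script runs directly inside the container created from the named image, with no intermediate Ubuntu container:

image: microsoft/dotnet:latest

check-image:
  script:
    - cat /etc/os-release   # shows the OS of the dotnet image (typically Debian-based), not Ubuntu
    - dotnet --info         # available because the job runs inside the dotnet image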

Commands are not working in Ubuntu container

I have created a container using the following command: docker container run -i ubuntu. However, when I try to run a command within the container, such as cd, I get the following error: bash: line 1: cd: $'bin\r': No such file or directory. What could be the issue?
When you docker run an image, or use an image in a Dockerfile FROM line, or name an image: in a Docker Compose setup, Docker first checks whether you have that image locally. If you do, Docker just uses it without checking Docker Hub or any other upstream registry.
Meanwhile, you can docker build or docker tag an image with any name you want...even a name that matches an official Docker Hub image.
You mention in a comment that you at some point did run docker build -t ubuntu .... That replaces the ubuntu image with what you built, so when you later docker run ubuntu, it's running your modified image and not the official Docker Hub Ubuntu image.
This is straightforward to fix. If you
docker rmi ubuntu
it will delete your local (modified) copy, and the next time you use it, Docker will automatically pull it from Docker Hub. It should also work to
# Explicitly get the Docker Hub copy of the image
docker pull ubuntu
# Build a custom image, pulling whatever's in the FROM line
docker build --pull -t my/image .
(You can also hit this in a Docker Compose setup if you specify both image: and build:; that tells Compose the explicit name to use for the built image. You do not need to repeat the FROM line in image:, and it causes trouble if you do. The resolution is the same as described above. I might leave image: out entirely unless you're planning to push the image to a registry.)
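As an illustration of that Compose case, a minimal docker-compose.yml might look like this (the service and image names are placeholders):

version: "3.8"
services:
  app:
    build: .                                   # build from the Dockerfile in this directory
    image: registry.example.com/me/app:latest  # name given to the built image; don't reuse an official name like "ubuntu" here

The point is that image: here names the output of build:, so giving it the same name as a Docker Hub image will shadow that image locally.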

How to start docker daemon in my custom Docker image?

I am trying to create a custom Docker image which I will use in my GitLab build pipeline (following this guide, as I would like to run my GitLab runners on AWS Fargate: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/).
One of the prerequisites is to create your own custom docker image that has everything that's needed for the build pipeline to execute.
I would need to add Docker to my Docker image.
I am able to install Docker; however, I do not understand how to start the Docker service, because each time the docker command is used I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
In my startup.sh script, which is used as the Docker entrypoint, I tried to start Docker with rc-service (Alpine-based image) or systemctl (Amazon Linux 2), but without any luck.
Any help is appreciated. Thanks.
For running Docker in Docker you need to configure your job with the docker:dind service in order to build images. But that approach is limited and requires elevated privileges (a privileged runner), so I recommend using kaniko instead: it is very easy to configure and does not require anything more than the kaniko executor image.
https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
If you really need to use DinD (Docker in Docker), see:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
Kaniko is the simplest and safest way to run docker build.
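As a rough sketch of a kaniko job, following the shape of the example in the linked documentation (the destination tag is just an example; registry credentials usually also need to be provided in /kaniko/.docker/config.json as described there):

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"

No privileged mode or Docker daemon is needed; the executor builds the image and pushes it itself.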

is there a `docker up` command, like `vagrant up`?

Is there a docker command which works like the vagrant up command?
I'd like to use the arangodb docker image and provide a Dockerfile for my team without forcing my teammates to get educated on the details of its operation; it should 'just work'. Within the project root, I would expect the database to start and stop with a standard docker command. Does this not exist? If so, why not?
Docker Compose could do it.
docker-compose up builds the image, creates the container, and starts it.
docker-compose stop stops the container.
docker-compose start restarts the container.
docker-compose down stops and removes the container (and the networks it created); pass --rmi all to also remove the image.
With a Docker Compose file you can configure ArangoDB (expose ports, map volumes for db initialisation, etc.). Place the compose file in the project root and run the up command.
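A sketch of such a compose file for ArangoDB could look like this (the root password and volume name are placeholders; ArangoDB listens on port 8529 by default):

version: "3"
services:
  arangodb:
    image: arangodb:latest
    environment:
      - ARANGO_ROOT_PASSWORD=changeme   # placeholder; the image requires a root password unless auth is disabled
    ports:
      - "8529:8529"
    volumes:
      - arangodb_data:/var/lib/arangodb3
volumes:
  arangodb_data:

With this in the project root, docker-compose up -d starts the database in the background and docker-compose stop stops it.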

Containerize Spring Boot microservices

For a project I'm trying to put my microservices inside containers.
Right now I can successfully put a jar file inside a Docker container and run it.
I know how Docker images and containers work, but I'm very new to microservices. A friend of mine asked me to put his Spring Boot microservices in a Docker environment. Right now this is my plan:
Put every microservice inside its own container and manage them with Docker Compose, so that you can run and configure them all at the same time. Maybe later add some high availability with docker-compose scale, or try something out with Docker Swarm.
My question now is: how do you put one service inside a container? Do you create a jar/war file from the service and put that inside a container, exposing the port you are working with inside your service?
For my test jar file (a simple hello world I found online) I used this Dockerfile:
FROM openjdk:8
ADD /jarfiles/test.jar test.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar" , "test.jar"]
You need to convert your Spring Boot application into a Docker image. This can be done with a Docker Maven plugin, or you can use the docker command for it.
Using docker directly, you need a Dockerfile that contains the steps for creating the image.
Once your image is ready you can run it on the Docker engine; the running image is then a Docker container. That is basically the containerization.
There is a set of docker commands for creating and running images.
Install the Docker engine and start it using
service docker start
and then use all the docker commands.
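For the plan in the question (one container per microservice, managed together), a rough docker-compose.yml sketch could look like this; the service names, directories, and ports are hypothetical, and each directory is assumed to contain a Dockerfile like the one above:

version: "3.8"
services:
  user-service:
    build: ./user-service
    ports:
      - "8080:8080"
  order-service:
    build: ./order-service
    ports:
      - "8081:8081"

docker-compose up --build then builds and starts all the services at once.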

Role of docker-in-docker (dind) service in gitlab ci

According to the official GitLab documentation, one way to enable docker build within CI pipelines is to make use of the dind service (in the sense of gitlab-ci services).
However, as is always the case with CI jobs running on Docker executors, the docker:latest image is also needed.
Could someone explain:
what is the difference between the docker:dind and the docker:latest images?
(most importantly): why are both the service and the docker image needed (e.g. as indicated in this example, linked to from the GitLab documentation) to perform e.g. a docker build within a CI job? Doesn't the docker:latest image (within which the job will be executed!) incorporate the Docker daemon (and I think docker-compose as well), which are the tools necessary for the commands we need (e.g. docker build, docker push, etc.)?
Unless I am wrong, the question more or less becomes:
Why can't a Docker client and a Docker daemon reside in the same (Docker-enabled) container?
what is the difference between the docker:dind and the docker:latest images?
docker:latest contains everything necessary to connect to a docker daemon, i.e., to run docker build, docker run and such. It also contains the docker daemon but it's not started as its entrypoint.
docker:dind builds on docker:latest and starts a docker daemon as its entrypoint.
So, their content is almost the same but through their entrypoints one is configured to connect to tcp://docker:2375 as a client while the other is meant to be used for a daemon.
why are both the service and the docker image needed […]?
You don't need both. You can use either of the two: start dockerd as a first step, and then run your docker build and docker run commands as usual, like I did here; apparently this was the original approach in GitLab at some point. But I find it cleaner to just write services: docker:dind instead of having a before_script to set up dockerd. Also, you don't have to figure out how to install and start dockerd properly in your base image (if you are not using docker:latest).
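A rough sketch of that manual approach, assuming a privileged runner and an image that ships dockerd (the wait loop and the image name my/image are illustrative only):

build:
  image: docker:latest
  variables:
    DOCKER_HOST: unix:///var/run/docker.sock   # talk to the daemon started below over its default socket
  before_script:
    - dockerd &   # start the daemon in the background
    - until docker info > /dev/null 2>&1; do sleep 1; done   # wait until the daemon answers
  script:
    - docker build -t my/image .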
Declaring the service in your .gitlab-ci.yml also lets you swap out the docker-in-docker easily if you know that your runner is mounting its /var/run/docker.sock into your image. You can set the protected variable DOCKER_HOST to unix:///var/run/docker.sock to get faster builds. Others who don't have access to such a runner can still fork your repository and fallback to the dind service without modifying your .gitlab-ci.yml.
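The services-based variant that this answer prefers could look roughly like this (my/image is a placeholder):

build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # talk to the daemon running in the dind service
  script:
    - docker build -t my/image .

Depending on the docker:dind version, you may also need to set DOCKER_TLS_CERTDIR to an empty string or switch to TLS on port 2376; see GitLab's using_docker_build documentation for the current recommendation.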
A container will contain only the things defined in its Docker image. You know you can install anything, starting from a base image.
So you can also install Docker (daemon and client) in a container, that is to say Docker in Docker (dind). That container will then be able to run other containers. That's why GitLab needs this.
