Configure gitlab-runner using a Dockerfile - docker

I'm trying to write a Dockerfile that creates and registers a new runner for a private GitLab repository. Following the GitLab documentation, I wrote the following Dockerfile:
FROM gitlab/gitlab-runner:latest
RUN gitlab-runner register \
--non-interactive \
--url "https://gitlab.com/" \
--registration-token "GITLAB_REPO_TOKEN" \
--executor "docker" \
--docker-image alpine:latest \
--description "docker-runner" \
--maintenance-note "Free-form maintainer notes about this runner" \
--run-untagged="true" \
--locked="false"
Then build it with:
docker build -t test .
And then run it in a container via:
docker run test:latest
The runner is correctly seen by GitLab (it appears under Settings > CI/CD > Runners).
Then I set up the following CI configuration for testing:
image: python:3.7-alpine

testci:
  stage: test
  script:
    - python test.py
The job is then pulled by the runner, but I immediately get the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on docker-runner yVa1JDny, system ID: xxxxxxxxx
Preparing the "docker" executor
00:09
ERROR: Failed to remove network for build
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (docker.go:753:0s)
Can anyone please help with this? I don't understand what is missing from my configuration.
I've tried modifying the docker run call using the volume mount guide found here, but nothing changes.
I've also found a similar Dockerfile here, but it uses gitlab-ci-multi-runner, which is not the desired service.

You're attempting to use the docker executor for your runner, but your runner doesn't have access to the Docker socket, so it can't create new containers. Your runner manager (what your Dockerfile is creating) is attempting to start new Docker containers to handle each of your jobs, but it fails to connect to Docker.
In your docker run command, you will need to do a couple of things:
Set your container to use privileged mode: --privileged
Map in the docker socket: -v /var/run/docker.sock:/var/run/docker.sock
With those two things, you can likely connect to the Docker daemon and start new containers. If you want to perform Docker builds within CI using this runner, note that you'll need to configure your runner manager (again, what your Dockerfile is creating) to allow these same two settings on the build containers. You can find information about how to do that here: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
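For example, a minimal sketch of the docker run call from the question with those two changes applied (the test:latest image name is the one built above; running detached with a restart policy is an assumption on my part):
# Run the runner manager in privileged mode with the host's Docker socket
# mapped in, so it can reach the Docker daemon and create job containers.
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  test:latest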

Related

Build step in pipeline is failing with connection refused error while running GitLab and GitLab-Runner docker instances locally

I am running GitLab and GitLab Runner Docker instances locally. When a Spring Boot and Maven project pipeline is executed, I get the error below.
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/starter-springboot-pipeline/.git/
fatal: unable to access 'http://localhost/root/starter-springboot-pipeline.git/': Failed to connect to localhost port 80: Connection refused
Uploading artifacts for failed job
00:07
ERROR: Job failed: exit code 1
I am not sure whether the localhost in the error above refers to the GitLab container or the Runner container. Should it refer to the GitLab container rather than localhost?
Below are the commands and configuration I used.
Start the GitLab server:
docker run -itd --network=gitlab-network --hostname localhost \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab --restart always --volume config:/etc/gitlab \
--volume logs:/var/log/gitlab \
--volume data:/var/opt/gitlab \
gitlab/gitlab-ee:12.10.14-ee.0
Start the GitLab Runner:
docker run -d --name gitlab-runner --restart always \
-v ~/gitlab/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:v12.10.3
Created the network 'gitlab-network' and added both containers to it:
docker network connect gitlab-network gitlab
docker network connect gitlab-network gitlab-runner
Registered the Runner as below:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://gitlab
Please enter the gitlab-ci token for this runner:
XxXXxXXXxxXXXXXX
Please enter the gitlab-ci description for this runner:
[49ad685039ad]: runner14
Please enter the gitlab-ci tags for this runner (comma separated):
docker
Registering runner... succeeded runner=EkWnb63h
Please enter the executor: docker-ssh, parallels, shell, virtualbox, docker+machine, kubernetes, custom, docker, ssh, docker-ssh+machine:
docker
Please enter the default Docker image (e.g. ruby:2.6):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Below is the .gitlab-ci.yml:
image: maven:3.3-jdk-8

stages:
  - test

test_job:
  stage: test
  script:
    - pwd
    - mvn clean
    - mvn compile
    - mvn test
  tags:
    - docker
I have recently started working with GitLab and Docker. I was able to set them up and run the pipeline after resolving some issues with a good amount of research, but I am stuck on this issue.
The hostname localhost always resolves to 127.0.0.1 or ::1 - in other words, to the loopback interface, which, as the name suggests, loops back and connects to itself.
This means that your runner is trying to find http://localhost/root/starter-springboot-pipeline.git/ inside its own container - which obviously fails because it's in GitLab's container.
It's also why, in your runner config, you had to specify GitLab's address as http://gitlab and not as http://localhost.
You might try starting the GitLab container with the following command (recreate the named volumes beforehand to ensure it is configured from scratch):
docker run -itd --network=gitlab-network --hostname gitlab \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab --restart always \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab/'" \
--volume config:/etc/gitlab \
--volume logs:/var/log/gitlab \
--volume data:/var/opt/gitlab \
gitlab/gitlab-ee:12.10.14-ee.0
but I can't guarantee it'll work, as GitLab was designed to run on servers that have their own unique hostname.
Edit:
You may also try editing config.toml and setting network_mode in the [runners.docker] section to gitlab-network. See here for more info.
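As a rough sketch (runner values taken from the registration output above; the file lives under the config volume from the runner's docker run command, i.e. ~/gitlab/gitlab-runner/config/config.toml on the host), the relevant part of config.toml might look like this:
# network_mode is the only addition here; the rest is roughly what
# gitlab-runner register writes for you.
[[runners]]
  name = "runner14"
  url = "http://gitlab"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    network_mode = "gitlab-network"  # attach job containers to the same network as GitLab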

Official Docker image says docker not running?

I perform the following docker commands in the following order:
docker pull docker
docker run -ti <imgId>
https://hub.docker.com/_/docker/
Now I am inside the "docker" image for Docker.
Now suppose I create a temp folder and download a Dockerfile:
mkdir temp
cd temp
curl <dockerfile>
docker build .
It will tell me Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This means that the docker service needs to be started, but since the official docker image is based on Alpine Linux, commands like service/systemctl are not available, so we must run apk add openrc --no-cache to get them.
After I install it, I still cannot start the docker service.
Running service docker start says that it cannot find docker as a service:
service: service docker does not exist
Eventually I want to build this via Jenkins.
In the build step, I use Execute Shell:
if [ -f "Dockerfile" ]; then
echo "Dockerfile exists ... removing it"
rm Dockerfile
fi
wget <dockerFile url>
docker build .
I purposely don't install openrc on Jenkins since I want to test locally first.
The image you're pulling here (with the latest tag) does not contain the Docker daemon; it's meant to be used as the Docker client. What you want is to first get the Docker daemon running with the image tagged dind (Docker in Docker):
docker network create dind
docker run --privileged --name docker --network dind -v docker-client-certs:/certs/client -d docker:dind
To verify it started up and works, you can check the logs:
docker logs docker
Now you can use a client container to connect to the daemon. This is how you connect interactively to the shell, like you wanted to:
docker run -ti --network dind -e DOCKER_TLS_CERTDIR=/certs -v docker-client-certs:/certs/client:ro docker
Docker commands should work inside this container. If you do docker version, you should see the versions of both the client and the server.
Note that the two containers share the same network (some examples online use links, but those are deprecated). They also share the client TLS certs, which are generated when the dind image starts up.
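If you just want a one-off check rather than an interactive shell, a sketch along the same lines (reusing the network and volume names from the commands above) might be:
# Run a throwaway client container that prints the client and server versions
# and exits; it reaches the daemon through the shared network and TLS certs.
docker run --rm --network dind \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v docker-client-certs:/certs/client:ro \
  docker docker version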

alpine cannot access docker daemon when using gitlab-ci

I have a custom GitLab CI in which I want to compile a Golang app and build a Docker image. I have decided to use the alpine Docker image for the GitLab runner. I can't seem to get Docker started. If I manually start the Docker service I get an error (* WARNING: docker is already starting), and if I don't start it I get: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Has anyone else experienced this?
This is not a duplicate question. The GitLab runner runs the docker alpine container as root (verified by running whoami). For the sake of trying, I did run usermod -aG docker $(whoami) and got the same output.
.gitlab-ci.yml
image: alpine

variables:
  GO_PROJECT: linkscout

before_script:
  - apk add --update go git libc-dev docker openrc
  - mkdir -p ~/go/src/${GO_PROJECT}
  - cp -r ${CI_PROJECT_DIR}/* ~/go/src/${GO_PROJECT}/
  - cd ~/go/src/${GO_PROJECT}
  - service docker start # * WARNING: docker is already starting

stages:
  - compile
  - build

compile:
  stage: compile
  script:
    - go get
    - go build -a

build:
  stage: build
  script:
    - docker version # If I don't run (service docker start) I get this message: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
By default you cannot use Docker-in-Docker. You should configure your runner like this. Then, as stated in the explanation, also use docker:latest as the image instead of alpine.
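As a rough sketch of what that runner-side configuration usually involves (the privileged flag is the key part; the surrounding entry is an assumption mirroring a default docker-executor registration):
# config.toml on the runner host: privileged = true lets the docker:dind
# service started by the job run its own Docker daemon.
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true
On the CI side, the job would then run on image: docker:latest with a docker:dind service, along the lines of the example at the end of this page.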

Gitlab CI/CD runner and docker connection configuration

I am trying to configure a GitLab CI/CD runner. On the runner, I have deployed Maven and Java, which build my project and execute the tests. So far so good, but the final step, which should package the code as a Docker image and deploy it, fails. Here is the script, which runs fine in the cloud but fails locally with docker: command not found, and I do not understand the workflow. For that step to run, am I supposed to install Docker on my runner, given that the runner itself is a container inside Docker? I could not figure out what I should do to make this step run. Please help.
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/imran_yusubov/gs-spring-boot-docker .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/imran_yusubov/gs-spring-boot-docker
How are you starting the runner?
The proper way to start the runner would be:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
This passes your Docker socket into the runner container. Then, in your pipeline, you would call the docker:dind service in order to run Docker in Docker, which lets you build Docker images and run containers.
You can find more info in this tutorial.
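For example, a hedged sketch of how the package job from the question might then look (the script commands are reused from the question; image: docker:latest and the docker:dind service are assumptions that give the job a Docker CLI and a daemon to talk to, and the dind service needs a privileged runner):
docker-build:
  stage: package
  image: docker:latest   # provides the docker CLI inside the job
  services:
    - docker:dind        # provides the daemon the CLI connects to
  script:
    - docker build -t registry.gitlab.com/imran_yusubov/gs-spring-boot-docker .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/imran_yusubov/gs-spring-boot-docker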

GitLab CI - Cannot connect to the Docker daemon from within an image

I have a Node-based project, and the following are the first few steps that need to be executed as part of the build:
npm install
npm run build
docker build -t client .
The last command above builds the following Dockerfile:
FROM docker.artifactory.abc.net/nginx
COPY build /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
Content of .gitlab-ci.yml:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
    - docker build -t client .
In the above .gitlab-ci.yml, I am using a custom node image (node:1.0) which contains the proxy settings needed for apk to work and the Artifactory configuration, so all dependencies are fetched through Artifactory. Now when I ran this build, I got a docker: command not found error while executing the last command (docker build -t client .), which is expected because the base image is for Node and doesn't contain Docker. So I added Docker setup instructions to the node Dockerfile based on this link, except for the last 3 lines, which configure the ENTRYPOINT and CMD.
Now when I ran the build, I got:
$ docker build -t client .
Sending build context to Docker daemon 372.7MB
Step 1 : FROM docker.artifactory.abc.net/nginx
Get https://docker.artifactory.abc.net/v2/nginx/manifests/latest: unknown: Authentication is required
ERROR: Job failed: exit code 1
This error, in my past experience, had to do with the docker login command. Since the Docker setup in the official image uses a tar-based install, I had to add a docker entry to /etc/group and then add the current user (root) to the docker group. I also added the docker login command, as shown below, to the Dockerfile:
addgroup docker; \
adduser root docker; \
docker login docker.artifactory.abc.net -u svc-art -p "ZTg6#&kq"; \
After that, if I try building this Dockerfile, I get the following error:
+ dockerd -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ docker -v
Docker version 17.05.0-ce, build v17.05.0-ce
+ adduser root docker
+ tail -2 /etc/group
node:x:1000:node
docker:x:101:root
+ docker login docker.artifactory.abc.net -u svc-art -p ZTg6#&kq
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I also did an ls -ltr /var/run/docker.sock, and the Docker socket file was not present inside the image. This seems to be the issue.
Any idea how i can get this working?
Well, from the example you have provided, I cannot see where you call your Docker service, so I assume you are not calling it; you are also not logging into the registry.
Your pipeline should look something like the following:
image: docker.artifactory.abc.net/docker/node:1.0

stages:
  - build
  - deploy

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.example.com
    - docker build -t registry.example.com/group/project/image:latest .
    - docker push registry.example.com/group/project/image:latest
You can also find more info here.
