I have set up a GitLab CI test environment:
- GitLab CE on an Ubuntu 18.04 VM
- Docker GitLab runner
- MicroK8s cluster
I am able to install the GitLab-managed Ingress controller.
As I am running dind, how should I expose port 4000 to my host machine (the VM), and what is the best way to do it?
I tried to play around with the GitLab-installed Ingress controller, but I'm not sure where the config files/YAML for the GitLab-managed apps live.
I also tried a simple NodePort expose and it did not help:
kubectl -n gitlab-managed-apps expose deployment <Gitlab Runner> --type=NodePort --port=4000
Below is my .gitlab-ci.yml file:
image: docker:19.03.13

services:
  - name: docker:18.09.7-dind
    command: ['--insecure-registry=gitlab.local:32000']

stages:
  - testing

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  CI_REGISTRY_IMAGE: img1

before_script:
  - echo "$REG_PASSWORD" | docker -D login "$CI_REGISTRY" -u "$REG_USER" --password-stdin

testing:
  stage: testing
  tags: [docker]
  script:
    - docker pull "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - docker images
    - hostname
    - docker run --rm -d -p 4000:4000 "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - netstat -na | grep -w 4000
    - sleep 3600
  only:
    - master
I managed to figure out the issue with exposing via Kubernetes services: it was the selector that was not clearly defined. Some key points to note:
- I could see that the port was listening on the IPv6 interface (::4000) within the pod; however, this was not the problem.
- I added podLabels to the runner's config.toml (e.g. app: myapp), so that each pod spawned by the runner carried a predefined label.
- I used that label in the selector section of the LoadBalancer service, as sketched below.
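For reference, a minimal sketch of the two pieces, assuming the Kubernetes executor; the label app: myapp is just the example value from above, and the service name/type are made up for illustration (NodePort works the same way):

# runner config.toml: attach a label to every job pod
[runners.kubernetes]
  [runners.kubernetes.pod_labels]
    app = "myapp"

# service whose selector matches the labelled job pods
apiVersion: v1
kind: Service
metadata:
  name: runner-port-4000
  namespace: gitlab-managed-apps
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 4000
      targetPort: 4000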
Hope it's useful to someone.
I have this pipeline to execute:
stages:
  - build-gitlab
  - deploy-uat

build:
  image: node:14-alpine
  stage: build-gitlab
  services:
    - docker
  before_script:
    - docker login $CI_REGISTRY_URL -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_FRONTEND_REGISTRY_URL
    - docker push $CI_FRONTEND_REGISTRY_URL

deploy:
  image:
    name: bitnami/kubectl:latest
  stage: deploy-uat
  before_script:
    - kubectl config set-cluster deploy-cluster --server="$K8S_SERVER" --insecure-skip-tls-verify
    - kubectl config set-credentials gitlab --token=$(echo $K8S_TOKEN | base64 -d)
    - kubectl config set-context deploy-cluster --cluster=deploy-cluster --namespace=ns-frontend-dev --user=gitlab
    - kubectl config use-context deploy-cluster
  script:
    - envsubst < deploy.tmpl > deploy.yaml
    - kubectl apply -f deploy.yaml
Initially I defined a runner for my GitLab with the shell executor. Docker is installed on that runner, which is why the build stage executed successfully. But since I would like to use multiple Docker images, as you can see in my gitlab-ci.yaml file, the shell executor is not the appropriate one.
I saw this documentation about GitLab executors, but it is not explicit enough.
I registered a new runner with the docker executor, and got this result:
Preparing the "docker" executor
Using Docker executor with image node:14-alpine ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image sha256:0f8d12a73562adf6588be88e37974abd42168017f375a1e160ba08a7ee3ffaa9 for docker:latest with digest docker@sha256:75026b00c823579421c1850c00def301a6126b3f3f684594e51114c997f76467 ...
Waiting for services to be up and running (timeout 30 seconds)...
*** WARNING: Service runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0 probably didn't start properly.
Health check error:
service "runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0-wait-for-service" health check: exit code 1
Health check container logs:
2023-01-18T15:50:31.037166246Z FATAL: No HOST or PORT found
and the deploy part did not succeed. What is the right executor to choose among:
docker, shell, ssh, kubernetes, custom, parallels, virtualbox, docker+machine, docker-ssh+machine, instance, docker-ssh
And how do I use it?
I am trying to set up some integration tests in GitLab CI/CD. In order to run these tests, I want to reconstruct my system (several linked containers) using the GitLab runner and docker-compose up. My system is composed of several containers that communicate with each other through MQTT, and an InfluxDB container which is queried by other containers.
I've managed to get to the point where the runner actually executes docker-compose up and creates all the relevant containers. This is my .gitlab-ci.yml file:
image: docker:19.03

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - name: docker:19.03-dind
    alias: localhost

before_script:
  - docker info

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
As you can see, I am installing docker-compose, running compose up on my config file, and then executing my integration tests from within one of the containers. When I run that final line on my local system, the integration tests run as expected; in the CI/CD environment, however, all the tests throw some variation of ConnectionRefusedError: [Errno 111] Connection refused. Running docker-compose ps shows all the relevant containers as Up and healthy.
I have found that the issues arise whenever one container tries to communicate with another, through lines like self.localClient = InfluxDBClient("influxdb", 8086, database = "replay") or client.connect("mosquitto", 1883, 60). This works fine in my local Docker environment, as the address names resolve to the other running containers, but it seems to create problems in this Docker-in-Docker setup. Does anyone have any suggestions? Do containers in this dind environment have different names?
It is also worth mentioning that this could be a problem with my docker-compose.yml not being configured correctly to start healthy containers. docker-compose ps suggests they are up, but is there a better way to check whether they are running correctly? Here's an excerpt of my docker-compose file:
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    volumes:
      - ./data-replay:/data-replay

  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web:
  influxnet:
    internal: true
  brokernet:
    driver: bridge
    internal: true
There are a few possibilities for why this error is occurring:
Docker 19.03-dind is known to be problematic and unable to create networks when run as a service without a proper TLS setup. Have you correctly set up your GitLab runner with TLS certificates? I've noticed you are using "/certs" in your gitlab-ci.yml; did you mount your runner to share the volume where the certificates are stored?
If your GitLab runner is not running with privileged permissions, or is not correctly configured to use the remote machine's network socket, you won't be able to create networks. A simple solution to unify your networks for a CI/CD environment is to configure your machine using this docker-compose followed by this script. (Source) It sets up a local network with a bridged network driver, where containers can reach each other by hostname.
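The underlying idea, in case those links go stale: on a user-defined bridge network, Docker's embedded DNS resolves container names, which is what the hostname-based connections rely on. A minimal illustration (the network and container names here are made up):

docker network create --driver bridge ci-net
docker run -d --network ci-net --name influxdb influxdb:1.8
docker run --rm --network ci-net byrnedo/alpine-curl http://influxdb:8086/ping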
There's an issue with your gitlab-ci.yml as well. When you execute this part of the script:
services:
  - name: docker:19.03-dind
    alias: localhost

integration-tests:
  stage: test
  script:
    - apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
You alias the Docker service's hostname to localhost, but you never use it; instead you directly use the docker and docker-compose binaries from your image, binding them to a different set of networks than the ones GitLab creates automatically.
Let's try this solution (albeit I couldn't test it right now, so I apologize if it doesn't work right away):
gitlab-ci.yml
image: docker/compose:debian-1.28.5 # You should be running as a privileged GitLab runner

services:
  - docker:dind

integration-tests:
  stage: test
  script:
    #- apk add --no-cache docker-compose
    - docker-compose -f "docker-compose.replay.yml" up -d --build
    - docker exec moderator-monitor_datareplay_1 bash -c 'cd src ; python integration_tests.py'
docker-compose.yml
services:
  datareplay:
    networks:
      - web
      - influxnet
      - brokernet
    image: data-replay
    build:
      context: data-replay
    # volumes: You're mounting your volume to an ephemeral folder, which is in the CI pipeline and will be wiped afterwards (if you're using Docker-DIND)
    #   - ./data-replay:/data-replay

  mosquitto:
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    networks:
      - web
      - brokernet

networks:
  web: # hostnames are created automatically, you don't need to specify a local setup through localhost
  influxnet:
  brokernet:
    driver: bridge # If you're using a bridge driver, overlay2 doesn't make sense
Both of these commands will install a GitLab runner as a Docker container, without the hassle of configuring it manually to allow socket binding for your project.
(1):
docker run --detach --name gitlab-runner --restart always -v /srv/gitlab-runner/config:/etc/gitlab-runner -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest
And then (2):
docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register --non-interactive --description "monitoring cluster instance" --url "https://gitlab.com" --registration-token "replacethis" --executor "docker" --docker-image "docker:latest" --locked=true --docker-privileged=true --docker-volumes /var/run/docker.sock:/var/run/docker.sock
Remember to change your token in command (2).
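After registration, the runner's config lands on the host volume mounted in command (1), so you can inspect or adjust it there:

cat /srv/gitlab-runner/config/config.toml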
I have this CircleCI configuration.
version: 2
jobs:
  build:
    docker:
      - image: docker:18.09.2-git
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
        name: elasticsearch
    working_directory: ~/project
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: test
          command: |
            docker run --rm \
              --network host \
              byrnedo/alpine-curl \
              elasticsearch:9200
I'm looking for a way to allow my new container to access the Elasticsearch port 9200. With this configuration, elasticsearch is not even a known hostname.
Creating an extra network is not possible; I get the error message container sharing network namespace with another container or host cannot be connected to any other network.
The host network seems to work only for the primary image.
How could I do this?
That will not work. Containers started during a build via the docker run command run on a remote Docker engine. They cannot talk to the containers running as part of the executor over TCP, since the two are isolated; docker exec is the only way in.
The solution will ultimately depend on your end goal, but one option might be to remove the Elasticsearch image/container from the executor and use Docker Compose to get both images talking to each other within the build.
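As an illustration, a compose file along these lines could replace the secondary image (an untested sketch: the single-node setting, the test service, and the retry flags are my assumptions; byrnedo/alpine-curl's entrypoint is curl, so command is just curl arguments):

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
  test:
    image: byrnedo/alpine-curl
    command: ["--retry", "10", "--retry-connrefused", "http://elasticsearch:9200"]
    depends_on:
      - elasticsearch

The build step would then run something like docker-compose up --exit-code-from test against the remote engine instead of the bare docker run.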
I am trying to automate the deployment process of my project. The environment looks like this:
- we use GitLab to store our code
- we execute a CI/CD pipeline within GitLab to build a Docker image and store it in an Amazon repository
- once the build stage is completed, Docker has to run the latest image on the first of two instances in the deployment stage, and after successful execution, scale the containers to the second instance.
This is how the .gitlab-ci.yml file looks:
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2

testBuild:
  stage: build
  script:
    - docker login -u AWS -p <password> <link to Amazons' repo>
    - docker build -t <repo/image:latest> app/
    - docker push <repo/image:latest>

testDeploy:
  stage: deploy
  variables:
    AWS_DEFAULT_REGION: "us-east-2"
    AWS_ACCESS_KEY_ID: "access key"
    AWS_SECRET_ACCESS_KEY: "ssecretAK"
    AWS_CLUSTER: "testCluster"
    AWS_SIZE: "2"
  before_script:
    - apk add --update curl
    - curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
    - chmod +x /usr/local/bin/ecs-cli
  script:
    - docker login -u AWS -p <password> <repo_link>
    - docker run --rm --name <name-ofcontainer> -p 80:8000 -i <repo/image:latest>
    - ecs-cli configure --region $AWS_DEFAULT_REGION --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --cluster $AWS_CLUSTER
    - ecs-cli scale --capability-iam --size $AWS_SIZE
  only:
    - development
Now, when the script executes successfully and I SSH into the instances and enter docker ps -a, it does not list a running container, nor does docker images find the image.
If I enter the commands manually on one of the instances, the website is available.
My question is: how do I make the container available?
EDIT 1:
We use a shared runner, if that is what you're asking. The reason we use docker:dind is that without it the following error occurs and we cannot go further:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
GitLab is running in a Kubernetes cluster. The runner can't build a Docker image with the build artifacts. I've already tried several approaches to fix this, but no luck. Here are some config snippets:
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t gitlab.my.com/group/app .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
    - docker push gitlab.my.com/group/app
config.toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "app"
  url = "https://gitlab.my.com/ci"
  token = "xxxxxxxx"
  executor = "kubernetes"
  [runners.kubernetes]
    privileged = true
    disable_cache = true
Package stage log:
running with gitlab-ci-multi-runner 1.11.1 (a67a225)
on app runner (6265c5)
Using Kubernetes namespace: default
Using Kubernetes executor with image docker:latest ...
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Running on runner-6265c5-project-4-concurrent-0h9lg9 via gitlab-runner-3748496643-k31tf...
Cloning repository...
Cloning into '/group/app'...
Checking out 10d5a680 as master...
Skipping Git submodules setup
Downloading artifacts for maven-build (61)...
Downloading artifacts from coordinator... ok id=61 responseStatus=200 OK token=ciihgfd3W
$ docker build -t gitlab.my.com/group/app .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
What am I doing wrong?
You don't need to use this:
DOCKER_DRIVER: overlay
because it seems overlay isn't supported on that host, so the svc-0 container is unable to start with it:
$ kubectl logs -f `kubectl get pod |awk '/^runner/{print $1}'` -c svc-0
time="2017-03-20T11:19:01.954769661Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
time="2017-03-20T11:19:01.955720778Z" level=info msg="libcontainerd: new containerd process, pid: 20"
time="2017-03-20T11:19:02.958659668Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded."
Also, add export DOCKER_HOST="tcp://localhost:2375" to the docker-build job:
docker-build:
  stage: package
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    - docker build -t gitlab.my.com/group/app .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
    - docker push gitlab.my.com/group/app
When using Kubernetes, you have to adjust your build image to connect to the Docker engine. Add this to your build image's environment:
DOCKER_HOST=tcp://localhost:2375
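In .gitlab-ci.yml that can be set as a pipeline-level variable, for example:

variables:
  DOCKER_HOST: tcp://localhost:2375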
Quote from the docs:
Running the docker:dind, also known as docker-in-docker, image is also possible, but sadly needs the containers to be run in privileged mode. If you're willing to take that risk, other problems will arise that might not seem as straightforward at first glance. Because the docker daemon is started as a service, usually in your .gitlab-ci.yaml, it will be run as a separate container in your pod. Basically, containers in pods only share volumes assigned to them and an IP address, by which they can reach each other using localhost. /var/run/docker.sock is not shared by the docker:dind container, and the docker binary tries to use it by default. To overwrite this and make the client use TCP to contact the docker daemon in the other container, be sure to include DOCKER_HOST=tcp://localhost:2375 in the environment variables of the build container.
Gitlab-CI on Kubernetes
Based on @Yarik's comment, what worked for me was:
- export DOCKER_HOST=$DOCKER_PORT
(presumably $DOCKER_PORT is already populated with the daemon's tcp://host:port address, so nothing needs to be hard-coded). No other answers worked for me.
I had the same problem, and I could not get the above workarounds to work for me (I did not try the volumes trick mentioned by @fkpwolf).
GitLab now has an alternative solution using kaniko, which did work for me.
The .gitlab-ci.yml could then be something like this:
stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-kaniko-build:
  stage: package
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"gitlab.my.com\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_BUILD_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination gitlab.my.com/group/app
The GitLab docs mention that:
kaniko solves two problems with using the docker-in-docker build method:
Docker-in-docker requires privileged mode in order to function, which is a significant security concern.
Docker-in-docker generally incurs a performance penalty and can be quite slow.
See: https://docs.gitlab.com/ee/ci/docker/using_kaniko.html