Not able to build Docker image in GitLab CI

I've been trying to build a simple GitLab CI pipeline that builds an image and pushes it to Google Container Registry. I keep running into this error:
ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial
tcp: lookup docker on 169.254.169.254:53: no such host
I have tried all the solutions posted across GitLab issue threads, but none helped. I am using public runners, and it's a pretty simple CI script:
image: docker:latest

variables:
  GCR_IMAGE: <GCR_IMAGE>

services:
  - docker:dind

build:
  stage: build
  before_script:
    - docker info
    - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io
  script:
    - docker build -t $GCR_IMAGE:latest .
    - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Relevant issue thread: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4794
I am using gitlab-runner 15.7.1.

A few weeks ago I encountered this problem and was able to solve it with this method:
image:
  name: docker:20.10.16

services:
  - name: docker:20.10.16-dind

variables:
  DOCKER_HOST: tcp://docker:2376/
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

before_script:
  - until docker info; do sleep 1; done
  - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io

script:
  - docker build -t $GCR_IMAGE:latest .
  - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Also add this configuration to the runner:
[[runners]]
  [runners.kubernetes]
    namespace = "{{.Release.Namespace}}"
    image = "ubuntu:20.04"
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"
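The [[runners.kubernetes.volumes.empty_dir]] entry shares the /certs/client directory between the dind service and the job container, which is how the client picks up the TLS certificates referenced by DOCKER_CERT_PATH. If TLS between the job and the service is not a requirement, a common alternative (a sketch, not part of the original answer) is to disable certificate generation and use the plain port:

image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  DOCKER_HOST: tcp://docker:2375/   # plain-TCP port; 2376 is the TLS port
  DOCKER_TLS_CERTDIR: ""            # empty value disables TLS cert generation in dind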

Related

GitLab job failed with exit code 2 after execution of OWASP ZAP scanner

GitLab jobs are failing with exit code 2 after execution. No error details appear in the log file; it only shows ERROR: Job failed: exit code 2.
.gitlab-ci.yml file:
image: docker:latest

services:
  - name: docker:dind
    alias: thedockerhost

variables:
  DOCKER_HOST: tcp://thedockerhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

stages:
  - test1

test1:
  stage: test1
  script:
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -t http://www.example.com -r example.html
  artifacts:
    when: always
    paths:
      - example.html
From the documentation:
https://www.zaproxy.org/docs/docker/full-scan/
https://www.zaproxy.org/docs/docker/baseline-scan/
https://www.zaproxy.org/docs/docker/api-scan/
-I: do not return failure on warning
If you want to understand the various exit states, you can check the open-source code: https://github.com/zaproxy/zaproxy/tree/main/docker
For further information about using ZAP's packaged scans and docker images refer to: https://www.zaproxy.org/docs/docker/
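For example, a minimal sketch of the job above with -I appended, so warnings alone no longer fail the scan (same target and report name as in the question):

test1:
  stage: test1
  script:
    # -I: do not return failure on warning
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -t http://www.example.com -r example.html -I
  artifacts:
    when: always
    paths:
      - example.html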

Authentication Error when Building and Pushing docker image to ACR using Azure DevOps Pipelines and docker-compose

I am trying to build and push a Docker image to ACR using Azure DevOps Pipelines. I have to build it with a docker-compose.yml file to be able to use OpenVPN in the container.
When I run the pipeline I get the following error. Does anyone have an idea of how to solve this?
Starting: DockerCompose
==============================================================================
Task : Docker Compose
Description : Build, push or run multi-container Docker applications. Task can be used with Docker or Azure Container registry.
Version : 0.183.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-docker-compose-tsg
==============================================================================
/usr/local/bin/docker-compose -f /home/vsts/work/1/s/src/docker-compose.yml -f /home/vsts/agents/2.188.2/.docker-compose.1624362077551.yml -p Compose up -d
Creating network "composeproject_default" with the default driver
Pulling getstatus (***/getstatus:)...
Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]Creating network "composeproject_default" with the default driver
##[error]Pulling getstatus (***/getstatus:)...
##[error]Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]The process '/usr/local/bin/docker-compose' failed with exit code 1
Finishing: DockerCompose
My azure-pipelines.yml looks like this:
# Docker
# Build and push an image to Azure Container Registry
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '*****************************'
  imageRepository: 'getstatus'
  containerRegistry: 'composeproject.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: DockerCompose@0
      inputs:
        containerregistrytype: 'Azure Container Registry'
        dockerComposeFile: '**/docker-compose.yml'
        action: 'Run a Docker Compose command'
        dockerComposeCommand: 'up -d'
And the docker-compose.yml looks like this:
version: "3.3"
services:
  getstatus:
    image: composeproject.azurecr.io/getstatus
    restart: always
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    volumes:
      - /etc/timezone:/etc/timezone:ro
I think your Docker Compose task is missing a couple of parameters. Try adding azureContainerRegistry: composeproject.azurecr.io and azureSubscriptionEndpoint: $(dockerRegistryServiceConnection).
I'm not sure why the credentials supplied in the Docker@2 task don't persist, since they're in the same stage, but then I could fill an encyclopedia with what I'm not sure about when it comes to Azure Pipelines.
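A sketch of the amended DockerCompose@0 task with those two inputs added (assuming the service connection variable defined earlier in the pipeline; double-check the input names against the task's schema):

- task: DockerCompose@0
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscriptionEndpoint: $(dockerRegistryServiceConnection)
    azureContainerRegistry: composeproject.azurecr.io
    dockerComposeFile: '**/docker-compose.yml'
    action: 'Run a Docker Compose command'
    dockerComposeCommand: 'up -d'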

gitlab-runner docker executor (dind) - Error https://docker:2375/v1.40/info dial tcp: lookup docker on

I have an issue with the gitlab-runner Docker executor. After I run my gitlab-ci.yml file, the pipeline fails on the docker info step during before_script with:
Running with gitlab-runner 13.10.0 (54944146)
on docker-runner N2_yEgUD
Preparing the "docker" executor 00:07
Using Docker executor with image docker:19.03.0 ...
Starting service docker:19.03.0-dind ...
Pulling docker image docker:19.03.0-dind ...
Using docker image sha256:fd0c64832f7e46b63a180e6000dbba7ad7a63542c5764841cba73429ba74a39e for docker:19.03.0-dind with digest docker@sha256:442ac4b31375cbe617f31759b5199d240f11d5f430e54946575b274b2fb6f096 ...
Waiting for services to be up and running...
.............................................................................................
$ docker info
Client:
Debug Mode: false
Server:
ERROR: error during connect: Get https://docker:2375/v1.40/info: dial tcp: lookup docker on 127.0.0.53:53: server misbehaving
errors pretty printing info
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I did research on Stack Overflow and the official GitLab forum, but none of the answers fixed my issue:
- add to the .toml: volumes = ["/certs/client"]
- run against older images: docker:18.x.x / docker:18.x.x-dind | docker:stable / docker:dind
- run with: DOCKER_TLS_CERTDIR: ""
- run with/without:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
- add an entrypoint to the service:
    services:
      - name: docker:dind
        entrypoint: ["env", "-u", "DOCKER_HOST"]
        command: ["dockerd-entrypoint.sh"]
Content of the gitlab-runner .toml:
concurrent = 1
check_interval = 0
log_level = "debug"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-runner"
  url = "xxxxxxxx"
  token = "xxxxxxx"
  executor = "docker"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    privileged = true
    image = "docker:19.03.12"
    disable_cache = false
    volumes = ["/cache", "/certs/client"]
    network_mode = "host"
Content of the gitlab-ci.yml:
image: docker:19.03.0

services:
  - docker:19.03.0-dind

stages:
  - build
  - test_framework

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: xxxx

before_script:
  - docker info
  - docker login -u xxxx -p $CI_JOB_TOKEN xxxx

build:
  stage: build
  tags:
    - adm-docker
  script:
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build -t $CONTAINER_TEST_IMAGE --cache-from $CONTAINER_RELEASE_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  timeout: 1 hours

.test_commit: &test_commit
  stage: test_framework
  image: $CONTAINER_TEST_IMAGE
  tags:
    - adm-docker
  timeout: 1 hours
  artifacts:
    reports:
      junit: 'results/xunit.xml'
    expire_in: 1 day
  except:
    - master

test-unit:
  <<: *test_commit
  script:
    - python3 -m pytest --junitxml=results/xunit.xml test_unit/
Only one thing fixed the issue (as a workaround): adding this to the .toml:
volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
But after that I lose the dind ability to run my gitlab-ci.yml with a different image for the test stage (without resorting to docker run MY_IMAGE python3 ... under script:), which is not what I want.
gitlab-runner under Ubuntu 20 / Docker version 20.10.5, build 55c4c88.
I worked with a very similar gitlab-ci.yml around a year ago, and there was no issue with the Docker executor.
Any ideas/suggestions?
I was able to fix the issue by changing the flow of my gitlab-ci.yml:
image: docker:19.03.5

services:
  - docker:19.03.5-dind

stages:
  - build
  - test_framework
  - release

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: xxxxx
  CONTAINER_RELEASE_IMAGE: xxxxx

build:
  stage: build
  tags:
    - adm-docker
  before_script:
    - docker info
    - docker login -u xxxxx -p $CI_JOB_TOKEN xxxxx
  script:
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build -t $CONTAINER_TEST_IMAGE --cache-from $CONTAINER_RELEASE_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  timeout: 1 hours

.test_commit: &test_commit
  stage: test_framework
  tags:
    - adm-docker
  timeout: 1 hours
  artifacts:
    reports:
      junit: 'results/xunit.xml'
    expire_in: 1 day
  except:
    - master

test-unit:
  <<: *test_commit
  image: $CONTAINER_TEST_IMAGE
  script:
    - python3 -m pytest --junitxml=results/xunit.xml test_unit/
and the .toml:
[[runners]]
  name = "docker-runner"
  url = xxxxx
  token = xxxxx
  executor = "docker"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    privileged = true
    image = "docker:19.03.12"
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    network_mode = "host"
The issue was fixed by volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"],
and the issue with the test stage was caused by:
before_script:
  - docker info
  - docker login -u xxxxx -p $CI_JOB_TOKEN xxxxx
at the root of the .yml file. A root-level before_script runs in every job, and the test job's image ($CONTAINER_TEST_IMAGE) most likely does not contain the docker client, so docker info fails there. I had to move it into the build stage.
I hope that will help people in the future.
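One practical consequence of the socket-mount approach: docker commands now talk to the host's daemon rather than a dind service, so images built in one job are immediately visible to later jobs on the same host. A quick way to verify the wiring, as a sketch (job name is hypothetical; the tag comes from the config above):

verify-docker:
  stage: build
  tags:
    - adm-docker
  script:
    - docker info    # should report the host daemon, reached through the mounted /var/run/docker.sock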

Rolling out a new backend version + Kubernetes + Gitlab CI + Google Cloud

I don't know how to update my backend workload on my Kubernetes cluster. My GitLab pipeline runs without errors, but my active revision is still from my first push. How can I update the revision to trigger the rolling-update action? Can I integrate an automatic rollout into GitLab CI?
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MYNAME --docker-password=$REGISTRY_PASSWD --docker-email=MYMAIL
    - kubectl apply -f deployment.yml
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ft-backend
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ft-backend
    spec:
      containers:
      - name: ft-backend
        image: registry.gitlab.com/projectX/ft-backend
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: registry.gitlab.com
[Screenshot: Google Cloud Workload]
As discussed in the comments, you have to update your Deployment's .spec.template to trigger a rollout. An easy way to do that is to tag your image upon release.
In your .gitlab-ci.yml file you can use the CI_COMMIT_SHA variable:
# in your docker-build job, update build and push:
- docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
- docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
# in your k8s-deploy job add this:
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
That would both version your image on your GitLab project registry, and trigger a rollout.
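To make the job fail fast if the rollout does not complete, you could append a status check after the set image step; a small sketch using standard kubectl, not part of the original answer:

- kubectl rollout status deployment/ft-backend   # waits until the new ReplicaSet is ready, errors out on failure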
Like Clorichel mentioned in the comments, you'd need to modify your deployment to trigger a rollout. You could use something like Gitflow and Semantic Versioning (if you're not already) to tag your container image. For example, in the .gitlab-ci.yml you could add the Git tag to your container image:
script:
  - docker build -t registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG
In the deployment.yml you would reference the new version:
spec:
  containers:
  - name: ft-backend
    image: registry.gitlab.com/projectX/ft-backend:YOUR_NEW_GIT_TAG
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
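Since $CI_COMMIT_TAG is only populated on tag pipelines, you would typically restrict the build job to tags as well; a minimal sketch, assuming the pre-rules only: syntax already used in these configs:

docker-build:
  stage: package
  only:
    - tags
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG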

Kubernetes deployment.extensions not found

I get the following error message in my GitLab CI pipeline and I can't make sense of it. Yesterday the pipeline still worked, but I didn't change anything in the yml, and I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you are invoking the command:
kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?
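If the deployment has simply never been created in this cluster, one hedged fix is to reorder the k8s-deploy steps so the manifest is applied before the image is patched, guaranteeing the object exists:

- kubectl apply -f deployment.yml
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}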
Note that creating deployments the old way is not supported in newer Kubernetes versions; use kubectl create deployment instead. For example:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
