GitLab job failed with exit code 2 after execution of OWASP ZAP scanner - Docker

The GitLab job is failing with exit code 2 after execution. No error details appear in the log file; it only shows ERROR: Job failed: exit code 2.
.gitlab-ci.yml file:
image: docker:latest

services:
  - name: docker:dind
    alias: thedockerhost

variables:
  DOCKER_HOST: tcp://thedockerhost:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

stages:
  - test1

test1:
  stage: test1
  script:
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -t http://www.example.com -r example.html
  artifacts:
    when: always
    paths:
      - example.html

From the documentation:
https://www.zaproxy.org/docs/docker/full-scan/
https://www.zaproxy.org/docs/docker/baseline-scan/
https://www.zaproxy.org/docs/docker/api-scan/
-I    do not return failure on warning
The packaged scans return a non-zero exit code when they raise warnings or failures, so an exit code of 2 with no other error in the log generally means the scan completed but reported warnings; passing -I tells it not to return failure on warnings. If you want to understand the various exit states you can check the open-source code: https://github.com/zaproxy/zaproxy/tree/main/docker
For further information about using ZAP's packaged scans and Docker images refer to: https://www.zaproxy.org/docs/docker/
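A minimal sketch of the adjusted job, assuming you want to keep the HTML report but not have warnings fail the pipeline; the -I flag is taken from the baseline-scan documentation above, and allow_failure: true is an alternative on the GitLab side if you would rather keep the exit code but not block the pipeline:

test1:
  stage: test1
  script:
    - docker run -v $(pwd):/zap/wrk/:rw --name zap2 owasp/zap2docker-stable zap-baseline.py -t http://www.example.com -r example.html -I
  # alternative: keep the scanner's exit code but let the pipeline continue
  # allow_failure: true
  artifacts:
    when: always
    paths:
      - example.html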

Related

Not able to build Docker image in GitLab CI

I've been trying to build a simple GitLab CI pipeline which builds an image and pushes it to Google Container Registry. I am running into this error:
ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial
tcp: lookup docker on 169.254.169.254:53: no such host
I have tried all the solutions posted across GitLab issue threads, but no help. I am using public runners, and it's a pretty simple CI script.
image: docker:latest

variables:
  GCR_IMAGE: <GCR_IMAGE>

services:
  - docker:dind

build:
  stage: build
  before_script:
    - docker info
    - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io
  script:
    - docker build -t $GCR_IMAGE:latest .
    - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Relevant issue thread: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4794
Using gitlab-runner 15.7.1
A few weeks ago I encountered this problem and was able to solve it with this method:
image:
  name: docker:20.10.16

services:
  - name: docker:20.10.16-dind

variables:
  DOCKER_HOST: tcp://docker:2376/
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

build:
  stage: build
  before_script:
    - until docker info; do sleep 1; done
    - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io
  script:
    - docker build -t $GCR_IMAGE:latest .
    - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Also add this configuration to the runner:
[[runners]]
  [runners.kubernetes]
    namespace = "{{.Release.Namespace}}"
    image = "ubuntu:20.04"
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"
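If the runner runs the dind service in privileged mode and you don't need TLS between the job and the service, a commonly used alternative is to disable TLS and point the client at port 2375, mirroring the non-TLS setup from the question at the top of this page. A sketch only; whether it works depends on your runner configuration:

services:
  - docker:20.10.16-dind

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""   # empty value disables TLS in the dind service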

Authentication Error when Building and Pushing docker image to ACR using Azure DevOps Pipelines and docker-compose

I am trying to build and push a docker image to ACR using Azure DevOps pipelines. I have to build it with a docker-compose.yml file to be able to use openvpn in the container.
When I run the pipeline I get the following error. Does anyone have an idea of how to solve this?
Starting: DockerCompose
==============================================================================
Task : Docker Compose
Description : Build, push or run multi-container Docker applications. Task can be used with Docker or Azure Container registry.
Version : 0.183.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-docker-compose-tsg
==============================================================================
/usr/local/bin/docker-compose -f /home/vsts/work/1/s/src/docker-compose.yml -f /home/vsts/agents/2.188.2/.docker-compose.1624362077551.yml -p Compose up -d
Creating network "composeproject_default" with the default driver
Pulling getstatus (***/getstatus:)...
Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]Creating network "composeproject_default" with the default driver
##[error]Pulling getstatus (***/getstatus:)...
##[error]Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]The process '/usr/local/bin/docker-compose' failed with exit code 1
Finishing: DockerCompose
My azure-pipelines.yml looks like this:
# Docker
# Build and push an image to Azure Container Registry
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
  - main

resources:
  - repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '*****************************'
  imageRepository: 'getstatus'
  containerRegistry: 'composeproject.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
  - stage: Build
    displayName: Build and push stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - task: Docker@2
            displayName: Build and push an image to container registry
            inputs:
              command: buildAndPush
              repository: $(imageRepository)
              dockerfile: $(dockerfilePath)
              containerRegistry: $(dockerRegistryServiceConnection)
              tags: |
                $(tag)
          - task: DockerCompose@0
            inputs:
              containerregistrytype: 'Azure Container Registry'
              dockerComposeFile: '**/docker-compose.yml'
              action: 'Run a Docker Compose command'
              dockerComposeCommand: 'up -d'
And the docker-compose.yml looks like this:
version: "3.3"
services:
  getstatus:
    image: composeproject.azurecr.io/getstatus
    restart: always
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    volumes:
      - /etc/timezone:/etc/timezone:ro
I think your Docker Compose task is missing a couple of parameters. Try adding azureContainerRegistry: composeproject.azurecr.io and azureSubscriptionEndpoint: $(dockerRegistryServiceConnection).
I'm not sure why the credentials supplied in the Docker@2 task don't persist, since they're in the same stage, but then I could fill an encyclopedia with what I'm not sure about when it comes to Azure Pipelines.
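A sketch of the adjusted DockerCompose@0 task with those two inputs added, assuming the same variable and registry names used in the pipeline above (azureSubscriptionEndpoint and azureContainerRegistry are existing inputs of this task; whether the Docker registry service connection can be reused as the subscription endpoint follows the suggestion above):

- task: DockerCompose@0
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscriptionEndpoint: $(dockerRegistryServiceConnection)
    azureContainerRegistry: composeproject.azurecr.io
    dockerComposeFile: '**/docker-compose.yml'
    action: 'Run a Docker Compose command'
    dockerComposeCommand: 'up -d'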

gitlab-runner docker executor (dind) - Error https://docker:2375/v1.40/info dial tcp: lookup docker on

I have an issue with the gitlab-runner Docker executor. After I run my gitlab-ci.yml file, the pipeline fails on the docker info step during before_script with:
Running with gitlab-runner 13.10.0 (54944146)
on docker-runner N2_yEgUD
Preparing the "docker" executor 00:07
Using Docker executor with image docker:19.03.0 ...
Starting service docker:19.03.0-dind ...
Pulling docker image docker:19.03.0-dind ...
Using docker image sha256:fd0c64832f7e46b63a180e6000dbba7ad7a63542c5764841cba73429ba74a39e for docker:19.03.0-dind with digest docker@sha256:442ac4b31375cbe617f31759b5199d240f11d5f430e54946575b274b2fb6f096 ...
Waiting for services to be up and running...
.............................................................................................
$ docker info
Client:
Debug Mode: false
Server:
ERROR: error during connect: Get https://docker:2375/v1.40/info: dial tcp: lookup docker on 127.0.0.53:53: server misbehaving
errors pretty printing info
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I did research on Stack Overflow and the official GitLab forum, but none of the answers fixed my issue:
- add to the .toml -> volumes = ["/certs/client"]
- run against older images: docker:18.x.x / docker:18.x.x-dind | docker:stable / docker:dind
- run with: DOCKER_TLS_CERTDIR: ""
- run with/without:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
- add an entrypoint to the service:
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
Content of the gitlab-runner .toml:
concurrent = 1
check_interval = 0
log_level = "debug"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "docker-runner"
  url = "xxxxxxxx"
  token = "xxxxxxx"
  executor = "docker"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    privileged = true
    image = "docker:19.03.12"
    disable_cache = false
    volumes = ["/cache", "/certs/client"]
    network_mode = "host"
Content of the gitlab-ci.yml:
image: docker:19.03.0

services:
  - docker:19.03.0-dind

stages:
  - build
  - test_framework

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: xxxx

before_script:
  - docker info
  - docker login -u xxxx -p $CI_JOB_TOKEN xxxx

build:
  stage: build
  tags:
    - adm-docker
  script:
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build -t $CONTAINER_TEST_IMAGE --cache-from $CONTAINER_RELEASE_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  timeout: 1 hours

.test_commit: &test_commit
  stage: test_framework
  image: $CONTAINER_TEST_IMAGE
  tags:
    - adm-docker
  timeout: 1 hours
  artifacts:
    reports:
      junit: 'results/xunit.xml'
    expire_in: 1 day
  except:
    - master

test-unit:
  <<: *test_commit
  script:
    - python3 -m pytest --junitxml=results/xunit.xml test_unit/
Only one thing fixes the issue (more of a workaround): adding this to the .toml:
volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
But after that I'm losing the DIND possibility of running my gitlab-ci.yml with a different image for the test stage (without resorting to - docker run MY_IMAGE python3 ... under script:).
Which is not what I want.
gitlab-runner under Ubuntu 20 / Docker version 20.10.5, build 55c4c88.
I worked with a very similar gitlab-ci.yml around a year ago and there was no issue with the Docker executor.
Any ideas/suggestions?
I was able to fix the issue by changing the flow of my gitlab-ci.yml:
image: docker:19.03.5

services:
  - docker:19.03.5-dind

stages:
  - build
  - test_framework
  - release

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: xxxxx
  CONTAINER_RELEASE_IMAGE: xxxxx

build:
  stage: build
  tags:
    - adm-docker
  before_script:
    - docker info
    - docker login -u xxxxx -p $CI_JOB_TOKEN xxxxx
  script:
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build -t $CONTAINER_TEST_IMAGE --cache-from $CONTAINER_RELEASE_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  timeout: 1 hours

.test_commit: &test_commit
  stage: test_framework
  tags:
    - adm-docker
  timeout: 1 hours
  artifacts:
    reports:
      junit: 'results/xunit.xml'
    expire_in: 1 day
  except:
    - master

test-unit:
  <<: *test_commit
  image: $CONTAINER_TEST_IMAGE
  script:
    - python3 -m pytest --junitxml=results/xunit.xml test_unit/
and the .toml:
[[runners]]
  name = "docker-runner"
  url = xxxxx
  token = xxxxx
  executor = "docker"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    privileged = true
    image = "docker:19.03.12"
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    network_mode = "host"
The issue was fixed by volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"],
and the issue with the test stage was caused by:
before_script:
  - docker info
  - docker login -u xxxxx -p $CI_JOB_TOKEN xxxxx
at the root level of the .yml file. I had to move it into the build stage.
I hope this will help people in the future.

Preferred way to Build/Test/Deploy docker images in GitLab CI/CD

I am trying to build a CI/CD pipeline in GitLab. The goal is to build a docker image from a Dockerfile, run tests on the running container, push the image to DockerHub, then deploy it to a Kubernetes cluster. This is what I currently have for my gitlab-ci.yml.
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: ${DOCKER_USER}/my_app

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  stage: build
  script:
    - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
    - docker pull ${CONTAINER_IMAGE}:latest || true
    - docker build --cache-from ${CONTAINER_IMAGE}:latest --tag ${CONTAINER_IMAGE}:$CI_COMMIT_SHA --tag ${CONTAINER_IMAGE}:latest .
    - docker push ${CONTAINER_IMAGE}:$CI_COMMIT_SHA
    - docker push ${CONTAINER_IMAGE}:latest

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl get pods -A  # <- Won't work until I pass a Kubeconfig file with cluster details
I have a few main questions:
1. How can I deploy this image? I know I need to pass a KUBECONFIG file to bitnami/kubectl, but I'm not sure how to do that with GitLab CI/CD (one possible approach is sketched just below).
2. Can I pass the built image to a test stage before pushing to DockerHub?
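For the first question, one commonly used approach is a File-type CI/CD variable: GitLab writes the variable's value to a temporary file and exposes the file path in the variable, so kubectl can be pointed at it via the KUBECONFIG environment variable. A sketch only; the variable name KUBE_CONFIG is hypothetical, and it assumes you have pasted a working kubeconfig into that variable:

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    # KUBE_CONFIG is a File-type CI/CD variable; its value is the path to the
    # temporary file GitLab created from the variable's contents.
    - export KUBECONFIG="$KUBE_CONFIG"
    - kubectl get pods -A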
---
stages:
  - test_app
  - build
  - test
  - deploy

test app:
  stage: test_app
  image: node:latest
  script:
    - git clone (path to code)
    - npm install
    - npm run lint
    - npm audit fix
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: anchore:latest   # one you have built yourself, or use another testing suite
  script:
    - anchore-cli image add user/image:v1
    - anchore-cli image wait user/image:v1
    - anchore-cli image content user/image:v1
    - anchore-cli image vuln user/image:v1 all
    - anchore-cli evaluate check user/image:v1 > result.txt
    - if [ $(grep -ci "fail" result.txt) -ge 1 ]; then exit 1; fi
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  image:
    name: kubectl:latest   # build your own image that has kubectl installed
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build image
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
1. Have variables passed in for the cluster address, cert data, and token, so you can target other clusters: pre-prod, prod, QA...
2. You can't test an image that isn't in a registry, as the testing suite needs to pull the image from somewhere. You should have a cleanup script running to remove old images from your registry anyway, so the initial push should go to a test location,
like: docker push untrusted/image:v1
You should also have before and after scripts: before_script calls docker login,
after_script calls docker logout.
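A minimal sketch of that login/logout wrapping, assuming the same DOCKER_USER / DOCKER_PASSWORD variables used in the question's pipeline:

before_script:
  - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
after_script:
  - docker logout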
I do not have an answer for deploying to Kubernetes, but I do recommend publishing a test/construction image to Docker Hub when working on a merge request/development branch that builds the image. Then only deploy the latest image when you merge that branch to master.
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:test .
    - docker push your_image:test
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
    - export BRANCH=${CI_COMMIT_TITLE#*\'}; export BRANCH=${BRANCH%\' into*}
    - docker delete your_image:$BRANCH
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'

Kubernetes deployment.extensions not found

I get the following error message in my GitLab CI pipeline and I can't do anything with it. Yesterday the pipeline still worked, but I didn't change anything in the yml and I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you are invoking the command:
kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?
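A minimal way to check this from the same CI job (a sketch only; it assumes the gcloud/kubectl context already set up in the k8s-deploy script above), added just before the kubectl set image line:

# List deployments in the current namespace before attempting `kubectl set image`;
# if ft-backend is missing here, the NotFound error above is expected.
- kubectl get deployments
- kubectl get deployment ft-backend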
Use this command to create the deployment first, since kubectl set image cannot update a deployment that does not exist (the older way of creating deployments is not supported in newer versions):
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
