Container Scanning feature does not work for multiple images - docker

I've successfully set up GitLab's Container Scanning feature for a single Docker image. Now I'd like to scan a second image using the same CI/CD configuration in .gitlab-ci.yml
Problem
It looks like it is not possible to have multiple Container Scanning reports on the Merge Request detail page.
The following screenshot shows the result of both Container Scanning jobs in the configuration below.
We scan two Docker images, both of which have CVEs that should be reported:
iojs:1.6.3-slim (355 vulnerabilities)
golang:1.3 (1139 vulnerabilities)
Expected result
The Container Scanning report would show a total of 1494 vulnerabilities (355 + 1139). Currently it looks like only the results for the golang image are being included.
Relevant parts of the configuration
container_scanning_first_image:
  script:
    - docker pull golang:1.3
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-first-image.json

container_scanning_second_image:
  script:
    - docker pull iojs:1.6.3-slim
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
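One direction I have considered (only a sketch, not something I have verified) is to run both scans in a single job and merge the two scanner outputs into the one report file GitLab reads. This assumes each clair-scanner JSON report contains a top-level "vulnerabilities" array and that jq can be installed in the job image:

container_scanning_both_images:
  script:
    - docker pull golang:1.3
    - docker pull iojs:1.6.3-slim
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r report-golang.json -l clair.log golang:1.3 || true
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r report-iojs.json -l clair.log iojs:1.6.3-slim || true
    # merge the two reports into the single file that GitLab reads
    - apk add -U jq
    - jq -s '{ vulnerabilities: (map(.vulnerabilities // []) | add) }' report-golang.json report-iojs.json > gl-container-scanning-report.json
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report.json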
Full configuration for reference
image: docker:stable

stages:
  - scan

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

container_scanning_first_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull golang:1.3
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    paths:
      - gl-container-scanning-report-first-image.json
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED

container_scanning_second_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull iojs:1.6.3-slim
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    paths:
      - gl-container-scanning-report-second-image.json
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
Question
How should the GitLab Container Scanning feature be configured in order to be able to report the results of two Docker images?

Related

Can we combine jobs into one in the .gitlab-ci.yml file?

This is the CI/CD YAML file I am using:
services:
  - docker:19.03.11-dind

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never

stages:
  - build
  - Publish
  - deploy

cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build_dev:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --build-arg environment_name= development -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - developer

docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --build-arg environment_name= stage -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - stage

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage
But now I want to combine the Publish & deploy stages. I tried that, but it shows an error in the Publish stage:
services:
  - docker:19.03.11-dind

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never

stages:
  - build
  - Publish
  - deploy

cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --build-arg environment_name= development -t $IMAGE_TAG .
    - docker build --build-arg environment_name= stage -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - developer
    - stage

deploy_job:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service.yml -n ${KUBE_NAMESPACE_DEV}
    - kubectl apply -f provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - developer
    - stage
This is the version I am using now, but it shows an error:
$ docker build --build-arg environment_name= development -t $IMAGE_TAG .
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
My problem is that I'm combining the YAML scripts and files of two branches (stage & developer). The jobs differ only in a single line: "--build-arg environment_name=development" for developer and "--build-arg environment_name=stage" for stage, and I split the jobs just because of that one line. So my question is: is there any way to combine the scripts? I've enclosed both the original (split) script and my combined attempt. #Bichon Motive: I want to combine the two Publish (developer and stage) and deploy (developer and stage) jobs into single jobs.
Following my comments, here is my understanding of the problem, what I think is wrong in the solution attempt, my solution and finally the limitations of what you want to achieve.
Problem
The original GitLab CI script builds a Docker image and deploys the associated Kubernetes resources for two environments: stage and development. These are built and deployed by separate jobs, each executed for a dedicated branch (stage and developer respectively). Now, I guess, the two environments live in the same cluster. For some unknown reason, the question asks that pushing to either of the two branches should build and deploy both environments with the same code (which is almost certain to cause problems in the future if the service is not stateless, but let's suppose it is). If this is not the problem to solve, please let me know.
Errors in your solution attempt
As mentioned in the comments, the docker error is raised because of the space in --build-arg environment_name= development, which should be --build-arg environment_name=development.
When you build your Docker images, you give them a build argument that determines which environment they are built for. However, you are using the same tag ${IMAGE_TAG} for both images, so the last image built (the one for stage) would be deployed to both environments, which is not what you want.
Solution
We fix the docker error and use a different image tag for each environment. We also have to sed the version into the manifest for both environments before deployment, so I create temporary copies of the original deployment file.
docker_build:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG_DEV: $CI_REGISTRY_IMAGE:dev-$CI_COMMIT_SHORT_SHA
    IMAGE_TAG_STAGE: $CI_REGISTRY_IMAGE:stage-$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --build-arg environment_name=development -t "${IMAGE_TAG_DEV}" .
    - docker build --build-arg environment_name=stage -t "${IMAGE_TAG_STAGE}" .
    # docker push accepts a single image reference, so push each tag separately
    - docker push "${IMAGE_TAG_DEV}"
    - docker push "${IMAGE_TAG_STAGE}"
  only:
    - developer
    - stage

deploy_job:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - cp provider-service.yml provider-service-dev.yml
    - cp provider-service.yml provider-service-stage.yml
    - sed -i "s/<VERSION>/dev-${CI_COMMIT_SHORT_SHA}/g" provider-service-dev.yml
    - sed -i "s/<VERSION>/stage-${CI_COMMIT_SHORT_SHA}/g" provider-service-stage.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service-dev.yml -n ${KUBE_NAMESPACE_DEV}
    - kubectl apply -f provider-service-stage.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - developer
    - stage
Limitations
Beware that such a workflow (two distinct branches deploying both environments, potentially concurrently) is not really recommended and can lead to versioning problems, especially if your service is not stateless, but also if the branches diverge. So before using this, I would advise making sure that your service is stateless and also, for instance, that merge requests are fast-forward only.

Where is the docker image stored in GitLab CI?

I have built a Docker image successfully and tagged it as testdock:latest ($CI_REGISTRY_IMAGE:latest); the $CI_REGISTRY variable is kept in the GitLab project variables.
I have another stage that scans the testdock image using Trivy:
The process just gets stuck without progress. My guess is that the image cannot be found, or that something is wrong with the Docker environment in GitLab.
Where is the `docker image (testdock)` stored?
This is the command I used for Trivy to scan the testdock image:
$ TRIVY_INSECURE=true trivy --skip-update --output "$CI_PROJECT_DIR/scanning-report.json" $CI_REGISTRY_IMAGE:latest
the yml:
build:
  stage: build
  image: $CI_REGISTRY/devops/docker:latest
  services:
    - $CI_REGISTRY/devops/docker:dind-nx1.0
  #tags:
  #  - docker
  variables:
    # No need to clone the repo, we exclusively work on artifacts. See
    # https://docs.gitlab.com/ee/ci/runners/README.html#git-strategy
    TRIVY_USERNAME: "$CI_REGISTRY_USER"
    TRIVY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    TRIVY_AUTH_URL: "$CI_REGISTRY"
    FULL_IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Tell docker CLI how to talk to Docker daemon.
    DOCKER_HOST: tcp://localhost:2375/
    # Use the overlayfs driver for improved performance.
    DOCKER_DRIVER: overlay2
    # Disable TLS since we're running inside local network.
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build -t $FULL_IMAGE_NAME .
    # - docker push $CI_REGISTRY_IMAGE:latest

security_scan:
  stage: test
  image:
    name: $CI_REGISTRY/devops/trivy/trivy:0.20.1
    entrypoint: [""]
  services:
    - $CI_REGISTRY/devops/docker:dind-nx1.0
  #tags:
  #  - docker
  variables:
    # No need to clone the repo, we exclusively work on artifacts. See
    # https://docs.gitlab.com/ee/ci/runners/README.html#git-strategy
    # GIT_STRATEGY: none
    TRIVY_USERNAME: "$CI_REGISTRY_USER"
    TRIVY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    TRIVY_AUTH_URL: "$CI_REGISTRY"
    FULL_IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    # Tell docker CLI how to talk to Docker daemon.
    DOCKER_HOST: tcp://localhost:2375/
    # Use the overlayfs driver for improved performance.
    DOCKER_DRIVER: overlay2
    # Disable TLS since we're running inside local network.
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - git config --global http.sslVerify false
    - git clone $CI_REPOSITORY_URL
    - echo "the project directory is - $CI_PROJECT_DIR"
    - echo "the CI_REGISTRY_IMAGE variable is - $CI_REGISTRY_IMAGE"
    - echo "the full image name is - $FULL_IMAGE_NAME"
    - ls -la
    - trivy -h | grep cache
    - mkdir -p /root/.cache/trivy/db
    - ls -la
    - cp "eval-trivy-2/trivy-offline.db.tgz" "/root/.cache/trivy/db"
    - cd /root/.cache/trivy/db
    - tar xvf trivy-offline.db.tgz
    - ls -la
  script:
    - trivy --version
    - time trivy image --clear-cache
    # running 1 hr and stopped.
    #- TRIVY_INSECURE=true trivy --skip-update $CI_REGISTRY_IMAGE:latest
    #- TRIVY_INSECURE=true trivy --skip-update -f json -o scanning-report.json $CI_REGISTRY/devops/aquasec/trivy:0.16.0
    - TRIVY_INSECURE=true trivy --skip-update -o "$CI_PROJECT_DIR/scanning-report.json" $FULL_IMAGE_NAME
    #keep loading by using testdock:latest
    #- TRIVY_INSECURE=true trivy --skip-update -o "$CI_PROJECT_DIR/scanning-report.json" testdock:latest
    # - TRIVY_INSECURE=true trivy --skip-update --exit-code 1 --severity CRITICAL $CI_REGISTRY/devops/aquasec/trivy:0.16.0
  artifacts:
    when: always
    reports:
      container_scanning: scanning-report.json
All jobs run isolated from each other. Job A therefore normally does not know what job B produced, unless you explicitly tell the job to pass things on to the next job with the artifacts directive.
In your case you build the image in your job, but if you do not push it, it is just throwaway data and is lost at the next stage. The easiest way is to push it to a Docker registry and use it from there. A common practice is to tag it with the commit SHA instead of latest; this way you can be sure you are always hitting the right image.
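A minimal illustration of that pattern (only a sketch; the job names are placeholders and registry credentials are assumed to be set via the TRIVY_* variables as in the question): the build job pushes the commit-SHA tag, and the scan job then references that exact tag from the registry rather than a local image.

build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

security_scan:
  stage: test
  script:
    # Trivy pulls the pushed image from the registry, so it does not depend
    # on anything left behind in the build job's Docker daemon
    - trivy image -f json -o scanning-report.json $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA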
Final .gitlab-ci.yml, which works well now:
variables:
  # Tell docker CLI how to talk to Docker daemon.
  DOCKER_HOST: tcp://localhost:2375/
  # Use the overlayfs driver for improved performance.
  DOCKER_DRIVER: overlay2
  # Disable TLS since we're running inside local network.
  DOCKER_TLS_CERTDIR: ""

services:
  - $CI_REGISTRY/devops/docker:dind-nx1.0

stages:
  - build
  - test

#include:
  # Trivy integration with GitLab Container Scanning
  # - remote: "https://github.com/aquasecurity/trivy/raw/master/contrib/Trivy.gitlab-ci.yml"

build:
  image: $CI_REGISTRY/devops/docker:latest
  stage: build
  variables:
    IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  script:
    - docker info
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t $IMAGE .
    - docker tag $IMAGE $CI_REGISTRY/$IMAGE
    - docker push $CI_REGISTRY/$IMAGE

Trivy_container_scanning:
  stage: test
  image:
    name: $CI_REGISTRY/devops/trivy/trivy:0.20.1
  variables:
    # Override the GIT_STRATEGY variable in your `.gitlab-ci.yml` file and set it to `fetch` if you want to provide a `clair-whitelist.yml`
    # file. See https://docs.gitlab.com/ee/user/application_security/container_scanning/index.html#overriding-the-container-scanning-template
    # for details
    GIT_STRATEGY: none
    IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  allow_failure: true
  before_script:
    - trivy image --reset
    - git config --global http.sslVerify false
    - git clone $CI_REPOSITORY_URL
    - echo "the project directory is - $CI_PROJECT_DIR"
    - echo "the registry image is - $CI_REGISTRY_IMAGE"
    - ls -la
    - trivy -h | grep cache
    - mkdir -p /root/.cache/trivy/db
    - ls -la
    - cp "eval-trivy-4/trivy-offline.db.tgz" "/root/.cache/trivy/db"
    - cd /root/.cache/trivy/db
    - tar xvf trivy-offline.db.tgz
    - ls -la
    #- export TRIVY_VERSION=${TRIVY_VERSION:-v0.19.2}
    #- apk add --no-cache curl docker-cli
    #- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    #- curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin ${TRIVY_VERSION}
    #- curl -sSL -o /tmp/trivy-gitlab.tpl https://github.com/aquasecurity/trivy/raw/${TRIVY_VERSION}/contrib/gitlab.tpl
  script:
    - TRIVY_INSECURE=true trivy image --skip-update -f json -o "$CI_PROJECT_DIR/gl-container-scanning-report.json" $CI_REGISTRY/$IMAGE
    #unable to write results: failed to initialize template writer: error retrieving template from path: open /tmp/trivy-gitlab.tpl: no such file or directory
    # - TRIVY_INSECURE=true trivy image --skip-update --format template --template "#/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $CI_REGISTRY/$IMAGE
    #scan error
    #- trivy --skip-update --format template --template "#/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $CI_REGISTRY/$IMAGE
    #- trivy --exit-code 0 --cache-dir .trivycache/ --no-progress --format template --template "#/tmp/trivy-gitlab.tpl" -o gl-container-scanning-report.json $IMAGE
  # cache:
  #   paths:
  #     - .trivycache/
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report.json
Reference, modified for my environment:
https://gitlab.com/aquasecurity/trivy-ci-test/-/blob/master/.gitlab-ci.yml

Gitlab CI job with specific user

I am trying to run a GitLab CI job that uses Anchore Engine to scan a Docker image. A command in the script section fails with a permission denied error; I found out that the command requires root permissions. Sudo is not installed in the Docker image I'm using for the job, and the only user in the container is the non-sudo user anchore.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" = "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not let me create a user. I have also run the container locally on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash, and it ran without any problem. How can I simulate the same in a GitLab CI job?
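One workaround I am considering (untested, only a sketch) is to avoid writing to /usr/local/bin altogether and run the helper script from a location the anchore user can write to, such as /tmp:

script:
  - curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
  - chmod +x /tmp/anchore_ci_tools.py
  # call the script by its path instead of symlinking it into /usr/local/bin
  - /tmp/anchore_ci_tools.py --setup
  - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
  - /tmp/anchore_ci_tools.py --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"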

Find url/ip of container running in docker-compose in gitlab ci

I have an application that runs in docker-compose (for acceptance testing). The acceptance tests work locally, but they require the host (or ip) of the webservice container running in docker-compose in order to send requests to it. This works fine locally, but I cannot find the ip of the container when it is running in a gitlab ci server. I've tried the following few solutions (all of which work when running locally, but none of which work in gitlab ci) to find the url of the container running in docker-compose in gitlab ci server:
use "docker" as the host. This works for an application running in docker, but not docker-compose
use docker-inspect to find the ip of the container (docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' reading-comprehension)
assign a static ip to the container using a network in docker-compose.yml (latest attempt).
The gitlab ci file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/.gitlab-ci.yml
image: connorbutch/gradle-and-java-11:alpha

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: "overlay2"

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

services:
  - docker:stable-dind

stages:
  - build
  - docker_build
  - acceptance_test

unit_test:
  stage: build
  script: ./gradlew check
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle

build:
  stage: build
  script:
    - ./gradlew clean quarkusBuild
    - ./gradlew clean build -Dquarkus.package.type=native -Dquarkus.native.container-build=true
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle
  artifacts:
    paths:
      - reading-comprehension-server-quarkus-impl/build/

docker_build:
  stage: docker_build
  script:
    - cd reading-comprehension-server-quarkus-impl
    - docker build -f infrastructure/Dockerfile -t registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA

acceptance_test:
  stage: acceptance_test
  only:
    - merge_requests
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd reading-comprehension-server-quarkus-impl/infrastructure
    - export IMAGE_TAG=$CI_COMMIT_SHORT_SHA
    - docker-compose up -d & ../../wait-for-it-2.sh
    - cd ../..
    - ./gradlew -DBASE_URL='192.168.0.8' acceptanceTest
  artifacts:
    paths:
      - reading-comprehension/reading-comprehension-server-quarkus-impl/build/
The docker-compose file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/reading-comprehension-server-quarkus-impl/infrastructure/docker-compose.yml
Find the output of one of the failed jobs here:
https://gitlab.com/connorbutch/reading-comprehension/-/jobs/734771859
#This file is NOT ever intended for use in production. Docker-compose is a great tool for running
#database with our application for acceptance testing.
version: '3.3'
networks:
network:
ipam:
driver: default
config:
- subnet: 192.168.0.0/24
services:
db:
image: mysql:5.7.10
container_name: "db"
restart: always
environment:
MYSQL_DATABASE: "rc"
MYSQL_USER: "user"
MYSQL_PASSWORD: "password"
MYSQL_ROOT_PASSWORD: "password"
MYSQL_ROOT_HOST: "%"
networks:
network:
ipv4_address: 192.168.0.4
ports:
- '3306:3306'
expose:
- '3306'
volumes:
- db:/var/lib/mysql
reading-comprehension-ws:
image: "registry.gitlab.com/connorbutch/reading-comprehension:${IMAGE_TAG}"
container_name: "reading-comprehension"
restart: on-failure
environment:
WAIT_HOSTS: "db:3306"
DB_USER: "user"
DB_PASSWORD: "password"
DB_JDBC_URL: "jdbc:mysql://192.168.0.4:3306/rc"
networks:
network:
ipv4_address: 192.168.0.8
ports:
- 8080:8080
expose:
- 8080
volumes:
db:
Does anyone have any idea on how to access the ip of the container running in docker-compose on gitlab ci server? Any suggestions are welcome.
Thanks,
Connor
This is a little bit tricky; just a few days ago I had a similar problem, but with a VPN from CI to a client :)
EDIT: Solution for on-premise GitLab instances
Create a custom network for the GitLab runners:
docker network create --subnet=172.16.0.0/28 \
--opt com.docker.network.bridge.name=gitlab-runners \
--opt com.docker.network.bridge.enable_icc=true \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
--opt com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
--opt com.docker.network.driver.mtu=9001 gitlab-runners
Attach the new network to the runners:
# /etc/gitlab-runner/config.toml
[[runners]]
  ....
  [runners.docker]
    ....
    network_mode = "gitlab-runners"
Restart runners.
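On a typical host-level installation (assuming the runner is installed as a system service), that could be done with:

sudo gitlab-runner restart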
And finally, the gitlab-ci.yml:
start-vpn:
  stage: prepare-deploy
  image: docker:stable
  cache: {}
  variables:
    GIT_STRATEGY: none
  script:
    - >
      docker run -it -d --rm
      --name vpn-branch-$CI_COMMIT_REF_NAME
      --privileged
      --net gitlab-runners
      -e VPNADDR=$VPN_SERVER
      -e VPNUSER=$VPN_USER
      -e VPNPASS=$VPN_PASSWORD
      auchandirect/forticlient || true && sleep 2
    - >
      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
      vpn-branch-$CI_COMMIT_REF_NAME > vpn_container_ip
  artifacts:
    paths:
      - vpn_container_ip
And in the next step you can use something like:
before_script:
  - ip route add 10.230.163.0/24 via $(cat vpn_container_ip) # prod/dev
  - ip route add 10.230.164.0/24 via $(cat vpn_container_ip) # test
EDIT: Solution for gitlab.com
Based on a GitLab issue answer, port mapping in DinD is a bit different from a non-DinD gitlab-runner, and for exposed ports you should use the hostname 'docker'.
Example:
services:
  - docker:stable-dind

variables:
  DOCKER_HOST: "tcp://docker:2375"

stages:
  - test

test env:
  image: tmaier/docker-compose:latest
  stage: test
  script:
    # containous/whoami with exposed port 80:80
    - docker-compose up -d
    - apk --no-cache add curl
    - curl docker:80 # <-------
    - docker-compose down
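Applied to the acceptance test job from the question, this might look like the sketch below: the published port 8080:8080 from docker-compose.yml is reached through the dind service hostname "docker" instead of a compose-assigned IP.

acceptance_test:
  stage: acceptance_test
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd reading-comprehension-server-quarkus-impl/infrastructure
    - export IMAGE_TAG=$CI_COMMIT_SHORT_SHA
    - docker-compose up -d
    - cd ../..
    # the port published as 8080:8080 is reachable via the "docker" hostname
    - ./gradlew -DBASE_URL='docker' acceptanceTest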
I'm using docker and not docker-compose, and the solution above doesn't work for me.
I am using my own image based on node, in which I install docker & buildx like this:
ARG NODE_VER=lts-alpine
FROM node:${NODE_VER}
ARG BUILDX_VERSION=0.5.1
ARG DOCKER_VERSION=20.10.6
ARG BUILDX_ARCH=linux-arm-v7
RUN apk --no-cache add curl
# install docker
RUN curl -SL "https://download.docker.com/linux/static/stable/armhf/docker-${DOCKER_VERSION}.tgz" | \
tar -xz --strip-components 1 --directory /usr/local/bin/
COPY docker/modprobe.sh /usr/local/bin/modprobe
# replace node entrypoint by docker one /!\
COPY docker/docker-entrypoint.sh /usr/local/bin/
ENV DOCKER_TLS_CERTDIR=/certs
RUN mkdir /certs /certs/client && chmod 1777 /certs /certs/client
# download buildx
RUN mkdir -p /usr/lib/docker/cli-plugins \
&& curl -L \
--output /usr/lib/docker/cli-plugins/docker-buildx \
"https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.${BUILDX_ARCH}"
RUN chmod a+x /usr/lib/docker/cli-plugins/docker-buildx
RUN mkdir -p /etc/docker && echo '{"experimental": true}' > /usr/lib/docker/config.json
My gitlab-ci.yml contains:
image: myimageabove

variables:
  DOCKER_DRIVER: overlay2
  PLATFORMS: linux/arm/v7
  IMAGE_NAME: ${CI_PROJECT_NAME}
  TAG: ${CI_COMMIT_BRANCH}-latest
  REGISTRY: registry.gitlab.com
  REGISTRY_ROOT: mygroup
  WEBSOCKETD_VER: 0.4.1
  # DOCKER_GATEWAY_HOST: 172.17.0.1
  DOCKER_GATEWAY_HOST: docker

services:
  - docker:dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" ${REGISTRY}
    - docker buildx create --use
    - docker buildx build --platform $PLATFORMS --tag "${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}" --push .

test:
  stage: test
  variables:
    WSD_DIR: /tmp/websocketd
    WSD_FILE: /tmp/websocketd/websocketd
  cache:
    key: websocketd
    paths:
      - ${WSD_DIR}
  before_script:
    # download websocketd and put in cache if needed
    - if [ ! -f ${WSD_FILE} ]; then
        mkdir -p ${WSD_DIR};
        curl -o ${WSD_FILE}.zip -L "https://github.com/joewalnes/websocketd/releases/download/v${WEBSOCKETD_VER}/websocketd-${WEBSOCKETD_VER}-linux_arm.zip";
        unzip -o ${WSD_FILE}.zip websocketd -d ${WSD_DIR};
        chmod 755 ${WSD_FILE};
      fi;
    - mkdir /home/pier
    - cp -R ./test/resources/* /home/pier
    # get websocketd from cache
    - cp ${WSD_FILE} /home/pier/Admin/websocketd
    # setup envt variables
    - JWT_KEY=$(cat /home/pier/Services/Secrets/WEBSOCKETD_KEY)
    # - DOCKER_GATEWAY_HOST=$(/sbin/ip route|awk '/default/ { print $3 }')
    # - DOCKER_GATEWAY_HOST=$(hostname)
    - ENVT="-e BASE_URL=/ -e JWT_KEY=$JWT_KEY -e WEBSOCKETD_KEY=$JWT_KEY -e WEBSOCKET_URL=ws://${DOCKER_GATEWAY_HOST:-host.docker.internal}:8088 -e SERVICES_DIR=/home/pier/Services"
    - VOLUMES='-v /tmp:/config -v /home/pier/Services:/services -v /etc/wireguard:/etc/wireguard'
  script:
    # start websocketd
    - /home/pier/start.sh &
    # start docker pier admin
    - docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
    # run postman tests
    - newman run https://api.getpostman.com/collections/${POSTMAN_COLLECTION_UID}?apikey=${POSTMAN_API_KEY}

deploy:
  stage: deploy
  script:
    # just push to docker hub
    - docker login -u "$DOCKERHUB_REGISTRY_USER" -p "$DOCKERHUB_REGISTRY_PASSWORD" ${DOCKERHUB}
    - docker buildx build --platform $PLATFORMS --tag "${DOCKERHUB}/mygroup/${IMAGE}:${TAG}" --push .
When I run this, the build job works fine and the test job's before_script works, but when the script starts I get the following trace:
# <= this starts the websocketd server locally on port 8088 =>
$ /home/pier/start.sh &
# <= this starts the image I just built which should connect to the above websocketd server =>
$ docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
# <= trace of the websocketd server start with url ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/ =>
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Serving using application : ./websocket-server.py
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Starting WebSocket server : ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/
# <= trace of the image start saying it tires to conenct to the websocketd server
Websocket connecting to ws://docker:8088 ...
Listen on port 4000
# <= trace with ENOTFOUND on "docker" address =>
websocket connection failed: Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
/pier/storage/websocket-client.js:52
throw err;
^
Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I tried other ways like:
websocket connection failed: Error: getaddrinfo ENOTFOUND host.docker.internal
websocket connection failed: Error: connect ETIMEDOUT 172.17.0.1:8088 # <= same error when trying $(/sbin/ip route|awk '/default/ { print $3 }') =>
websocket connection failed: Error: getaddrinfo ENOTFOUND runner-meuessxe-project-22314059-concurrent-0 # using $(hostname)
I'm out of new ideas...
I would greatly appreciate any help on this one.

Build FAILED but job status is SUCCESS in Gitlab

My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
&& chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest

services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]

before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Everything works fine. But if the tests fail, the job status in GitLab is SUCCESS instead of FAILED.
How do I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it returns a zero exit code:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0
It looks to me like it's reporting failed on the test without necessarily reporting failure on the return value of the docker-compose call. Have you tried capturing the return value of docker-compose when tests fail locally?
In order to get docker-compose to return the exit code from a specific service, try this:
docker-compose up --exit-code-from=service
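Applied to the job above, that could look like the sketch below (note that --exit-code-from implies --abort-on-container-exit, so compose stops all containers once the test service exits and the job inherits its exit code):

test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build --exit-code-from test test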
When GitLab CI runs something, your build fails only if the executed process returns something different from zero.
In your case, you are running docker-compose, and that program returns zero when the containers finish, which is correct behaviour for it.
What you actually want to surface is PHPUnit's failure.
I think it is better to split your build into steps and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test

build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}

test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh
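The key point is that the tests now run directly in the test job, so their exit code is what GitLab sees. As a sketch, execute_your.sh could simply run the suite so that a failing PHPUnit run exits nonzero and fails the job (attaching the database as a GitLab CI service in the test job is not shown here):

#!/bin/bash -e
# with -e, a nonzero PHPUnit exit status aborts the script,
# so the GitLab job is marked FAILED when tests fail
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests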
