Build FAILED but job status is SUCCESS in Gitlab - docker

My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
    && chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest
services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]
before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Everything works fine. But if the tests fail, the job status in GitLab is SUCCESS instead of FAILED.
How can I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it exits with code 0 even though the tests failed:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0

It looks to me like it's reporting failed on the test without necessarily reporting failure on the return value of the docker-compose call. Have you tried capturing the return value of docker-compose when tests fail locally?
In order to get docker-compose to return the exit code from a specific service (this also implies --abort-on-container-exit), try:
docker-compose up --exit-code-from test
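The underlying rule can be checked locally without Docker: a CI job's status comes down to the exit code of the last command the script runs. A minimal shell sketch, where run_tests is a hypothetical stand-in for the phpunit run inside the container:

```shell
# Hypothetical stand-in for the failing test runner inside the container.
run_tests() {
  echo "FAILURES! Tests: 1, Failures: 1."
  return 1
}

# Capture the exit code the way CI sees it: non-zero means the job fails.
if run_tests; then
  status=0
else
  status=$?
fi
echo "exit code seen by CI: $status"
```

Without --exit-code-from, docker-compose up itself exits 0, so this status never reaches the CI runner.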

When GitLab CI runs something, the build fails if the executed process returns a non-zero exit code.
In your case you are running docker-compose, and that program returns zero when the containers finish, which is correct from its point of view.
What you actually want to surface is phpunit's failure.
I think it is better to split your build into stages and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test
build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}
test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh

Related

Why Cypress service is failing?

Below is my pipeline, in which I'm trying to get the Cypress job to run tests against the Nginx service (which points to the main app) built at the build stage.
It is based on the official template from https://gitlab.com/cypress-io/cypress-example-docker-gitlab/-/blob/master/.gitlab-ci.yml:
image: docker:stable
services:
  - docker:dind
stages:
  - build
  - test
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules
job:
  stage: build
  script:
    - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose npm
    - docker-compose up -d --build
e2e:
  image: cypress/included:9.1.1
  stage: test
  script:
    - export CYPRESS_VIDEO=false
    - export CYPRESS_baseUrl=http://nginx:80
    - npm i randomstring
    - $(npm bin)/cypress run -t -v $PWD/e2e -w /e2e -e CYPRESS_VIDEO -e CYPRESS_baseUrl --network testdriven_default
    - docker-compose down
Error output:
Cypress encountered an error while parsing the argument config
You passed: if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
The error was: Cannot read properties of undefined (reading 'split')
What is wrong with this set up ?
From @jparkrr on GitHub: https://github.com/cypress-io/cypress-docker-images/issues/300#issuecomment-626324350
I had the same problem. You can specify entrypoint: [""] for the image in .gitlab-ci.yml.
Read more about it here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#overriding-the-entrypoint-of-an-image
In your case:
e2e:
  image:
    name: cypress/included:9.1.1
    entrypoint: [""]

Circle CI - Can't connect to Redis or memcached using Docker Compose, but I can do so on my local machine

I'm developing a Node.js program that connects to both Redis and memcached. I am testing my Node.js program with Jest, and before running the test I run docker-compose up. My Node.js program connects to the Docker Redis and memcached Docker containers fine, and my tests pass fine on my local machine.
However, I want the tests to run on Circle CI so that every time I git push, the CI environment will verify the program is buildable and that tests are passing.
When I try to do the same on Circle CI, it seems that the Docker containers spin up fine, however the tests aren't able to connect to the Redis or memcached servers in the containers, despite it working fine on my local PC.
My config.yml for Circle CI:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - restore_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
          paths:
            - /home/circleci/.npm
      - run:
          name: Ensure Test Parity
          command: |
            chmod +x ./validateTestCases.sh
            ./validateTestCases.sh
      - run:
          name: Run Tests
          command: npm test
My docker-compose.yml:
services:
  redis:
    image: redis
    container_name: redis-container
    ports:
      - 6379:6379
  memcached:
    image: memcached
    container_name: memcached-container
    ports:
      - 11211:11211
My build failing test log in Circle CI:
#!/bin/bash -eo pipefail
npm test
> easy-cache@1.0.0 test
> jest
FAIL memcached/memcached.test.js
● Test suite failed to run
Error: connect ECONNREFUSED 127.0.0.1:11211
FAIL redis/redis.test.js
● Test suite failed to run
Timeout - Async callback was not invoked within the 5000 ms timeout specified by jest.setTimeout.Error: Timeout - Async callback was not invoked within the 5000 ms timeout specified by jest.setTimeout.
at mapper (node_modules/jest-jasmine2/build/queueRunner.js:27:45)
Test Suites: 2 failed, 2 total
Tests: 0 total
Snapshots: 0 total
Time: 36.183 s
Ran all test suites.
npm ERR! code 1
npm ERR! path /home/circleci/project
npm ERR! command failed
npm ERR! command sh -c jest
npm ERR! A complete log of this run can be found in:
npm ERR! /home/circleci/.npm/_logs/2021-02-05T20_29_26_896Z-debug.log
Exited with code exit status 1
CircleCI received exit code 1
Link to my current source code
I am not sure what to try next. I have tried moving the npm test block right after docker-compose up -d but that had no effect.
It turns out that Docker Compose is not required for what I'm trying to do. Instead, you can include multiple Docker images in Circle CI.
Here's my updated Circle CI yaml file, where my tests run successfully (connection to Redis and memcached works like on my local PC using Docker Compose):
version: 2
jobs:
  build:
    docker:
      - image: circleci/node
      - image: redis
      - image: memcached
    steps:
      - checkout
      # - setup_remote_docker
      # - run:
      #     name: Install Docker Compose
      #     command: |
      #       curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
      #       chmod +x ~/docker-compose
      #       sudo mv ~/docker-compose /usr/local/bin/docker-compose
      # - run:
      #     name: Start Container
      #     command: |
      #       docker-compose up -d
      #       docker-compose ps
      - restore_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
          paths:
            - /home/circleci/.npm
      - run:
          name: Ensure Test Parity
          command: |
            chmod +x ./validateTestCases.sh
            ./validateTestCases.sh
      - run:
          name: Run Tests
          command: npm test
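One caveat with side-by-side service containers: tests can race the services during startup. A generic retry loop (the same idea as the wait-for-it.sh scripts used elsewhere on this page) can be sketched in plain shell; probe here is a hypothetical stand-in for a real check such as a TCP connect to port 6379 or a redis-cli ping:

```shell
# Hypothetical readiness probe; a flag file stands in for a real service check.
probe() { [ -e /tmp/service.ready ]; }

# Simulate a service that becomes ready after about one second.
( sleep 1; touch /tmp/service.ready ) &

ready=0
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  if probe; then ready=1; break; fi
  sleep 1
done
wait                       # let the background "service" finish
rm -f /tmp/service.ready
echo "ready=$ready after $attempt attempt(s)"
```

In a real job the loop body would replace probe with the actual connection check and fail the job after the final attempt.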

Gitlab CI job with specific user

I am trying to run a GitLab CI job of anchore engine to scan a docker image. The command in the script section fails with a permission denied error. I found out the command requires root permissions. Sudo is not installed in the docker image I'm using as the gitlab runner, and the container only has the non-sudo user anchore.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" = "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:
  name: anchore/anchore-engine:latest
  entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the docker container on linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I simulate the same in a gitlab-ci job?
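One root-free alternative to the failing ln step would be to install the tool into a user-writable directory and extend PATH instead of symlinking into /usr/local/bin. A minimal sketch (the printf creates a stub standing in for the downloaded anchore_ci_tools.py, which is an assumption for illustration only):

```shell
# Install into a user-writable directory instead of the root-owned
# /usr/local/bin, then put that directory on PATH.
mkdir -p "$HOME/bin"
# Stub standing in for the real downloaded script (hypothetical content).
printf '#!/bin/sh\necho "anchore_ci_tools stub"\n' > "$HOME/bin/anchore_ci_tools"
chmod +x "$HOME/bin/anchore_ci_tools"
export PATH="$HOME/bin:$PATH"

# The tool is now callable by name, with no root needed anywhere.
anchore_ci_tools
```

Whether this suffices depends on whether the rest of the anchore setup also needs root; it only removes the dependency on writing to /usr/local/bin.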

Find url/ip of container running in docker-compose in gitlab ci

I have an application that runs in docker-compose (for acceptance testing). The acceptance tests work locally, but they require the host (or ip) of the webservice container running in docker-compose in order to send requests to it. This works fine locally, but I cannot find the ip of the container when it is running in a gitlab ci server. I've tried the following few solutions (all of which work when running locally, but none of which work in gitlab ci) to find the url of the container running in docker-compose in gitlab ci server:
use "docker" as the host. This works for an application running in docker, but not docker-compose
use docker-inspect to find the ip of the container (docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' reading-comprehension)
assign a static ip to the container using a network in docker-compose.yml (latest attempt).
The gitlab ci file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/.gitlab-ci.yml
image: connorbutch/gradle-and-java-11:alpha
variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: "overlay2"
before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
services:
  - docker:stable-dind
stages:
  - build
  - docker_build
  - acceptance_test
unit_test:
  stage: build
  script: ./gradlew check
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle
build:
  stage: build
  script:
    - ./gradlew clean quarkusBuild
    - ./gradlew clean build -Dquarkus.package.type=native -Dquarkus.native.container-build=true
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle
  artifacts:
    paths:
      - reading-comprehension-server-quarkus-impl/build/
docker_build:
  stage: docker_build
  script:
    - cd reading-comprehension-server-quarkus-impl
    - docker build -f infrastructure/Dockerfile -t registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA
acceptance_test:
  stage: acceptance_test
  only:
    - merge_requests
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd reading-comprehension-server-quarkus-impl/infrastructure
    - export IMAGE_TAG=$CI_COMMIT_SHORT_SHA
    - docker-compose up -d & ../../wait-for-it-2.sh
    - cd ../..
    - ./gradlew -DBASE_URL='192.168.0.8' acceptanceTest
  artifacts:
    paths:
      - reading-comprehension/reading-comprehension-server-quarkus-impl/build/
The docker-compose file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/reading-comprehension-server-quarkus-impl/infrastructure/docker-compose.yml
Find the output of one of the failed jobs here:
https://gitlab.com/connorbutch/reading-comprehension/-/jobs/734771859
#This file is NOT ever intended for use in production. Docker-compose is a great tool for running
#a database with our application for acceptance testing.
version: '3.3'
networks:
  network:
    ipam:
      driver: default
      config:
        - subnet: 192.168.0.0/24
services:
  db:
    image: mysql:5.7.10
    container_name: "db"
    restart: always
    environment:
      MYSQL_DATABASE: "rc"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_ROOT_HOST: "%"
    networks:
      network:
        ipv4_address: 192.168.0.4
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - db:/var/lib/mysql
  reading-comprehension-ws:
    image: "registry.gitlab.com/connorbutch/reading-comprehension:${IMAGE_TAG}"
    container_name: "reading-comprehension"
    restart: on-failure
    environment:
      WAIT_HOSTS: "db:3306"
      DB_USER: "user"
      DB_PASSWORD: "password"
      DB_JDBC_URL: "jdbc:mysql://192.168.0.4:3306/rc"
    networks:
      network:
        ipv4_address: 192.168.0.8
    ports:
      - 8080:8080
    expose:
      - 8080
volumes:
  db:
Does anyone have any idea on how to access the ip of the container running in docker-compose on gitlab ci server? Any suggestions are welcome.
Thanks,
Connor
This is a little bit tricky; just a few days ago I had a similar problem, but with a VPN from CI to a client :)
EDIT: Solution for on-premise gitlab instances
Create a custom network for the gitlab runners:
docker network create --subnet=172.16.0.0/28 \
--opt com.docker.network.bridge.name=gitlab-runners \
--opt com.docker.network.bridge.enable_icc=true \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
--opt com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
--opt com.docker.network.driver.mtu=9001 gitlab-runners
Attach the new network to the gitlab-runners:
# /etc/gitlab-runner/config.toml
[[runners]]
  ....
  [runners.docker]
    ....
    network_mode = "gitlab-runners"
Restart the runners.
And finally gitlab-ci.yml:
start-vpn:
  stage: prepare-deploy
  image: docker:stable
  cache: {}
  variables:
    GIT_STRATEGY: none
  script:
    - >
      docker run -it -d --rm
      --name vpn-branch-$CI_COMMIT_REF_NAME
      --privileged
      --net gitlab-runners
      -e VPNADDR=$VPN_SERVER
      -e VPNUSER=$VPN_USER
      -e VPNPASS=$VPN_PASSWORD
      auchandirect/forticlient || true && sleep 2
    - >
      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
      vpn-branch-$CI_COMMIT_REF_NAME > vpn_container_ip
  artifacts:
    paths:
      - vpn_container_ip
And in the next step you can use something like:
before_script:
  - ip route add 10.230.163.0/24 via $(cat vpn_container_ip) # prod/dev
  - ip route add 10.230.164.0/24 via $(cat vpn_container_ip) # test
EDIT: Solution for gitlab.com
Based on a gitlab issue answer, port mapping in DinD is a bit different from non-DinD gitlab-runner setups, and for exposed ports you should use the hostname 'docker'.
Example:
services:
  - docker:stable-dind
variables:
  DOCKER_HOST: "tcp://docker:2375"
stages:
  - test
test env:
  image: tmaier/docker-compose:latest
  stage: test
  script:
    # containous/whoami with exposed port 80:80
    - docker-compose up -d
    - apk --no-cache add curl
    - curl docker:80 # <-------
    - docker-compose down
I'm using docker and not docker-compose, and the solution above doesn't work for me.
I am using my own image based on node, in which I install docker & buildx like this:
ARG NODE_VER=lts-alpine
FROM node:${NODE_VER}
ARG BUILDX_VERSION=0.5.1
ARG DOCKER_VERSION=20.10.6
ARG BUILDX_ARCH=linux-arm-v7
RUN apk --no-cache add curl
# install docker
RUN curl -SL "https://download.docker.com/linux/static/stable/armhf/docker-${DOCKER_VERSION}.tgz" | \
tar -xz --strip-components 1 --directory /usr/local/bin/
COPY docker/modprobe.sh /usr/local/bin/modprobe
# replace node entrypoint by docker one /!\
COPY docker/docker-entrypoint.sh /usr/local/bin/
ENV DOCKER_TLS_CERTDIR=/certs
RUN mkdir /certs /certs/client && chmod 1777 /certs /certs/client
# download buildx
RUN mkdir -p /usr/lib/docker/cli-plugins \
&& curl -L \
--output /usr/lib/docker/cli-plugins/docker-buildx \
"https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.${BUILDX_ARCH}"
RUN chmod a+x /usr/lib/docker/cli-plugins/docker-buildx
RUN mkdir -p /etc/docker && echo '{"experimental": true}' > /usr/lib/docker/config.json
My gitlab-ci.yml contains:
image: myimageabove
variables:
  DOCKER_DRIVER: overlay2
  PLATFORMS: linux/arm/v7
  IMAGE_NAME: ${CI_PROJECT_NAME}
  TAG: ${CI_COMMIT_BRANCH}-latest
  REGISTRY: registry.gitlab.com
  REGISTRY_ROOT: mygroup
  WEBSOCKETD_VER: 0.4.1
  # DOCKER_GATEWAY_HOST: 172.17.0.1
  DOCKER_GATEWAY_HOST: docker
services:
  - docker:dind
before_script:
  - docker info
build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" ${REGISTRY}
    - docker buildx create --use
    - docker buildx build --platform $PLATFORMS --tag "${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}" --push .
test:
  stage: test
  variables:
    WSD_DIR: /tmp/websocketd
    WSD_FILE: /tmp/websocketd/websocketd
  cache:
    key: websocketd
    paths:
      - ${WSD_DIR}
  before_script:
    # download websocketd and put in cache if needed
    - if [ ! -f ${WSD_FILE} ]; then
        mkdir -p ${WSD_DIR};
        curl -o ${WSD_FILE}.zip -L "https://github.com/joewalnes/websocketd/releases/download/v${WEBSOCKETD_VER}/websocketd-${WEBSOCKETD_VER}-linux_arm.zip";
        unzip -o ${WSD_FILE}.zip websocketd -d ${WSD_DIR};
        chmod 755 ${WSD_FILE};
      fi;
    - mkdir /home/pier
    - cp -R ./test/resources/* /home/pier
    # get websocketd from cache
    - cp ${WSD_FILE} /home/pier/Admin/websocketd
    # setup envt variables
    - JWT_KEY=$(cat /home/pier/Services/Secrets/WEBSOCKETD_KEY)
    # - DOCKER_GATEWAY_HOST=$(/sbin/ip route|awk '/default/ { print $3 }')
    # - DOCKER_GATEWAY_HOST=$(hostname)
    - ENVT="-e BASE_URL=/ -e JWT_KEY=$JWT_KEY -e WEBSOCKETD_KEY=$JWT_KEY -e WEBSOCKET_URL=ws://${DOCKER_GATEWAY_HOST:-host.docker.internal}:8088 -e SERVICES_DIR=/home/pier/Services"
    - VOLUMES='-v /tmp:/config -v /home/pier/Services:/services -v /etc/wireguard:/etc/wireguard'
  script:
    # start websocketd
    - /home/pier/start.sh &
    # start docker pier admin
    - docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
    # run postman tests
    - newman run https://api.getpostman.com/collections/${POSTMAN_COLLECTION_UID}?apikey=${POSTMAN_API_KEY}
deploy:
  stage: deploy
  script:
    # just push to docker hub
    - docker login -u "$DOCKERHUB_REGISTRY_USER" -p "$DOCKERHUB_REGISTRY_PASSWORD" ${DOCKERHUB}
    - docker buildx build --platform $PLATFORMS --tag "${DOCKERHUB}/mygroup/${IMAGE}:${TAG}" --push .
When I run this, the build job works alright and the test job's before_script works, but when the script starts I get the following trace:
# <= this starts the websocketd server locally on port 8088 =>
$ /home/pier/start.sh &
# <= this starts the image I just built which should connect to the above websocketd server =>
$ docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
# <= trace of the websocketd server start with url ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/ =>
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Serving using application : ./websocket-server.py
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Starting WebSocket server : ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/
# <= trace of the image start saying it tires to conenct to the websocketd server
Websocket connecting to ws://docker:8088 ...
Listen on port 4000
# <= trace with ENOTFOUND on "docker" address =>
websocket connection failed: Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
/pier/storage/websocket-client.js:52
throw err;
^
Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I tried other ways like:
websocket connection failed: Error: getaddrinfo ENOTFOUND host.docker.internal
websocket connection failed: Error: connect ETIMEDOUT 172.17.0.1:8088 # <= same error when trying $(/sbin/ip route|awk '/default/ { print $3 }') =>
websocket connection failed: Error: getaddrinfo ENOTFOUND runner-meuessxe-project-22314059-concurrent-0 # using $(hostname)
Out of new ideas...
I would greatly appreciate any help on this one.

Container Scanning feature does not work for multiple images

I've successfully setup the Container Scanning feature from GitLab for a single Docker image. Now I'd like to scan yet another image using the same CI/CD configuration in .gitlab-ci.yml
Problem
It looks like it is not possible to have multiple Container Scanning reports on the Merge Request detail page.
The following screenshot shows the result of both Container Scanning jobs in the configuration below.
We scan two Docker images, which both have CVE's to be reported:
iojs:1.6.3-slim (355 vulnerabilities)
golang:1.3 (1139 vulnerabilities)
Expected result
The Container Scanning report would show a total of 1494 vulnerabilities (355 + 1139). Currently it looks like only the results for the golang image are being included.
Relevant parts of the configuration
container_scanning_first_image:
  script:
    - docker pull golang:1.3
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
container_scanning_second_image:
  script:
    - docker pull iojs:1.6.3-slim
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
Full configuration for reference
image: docker:stable
stages:
  - scan
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
container_scanning_first_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull golang:1.3
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    paths:
      - gl-container-scanning-report-first-image.json
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
container_scanning_second_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull iojs:1.6.3-slim
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    paths:
      - gl-container-scanning-report-second-image.json
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
Question
How should the GitLab Container Scanning feature be configured in order to be able to report the results of two Docker images?
