I have a GitLab CI job running a series of Postman requests using a custom environment. I'm using Newman to run them alongside the newman-reporter-htmlextra npm plugin to generate a test report.
The job looks like the following:
postman-tests:
  stage: postman-tests
  image:
    name: wojciechzurek/newman-ci
  before_script:
    - cd ci/tests/postman
    - npm install -g newman-reporter-htmlextra
  script:
    - newman run Non-regression_tests.postman_collection.json -e Tests.postman_environment.json --reporters htmlextra --reporter-htmlextra-export newman-results.html
    - ls -la # Check report generation
  artifacts:
    when: always
    paths:
      - newman-results.html
  allow_failure: true
When I run newman on my Mac (newman 4.5.0), the requests and associated tests run properly and the report is generated. In the CI job, however, the run fails and the report is not generated:
$ newman run Non-regression_tests.postman_collection.json -e Tests.postman_environment.json --reporters htmlextra --reporter-htmlextra-export newman-results.html --color
Uploading artifacts...
WARNING: newman-results.html: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
It seems that the issue may be caused by the test run itself rather than the report generation, as the job fails even when I don't generate the report.
I tried different runners: Docker with the official newman images, and SSH and shell runners on machines where I had installed newman (version 4.5.6) and the htmlextra reporter beforehand. All fail.
Interestingly, the test series and report generation both succeed when run locally on the machines behind the SSH and shell runners, but they fail when launched from GitLab CI.
What did I forget/do wrong that prevents the test report generation from GitLab CI?
My .yml for testing looks like this. It's very basic, but I've just run it again and it ran fine:
stages:
  - test

newman_tests:
  stage: test
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-htmlextra
    - newman run collection.json -e environment.json --reporters cli,htmlextra --reporter-htmlextra-export testReport.html
  artifacts:
    when: always
    paths:
      - testReport.html
One thing that I do have is entrypoint: [""] in the image block.
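One more thing worth checking: artifacts paths are resolved relative to the project root ($CI_PROJECT_DIR), not relative to the directory your before_script cd'd into. Since your job changes into ci/tests/postman before writing the report, the artifact path would need the full prefix (a guess based on your layout):

artifacts:
  when: always
  paths:
    - ci/tests/postman/newman-results.html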
I have this pipeline and I can't figure out why it's running into issues. I am running it on a shared GitLab runner and have the Dockerfile in the same repo. I am getting the "use of closed network connection" error and have been stuck on it for days; I tried Docker versions 18, 19, and 20.
This is to build a custom docker container and deploy the code.
.gitlab-ci.yml
before_script:
- docker --version
#image: ubuntu:18.04 #
#services:
# - docker:18.09.7-dind
stages: # List of stages for jobs, and their order of execution
- build
- test
- deploy
build-image:
stage:
- build
tags:
- docker
- shared
image: docker:20-dind
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
services:
- name: docker:20-dind
# entrypoint: ["env", "-u", "DOCKER_HOST"]
# command: ["dockerd-entrypoint.sh"]
script:
- echo "FROM ubuntu:18.04" > Dockerfile
- docker build .
unit-test-job:
tags:
- docker # This job runs in the test stage.
stage: test # It only starts when the job in the build stage completes successfully.
script:
- echo "Running unit tests... This will take about 60 seconds."
- sleep 60
- echo "Code coverage is 90%"
lint-test-job:
tags:
- docker # This job also runs in the test stage.
stage: test # It can run at the same time as unit-test-job (in parallel).
script:
- echo "Linting code... This will take about 10 seconds."
- sleep 10
- echo "No lint issues found."
deploy-job:
tags:
- docker # This job runs in the deploy stage.
stage: deploy # It only runs when *both* jobs in the test stage complete successfully.
script:
- echo "Deploying application..."
- echo "Application successfully deployed."
Output
Running with gitlab-runner 14.8.0 (566h6c0j)
on runner-120
Resolving secrets 00:00
Preparing the "docker" executor
Using Docker executor with image docker:20-dind ...
Starting service docker:20-dind ...
Pulling docker image docker:20-dind ...
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Waiting for services to be up and running...
*** WARNING: Service runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0 probably didn't start properly.
Health check error:
service "runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-04-25T06:27:22.962117515Z ip: can't find device 'ip_tables'
2022-04-25T06:27:22.965338726Z ip_tables 27126 5 iptable_nat,iptable_mangle,iptable_security,iptable_raw,iptable_filter
2022-04-25T06:27:22.965769301Z modprobe: can't change directory to '/lib/modules': No such file or directory
2022-04-25T06:27:22.984812613Z mount: permission denied (are you root?)
2022-04-25T06:27:22.984847849Z Could not mount /sys/kernel/security.
2022-04-25T06:27:22.984853848Z AppArmor detection and --privileged mode might break.
2022-04-25T06:27:22.984858696Z mount: permission denied (are you root?)
*********
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Preparing environment 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Running on runner-120-concurrent-0 via nikobelly-docker...
Getting source from Git repository 00:01
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/nikobelly/test_pipeline/.git/
Checking out 5d3bgbe5 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
$ docker --version
Docker version 20.10.14, build a224086
$ echo "FROM ubuntu:18.04" > Dockerfile
$ docker build .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": write tcp 172.14.0.4:46336->10.24.125.200:2375: use of closed network connection
Cleaning up project directory and file based variables 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
ERROR: Job failed: exit code 1
So, you're trying to build a Docker image inside a container.
As you've figured out already, you can use DinD (Docker-in-Docker): you're basically (as far as I understand it) running a Docker service (API) in another container (the helper svc-0), which then builds containers on the host itself. And here's the catch: your svc-0 container must run in privileged mode in order to do that.
And afaik, GitLab's shared runners do not run in privileged mode (for obvious reasons).
The error you're getting is the result of your svc-0 helper container failing to start because it doesn't have the required privileges, which in turn causes your docker build command to fail, because it can't talk to the Docker API (your svc-0 container).
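(If you manage the runner yourself, the direct fix is to enable privileged mode in the runner's config.toml; shared runners typically won't allow this. A minimal fragment, assuming the Docker executor:)

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:20-dind"
    privileged = true  # required so the dind service can start its own Docker daemon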
Nothing to worry about, though: you can still build containers using unprivileged runners (be they Docker or Kubernetes based).
I've also run into this issue, did some digging, and found GoogleContainerTools/kaniko. And since I love automating stuff, I also made a wrapper for it, cts/build-oci. It works very nicely with GitLab CI, as it picks up all required values from predefined variables; you can always overwrite them if needed (like the Dockerfile path in this example):
# A simple pipeline example
build_image:
  image: registry.gitplac.si/cts/build-oci:1.0.4
  script: [ "/build.sh" ]
  variables:
    CTS_BUILD_DOCKERFILE: Dockerfile
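If you would rather call kaniko directly instead of the wrapper, a job along the lines of GitLab's documented kaniko example looks like this (a sketch; the executor tag and the destination tag are placeholders to adapt):

build_image:
  image:
    name: gcr.io/kaniko-project/executor:debug  # the :debug tag includes a shell, which GitLab CI needs
    entrypoint: [""]
  script:
    # Authenticate against the GitLab container registry using predefined CI variables
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(echo -n "${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without a Docker daemon, so no privileged mode is required
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"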
There are two levels of authentication:
- runner access to gitlab from .gitlab-ci.yml
- runner access to gitlab from within the container
I always create a Docker directory within each project that holds the Dockerfile plus the SSH certificates needed to access GitLab.
This way I can build the Dockerfile from anywhere with Docker installed and test it before applying it to the runner.
Enclosed is a simple example where some Python scripts push configs to Grafana servers (only the test part is enclosed as an example).
Docker/Dockerfile (the Docker dir also holds gitlab.priv + gitlab.publ, a personal GitLab SSH key pair that is copied into the image):
FROM xxxx.yyyy.zzzz:4567/testtools/python/python:3.10.4
ENV DIR /fido2-grafana
ENV GITREPO git@xxxx.yyyy.zzzz:id-pro/test/fido2-grafana.git
ENV KEY_GEN_PATH /root/.ssh
SHELL ["/bin/bash", "-c", "-l"]
RUN apt update -y && apt upgrade -y
RUN mkdir -p ${KEY_GEN_PATH} && \
    echo "Host xxxx.yyyy.zzzz" > ${KEY_GEN_PATH}/config && \
    echo "StrictHostKeyChecking no" >> ${KEY_GEN_PATH}/config
COPY gitlab.priv ${KEY_GEN_PATH}/id_rsa
COPY gitlab.publ ${KEY_GEN_PATH}/id_rsa.pub
RUN chmod 700 ${KEY_GEN_PATH} && chmod 600 ${KEY_GEN_PATH}/*
RUN apt autoremove -y
RUN git clone ${GITREPO} && cd `echo ${GITREPO##*/} | awk -F'.' '{print $1}'`
RUN cd ${DIR} && pip install -r requirements.txt
WORKDIR ${DIR}
.gitlab-ci.yml:
variables:
  TAG: latest
  JOBNAME: fido2-grafana
  MYPATH: $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$JOBNAME

stages:
  - build
  - deploy

build-execution-container:
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --pull -t $MYPATH:$TAG Docker
    - docker push $MYPATH:$TAG

deploy-boards:
  before_script:
    - echo "Running ${JOBNAME}:${TAG} to deploy boards"
  stage: deploy
  image: ${MYPATH}:${TAG}
  script:
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS health.json' | tee output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS status.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Fido2 BKS Metrics.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Service uptime.json' | tee -a output.log; exit $?"
  artifacts:
    name: "${JOBNAME} report"
    when: always
    paths:
      - output.log
I have the following code in my gitlab yml:
stages:
  - unit_test
  - deploy

Test:
  stage: unit_test
  script:
    - docker run --rm -d --name myimage widgets:0.1 bash -c "tail -f /dev/null"
    - docker exec -w /opt/source-code/tests myimage pwsh -c "dotnet test --test-adapter-path:. --logger:\"junit;LogFilePath=..\TestResults\test-results.xml;MethodFormat=Class;FailureBodyFormat=Verbose\""
    - docker cp myimage:/opt/source-code/TestResults/test-results.xml ./
  artifacts:
    when: always
    paths:
      - ./test-results.xml
    reports:
      junit:
        - ./test-results.xml
  tags:
    - docker-azure

deploy_to_dev:
  stage: deploy
  script:
    - docker exec myimage pwsh -c "./mydeploymentscript.ps1"
  only:
    - master
  tags:
    - docker-azure
What the team wants is for a) unit tests to always run whenever the pipeline is triggered, but b) the actual deployment logic to only trigger if the branch is master.
The pipeline is currently failing when it gets to the deploy stage with the error:
Error: No such container: myimage
I was trying to see whether I could re-use the same container between jobs, since I'm not explicitly doing a "docker stop" on it in the unit-test job. But I guess not.
I know I can repeat all the same commands / do another docker run in the deploy stage, but I'm wondering if there's another way that I just don't know about.
Thank you
I'm not sure I understand your question. If you want to execute your job when you create a merge request, you can use rules like this:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
and the result of your tests will be available in your MR. If your job fails, your pipeline fails and your MR is not merged.
For this part: if your job fails, your pipeline fails too.
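On the container question itself: jobs are not guaranteed to run on the same runner host, and the executor cleans up between jobs, so a container started in the unit_test stage generally no longer exists when the deploy job runs. The simple way around it is to recreate the container in the deploy job (a sketch reusing the widgets:0.1 image and script from the question; the myimage-deploy name is just an example):

deploy_to_dev:
  stage: deploy
  script:
    # Recreate the container: nothing from the unit_test job is guaranteed to survive
    - docker run --rm -d --name myimage-deploy widgets:0.1 bash -c "tail -f /dev/null"
    - docker exec myimage-deploy pwsh -c "./mydeploymentscript.ps1"
    # --rm removes the container once it stops
    - docker stop myimage-deploy
  only:
    - master
  tags:
    - docker-azure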
I am testing a GitLab CI pipeline with gitlab-runner exec. During a script, Boost ran into an error, and it created a log file. I want to view this log file, but I do not know how to.
.gitlab-ci.yml in project directory:
image: alpine

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
I test this on my machine with:
sudo gitlab-runner exec docker build --timeout 3600
The last several lines of the output:
Building Boost.Build engine with toolset ...
Failed to build Boost.Build build engine
Consult 'bootstrap.log' for more details
ERROR: Job failed: exit code 1
FATAL: exit code 1
bootstrap.log is what I would like to view.
Appending - cat bootstrap.log to .gitlab-ci.yml does not output the file contents, because the runner exits before reaching that line. I tried looking through past containers with sudo docker ps -a, but this does not show the one that GitLab Runner used. How can I open bootstrap.log?
You can declare an artifact for the log:
image: alpine

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  artifacts:
    when: on_failure
    paths:
      - include/boost/bootstrap.log
Afterwards, you will be able to download the log file via the web interface.
Note that using when: on_failure will ensure that bootstrap.log will only be collected if the build fails, saving disk space on successful builds.
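If you mainly want the log contents in the job output rather than as a downloadable file, an after_script section is another option, since it runs even when the main script fails (a minimal sketch; note that after_script starts back in the project root):

build:
  script:
    - apk add cmake
    - cd include/boost
    - sh bootstrap.sh
  after_script:
    # Runs even after a failed script; paths are relative to the project root again
    - cat include/boost/bootstrap.log || true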
I used Docker-in-Docker (dind) to build and test my Python code. I'm confused about how to run coverage in gitlab-ci, and I'm torn between the two following options.
1) GitLab has coverage support by itself [here]
2) I follow the Python coverage tutorial and create my own coverage job with the following:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
    - docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
Then GitLab throws an exception: No data to report.
I guess the coverage report command cannot access/find the .coverage file in the container.
So my question is: what is the elegant way to run coverage in this situation?
Since const's answer has already made the first part easier, i.e. getting the coverage details, I have tried to solve how to get the reports.
This is covered by the GitLab coverage docs.
So your coverage job must be written like this:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
The regex is mentioned in mondwan's blog.
Addon
If you add the line below to your README.md file, you will get a nice badge (in the master README.md) that captures your coverage details:
[![coverage report](https://gitlaburl.com/group_name/project_name/badges/master/coverage.svg?job=unittest)](https://gitlaburl.com/group_name/project_name/commits/master)
I guess coverage report command can not access/find .coverage file in the container.
Yes, your assumption is correct. By running:
- docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
- docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
you actually start two completely separate containers, one after another.
In order to extract the coverage report, you will have to run the coverage report command after the coverage run command has finished, in the same container, like so (I'm assuming a bash shell here):
- docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
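If you would rather keep the two commands as separate steps, another option is to let both containers share the coverage data through a named volume, pointing coverage.py at it via its COVERAGE_FILE environment variable (a sketch; covdata is an arbitrary volume name, and the test suite is otherwise assumed to run as in your job):

# Both runs mount the same named volume, so the second sees the .coverage file
- docker run -v covdata:/cov -e COVERAGE_FILE=/cov/.coverage $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
- docker run -v covdata:/cov -e COVERAGE_FILE=/cov/.coverage $CONTAINER_TEST_IMAGE python -m coverage report -m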
This is my first time using testing in my project. I use GitLab CI and GitLab Runner to perform tests. But something weird happened: when phpunit executed, the output was a failure, but the test result in GitLab is passed. GitLab should show a failed result.
I use Lumen 5.1, and GitLab Runner runs on Docker.
This is my .gitlab-ci.yml file
image: dragoncapital/comic:1.0.0

stages:
  - test

cache:
  paths:
    - vendor/

before_script:
  - bash .gitlab-ci.sh > /dev/null

test:7.0:
  script:
    - phpunit
This is my .gitlab-ci.sh file:
#!/bin/bash
# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0
set -xe
composer install
cp .env.testing .env
The log and result:
As you can see, the phpunit tests fail, but the status in GitLab CI is passed.
Update:
The log output is quite different on my local computer, but the results are errors/failures.
At least I figured out what was wrong with this test: there are two phpunit binaries on this system, and I called the wrong one.
First, I installed phpunit using the apt-get command, so phpunit is installed as an Ubuntu package.
Secondly, Laravel/Lumen provides phpunit in vendor/bin.
When I just type phpunit in the terminal, it calls the phpunit provided by Ubuntu, and this gives me unexpected results. But everything is OK when I call vendor/bin/phpunit instead of just phpunit.
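So the fix in .gitlab-ci.yml boils down to one line, calling the project-local binary (based on the test:7.0 job above):

test:7.0:
  script:
    # Use the phpunit that composer installed into vendor/bin, not the Ubuntu package
    - vendor/bin/phpunit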