Code coverage in GitLab CI/CD - Docker

I use Docker-in-Docker (dind) to build and test my Python code. I am confused about how to run coverage in gitlab-ci; I see the two following options:
1) GitLab has built-in coverage support [here]
2) I follow Python coverage's tutorial and create my own coverage job as follows:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
    - docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
GitLab then fails with the error No data to report.
I guess the coverage report command cannot access/find the .coverage file in the container.
So my question is: what is an elegant way to run coverage in this situation?

Since const's answer has already made the first part easier, i.e. getting the coverage details, I have tried to solve the second part: how to get the reports.
This is covered by the GitLab coverage doc.
So your coverage job should be written like this:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
The regex was mentioned in mondwan's blog.
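For reference, coverage report -m ends with a summary table like the one below (the figures here are purely illustrative); the coverage: regex pulls the percentage from the TOTAL line:

Name             Stmts   Miss  Cover   Missing
----------------------------------------------
tests/tests.py      50      5    90%   12-16
----------------------------------------------
TOTAL               50      5    90%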
Add-on
If you add the line below to your README.md file, you will get a nice badge (in the master README.md) that captures your coverage details:
[![coverage report](https://gitlaburl.com/group_name/project_name/badges/master/coverage.svg?job=unittest)](https://gitlaburl.com/group_name/project_name/commits/master)

I guess the coverage report command cannot access/find the .coverage file in the container.
Yes, your assumption is correct. By running:
- docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
- docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
you actually start two completely separate containers, one after another. The .coverage data file written by the first container is discarded together with that container's filesystem, so the second container has nothing to report on.
In order to extract the coverage report, you have to run the coverage report command after the coverage run command has finished, in the same container, like so (I'm assuming a bash shell here):
- docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
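If you prefer to keep the two commands as separate steps, another option is to snapshot the test container after it exits, since its writable layer still holds the .coverage file. A minimal sketch, using only the variables above (cov and cov-snapshot are arbitrary names of my choosing):

# run the tests in a named container; its writable layer keeps the .coverage file
docker run --name cov $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
# snapshot that layer into a throwaway image and report from it
docker commit cov cov-snapshot
docker run --rm cov-snapshot python -m coverage report -m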

Related

Run Test Coverage inside Docker container for Pyspark test cases

I have a PySpark project with a few unit test files:
test_testOne.py
test_testcaseTwo.py
These test classes are executed inside a Docker container. While running the tests inside the container, I also want to get the test coverage reports, so I added the following line to my requirements.txt file:
coverage==6.0.2
And inside the Docker container I run the following command:
python -m coverage discover -s path/to/test/files
I am getting the following output:
/opt/conda/bin/python: No module named coverage
Can anybody help me run my tests successfully with test coverage? Please note that all test cases run successfully inside the container with the following command, but it does not generate the test coverage:
python -m unittest discover -s path/to/test/files
If you are using coverage, the command:
python -m unittest discover -s path/to/test/files
Becomes:
coverage run -m unittest discover -s path/to/test/files
As specified in the documentation: Quick Start
Since you are using Docker, a good option is to mount a volume into the Docker container; when the tests are finished, coverage can generate a report and store it on your host machine. That way you can automate the whole process and save the reports (a sketch follows the steps below).
Create a volume using the -v flag when you start the Docker container (more info: Use Volumes).
After the tests, run coverage html -d /path/to/your/volume/inside/docker (see the documentation for more options: coverage html).
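Putting the two steps together, a minimal sketch, assuming the image is called my-test-image and has coverage installed (both names are assumptions):

# mount ./reports from the host, run the tests under coverage, then emit an HTML report
docker run -v "$PWD/reports:/reports" my-test-image \
  /bin/bash -c "coverage run -m unittest discover -s path/to/test/files && coverage html -d /reports/htmlcov"

The HTML report then ends up in ./reports/htmlcov on the host.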

Docker build: returned a non-zero code: 1 when a test fails

When I run docker build for my Docker+Selenium+Pytest project in Jenkins CI, builds whose tests all end with SUCCESS status are pushed and their results published to reports, but if at least one test fails, the build fails and the results are not published.
Build error: The command 'pytest test_page.py -s -v --alluredir=reports/allure-results' returned a non-zero code: 1
Maybe my instructions for Docker are incorrectly configured.
My Dockerfile:
FROM python:latest as python3
FROM selenium/standalone-chrome

USER root
WORKDIR /my-project
ADD . /my-project

RUN pip3 install --no-cache-dir --user -r requirements.txt
RUN sudo pip3 install pytest

RUN ["pytest", "test_page.py", "-s", "-v", "--alluredir=reports/allure-results"]
And the shell commands:
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
docker run -d --name $CONTAINER_NAME $IMAGE_NAME
echo "Copy allure-results into Jenkins container"
rm -rf reports; mkdir reports;
docker cp $CONTAINER_NAME:my-project/reports/allure-results reports
It may be that your tests are failing on an assertion, and that failed assertion is what produces the non-zero exit code.
This link outlines the exit codes pytest returns in each scenario (a sketch for handling them follows the list):
Exit code 0: all tests were collected and passed successfully
Exit code 1: tests were collected and run, but some of the tests failed
Exit code 2: test execution was interrupted by the user
Exit code 3: an internal error happened while executing tests
Exit code 4: pytest command line usage error
Exit code 5: no tests were collected
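If your pipeline runs pytest from a shell step rather than a Dockerfile RUN, that table lets you decide which codes should break the build. A hedged sketch (file names reused from the Dockerfile above):

# treat "some tests failed" (exit code 1) as acceptable, abort on anything worse
pytest test_page.py -s -v --alluredir=reports/allure-results
rc=$?
if [ "$rc" -gt 1 ]; then exit "$rc"; fi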
The problem is that when test cases fail, docker build exits with a non-zero code.
One workaround to generate the report even when test cases fail:
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo "Copy allure-results into Jenkins container"
rm -rf reports
docker create -it --name $CONTAINER_NAME $IMAGE_NAME /bin/bash
docker cp $CONTAINER_NAME:my-project/reports/allure-results ./reports
docker rm -f $CONTAINER_NAME
You can put the report-copying part of the Jenkins pipeline in the post stage under an always block, so that whether the build passes or fails you always get the reports.
I found a solution to this issue:
I added exit 0 at the end of the RUN command.
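Note that the exec form RUN ["pytest", ...] in the Dockerfile above does not go through a shell, so appending exit 0 means switching to the shell form. A sketch of what the adjusted line presumably looks like:

# shell form so that "; exit 0" is interpreted; this masks pytest failures, so the image always builds
RUN pytest test_page.py -s -v --alluredir=reports/allure-results; exit 0

Since the build now always succeeds, the copied Allure report is the only place test failures still show up.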

Install wasmtime on gitlab CI docker image

I need a wasm runtime to unit test my code on GitLab, so I have the following in my .gitlab-ci.yml:
default:
  image: emscripten/emsdk
  before_script:
    - curl https://wasmtime.dev/install.sh -sSf | bash
    - source /root/.bashrc
The wasmtime.dev script installs the binaries and updates PATH in ~/.bashrc. Running my tests fails with the message wasmtime: command not found (the test job is specified as below):
unit-test:
  stage: test
  script:
    - bash test.sh
What do I need to do to make sure the changes of the wasmtime install script apply? Thanks!
Edit:
Adding export PATH="$PATH:$HOME/.wasmtime/bin" before bash test.sh in the unit-test job successfully got the wasmtime binary on the PATH, but I'm not quite sure I'm happy with this solution: what if the path of wasmtime changes later on? Shouldn't sourcing .bashrc do this? Thanks!
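A likely reason sourcing .bashrc has no effect here: many images' /root/.bashrc returns early when the shell is non-interactive, so the PATH line that the installer appends is never reached in a CI job. For reference, a minimal sketch of the workaround from the edit above (the install path is the one the asker found):

unit-test:
  stage: test
  script:
    - export PATH="$PATH:$HOME/.wasmtime/bin"  # where install.sh places the binary
    - bash test.sh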

Newman report generation works locally but not from CI

I have a GitLab CI job running a series of Postman requests using a custom environment. I'm using Newman to run them, alongside the newman-reporter-htmlextra npm package, to generate a test report.
The job looks like the following:
postman-tests:
  stage: postman-tests
  image:
    name: wojciechzurek/newman-ci
  before_script:
    - cd ci/tests/postman
    - npm install -g newman-reporter-htmlextra
  script:
    - newman run Non-regression_tests.postman_collection.json -e Tests.postman_environment.json --reporters htmlextra --reporter-htmlextra-export newman-results.html
    - ls -la  # Check report generation
  artifacts:
    when: always
    paths:
      - newman-results.html
  allow_failure: true
When I run Newman on my Mac (newman 4.5.0), the requests and associated tests run properly and the report is generated. However, in CI the job fails and the report is not generated:
$ newman run Non-regression_tests.postman_collection.json -e Tests.postman_environment.json --reporters htmlextra --reporter-htmlextra-export newman-results.html --color
Uploading artifacts...
WARNING: newman-results.html: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
It seems that the issue may be caused by the test series itself rather than by the report generation, as the job fails even when I don't generate the report.
I tried different runners: Docker with official Newman images, and SSH and shell runners on machines where I had installed Newman (version 4.5.6) and the htmlextra reporter beforehand. All fail.
Interestingly, the test series and the report generation both succeed when run locally on the machines behind the SSH and shell runners, but fail when launched from GitLab CI.
What did I forget or do wrong that prevents the test report generation from GitLab CI?
My .yml for testing looks like this. It's very basic, but I've just run it again and it ran fine:
stages:
  - test

newman_tests:
  stage: test
  image:
    name: postman/newman_alpine33
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-htmlextra
    - newman run collection.json -e environment.json --reporters cli,htmlextra --reporter-htmlextra-export testReport.html
  artifacts:
    when: always
    paths:
      - testReport.html
One thing that I do have is entrypoint: [""] in the image block; the Newman images set newman itself as the default entrypoint, and clearing it lets GitLab run the job's shell commands inside the container.

Lumen: PHPUnit gives failure but test passes in GitLab CI Runner

This is my first time using testing in a project. I use GitLab CI and a GitLab Runner to perform the tests, but something weird happened: when phpunit executes, the output shows a failure, yet the test result in GitLab is passed. GitLab should show a failed result.
I use Lumen 5.1, and the GitLab Runner uses Docker.
This is my .gitlab-ci.yml file:
image: dragoncapital/comic:1.0.0

stages:
  - test

cache:
  paths:
    - vendor/

before_script:
  - bash .gitlab-ci.sh > /dev/null

test:7.0:
  script:
    - phpunit
This is my .gitlab-ci.sh file:
#!/bin/bash
# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0
set -xe
composer install
cp .env.testing .env
The log and result:
As you can see, the phpunit test fails, but the status in GitLab CI is passed.
Update:
The log output is quite different on my local computer, but the results there are error/fail.
At least I figured out what was wrong with this test: there are two phpunit binaries on the system, and I called the wrong one.
First, I had installed phpunit with apt-get, so phpunit exists as an Ubuntu package.
Second, Laravel/Lumen provides its own phpunit in vendor/bin.
When I just type phpunit in the terminal, it calls the phpunit provided by Ubuntu, which gives unexpected results. Everything is OK when I call vendor/bin/phpunit instead of just phpunit.
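A minimal sketch of the corrected job, pointing the CI script at the project-local binary (assuming composer install has populated vendor/, as in the script above):

test:7.0:
  script:
    - vendor/bin/phpunit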
