Jenkins parameterization issue using Cucumber - jenkins

I'm trying to find the right syntax for an instruction that runs a Docker image, maps a volume, and calls tests written in Cucumber with JUnit output.
When I set the following instruction in an "Execute shell" command in the job configuration and don't map any volume, the tests run:
docker run docker-registry.dev.xoom.com/agrimaldi/jasper:${VERSION} cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output
The problem is, I need a volume in order to read the output of the tests, so I tried the following line:
docker run --rm -v /var/lib/jenkins/jobs/qacolombia/workspace/default/features/output:/opt/xbp_stamp_jasper/features/output docker-registry.dev.xoom.com/agrimaldi/jasper:${VERSION} cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output
But Jenkins doesn't seem to recognize the "#" symbol. I've tried single quotes in several positions, for example '#co' or 'cucumber -t #co -f junit -o /opt/xbp_stamp_jasper/features/output', as well as backslashes and double quotes, and Jenkins doesn't recognize the whole instruction. Could you please suggest a way of passing these parameters?
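For reference, a minimal sketch of what the shell does with an unquoted # (plain echo standing in for the docker command; the behaviour below is standard sh, which Jenkins "Execute shell" steps use):

```shell
# '#' preceded by unquoted whitespace starts a shell comment, so everything
# from '#co' onward is dropped before docker ever sees it:
echo run cucumber -t #co -f junit
# prints: run cucumber -t

# Single quotes (or a backslash: \#co) stop the comment from starting:
echo run cucumber -t '#co' -f junit
# prints: run cucumber -t #co -f junit
```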
Any help is highly appreciated.

Related

expected 2 keywords got 4 in robot framework

How can this problem be solved? For example, the Execute special command on keyword accepts 2 arguments, but I want to make it accept more. I want these two commands to run together so that I can replace the YAML file of a Docker image in one go. I have tried putting the other commands in brackets, but it still didn't work.
Execute special command on ${cluster} kubectl exec -n ${namespace} get statefulsets/postgresql-pod -o yaml | sed "s#image: docker repo /stolon#image: bbbdocker repo/stolon#"
Execute special command on ${cluster} kubectl -n ${namespace} replace -f -

Run Test Coverage inside Docker container for Pyspark test cases

I have a PySpark project with a few unit test files:
test_testOne.py
test_testcaseTwo.py
These test classes are executed inside a Docker container. While running the tests inside the container, I also want to get the test coverage reports, so I added the following line to my requirements.txt file:
coverage==6.0.2
And inside the Docker container I run the following command:
python -m coverage discover -s path/to/test/files
I am getting the following output
/opt/conda/bin/python: No module named coverage
Can anybody help me run my tests with test coverage? Please note that all test cases run successfully inside the container with the following command, but it doesn't generate any test coverage:
python -m unittest discover -s path/to/test/files
If you are using coverage, the command:
python -m unittest discover -s path/to/test/files
Becomes:
coverage run -m unittest discover -s path/to/test/files
As specified in the documentation: Quick Start
Since you are using Docker, a good option is to create a volume inside the Docker container and, when the tests are finished, have coverage generate a report and store it on your host machine. That way you can automate the whole process and save the reports.
Create a volume using the -v flag when you start the Docker container (more info: Use Volumes)
After the tests, run coverage html -d /path/to/your/volume/inside/docker (see the documentation for more options: coverage html)

Why is docker build not showing any output from commands?

Snippet from my Dockerfile:
FROM node:12.18.0
RUN echo "hello world"
RUN psql --version
When I run docker build . I don't see any output from these two commands even if they are not cached. The documentation says that docker build is verbose by default. Why am I not seeing the output from commands? I used to see them before.
The output while building:
=> [7/18] RUN echo "hello world" 0.9s
The output I am seeing after building finishes:
=> CACHED [6/18] RUN apt-get install postgresql -y 0.0s
=> [7/18] RUN echo "hello world" 6.4s
=> [8/18] RUN psql --version 17.1s
The Dockerfile is created from node:12.18.0 which is based on Debian 9.
Docker version 19.03.13, build 4484c46d9d.
The output you are showing is from buildkit, which is a replacement for the classic build engine that docker ships with. You can adjust output from this with the --progress option:
--progress string Set type of progress output (auto, plain, tty). Use plain to show container output
(default "auto")
Adding --progress=plain will show the output of the run commands that were not loaded from the cache. This can also be done by setting the BUILDKIT_PROGRESS variable:
export BUILDKIT_PROGRESS=plain
If you are debugging a build, and the steps have already been cached, add --no-cache to your build to rerun the steps and redisplay the output:
docker build --progress=plain --no-cache ...
If you don't want to use buildkit, you can revert to the older build engine by exporting DOCKER_BUILDKIT=0 in your shell, e.g.:
DOCKER_BUILDKIT=0 docker build ...
or
export DOCKER_BUILDKIT=0
docker build ...
Just use the --progress=plain flag with your build.
For example:
docker-compose build --progress=plain <container_name>
OR
docker build --progress=plain .
If you don't want to pass this flag every time, you can permanently tell Docker to use it by setting:
export BUILDKIT_PROGRESS=plain
Here is the official documentation shown when you type docker build --help:
--progress string Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
In Docker 20.10 I had to use the --no-cache flag, too; otherwise cached output is not shown:
docker build --progress=plain --no-cache .
As an alternative to specifying the --progress=plain option, you can also permanently disable the "pretty" output by setting this env variable in your shell config:
export BUILDKIT_PROGRESS=plain
Do 2 things
Instead of docker build . use this
docker build . --progress=plain
Add random junk to your RUN command on every build (this tricks Docker into thinking it hasn't seen the command before, so it doesn't use the cached version).
For example, if your command is RUN ls, use RUN ls && echo sdfjskdflsjdf instead (and change sdfjskdflsjdf to something else each time you build).
Why this works
I tried other answers and they all presented problems and imperfections. It's highly frustrating that Docker doesn't have some simple functionality like --verbose=true.
Here's what I ended up using (it's ludicrous but it works).
Suppose you want to see the output of the ls command. This won't print it: docker build .
RUN ls
but this will: docker build --progress=plain
RUN ls
Now try again, and it won't print! That's because Docker caches the unchanged layer, so the trick is to alter the command each time by adding some nonsense to it (&& echo sdfljsdfljksdfljk) and changing the nonsense on each docker build --progress=plain:
# This prints
RUN ls && echo sdfljsdfljksdfljk
# Next time you run it, use a different token
RUN ls && echo sdlfkjsldfkjlskj
So each and every time, I mash the keyboard and come up with a new token. Stupefying. (Note that I tried something like && openssl rand -base64 12 to generate a random string, but Docker sees that the code hasn't changed, so that doesn't work.)
This solution is highly inferior to genuine docker support for printing output to console.
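For the record, a common cache-busting pattern (an assumption on my part, not something the answer above uses) is to declare a build argument and pass a fresh value on each build; any new value invalidates the cache for every instruction after the ARG line, without editing the Dockerfile by hand:

```dockerfile
# Hypothetical Dockerfile fragment: CACHEBUST is a made-up argument name.
ARG CACHEBUST=1
# Every instruction from here on re-runs whenever CACHEBUST's value changes.
RUN echo "cache bust: $CACHEBUST" && ls
```

Then build with docker build --progress=plain --build-arg CACHEBUST=$(date +%s) . so that each invocation supplies a new value.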
If your error looks something like this:
#7 0.584 /bin/sh: 1: /install.sh: not found
it's telling you the error is on line 1: you are running into Windows line endings.
I was using VS Code, and I solved it easily by converting the file from CRLF to LF: just click the CRLF button in the bottom-right corner of the editor and save the file.
Everything should work fine when you build the image now.
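If you're not in an editor, the same fix can be scripted; a minimal sketch (the install.sh name is taken from the error message above):

```shell
# Create a script with Windows (CRLF) line endings to reproduce the symptom,
# then strip the carriage returns so /bin/sh can find and run it.
printf '#!/bin/sh\r\necho ok\r\n' > install.sh
sed -i 's/\r$//' install.sh
```

dos2unix install.sh does the same job where that tool is installed.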

Docker build requires exactly 1 argument

When I run this command on my gitlab pipeline
docker build --build-arg NPM_TOKEN=${NPM_TOKEN} --tag $REGISTRY_IMAGE/web-public:$CI_COMMIT_SHA --tag $REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME packages/web-public
it fails with
build requires exactly 1 argument
It looks to me like I am actually passing one argument, the path; packages/web-public. Flags are not arguments as far as I know.
What am I missing here?
Quote your variables. Something in those variables is expanding to be more than the single arg to the flag.
docker build --build-arg "NPM_TOKEN=${NPM_TOKEN}" --tag "$REGISTRY_IMAGE/web-public:$CI_COMMIT_SHA" --tag "$REGISTRY_IMAGE/web-public:$CI_COMMIT_REF_NAME" packages/web-public
You can also echo that command to see how the variables are expanding, e.g.
echo docker build ...
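To see why quoting matters, here is a small sketch (the space in the value is an assumed example, e.g. a branch name containing one):

```shell
# A variable whose value happens to contain whitespace:
REF_NAME="feature branch"

# Unquoted, the expansion splits into separate words, so docker would see
# an extra positional argument:
set -- --tag web-public:$REF_NAME packages/web-public
echo $#   # 4 words

# Quoted, the tag stays one word and the path is the only positional argument:
set -- --tag "web-public:$REF_NAME" packages/web-public
echo $#   # 3 words
```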
from https://docs.docker.com/engine/reference/commandline/build/
docker build [OPTIONS] PATH | URL | -
It looks like there's something wrong with your PATH. Try using the absolute path or change to the directory containing the Dockerfile and use .
see also: "docker build" requires exactly 1 argument(s)
My issue was that I had a multi-line script entry, e.g.
script:
- >
docker build \
--network host \
-t ${CI_REGISTRY}/kylehqcom/project/image:latest \
....
As soon as I changed it to a single line, everything was OK. So I guess the line breaks got entered after the first line, which meant that the subsequent lines were ignored and the error was returned. Also note that I linted the CI config via the GitLab UI and it was all syntactically correct.
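A sketch of a layout that keeps the multi-line form (assuming the folded scalar > was the culprit): a literal block scalar | preserves newlines, so the trailing backslashes continue the command as written:

```yaml
script:
  - |
    docker build \
      --network host \
      -t "${CI_REGISTRY}/kylehqcom/project/image:latest" \
      .
```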

Code coverage in gitlab CI/CD

I use Docker-in-Docker (dind) to build and test my Python code. I'm confused about how to run coverage in gitlab-ci given the two following options:
1) GitLab has coverage support by itself [here]
2) I follow Python's coverage tutorial and create my own coverage job as follows:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
    - docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
GitLab then throws an error: No data to report.
I guess the coverage report command cannot access/find the .coverage file in the container.
So my question is: what is the elegant way to run coverage in this situation?
Since const's answer has already covered the first part, i.e. how to get the coverage details, I'll address how to get the reports.
This is covered by the GitLab coverage doc.
So your coverage job should be written like this:
coverage:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
The regex is mentioned in mondwan's blog.
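To sanity-check the regex locally (the summary line below is an assumed sample of coverage report output):

```shell
# coverage's report ends with a TOTAL line; the job's coverage regex
# captures the trailing percentage from it.
line='TOTAL                 120     12    90%'
echo "$line" | grep -oE 'TOTAL.+ [0-9]{1,3}%'
# prints the matching line, ending in 90%
```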
Addon
If you add the line below to your README.md file, you will get a nice badge (in the master README.md) that captures your coverage details:
[![coverage report](https://gitlaburl.com/group_name/project_name/badges/master/coverage.svg?job=unittest)](https://gitlaburl.com/group_name/project_name/commits/master)
I guess coverage report command can not access/find .coverage file in the container.
Yes, your assumption is correct. By running:
- docker run $CONTAINER_TEST_IMAGE python -m coverage run tests/tests.py
- docker run $CONTAINER_TEST_IMAGE python -m coverage report -m
you actually start two completely separate containers, one after another.
In order to extract the coverage report, you will have to run the coverage report command after the coverage run command has finished, in the same container, like so (I'm assuming a bash shell here):
- docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
