Run unit tests and make the unit test report available within Jenkins

I'm a newbie to Jenkins.
I'm experimenting with running unit tests (not even sure if the step to run them is correct),
and I also need to make a report available within Jenkins (any suggestions on how I can make this possible?).
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the application...'
            }
        }
        stage('Unit test') {
            steps {
                sh 'npm run test'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying the application...'
            }
        }
    }
}

Use, for example, the xunit tool (it can be invoked in post actions or simply as a step) to record a report from the tests.
Here is the documentation:
https://plugins.jenkins.io/xunit/
To create a report with the npm run test command, you have to define it in the package.json of the project, or wherever the command is defined.
Remember also to add script { after steps { in the stage where you call the shell script.
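For illustration, a minimal sketch of the 'Unit test' stage with report recording added, assuming the test script defined in package.json writes a JUnit-style junit.xml file (the file name and reporter setup are assumptions, not part of the original question) and that the JUnit or xunit plugin is installed:
stage('Unit test') {
    steps {
        sh 'npm run test'
    }
    post {
        always {
            // record the report written by the test run; 'junit.xml' is an assumed path
            junit 'junit.xml'
            // or, with the xunit plugin (exact step syntax depends on the plugin version):
            // xunit tools: [JUnit(pattern: 'junit.xml')]
        }
    }
}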

Related

Jenkins Pipeline with Dockerfile configuration

I am struggling to get the right configuration for my Jenkins Pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins Pipeline with separated test & build stages
Test stage requires chromium (I currently use node alpine image + adding chromium)
Build stage is building a docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see "Build & Test DEV" is a single stage;
I would like to have 2 separate Jenkins stages (Test, Build).
I already tried using the Jenkins docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing chromium package there.
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            // add chromium package here?
            // set CHROME_BIN env?
        }
    }
I also thought about using a docker image that already includes chromium, but I couldn't find any official images.
I would really appreciate your help / insights on how to make this work.
You can either build your customized image (which includes the installation of Chromium), push it to a registry and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins with your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
Reference: Using Docker with Pipeline
In general, though, I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile. Your code would then look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the Warnings NG Jenkins plugin.
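As for the "set CHROME_BIN env?" comment in the question, here is a sketch of how the same pipeline could export it through a declarative environment block; the path is taken from the original Dockerfile and is an assumption about where the alpine chromium package installs the browser:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root'
        }
    }
    environment {
        // same path the original Dockerfile set; adjust if the package installs elsewhere
        CHROME_BIN = '/usr/bin/chromium-browser'
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}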

How to run a stage only when the build is launched manually

My team owns a non-regression testing project. In this project there is both code and non-regression tests. Like any classic project, we want to analyse our code with a linter and other tools, but we don't want to run our tests on every branch for each commit; they last hours.
We want to launch these tests manually.
To run the tests exclusively on master we have this in our Jenkinsfile:
stage("Test") {
when {branch "master"}
steps {
sh 'pipenv run pytest -n5 --dist=loadscope --junitxml report.xml |
}
post {
always {
junit 'report.xml'
}
}
}
But once we merge our branch into master, a build on master is triggered and the tests are launched.
To avoid this, I think I have to play with the triggeredBy parameter of the when block: https://jenkins.io/doc/book/pipeline/syntax/
But I cannot find which triggeredBy value maps to a manual launch event (the event that is sent when we click on the run button in the Jenkins interface).
Thanks for your help.
The following code behaves as expected:
stage("Test") {
when {allOf {branch "master"; triggeredBy 'UserIdCause'}}
steps {
sh 'pipenv run pytest -n5 --dist=loadscope --junitxml report.xml '
}
post {
always {
junit 'report.xml'
}
}
}
You can use this:
stage('Test') {
    when {
        expression {
            currentBuild.buildCauses.toString().contains('UserIdCause')
        }
    }
    steps {
        sh 'pipenv run pytest -n5 --dist=loadscope --junitxml report.xml'
    }
}
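If you want both restrictions at once, the two answers can be combined into something like the following sketch (assuming, as above, that a manual launch shows up as a UserIdCause among the build causes):
stage('Test') {
    when {
        allOf {
            branch 'master'
            expression {
                // UserIdCause is recorded when a user starts the build from the Jenkins UI
                currentBuild.buildCauses.toString().contains('UserIdCause')
            }
        }
    }
    steps {
        sh 'pipenv run pytest -n5 --dist=loadscope --junitxml report.xml'
    }
}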

Jenkinsfile: publish test results even if test step fails

In my Jenkinsfile I have a Test stage where I run an npm test command step as well as a junit step to archive test results.
stage('Test') {
    steps {
        sh 'npm run test-ci'
        junit 'test-results.xml'
    }
}
How can I use try/finally correctly to run the junit step even if the sh 'npm run test-ci' step fails?
You want to use the post section: https://jenkins.io/doc/book/pipeline/syntax/#post.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'npm run test-ci'
            }
        }
    }
    post {
        always {
            junit 'test-results.xml'
        }
    }
}
Also have a look at this blog post; it explains it further: https://jenkins.io/blog/2017/02/10/declarative-html-publisher/
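If you are on a scripted pipeline rather than a declarative one, the try/finally you mention works directly; a minimal sketch using the same commands:
node {
    try {
        sh 'npm run test-ci'
    } finally {
        // runs whether the test step succeeded or failed
        junit 'test-results.xml'
    }
}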

Pipeline step having trouble resolving a file path

I am having trouble getting a shell command to complete in a stage I have defined:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                sh '''
                    npm install
                    sh 'protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091'
                '''
            }
        }
    }
}
The shell command issues a protractor call which takes a config file argument, but this file fails to be found when protractor tries to retrieve it.
If I take a look at the workspace directory where the repo is checked out to by the checkout scm step, I can see the test directory is present, with the config file the sh step is referencing.
So I'm unsure why the file cannot be found.
I thought about trying to verify which files can be seen around the time the protractor command is being issued.
So something like:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                def files = findFiles(glob: 'test/**/*.conf.js')
                sh '''
                    npm install
                    sh 'protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091'
                '''
                echo """${files[0].name} ${files[0].path} ${files[0].directory} ${files[0].length} ${files[0].lastModified}"""
            }
        }
    }
}
But this doesn't work; I don't think findFiles can be used inside a step?
Can anyone offer any suggestions about what may be going on here?
Thanks
To do the debugging you were attempting (to see if the file is actually there), you could wrap the findFiles in a script block (making sure your echo is before the step that fails), or use a basic find in an "sh" step like this:
stages {
    stage('E2E Tests') {
        steps {
            node('Protractor') {
                checkout scm
                // you could use the unix find command instead of groovy's findFiles
                sh 'find test -name *.conf.js'
                // if you're using a non-dsl-step (like findFiles), you must wrap it in a script
                script {
                    def files = findFiles(glob: 'test/**/*.conf.js')
                    echo """${files[0].name} ${files[0].path} ${files[0].directory} ${files[0].length} ${files[0].lastModified}"""
                    sh '''
                        npm install
                        sh 'protractor test/protractor.conf.js --params.underTestUrl http://192.168.132.30:8091'
                    '''
                }
            }
        }
    }
}

Jenkins Pipeline: How to archive artifacts when the build fails?

When our browser-based tests fail, we take a screenshot of the browser window to better illustrate the problem. However, I don't understand how to archive the screenshots in my pipeline, because the pipeline stops after the failure. The same goes for the junit.xml; I'd also like to use it in error cases.
I've checked that the screenshots are generated and stored correctly.
My definition looks like this (irrelevant things mostly trimmed):
node {
    stage('Build docker container') {
        checkout([$class: 'GitSCM', ...])
        sh "docker build -t webapp ."
    }
    stage('test build') {
        sh "mkdir -p rspec screenshots"
        sh "docker run -v /var/jenkins_home/workspace/webapp/rspec/junit.xml:/myapp/junit.xml -v /var/jenkins_home/workspace/webapp/screenshots:/myapp/tmp/capybara -v webapp bundle exec rspec"
    }
    stage('Results') {
        junit 'rspec/junit*.xml'
        archive 'screenshots/*'
    }
}
You can use a plain try/catch to avoid the pipeline failing on a test failure, or the Jenkins catchError step, like this:
node {
    catchError {
        // Tests that might fail...
    }
    // Archive your test artifacts
}
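A filled-in sketch of that pattern using the paths from your Results stage; catchError marks the build as failed but lets the remaining steps run, and archiveArtifacts is the current name of the archive step (allowEmptyArchive is optional):
node {
    stage('test build') {
        catchError {
            // the rspec run that might fail (docker run command shortened here)
            sh "docker run ... webapp bundle exec rspec"
        }
    }
    stage('Results') {
        // these steps still run because catchError caught the failure above
        junit 'rspec/junit*.xml'
        archiveArtifacts artifacts: 'screenshots/*', allowEmptyArchive: true
    }
}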
From here, you can use the post section in your pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            ...
        }
        stage('Test') {
            ...
        }
    }
    post {
        always {
            archive 'build/libs/**/*.jar'
        }
    }
}
