I'm using this command to run my tests:
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd"
but sometimes I want to run a specific test. How can I do it?
I read that I can use "This project is parameterized" but didn't understand how to use it.
I also saw this - https://plugins.jenkins.io/selected-tests-executor/ - but it's not a good fit for me since it requires an external file.
If you use the maven-surefire-plugin you can simply run
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd -Dtest=com.example.MyJavaTestClass"
or
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd -Dtest=com.example.MyJavaTestClass#myTestMethod"
I suggest adding a parameter for the test class/method to your pipeline definition:
pipeline {
    agent any
    parameters {
        string defaultValue: '', description: 'Test Name', name: 'TEST_NAME', trim: false
    }
    stages {
        stage('run tests') {
            steps {
                script {
                    def optionalParameters = ""
                    // The default is an empty string, so only append -Dtest when a test name was actually supplied.
                    if (params.TEST_NAME) {
                        optionalParameters += " -Dtest=" + params.TEST_NAME
                    }
                    sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd" + optionalParameters
                }
            }
        }
        ...
    }
    ...
}
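To run a single test, you would then start the job with the parameter set, either via "Build with Parameters" in the UI or, for example, from another pipeline with the build step. A sketch; the job name 'my-tests' is a placeholder:

// Hypothetical upstream trigger; 'my-tests' stands for the name of the parameterized job above.
build job: 'my-tests', parameters: [
    string(name: 'TEST_NAME', value: 'com.example.MyJavaTestClass#myTestMethod')
]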
Jenkins isn't really the tool for that. Its typical use case is running all tests. If you really want to run only one test, you should do that in your development environment.
But the simplest thing to do, if you want the results for just one of the tests, is to have Jenkins run all your tests and simply ignore the results for the other tests.
Generally, you should run (quick, cheap) unit tests in your development environment, and submit code to Jenkins for (expensive, slow) integration tests only once the code passes the unit tests (and Jenkins should run the unit tests again, just in case).
I suspect your real question is "how do I debug a failure of an integration test run by Jenkins?". Jenkins is a build and test tool, not a debugging tool, so it is not itself suitable for debugging test failures. But the way you use Jenkins can help with debugging.
Do not use integration tests as a substitute for unit tests.
If your software fails an integration test but no unit tests, then, as usual when debugging a test failure, you should make hypotheses about the kinds of defect in your software that could cause the failure. Then check whether a unit test could be added that would detect that kind of defect. If so, add that unit test.
Ensure that your tests produce useful diagnostic messages on failure. Test assertions should have helpful messages. Tests should have descriptive names.
If an integration test checks a sequence of actions, ensure you also have tests for the individual actions.
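As a rough illustration of "helpful messages and descriptive names", here is a sketch of a plain JUnit 4 test written in Groovy (the class and numbers are invented), where a red entry in the Jenkins test report already says what went wrong:

import static org.junit.Assert.assertEquals
import org.junit.Test

class DiscountTest {
    // Illustrative method under test, not from the original question.
    static double discountedPrice(double price, double rate) {
        return price - price * rate
    }

    @Test
    void tenPercentDiscountReducesPriceByOneTenth() {
        assertEquals('10% discount on 200.0 should yield 180.0',
                180.0d, discountedPrice(200.0d, 0.10d), 0.001d)
    }
}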
I have an application which has unit and integration tests. Inside Jenkins, only the unit tests are being called, and if any of them fail, the build fails as well. The integration tests are not being called because some of them depend on external servers, which may be offline at the moment of a new build, thus making the build fail. Is it possible to run these tests on Jenkins without failing the build? If so, how should I configure it?
Just to make clear, the expected behavior is:
Build App
Run Unit Tests
Build Failure (if any unit tests fail)
Run Integration Tests
Build Success
If you are using a pipeline, you can use a try-catch block:
node {
    stage('Unit') {
        // run unit tests
    }
    stage('Integration') {
        try {
            // run integration tests
        } catch (e) {
            // ignore
        } finally {
            // archive test results
        }
    }
}
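If you would rather the stage show up as unstable instead of silently passing, a variant of the same idea (a sketch, not part of the original answer; the Maven commands are placeholders, and catchError with buildResult/stageResult needs a reasonably recent Pipeline: Basic Steps plugin) would be:

node {
    stage('Unit') {
        sh 'mvn -B test'   // placeholder: however the unit tests are actually run
    }
    stage('Integration') {
        // Mark only this stage UNSTABLE on failure, keep the overall build result SUCCESS.
        catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {
            sh 'mvn -B verify'   // placeholder: however the integration tests are actually run
        }
        // Archive the test results regardless of the outcome; adjust the path to the actual report location.
        junit allowEmptyResults: true, testResults: '**/failsafe-reports/*.xml'
    }
}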
One very simple way is to put "exit 0" at the end of the test steps that rely on external servers.
For example, using a Unix shell script you may write:
#!/bin/bash
# If remote check fails, exit with rc=0
./my_remote_server_check1 || exit 0
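If the check is driven from the pipeline instead, a similar effect can be achieved with the sh step's returnStatus option (a sketch building on the check script above; the integration test command is a placeholder):

stage('Integration') {
    // returnStatus: true makes sh return the exit code instead of failing the step,
    // so an unreachable server does not fail the build.
    def rc = sh(script: './my_remote_server_check1', returnStatus: true)
    if (rc == 0) {
        sh './run_integration_tests.sh'   // placeholder for the actual integration test command
    } else {
        echo "Skipping integration tests: remote server check failed (rc=${rc})"
    }
}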
I am trying to write test cases to validate my Jenkinsfile, but the loadScript function is not working: it expects an extension to be provided and throws a ResourceException for loadScript("Jenkinsfile").
Is there a better way to test the Jenkinsfile?
The problem is that there are not many tools for developing pipelines. Pipeline code is a DSL, and that imposes restrictions.
One interesting approach is to use flags, for example a test flag defined outside the pipeline (in the job). If test=true, the pipeline switches some "production" logic to "test" logic: it selects another agent, uploads artifacts to another repository, runs different commands, and so on.
More recently, the Pipeline Unit Testing Framework (JenkinsPipelineUnit) appeared. It allows you to unit test pipelines and shared libraries before running them for real. It provides a mock execution environment in which real pipeline steps are replaced with mock objects that you can use to check for expected behavior.
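A minimal sketch of such a test (assuming a recent JenkinsPipelineUnit version with declarative pipeline support; the ResourceException from the question usually means the loader cannot resolve the extensionless file name, so the sketch assumes the Jenkinsfile is available to the test under a .groovy name, e.g. as a copy):

import com.lesfurets.jenkins.unit.declarative.DeclarativePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends DeclarativePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Mock (or override) steps the Jenkinsfile uses; here sh just prints the command.
        helper.registerAllowedMethod('sh', [String], { cmd -> println "sh: ${cmd}" })
    }

    @Test
    void pipelineRunsWithoutErrors() {
        runScript('Jenkinsfile.groovy')   // assumed copy of the Jenkinsfile with a .groovy extension
        assertJobStatusSuccess()
        printCallStack()
    }
}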
Useful links:
Jenkins World 2017: JenkinsPipelineUnit: Test your Continuous Delivery Pipeline
Pipeline Development Tools
You can validate your Declarative Pipeline locally thanks to Jenkins' built-in features. This can be done using a Jenkins CLI command or by making an HTTP POST request with appropriate parameters.
The command is the following:
curl -s -X POST -F "jenkinsfile=<YourJenkinsfile" \
https://user:password@jenkins.example.com/pipeline-model-converter/validate
For a practical example follow this guide:
https://pillsfromtheweb.blogspot.com/2020/10/validate-jenkinsfile.html
I want to run a test suite using an automation framework built on Protractor and Jasmine. When my Jenkins job runs, I do not want the job to fail if any test case fails.
Pytest in Python provides the @pytest.mark.xfail feature to mark expected failures, and this does not impact the Jenkins job.
Is there any such feature in Protractor which can mark test cases as expected to fail?
I saw the xit and xdescribe features, but they skip the test case rather than marking it as an expected failure.
Can Jenkins pipeline scripts be tested by running them with groovysh or "groovy scriptname" for validation, without using the Jenkins UI?
For example, for a simple script
pipeline {
    stages {
        stage('test') {
            steps {
                sh '''
                env
                '''
            }
        }
    }
}
running a test like this, depending on the subset of the script used, gives:
No signature of method: * is applicable for argument types
groovysh_evaluate.pipeline()
or for
stage('test') {
    sh '''
    env
    '''
}
reports:
No signature of method: groovysh_evaluate.stages()
or simply
sh '''
env
'''
reports:
No signature of method: groovysh_evaluate.sh()
The question may be: which imports are required, and how can they be installed outside of a Jenkins installation?
Why would anyone want to do this?
To simplify and shorten iteration over test cases, to validate library versions without modifying Jenkins installations, and for other unit and functional test scenarios.
JenkinsPipelineUnit is what you're looking for.
This testing framework lets you write unit tests on the configuration and conditional logic of the pipeline code, by providing a mock execution of the pipeline. You can mock built-in Jenkins commands, job configurations, see the stacktrace of the whole execution and even track regressions.
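For a pipeline like the one in the question, a regression-style test could look roughly like this (a sketch, assuming JUnit 4 and a recent JenkinsPipelineUnit version; the file name simplePipeline.groovy is an assumption):

import static org.junit.Assert.assertTrue

import com.lesfurets.jenkins.unit.declarative.DeclarativePipelineTest
import org.junit.Before
import org.junit.Test

class SimplePipelineTest extends DeclarativePipelineTest {

    @Before
    void setUp() {
        super.setUp()
    }

    @Test
    void runsEnvInShellStep() {
        runScript('simplePipeline.groovy')   // assumed file holding the pipeline from the question
        // Check that the mocked sh step was invoked with the expected command.
        assertTrue(helper.callStack
                .findAll { it.methodName == 'sh' }
                .any { callArgsToString(it).contains('env') })
        printCallStack()
    }
}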
In my Jenkins pipeline I run unit tests in both the debug and the release configuration. Each test configuration generates a separate JUnit XML results file, and the test names are the same in both configurations. Currently I use the following junit command to show test results:
junit allowEmptyResults: true, healthScaleFactor: 0.0, keepLongStdio: true, testResults: 'Test-Dir/Artifacts/test_xml_reports_*/*.xml'
The problem is that in the Jenkins UI the debug and release test results are shown together, and it is not possible to tell whether a failed test came from the debug or the release configuration.
Is it possible to show the debug and release test results separately? If yes, how can I do that?
We run the same integration tests against two different configurations with different DB types. We use Maven and the failsafe plugin, so I take advantage of -Dsurefire.reportNameSuffix to tell the two runs apart.
The following is an example block from our Jenkinsfile:
stage('Integration test MySql') {
    steps {
        timeout(75) {
            sh("mvn -e verify -DskipUnitTests=true -DtestConfigResource=conf/mysql-local.yaml " +
               "-DintegrationForkCount=1 -DdbInitMode=migrations -Dmaven.test.failure.ignore=false " +
               "-Dsurefire.reportNameSuffix=MYSQL")
        }
    }
    post {
        always {
            junit '**/failsafe-reports/*MYSQL.xml'
        }
    }
}
In the report, the integration tests run against MySQL show up with MYSQL appended to their names.
It looks like there is no solution to my question.
As a workaround, I changed the JUnit XML report format and included the build variant name (debug/release) as the package name.