I have a test1 which is a sh_test and a test2 sh_test that depends on test1 but I can't seem to add test1 as a dependency to test2. Is there any way to only run test2 if test1 completes and is successful?
Split test1 into two parts:
the "build" part: a genrule (or other build rule) that produces the outputs that test2 needs to consume
the "assertion" part: whatever assertions the test is making
Both test rules (test1 and test2) can then depend on that genrule and check the correctness of its outputs.
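A minimal BUILD sketch of that layout (all rule, file, and script names below are made up):

```python
# BUILD -- hypothetical targets illustrating the split
genrule(
    name = "build_part",          # the former "build" half of test1
    srcs = ["input.txt"],
    outs = ["output.txt"],
    cmd = "cp $< $@",             # stand-in for the real build command
)

sh_test(
    name = "test1",
    srcs = ["test1.sh"],          # asserts on output.txt
    data = [":build_part"],
)

sh_test(
    name = "test2",
    srcs = ["test2.sh"],          # consumes the same outputs; no dependency on test1 needed
    data = [":build_part"],
)
```

If the genrule fails, neither test runs, which gives you most of the ordering you were after.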
It seems that I can put commands, such as echo "helloworld", in script or in jobs section of .travis.yml. What is their difference?
They are completely different pieces of functionality defined in .travis.yml:
script: is a build/job phase in which you run the commands for that specific step. [1]
jobs: is a section in which you can define multiple jobs within the .travis.yml file; each job runs as its own build and can define its own script phase. [2]
[1]https://docs.travis-ci.com/user/job-lifecycle/#the-job-lifecycle
[2]https://docs.travis-ci.com/user/build-matrix/#listing-individual-jobs
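As a sketch, a global script phase versus per-job script phases might look like this in .travis.yml (job names and commands are made up):

```yaml
# Global script phase: used by any job that does not override it
script: echo "helloworld"

# Explicit jobs, each with its own script phase
jobs:
  include:
    - name: "lint"
      script: ./run-lint.sh
    - name: "unit tests"
      script: ./run-unit-tests.sh
```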
I'm using this command to run my tests:
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd"
but sometimes I want to run a specific test. How can I do it?
I read that I can use "This project is parameterized" but didn't understand how to use it.
I also saw this - https://plugins.jenkins.io/selected-tests-executor/ but it's not good enough since it requires an external file.
If you use the maven-surefire-plugin you can simply run
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd -Dtest=com.example.MyJavaTestClass"
or
sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd -Dtest=com.example.MyJavaTestClass#myTestMethod"
I suggest adding a parameter for the test class/method to your pipeline definition.
pipeline {
    agent any
    parameters {
        string defaultValue: '', description: 'Test Name', name: 'TEST_NAME', trim: false
    }
    stages {
        stage('run tests') {
            steps {
                script {
                    def optionalParameters = ""
                    // defaultValue is '', so check for a non-empty value rather than null
                    if (params.TEST_NAME) {
                        optionalParameters += " -Dtest=" + params.TEST_NAME
                    }
                    sh "${mvnHome}/bin/mvn clean test -e -Dgroups=categories.dbd" + optionalParameters
                }
            }
        }
        ...
    }
    ...
}
Jenkins isn't really the tool for that. Its typical use case is running all tests. If you really want to run only one test, you should do that in your development environment.
But the simplest thing to do, if you want the results for one of the tests, is to have Jenkins run all your tests and simply ignore the results of the others.
Generally, you should run (quick, cheap) unit tests in your development environment, and submit code to Jenkins for (expensive, slow) integration tests only once the code passes the unit tests (and Jenkins should run the unit tests again, just in case).
I suspect your real question is "how do I debug a failure of an integration test run by Jenkins". Jenkins is a build and test tool, not a debugging tool, so it is not itself suitable for debugging test failures. But the way you use Jenkins can help with debugging.
Do not use integration tests as a substitute for unit tests.
If your software fails an integration test but no unit tests, then, as usual when debugging a test failure, make hypotheses about the kinds of defect in your software that could cause the failure. Then check whether a unit test could be added that would detect that kind of defect. If so, add that unit test.
Ensure that your tests produce useful diagnostic messages on failure. Test assertions should have helpful messages, and tests should have descriptive names.
If an integration test checks a sequence of actions, ensure you also have tests for the individual actions.
We have a multiple jenkins pipeline jobs with steps like:
Build -> unit-tests -> push to artifactory
Build -> unit-tests -> deploy
Build -> unit-tests -> integration tests
etc.
Management wants to unify all that into one big ass pipeline, and currently my team has 2 approaches for how to do it:
a) Create on big ass pipeline job with all the stages inside
The cons of this is that we do not need to deploy or publish to artifactory on every single build, so there would be some if statements inside that skip stages when needed, which will make the build history a total mess, because one build can do a different thing from another (e.g. build #1 publishes binaries, and build #2 runs integration tests). The pro is that we have everything in one workspace and Jenkinsfile.
b) Create a separate job for each unit of task.
Like 'build', 'integration tests', 'publishing' and 'deploying', and then create one orchestrator job that will call smaller jobs in sequence wrapped in stages. Cons of this is that we still have CI spread over different jobs, and artifacts have to be passed in between. Pros, of course, is that we can run them independently if needed, so if you only need unit-tests - you run only unit-tests job, which will also result in normal and meaningful build history.
Could you please point out if you would go with a or b, or otherwise how would you do it instead?
If the reason for unifying them is code repetition, look at shared libraries. Your Build and unit-tests which is common to all pipelines can go into the shared library and you can just call the library code from different pipelines.
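As a sketch of the shared-library approach (the library name, step name, and commands below are all made up), the common stages could live in a global step, e.g. vars/buildAndTest.groovy:

```groovy
// vars/buildAndTest.groovy -- hypothetical shared-library step
def call(Map config = [:]) {
    stage('Build') {
        sh config.get('buildCmd', 'mvn -B clean package')
    }
    stage('Unit tests') {
        sh config.get('testCmd', 'mvn -B test')
    }
}
```

Each pipeline then starts with @Library('my-shared-lib') _ and calls buildAndTest() before its own publish/deploy/integration stages, so the common part is written exactly once.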
We have one "big ass pipeline", spiced up with
stage('Push') {
    when {
        expression { env.PUSH_TO_ARTIFACTORY }
        beforeAgent true
    }
    steps {
        // etc.
    }
}
Regarding history, you can change your build description, so for builds that push you can add a * symbol at the end, e.g.
def is_push = env.PUSH_TO_ARTIFACTORY ? " *" : ""
currentBuild.displayName += "${is_push}"
Having everything in one file means that you don't need to figure out which file to look at as you fix things.
In Travis docs, it states that Build stages is a way to group jobs, and run jobs in each stage in parallel, but run one stage after another sequentially.
I know that all jobs in a stage are run in parallel, but do these tests run on the same instance, i.e. do they share the same env variables?
Say I have 3 tests under a stage.
- stage: 'Tests'
  name: 'Test1'
  script: ./dotest1
- name: 'Test2'
  script: ./dotest2
- name: 'Test3'
  script: ./dotest3
If I set export bleh_credential=$some_credential in test1, does it get carried over to test2? It seems like it shouldn't, as they run in parallel, correct? If that's the case, can I set a stage-wide env variable, or should I set them every time I run a new test?
No, jobs are all run in fresh containers, so nothing in the job process can be shared between them. If you need some persistence between them, Travis requires you to use an external storage system like S3. Read more about it here: https://docs.travis-ci.com/user/build-stages/#data-persistence-between-stages-and-jobs
I would set the env vars for each job, perhaps using YAML anchors for the defaults: https://gist.github.com/bowsersenior/979804
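A sketch of the anchor approach (the variable names are the ones from the question; the anchor name is made up): define the shared env block once and merge it into each job.

```yaml
# Anchor holding the shared job settings
shared_env: &shared_env
  env:
    - bleh_credential=$some_credential

jobs:
  include:
    - stage: 'Tests'
      name: 'Test1'
      script: ./dotest1
      <<: *shared_env
    - name: 'Test2'
      script: ./dotest2
      <<: *shared_env
```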
I have a job A in Jenkins for my automated testing that is triggered if another job B's build is successful. Job A runs several tests. Some of the tests are flaky, so I would like to run them again a few times and give them the chance to pass so my build won't be unstable/failed.
Is there any plugin I can use?
I would suggest fixing your tests or rewriting them so they only fail if something is broken. Maybe you can mock away the things that tend to fail. If you are depending on a database connection, maybe you could use sqlite or something else local.
But there is also a plugin which can retry a build:
https://wiki.jenkins-ci.org/display/JENKINS/Naginator+Plugin
Simply install the plugin, and then check the Post-Build action "Retry build after failure" on your project's configuration page.
If you want to rerun tests in JUnit-context, take a look here: SO: How to Re-run failed JUnit tests immediately?
I don't know of any plugin that reruns just the flaky/failed tests, only the whole build. It should be possible, I just have not found one (and don't have enough time on my hands to write one). Here's what we did on a large Java project where the build was ant based:
The build itself was pretty simple (using xml as formatter inside the junit ant task):
ant clean compile test
The build also accepted a single class name as parameter (using batchtest include section inside the junit ant task):
ant -Dtest.class.pattern=SomeClassName test
At the end of the jenkins job, we used the "Execute shell" build step. The idea was to search for all test results that had errors or failures, figure out the name of the class, then run that particular test class again. The file containing the failure will be overwritten, and the test collector at the end of the build will not see the flaky test failure, during the post build steps.
#!/bin/bash +x
cd "${WORKSPACE}"
for i in $(seq 1 3); do
    echo "Running failed tests $i time(s)"
    # find result files reporting at least one error or failure, keep the file name
    for file in $(find . -path '*/TEST-*.xml' | xargs grep 'errors\|failures' | grep '\(errors\|failures\)="[1-9]' | cut -d ':' -f 1); do
        # TEST-com.example.SomeClassName.xml -> SomeClassName
        class=$(basename "${file}" .xml | rev | cut -d '.' -f 1 | rev)
        ant -Dtest.class.pattern="${class}" test
    done
done
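For reference, the class-name extraction step works like this (the file path below is a made-up example of what the junit ant task's xml formatter produces):

```shell
# Hypothetical result file written by the junit ant task's xml formatter
file='./build/TEST-com.example.SomeClassName.xml'
# strip the directory and the .xml suffix, then keep the last dot-separated component
class=$(basename "${file}" .xml | rev | cut -d '.' -f 1 | rev)
echo "${class}"   # SomeClassName
```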
After getting the build back under control, you definitely need to address the flaky tests. Don't let the green build fool you, there's still work to be done.