I've created one Jenkins job to execute 5 test cases, and it works, except for the build result. The build status comes up as SUCCESS even if test cases fail during execution. For instance, 2 out of 5 test cases failed during my recent run, but the build status was still SUCCESS! Please help me correct this.
Note: I'm using Jenkins 1.617 & ANT 1.9.4 for the integration.
If you are executing JUnit tests in Jenkins, the default status is UNSTABLE for the Jenkins job and SUCCESS for the Maven/Ant build.
If you are using an Ant build, add the attribute failureproperty="test.failed" to the junit task and fail the build on that property. If you want to fail the build as soon as a single JUnit test fails, add the attribute haltonfailure="yes" instead.
If you are using Maven, add the argument -Dmaven.test.failure.ignore=false.
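For the Ant case, the junit task would look something like this (a minimal sketch; the classpath reference, report directory, and test source layout are assumptions):

<junit failureproperty="test.failed" haltonfailure="no">
    <classpath refid="test.classpath"/>        <!-- assumed classpath reference -->
    <formatter type="xml"/>
    <batchtest todir="${test.reports}">        <!-- assumed report directory -->
        <fileset dir="${test.src}" includes="**/*Test.java"/>
    </batchtest>
</junit>
<!-- fail the Ant build, and hence the Jenkins build, if any test failed -->
<fail message="Tests failed" if="test.failed"/>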
After the source code builds successfully, some .jar files are created.
Those jar files have to be put in a specific path before triggering the relevant test cases in Jenkins.
In short: how do I configure/set up the test cases after a successful build and before triggering them in Jenkins?
Have one job to build your code.
Add a post-build step to this job to move the jar files to the correct path (see the sketch below).
After this build job finishes, have it trigger a second job that runs only the tests.
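For example, the post-build shell step of the build job could be as simple as this (both paths are hypothetical; adjust them to your layout):

# copy the freshly built jars to the path the test job expects
cp target/*.jar /opt/myapp/lib/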
I am using the Jenkins Multijob plugin to execute a number of parallel builds in the same build phase, and I want to display the test results in the main multijob project. I selected the post-build action 'Aggregate downstream test results' with both options, 'Automatically aggregate all downstream tests' and 'Include failed builds in results'. But when the jobs complete and I go into the main multijob project, it shows 'no tests' under the 'Latest Test Result' link.
Has anyone else encountered this issue? My downstream 'child' projects that run in parallel are multi-configuration projects.
As a previous poster indicated, this is an open issue in the Jenkins JIRA and does not work. There is a workaround to achieve what you're looking for. You're going to need the Copy Artifact Plugin, and you also need to archive the test result files as artifacts in the jobs that are doing the test runs.
After you have installed this and configured your test run jobs properly, go to your Multijob and, after all your test phases, add a build step "Copy artifacts from another project" for each of the jobs you want the test results from. You can use "Specified by permalink" and use the "Last build" permalink to always retrieve the latest artifacts. Select the artifacts you want to copy (e.g. *.xml), and set your target directory to something like "job1". If you add multiple build steps to copy artifacts from another project, just name the target directories for the copied artifacts similarly: "job2", "job3", etc.
Then select a Post-build action in your Multijob as you would to Publish JUnit test result report (or whatever you prefer) and input **/job*/*.xml (or similar).
This is what I did, and it works just fine. It is a bit manual in the setup, but it works great once it's configured.
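To make the aggregation pattern concrete, after the copy steps the multijob's workspace contains something like the following (file names are illustrative), which is exactly what the **/job*/*.xml pattern matches:

job1/TEST-com.example.FooTest.xml
job2/TEST-com.example.BarTest.xml
job3/TEST-com.example.BazTest.xml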
I have a job that runs unit tests on a project build and then SSHes into a staging server and pulls down from the master branch. Right now I'm using the post-build script, but it runs regardless of pass/fail. I'm trying to use the parameterized build plugin to trigger a new job when the build passes. So far I've created the new job and set it to trigger in the configuration of the original.
The new job is building ok on its own but the original job isn't triggering it. From 'Add post-build action' I've selected 'Trigger parameterized build on other projects' with build triggers:
Projects to build: new_job, Trigger when build is: Stable or unstable but not failed.
Any ideas appreciated! C
If you don't actually need to pass a parameter to the second build, make sure that "Trigger build without parameters" is checked in the parameterized build trigger options.
The "Post build task" allows you to query the console log of the build step, and is executed only when criteria is met.
Jenkins writes BUILD SUCCESSFUL in the console log for every build step that had passed.
In your "Post build task" step, under Log text just put BUILD SUCCESSFUL, and under Script put your linux script/commands.
This way your script/commands will only be executed if the Build Step was successful
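For the staging-deploy scenario in the question, the "Post build task" configuration might look like this (the SSH target and repository path are hypothetical):

Log text: BUILD SUCCESSFUL
Script:   ssh deploy@staging.example.com 'cd /var/www/app && git pull origin master'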
In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only if all branches of that fork succeed (or 'joining' the fork back together).
Jenkins has a "Build after other projects are built" button, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (the Build Flow plugin) became popular a year or so after this question, and it is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage 'build'
...
// deploy
stage 'deploy'
...
// run tests in parallel
stage 'test'
parallel 'functional': {
...
}, 'performance': {
...
}
// promote artifacts
stage 'promote'
...
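For reference, here is a minimal runnable version of the same skeleton using the block-style stage syntax; the echo steps are placeholders standing in for the elided build/deploy/test logic:

node {
    stage('build') {
        echo 'building...'            // placeholder for your build steps
    }
    stage('deploy') {
        echo 'deploying...'           // placeholder for your deploy steps
    }
    stage('test') {
        // run both suites concurrently; the stage finishes when both branches do
        parallel(
            functional:  { echo 'running functional tests...' },
            performance: { echo 'running performance tests...' }
        )
    }
    stage('promote') {
        echo 'promoting artifacts...'
    }
}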
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
In the upstream job (here, build) create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
Archive the build.tag artifact.
Record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
Configure the job to get promoted when the promotion job is successful.
Setting up the downstream jobs
I use 2 parameters, PARENT_JOB_NAME and PARENT_BUILD_NUMBER.
Copy the artifacts from the upstream build job using the Copy Artifact Plugin:
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
Setting up the downstream promotion job
Do the same as above to establish the upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]
// then deploy
build("deploy")
// then your qualifying tests
parallel (
    { build("functional tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) },
    { build("performance tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) }
)
// if nothing failed till now...
build("promotion",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
// knock yourself out...
build("more expensive QA tests",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
good luck.
There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that runs when the downstream jobs are completed, and specify the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file, to ensure the ORIGIN of the fingerprint is properly set.
The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs, they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
Jenkins recently announced first-class support for workflow.
I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.
Answers by jason & dnozay are good enough. But in case someone is looking for an easy way, just use the JobFanIn plugin.
This diamond dependency build pipeline can be configured with the DepBuilder plugin. DepBuilder uses its own domain-specific language, which in this case would look like:
_BUILD {
    // define the maximum duration of the build (4 hours)
    maxDuration: 04:00
}
// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page. If any of the upstream jobs didn't succeed, the build will be automatically aborted. Abort behavior can be tweaked on a per-job basis; for more info, see the DepBuilder documentation.