Jenkins pipeline: how to detect downstream build status when using parallel()?

I have a pipeline that invokes a couple hundred runs of a downstream job in a parallel block. The downstream job contains a check that may abort the job.
How can I check the build status of the downstream jobs run in the parallel block? Ideally, the parallel step would return the status of each downstream build in a map or something, but as far as I can tell it returns only a single build status. In my case, when the downstream jobs finished as SUCCESS, FAILURE, and ABORTED, the upstream build set its status to ABORTED.
currentBuild.status and currentBuild.currentResult both seem to have the wrong status set, and if I catch the exception thrown from the parallel step, it's just a Hudson control-flow exception that doesn't tell me the status of the downstream builds.
What is the best way to get the correct downstream build status from jobs invoked from the parallel step?
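One way to collect each downstream result individually is to call the `build` step with `propagate: false`, which makes it return a run wrapper instead of throwing on failure. A minimal sketch, assuming a downstream job literally named `downstream` (the job name and branch count are placeholders):

```groovy
// Sketch: collect each downstream build's result in a map.
// propagate: false keeps a FAILURE/ABORTED downstream build from
// aborting the whole parallel step.
def results = [:]
def branches = [:]
for (int i = 0; i < 3; i++) {
    def idx = i  // capture loop variable for the closure
    branches["run-${idx}"] = {
        def b = build(job: 'downstream', propagate: false, wait: true)
        results["run-${idx}"] = b.result  // e.g. SUCCESS, FAILURE, ABORTED
    }
}
parallel branches
echo "Downstream results: ${results}"
```

You can then inspect the map and set `currentBuild.result` yourself based on whatever aggregation policy you want.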

Related

Jenkins - Not to have failed upstream job if downstream job fails

I have a master job which triggers multiple downstream jobs. If some of the downstream jobs fail, I don't want the upstream job to be marked as failed. Is there any condition I could set?
Because of that one failed job, the upstream job gets a failed status, and I want it to be SUCCESS.
OK, if you don't care whether the downstream jobs succeed and you don't want to wait until they finish, then in your master job you can pass propagate: false, wait: false to the build step.
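As a minimal sketch of the suggestion above (the downstream job name is a placeholder):

```groovy
// Sketch: fire-and-forget trigger of a downstream job.
// propagate: false - a downstream failure never marks this job failed.
// wait: false     - don't block waiting for the downstream build to finish.
build(job: 'downstream-job', propagate: false, wait: false)
```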

Jenkins: Mark build as success if the last step succeded even if a previous step was unstable

I've in jenkins a flow like this:
Wrapper Job1
Trigger Job2
(Conditionally) if the job 2 is unstable it triggers the Job3
Below you can see JOB1 (wrapper) configuration pics:
JOB2 trigger configuration :
JOB3 conditional trigger configuration
Now, to give you a little bit of context:
I'm running tests with Selenium and Cucumber. These tests can fail randomly, and when they do, Job2 is marked unstable (otherwise the wrapper just finishes with SUCCESS status). When Job2 is unstable, Job3 is triggered; it is a "RERUN FAILED TESTS" task. So obviously, if Job3 completes with success, I want the wrapper to be marked as SUCCESS.
This should be really easy, but it's not working, below the wrapper (JOB1) jenkins job log:
FIRST STEP (JOB2) UNSTABLE BECAUSE SOME TESTS FAILED:
Waiting for the completion of REM_Parallel_Tag_Sub_Runner
REM_Parallel_Tag_Sub_Runner #9 completed. Result was UNSTABLE
Build step 'Trigger/call builds on other projects' changed build result
to **UNSTABLE**
IF THE JOB2 IS UNSTABLE THE WRAPPER TRIGGER THE JOB 3:
[Current build status] check if current [UNSTABLE] is worse or equals then
[UNSTABLE] and better or equals then [UNSTABLE]
Run condition [Current build status] enabling perform for step [BuilderChain]
Waiting for the completion of REM_Parallel_Sub_ReRuns
JOB3 SUCCEEDED, MEANING THE TESTS THAT WERE FAILING NOW PASS. I WANT THIS STEP TO UPDATE JOB1 FROM UNSTABLE TO SUCCESS; THAT SHOULD BE NORMAL BEHAVIOUR:
REM_Parallel_Sub_ReRuns #6 completed. Result was SUCCESS
[CucumberReportPublisher] Compiling Cucumber Html Reports ...
[CucumberReportPublisher] Copying all json files from: /PATH/workspace /TiaCukes to reports directory: /PATH/cucumber-html-reports
[CucumberReportPublisher] there were no json results found in: /u01/app/build/jenkins/jobs/REM_Parallel_Tag_Runner_Orchestrator/builds/9/cucumber-html-reports
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
No emails were triggered.
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Finished: UNSTABLE
AS YOU CAN SEE THE BUILD STATUS HAS NOT BEEN UPDATED, EVEN IF THE LAST TRIGGERED STEP SUCCEEDED, THE BUILD STATUS REMAINS UNSTABLE
How can I fix it? It shouldn't be such a hard goal to accomplish!
Thanks a lot!
Resolved with the use of variables set by the Parameterized Trigger Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
Pics below:
JOB2 trigger configuration:
JOB3 conditional trigger configuration:
Feel free to ask about details

Identify which downstream job failed and send notification in Jenkins

I have created a wrapper job in Jenkins which will get triggered every hour if there are any new commits in my GIT repository. This wrapper job in turn calls 6 other downstream jobs. So the structure of my wrapper job (W) is like this:
W -> A -> B -> C -> D -> E -> F
I am using Jenkins Parameterized Trigger Plugin to stitch one job to the other so that my upstream jobs fail if the downstream job fails. Upon completion of the last downstream job (F), Wrapper job (W) is copying the artifacts from all the downstream jobs in its current workspace.
Now when one of my downstream jobs (let's say E) fails, I get failure notifications from the failed downstream job (E) as well as from all the other upstream jobs (D, C, B, A, and W). So I get 6 mails in total, which creates some noise.
If I activate the email notification on only the Wrapper job (W), then I get a single failure notification mentioning that Job A has failed. Then I will check Job A's logs only to find out that it was Job B that failed and continue the log checks until I reach Job E.
How can I customize the notification to send a single mail identifying the specific downstream job (in this case E) that caused the failure?
OR
Is there a better way to trigger the downstream jobs, wait for all the downstream jobs to get completed and copy the artifacts from all the downstream jobs to the trigger job?
I wrote a Groovy script in Groovy Postbuild to iterate through all the subprojects of the wrapper job and mark the wrapper job as a failure if any of the subprojects have failed.
I also changed the exit criteria in "Trigger/call builds on other projects" to never mark the job as failure/unstable. Instead, the job is marked as failed by the Groovy script itself, based on the status of the downstream subprojects.
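The approach above could look roughly like this as a Groovy Postbuild script (a sketch only: it assumes the downstream jobs are reachable via getDownstreamProjects() and that checking each project's last build is acceptable for your setup):

```groovy
// Sketch for the Groovy Postbuild plugin: fail the wrapper if any
// downstream subproject's last build was worse than SUCCESS.
def failed = []
manager.build.project.getDownstreamProjects().each { proj ->
    def last = proj.getLastBuild()
    if (last?.result?.isWorseThan(hudson.model.Result.SUCCESS)) {
        failed << proj.name
    }
}
if (failed) {
    manager.listener.logger.println("Failed downstream jobs: ${failed}")
    manager.buildFailure()  // mark the wrapper build as FAILURE
}
```

With the specific failed job names in hand, you can also feed them into a single notification email instead of one per job.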

Downstream Jenkins project gets wrong upstream run parameter

I'm having a problem with a Jenkins build pipeline. All jobs after the first one are parameterized with the "Run Parameter" of the first job. By default, this should reference the most recent stable build of the first job. Each subsequent job uses the "Run Parameter" of the first job to access saved artifacts from the first job. Each subsequent job triggers the next job of the pipeline as a parameterized build and passes the aforementioned "Run Parameter". The first job of the pipeline triggers the second job as a simple (i.e., not parameterized) build.
Here's a screenshot of the relevant configuration of a downstream job:
My problem is that the build number in the "Run Parameter" isn't the build number of the first job of the current pipeline. Instead, it's the build number of the first job of the previous pipeline. Thus, if the first job is on build #11, then all subsequent jobs of that pipeline will access the archive of build #10 of the first job.
How can I get the subsequent jobs of the pipeline to access the archive directory of the first job of the pipeline?
I discovered the answer. Apparently, the reason the downstream job was using the artifacts from the upstream job of the previous pipeline was because I had set the "Run Parameter" filter in the configuration of the downstream job to "Stable Builds Only". Setting this filter to "All Builds" results in correct behavior.
It's as if Jenkins doesn't consider an upstream job to be stable when it's starting another build in its post-build section.
Quote: "By default, this should reference the most recent stable build of the first job."
Do you mean the last successful build of the top job? In that case there might be a situation where the last successful build of the top job was #7 and the current build is #11, so you'd want the downstream jobs to look for #7 and not #10.
If that is the case, then I'd suggest adding a Groovy build step (install the Groovy plugin for that). But before that, test the script.
Open: YourJenkinsServerURL/script
Run this script.
def jenkins = hudson.model.Hudson.instance
def myJobName = "YourTopJobName"
jenkins.getItems(hudson.model.Job).each { job ->
    if (job.displayName == myJobName) {
        println(job.getLastSuccessfulBuild())
    }
}
In Groovy you can set an environment variable (injected via the EnvInject plugin, perhaps) to this last successful build number and then pass that variable on to the downstream job.
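A sketch of that idea as a System Groovy build step (the job name and properties-file name are placeholders, and this assumes the build runs where the workspace path is locally accessible, e.g. on the master):

```groovy
// Sketch: write the top job's last successful build number to a
// properties file that the EnvInject plugin can read afterwards.
def job = hudson.model.Hudson.instance
        .getItemByFullName("YourTopJobName", hudson.model.Job)
def buildNumber = job.getLastSuccessfulBuild().number
// 'build' is exposed to System Groovy scripts by the Groovy plugin.
new File("${build.workspace}/top_build.properties")
        .text = "TOP_BUILD_NUMBER=${buildNumber}"
```

An EnvInject "Inject environment variables" step pointed at top_build.properties then makes TOP_BUILD_NUMBER available to pass to the downstream job.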
If that is not the case, then I'd suggest using an NAnt script: use "int::parse()" to convert the build number string to an integer, decrement the value, and then pass it on to the downstream job.

Why doesn't Jenkins stop processing after a failed build step?

I'm running into an issue where Jenkins is continuing on to subsequent build steps even when the prior build step has failed. This is for setting up a Jenkins free-style job.
The build steps I'm running into an issue with are "Trigger/call builds on other projects" steps. I am checking the option "Block until the triggered projects finish their builds" and setting the parent job to mark its build result the same as the triggered jobs'.
So say I have Job_1, Job_2, and Job_3 scheduled in sequence using the above options. Job_1 passes just fine, then Job_2 fails. In the Jenkins logs it shows Job_2 failing and marking the parent job as failed. However, the parent job still continues on to Job_3 even after marking itself as failed.
Here's an example from the Jenkins console output; notice how Job_2 failed and the build result was changed to FAILURE, but 1 second later Jenkins still kicks off Job_3 even though the build is already marked as FAILURE:
12:34:54 Waiting for the completion of Job_1
12:48:44 Job_1 #7 completed. result was SUCCESS
12:48:44 Build step 'Trigger/call builds on other projects' changed build result to SUCCESS
12:48:45 Waiting for the completion of Job_2
18:18:44 Job_2 #169 completed. result was FAILURE
18:18:44 Build step 'Trigger/call builds on other projects' changed build result to FAILURE
18:18:45 Waiting for the completion of Job_3
18:38:25 Job_3 #180 completed. result was SUCCESS
It turns out the issue is with the Parameterized Trigger plugin. For some reason it continues on to subsequent build steps even when a blocking triggered build fails and marks the invoking parent job as failed.
Looks like I gotta do things myself or switch to Bamboo...
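For comparison, a scripted pipeline gives the stop-on-failure behavior out of the box: the `build` step's default `propagate: true` throws on a downstream FAILURE, so later steps never run. A sketch, with hypothetical job names:

```groovy
// Sketch: sequential blocking downstream builds in a scripted pipeline.
// With the default propagate: true, a failing Job_2 throws here and
// Job_3 is never triggered; the parent build is marked FAILURE.
node {
    build job: 'Job_1'
    build job: 'Job_2'
    build job: 'Job_3'
}
```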