Mark Jenkins job chain as failed if any downstream job fails

I have a set of chained freestyle Jenkins jobs which we use as a build pipeline for several projects. We recently integrated it with our source repository management tool (Phabricator), so it reports the continuous integration build result back to the merge request (whether it failed or passed).
Due to some limitations with Phabricator, the way we trigger the pipeline is through an AWS Lambda function which knows the first job of the chain and starts it; since everything is chained together, triggering the first job executes the whole pipeline.
The issue is that technically we are triggering a single job (which triggers a downstream job, and so on). So if the first job passes, it returns a green build to Phabricator regardless of whether the second job fails; it doesn't wait for any of the downstream projects to finish. If the first one passed, it reports the build as green.
As I see it, there are two questions that come to my mind to solve this:
1. Is there a way to mark the job as failed if the downstream project fails?
2. Is there a way to trigger the chain instead of a single job? That way I think it would return the result of the chain instead of the result of the first job alone.
Any thoughts and advice are welcome.

Have you considered rewriting your Jenkins chain as a single pipeline job? I think this would make your life a lot easier.
Otherwise, you need to pass the Phabricator build ID down through your chain, and only post back a success response in the final job, with a fail response from any job that fails.
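If you do go the single-Pipeline route, a minimal sketch could look like the following. The stage contents, the PHAB_BUILD_ID parameter and the notify-phabricator.sh script are placeholders for whatever your Lambda passes in and however you report back to Phabricator; the point is simply that a failure in any stage fails the one build that Phabricator is watching:

    // Minimal illustrative Jenkinsfile replacing the chain of freestyle jobs.
    // Any failing stage fails this single build, so only one result is reported.
    pipeline {
        agent any
        parameters {
            // Assumption: the Lambda passes the Phabricator build identifier as a parameter
            string(name: 'PHAB_BUILD_ID', defaultValue: '', description: 'Build target passed in by the Lambda trigger')
        }
        stages {
            stage('Build')   { steps { sh './build.sh' } }      // placeholder
            stage('Test')    { steps { sh './run-tests.sh' } }  // placeholder
            stage('Package') { steps { sh './package.sh' } }    // placeholder
        }
        post {
            // Hypothetical reporting script; replace with your actual Phabricator call
            success { sh './notify-phabricator.sh pass "$PHAB_BUILD_ID"' }
            failure { sh './notify-phabricator.sh fail "$PHAB_BUILD_ID"' }
        }
    }

The Lambda then only needs to trigger this one job, and whatever result it reports is the result of the whole chain.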

Related

Parameterized remote trigger for multiple parallel calls

I am not sure what the best way is to implement multiple parallel remote calls to Jenkins. Any input will be greatly appreciated.
How do I get the build number for multiple parallel calls (2-10 calls) to the Jenkins server for a parameterized job triggered remotely? One requirement is that there is no change in the build parameters between calls. The development team is using a tool/Python program to invoke 50 POST calls; in that case, how do we track the build numbers?
Scenario 1 -- I have a freestyle parameterized job with the "Enable concurrent build if necessary" box checked. For sequential calls with the same build parameters initiated remotely, we find the build number via https://jenkinsurl/queue/item, filter out the build number, and then fetch https://jenkins url/build/Consoletext -- this works.
Scenario 2 -- For the same request, with no change in parameters, triggered more than twice, we can see the build number and Consoletext for the first call, but we are unable to track the build numbers of the later ones.
Sorry, I am a beginner trying to implement multiple parallel calls. My Jenkins job is configured to run a Python script on the Jenkins server that returns success along with a work ID and other responses that the dev team needs for further processing.
When the team triggers the API remotely 50 times, we only see the build number and full response from the ConsoleText for the first call; for the rest of the calls we don't see any build number. I don't see any failures in Jenkins either. FYI, this is a freestyle parameterized job with the concurrent build option enabled.
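For what it's worth, one way to track each call is to capture the queue item URL that Jenkins returns in the Location header of the trigger POST, and poll that item until it exposes the build number. A rough sketch in Groovy follows; the URL, the item ID and the missing authentication are all placeholders for your own setup:

    // Hypothetical sketch: poll a queue item until Jenkins assigns it a build number.
    // Authentication is omitted; add your API token / crumb handling as required.
    import groovy.json.JsonSlurper

    def queueItemUrl = 'https://jenkinsurl/queue/item/1234'   // from the Location header of the trigger POST

    def buildNumber = null
    while (buildNumber == null) {
        def item = new JsonSlurper().parse(new URL("${queueItemUrl}/api/json"))
        if (item.executable) {
            buildNumber = item.executable.number   // present once the build has left the queue
        } else {
            sleep(2000)                            // still queued (or blocked); wait and poll again
        }
    }
    println "Build number: ${buildNumber}"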

Get Jenkins build job duration once finished

I am using Jenkins integration server for my CI/CD.
I am working with freestyle projects.
I want to get the build duration once it has finished (in seconds), using the REST API (JSON).
This is what I tried:
"duration=$(curl -g -u login:token --silent "$BUILD_URL/api/json?pretty=true&tree=duration" | jq -r '.duration')"
The duration is always equal to 0, even though I run this shell script in a post-build task.
There are a few reasons why this may not work for you, but the most probable reason is that the build hasn't finished when you make the API call.
I tried it on our instance: for finished jobs it works fine, and for running jobs it always returns 0. If your post-build task is executed as part of the job, then the job has probably not finished executing yet, which is why you are always getting 0.
The API response for the build will not contain the duration attribute as long as the build is running, so you cannot use that mechanism from within the build itself.
However, you have a nice alternative for achieving what you want with freestyle jobs.
The solution, which still uses the API method, is to create a separate generic job for updating your database with the results. This job receives the project name and the build number as parameters, runs the curl command to retrieve the duration, updates your database, and runs any other logic you need.
This job can then be called from any freestyle job using a Parameterized Trigger plugin post-build step, passing the relevant build environment parameters.
This has the additional benefit that the duration-update mechanism is controlled in a single job, and if updates are needed they can be made in a single location, avoiding the need to update all the separate jobs.
Assuming your job is called Update-Duration and it receives two parameters, Project and Build, the post-build trigger just needs to call Update-Duration and pass the calling job's name and build number into those two parameters.
And that's it: just add this trigger to any job that needs it, and in the future you can update the logic without changing the calling jobs.
One small thing: to avoid a race condition if the caller job has not yet finished executing, you can increase the quiet period of your database-updater job to allow enough time for the caller jobs to finish, so that the duration is populated.
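A rough sketch of what the Update-Duration job itself might run, written here as a parameterized Pipeline job (the same shell would work in a freestyle "Execute shell" step). The credentials, $JENKINS_URL and update-db.sh are placeholders for your own instance and database logic:

    // Illustrative Update-Duration job: fetch the caller's duration from the REST API
    // and hand it to a placeholder database-update script.
    pipeline {
        agent any
        parameters {
            string(name: 'Project', defaultValue: '', description: 'Name of the job that just finished')
            string(name: 'Build',   defaultValue: '', description: 'Build number of that job')
        }
        stages {
            stage('Fetch duration and update DB') {
                steps {
                    sh '''
                        # duration is reported in milliseconds, and only once the build has finished
                        duration=$(curl -g -u login:token --silent \
                            "$JENKINS_URL/job/$Project/$Build/api/json?tree=duration" | jq -r '.duration')
                        echo "Build $Project#$Build took $duration ms"
                        ./update-db.sh "$Project" "$Build" "$duration"   # placeholder for your real update
                    '''
                }
            }
        }
    }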

How may I configure a Jenkins job to run at a specific time if an upstream job succeeds?

My use case:
Job A is set to run Monday through Friday at 18:00.
Job B is dependent upon Job A succeeding but should only run Monday through Friday at 06:00. (Monday morning's run would depend upon Friday evening's run). I prefer set times rather than delays between jobs.
On any given morning, if I see that Job A failed (thus Job B never ran), I would like to be able to run (fix) Job A then immediately trigger Job B.
What I have found so far only offers part of this use case. I have tinkered with Pipeline and recently upgraded my Jenkins instance to 2.89.3, so I have access to the most recent features and plugins. Filesystem triggering seems doable.
Any suggestions are appreciated.
You can use the options available in "Build Triggers".
Ex:
(screenshot: Build Trigger configuration)
Hope this works for you!
This is a tricky Use Case as generally you want a job to immediately follow on from another one rather than waiting for potentially three days.
Further complicated by wanting it to run straight away when you want it to.
I do not believe there is an "I have finished, so kick this job at this time" downstream trigger. So for the first part, the only things I can think of are:
Job A kicks Job B as soon as it is finished, and Job B sits there with a time checker in it and starts its task when the time matches;
or Job A archives a file with its exit status as an artefact, and Job B has a cron trigger for 6am Monday-Friday, picks up this artefact, and then runs or doesn't depending on the file contents.
For the second part, you could get the build cause (see "how to get $CAUSE in workflow" for the pipeline implementation, and vote on https://issues.jenkins-ci.org/browse/JENKINS-41272 to get the feature when using the sandbox), and then get your pipeline to behave differently depending on the trigger.
I.e. if you went for the second option above, then in Job B you could say: if triggered by cron, read the artefact and act accordingly; if triggered by upstream, just run regardless.
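To make the second option concrete, here is a rough sketch of Job B as a declarative pipeline. It assumes Job A archives a file called job-a-status.txt containing PASS or FAIL, that the Copy Artifact plugin is installed, and that getBuildCauses is available (see the linked issue); all names are illustrative:

    // Illustrative Job B: runs weekday mornings, checks Job A's archived status,
    // and skips the real work only when the timer fired and Job A did not pass.
    pipeline {
        agent any
        triggers { cron('0 6 * * 1-5') }   // Monday-Friday at 06:00
        stages {
            stage('Check upstream result') {
                steps {
                    // Pull the status artefact from the last completed Job A run
                    copyArtifacts projectName: 'Job-A', selector: lastCompleted(), filter: 'job-a-status.txt'
                    script {
                        env.JOB_A_STATUS = readFile('job-a-status.txt').trim()
                        env.TIMER_TRIGGERED = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause') ? 'true' : 'false'
                    }
                }
            }
            stage('Do the real work') {
                // Run if Job A passed, or if a person/upstream job triggered this build explicitly
                when { expression { env.JOB_A_STATUS == 'PASS' || env.TIMER_TRIGGERED == 'false' } }
                steps {
                    echo 'Running Job B tasks...'
                }
            }
        }
    }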

Jenkins - Webhooks OR PollSCM

In a scenario where continuous integration is important, which is the better option for triggering builds: webhooks or PollSCM?
This is my current understanding of both methods:
PollSCM is a heavy operation, and depending on it to trigger builds means we need to fire it frequently. But the configuration is easier, and it is safer than webhooks since it is Jenkins that communicates with the code repository.
Webhooks can give you the exact build trigger time without constantly checking for changes. But on the other hand, there are security concerns when you open up a connection from outside, and the configuration is not as easy as PollSCM.
I would like to know the exact pros and cons of both approaches.
If your build cycle is very short (a few minutes) and if you want to trigger a build for each commit, the Webhooks solution is better.
But if your build cycle is longer (15/20 minutes) and if you don't need to build for each commit, the PollSCM is a good candidate :)
In my company, we are using Git/Stash and Jenkins + a Webhook to trigger a build every time something is committed. For the pull requests, we are using the Stash pullrequest builder plugin for Jenkins.
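For reference, polling is a one-line trigger in a Pipeline job (the schedule below is illustrative); with a webhook you would drop that block and configure the repository server to notify Jenkins instead:

    // Illustrative polling trigger: roughly every 15 minutes, Jenkins asks the repo for changes.
    pipeline {
        agent any
        triggers {
            pollSCM('H/15 * * * *')   // replace with a webhook from the SCM server to avoid polling
        }
        stages {
            stage('Build') {
                steps { echo 'build steps here' }
            }
        }
    }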

Blocking a triggered Jenkins job until something *outside* Jenkins is done

I have a Jenkins job which starts a long-running process outside of Jenkins. The job itself is triggered by Gerrit.
If this job is triggered again while the long-running process is ongoing, I need to ensure that the job remains on the Jenkins queue until said process has completed. Effectively I want to ensure that the job never runs in parallel with itself, with the wrinkle that "the job" is really the Jenkins job plus the external long-running process.
I can't find any way to achieve this. The External Resource Dispatcher plugin seems like it could work, but every time I've configured it on our system, Jenkins got extremely unstable (refusing page loads for minutes on end, slave threads dying with NPEs). Everything else I can see, such as the Exclusions plugin, depend on Jenkins itself controlling the entirety of the job.
I've tried hacking something together with node labels - having the job depend on a label "can_run", assigning that label to master, and then having the job execute a Groovy script that removes that label from master. (Theoretically there would be another Jenkins job that adds the label back, which would be triggered by the end of the long-running process.) But it didn't work: if there were any queued instances of the job on Jenkins, they went ahead and started right away even though the label had been removed.
I don't know what else to try! Is there anything other than a required node label being missing which will cause Jenkins to queue the job if it is triggered, but not start it?
I guess the long-running process is triggered and your job returns immediately, which makes it an async process, right? I would suggest you handle the long-running-process detection and waiting logic in your trigger process: every time before you trigger the job, check whether the long-running process is running; if not, trigger the job.
Actually, I am not quite getting what you are trying to do. Basically, because of that long-running process, it is impossible for you to run two jobs in parallel. If that is true, make it a non-parallel job.
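If you do end up handling the waiting inside Jenkins instead, a rough sketch along those lines could look like this. Note that it keeps the build running (occupying an executor) rather than sitting in the queue, and check-long-process.sh is a hypothetical script that exits 0 only once the external process has finished:

    // Illustrative gate: never run this job concurrently, and do not start the real
    // work until the (hypothetical) external-process check succeeds.
    pipeline {
        agent any
        options {
            disableConcurrentBuilds()   // queue further triggers instead of running them in parallel
        }
        stages {
            stage('Wait for external process') {
                steps {
                    script {
                        // Poll until the check script reports that the external process has completed
                        waitUntil {
                            sh(script: './check-long-process.sh', returnStatus: true) == 0
                        }
                    }
                }
            }
            stage('Do the real work') {
                steps {
                    echo 'External process finished; safe to run.'
                }
            }
        }
    }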
