Jenkins webhook payload overwritten by next trigger

We have a heavily parameterised pipeline that is invoked by a webhook from multiple Bitbucket repositories.
The job is set up not to allow concurrent builds.
It works perfectly fine with one job at a time, or with one job running plus one in the queue.
However, when a third job is triggered, Jenkins overwrites the payload of the job waiting in the queue with the payload of the newly arrived one.
Let's say:
Job from developer A is running
Job from developer B is waiting
Developer C pushes to Bitbucket, which triggers a new job
Job from developer B has its payload overwritten by developer C's payload
The job still shows developer B's name
Even more confusing, if a fourth trigger happens, the first job waiting in the queue is lost and all subsequent ones are off by one.
We have an output configured to display the cause like this:
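(The original snippet is elided from the question; a minimal sketch of such an output step, assuming the webhook trigger contributes the actor's name as a variable - the variable name below is hypothetical:)

```groovy
// Hypothetical sketch: print who triggered the build, using a variable
// contributed by the webhook trigger from the Bitbucket payload
echo "Bitbucket Build By - ${env.BITBUCKET_ACTOR}"
```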
And when the issue presents itself I can see in the logs:
Bitbucket Build By - Developer 1
Bitbucket Build By - Developer 2
This makes me think Jenkins is somehow squashing everything together.
I am trying to find out whether this is a configuration I can fiddle with, but I haven't found anything so far.
Suggestions?
Thanks!

Related

Trigger Jenkins Job from Bitbucket on Pull Request

Hoping to gather insight from professionals. My end goal is to trigger a Jenkins build whenever a Bitbucket pull request happens. If anyone could give me an ELI5 (explain like I'm 5) answer it would be greatly appreciated. Sorry if this is the wrong format; I am new to Jenkins and Stack Overflow.
What I have done so far:
Created a webhook in Bitbucket and gave it the URL of my Jenkins job, for example: http://jenkinsURL:8080/job/boulevard-dev/generic-webhook-trigger/invoke?token=myPull_Request_Token
In Jenkins, under Source Code Management, I have my repository configured. This currently fetches a ton of branches, fails, and then builds the master branch when the job starts.
For build triggers, other Stack Overflow articles have pointed me to the Generic Webhook Trigger: https://github.com/jenkinsci/generic-webhook-trigger-plugin
I am not entirely sure how this generic webhook trigger should be set up. Hoping someone has experience using it and can explain what is needed.
This is what I have seen referenced in other articles.
Questions:
What does a correct setup / example of the generic webhook trigger look like?
Currently, my job triggers when a change is made to master or merged to master; how can I specify that I want the Bitbucket pull-request branch to be built?
Also, I found this, but I'm not sure whether it's related to my issue: https://jira.atlassian.com/browse/BCLOUD-5814
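To the first question: here is a minimal declarative-pipeline sketch of a Generic Webhook Trigger setup. The JSONPath expressions assume the Bitbucket Cloud pull-request payload shape, and the repository URL and token are placeholders; adjust them to your actual payload and job.

```groovy
pipeline {
    agent any
    triggers {
        GenericTrigger(
            token: 'myPull_Request_Token',
            // JSONPaths below assume the Bitbucket Cloud PR payload shape
            genericVariables: [
                [key: 'PR_SOURCE_BRANCH', value: '$.pullrequest.source.branch.name'],
                [key: 'PR_AUTHOR',        value: '$.actor.display_name']
            ],
            causeString: 'PR on $PR_SOURCE_BRANCH by $PR_AUTHOR',
            printContributedVariables: true
        )
    }
    stages {
        stage('Build') {
            steps {
                // check out the PR's source branch instead of master
                git branch: env.PR_SOURCE_BRANCH,
                    url: 'https://bitbucket.org/your-team/your-repo.git'
            }
        }
    }
}
```

This also addresses the second question: the variable extracted from the payload is used to check out the pull request's source branch rather than master.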
You can trigger a Jenkins build whenever a Bitbucket pull request happens by following the steps below; in my case it's working fine.
Step(1) - Configure Jenkins
(i) Add your Bitbucket repo and branch to Source Code Management
(ii) Under Build Triggers, set Poll SCM to * * * * * so it runs every minute to check for pull requests from Bitbucket.
Step(2) - Configure the Bitbucket hook
(i) Go to Settings and add a new hook, then set up the pull-request trigger as required.
Step(3) - Make a pull request and watch the new job trigger automatically on Jenkins.
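In a declarative pipeline, step (1)(ii) corresponds to a one-line trigger; a minimal sketch:

```groovy
pipeline {
    agent any
    triggers {
        // check Bitbucket for changes every minute, per step (1)(ii)
        pollSCM('* * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'building'
            }
        }
    }
}
```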

How to combine multiple build started Jenkins messages into one email

I am using Jenkins, Gerrit and repo for my project. Often I make code changes that span multiple git repositories (all managed through repo). When I submit a CL, it triggers multiple Jenkins jobs (pre-submits, cross-reference checks, linters...), which sends a flurry of "build started" emails and finally one email with the +/-Verified status. I'm wondering whether it is possible to combine all the "build started" emails into one (just like the final Verified status email).
I would suggest using a Pipeline: only one job is triggered by the Gerrit trigger, and that Pipeline takes care of calling all the other jobs and updating Gerrit with the final message.
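A sketch of such a wrapper pipeline; the downstream job names are placeholders. Because only this one job is connected to the Gerrit trigger, Gerrit receives a single "build started" notification and a single final Verified message based on this job's overall result.

```groovy
// One Gerrit-triggered pipeline fans out to the other jobs;
// only this wrapper's start/result is reported back to Gerrit.
pipeline {
    agent any
    stages {
        stage('Checks') {
            parallel {
                // downstream job names are placeholders
                stage('Presubmit') { steps { build job: 'presubmit-checks' } }
                stage('Lint')      { steps { build job: 'linters' } }
                stage('XRef')      { steps { build job: 'cross-reference-checks' } }
            }
        }
    }
}
```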

Jenkins pipeline: how to avoid a Gerrit change-verify job being superseded by a newer change

I have a running pipeline, triggered by several Gerrit review hooks.
e.g.
branch-v1.0
branch-v2.0
Normally I receive my Verified votes according to the result of the appropriate job run; e.g. if the run finishes successfully with passing tests, I get Verified+1 back in my Gerrit system.
My problem:
If a job verifying my Gerrit change is running, a newer "verify job" for another change or patch always cancels the currently running job. It doesn't matter whether the change comes from a different branch, nor whether the new change has anything to do with the current one. The currently running change is always superseded.
in the console:
In this case job A canceled an older job B, and later A was canceled by a newer job C:
Canceling older #3128
Waiting for builds [3126]
Canceled since #3130 got here
So, does anybody know how to avoid the canceling of the current running job?
I wanted to use a multibranch pipeline (though I really don't know whether that would help), but as far as I know the Gerrit plugin is currently not supported by multibranch pipelines or the Blue Ocean project.
https://issues.jenkins-ci.org/browse/JENKINS-38046
There is a new Gerrit plugin in development, but there is no information on when it will be available (or "production ready"). See the following comment in the issue.
lucamilanesio added a comment - 2017-08-18 15:40
Thanks for your support!

Jenkins job wait for first successful build of other job

I have a Jenkins job that should not start building until another job has been built successfully at least once. They are not related per se, so I don't want to use triggers. Is there a way to do this?
Some background: I'm using SCM polling to trigger the second job. I've looked at the Files Found Trigger plugin, but that would keep triggering the second job after the first one has been built. I've also found the Run Condition Plugin, but that seems to work only on build steps, not on the entire build.
Update - The second job copies artifacts from the first job. As long as the first job has never completed successfully, the Copy Artifact step fails. I am trying to prevent that failure, by not even attempting to build the second job until the first job has completed once.
One solution is to use the Build Flow plugin.
You can create a new flow job and use this DSL:
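(The DSL snippet is elided from the original answer; a plausible sketch, with placeholder job names - the retry absorbs the failures of the second job's Copy Artifact step until the first job has succeeded once:)

```groovy
// Build Flow DSL sketch (job names are placeholders):
// keep re-running the second job until its Copy Artifact step
// can find a successful build of the first job
retry(10) {
    build("second-job")
}
```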
I've used 10 in the retry section, but you can use any number.
This flow job can be triggered by monitoring the same SCM URL of your second job.
Update, here is a second solution.
You can use the HTTP Request plugin.
If you want to test that your first job has been built successfully at least once, you can test this URL:
http://your.jenkins.instance/job/your.job/lastSuccessfulBuild/
One example:
As my build has never been successful, the lastSuccessfulBuild URL doesn't exist, and the HTTP Request step marks my build as failed.
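In a pipeline, the same check can be sketched with the plugin's httpRequest step (the instance and job names are placeholders); the step fails the build unless the URL answers with 200:

```groovy
// Fails the build unless first-job has at least one successful build,
// because lastSuccessfulBuild/ returns 404 until then.
httpRequest(
    url: 'http://your.jenkins.instance/job/first-job/lastSuccessfulBuild/',
    validResponseCodes: '200'
)
```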
Does it help?
The Block queued job plugin can be used for this: https://wiki.jenkins-ci.org/display/JENKINS/Block+queued+job+plugin

How can I configure execution start between dependent jobs?

My Jenkins server is set up with two jobs, say A and B.
Job A is triggered from changes in subversion, runs unit tests and if successful, creates a WAR and deploys it to another environment.
If Job A succeeds, then Job B triggers. This job runs tests against the deployed WAR.
The problem is that the deployment process takes a while and the WAR is not ready in time for when Job B starts and tries to use it.
I'm looking for ideas on how to delay Job B until the WAR is up and running.
Is there a way, once Job B is triggered, to wait for x seconds? I really don't want to put it into the tests in Job B if I can avoid it.
Thanks
There is definitely a way for a job to wait - just put a sleep into the first shell build step. Alternatively, you can set a "Quiet period" - it's in Advanced Project Options when you configure the build.
That, however, is a band-aid solution to be employed only if other approaches fail. You may try the following: if there is a way to make the deployment process (that job A triggers) touch a file Jenkins has access to right before it finishes, then you can use the FSTrigger Plugin. See use case 3 there.
The most reliable way to make this work would be to make job A not complete until the deployment is successful, e.g. by testing for a valid response from the URL of the deployed web app. This blog post describes one way to do that.
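The "test for a valid response" idea can be sketched as a final (scripted) step of job A; the health-check URL and the timeout are placeholders:

```groovy
// Keep job A running until the deployed WAR answers, so job B
// only triggers once the app is actually up.
timeout(time: 5, unit: 'MINUTES') {
    waitUntil {
        // returnStatus avoids failing the build on each unsuccessful probe
        sh(script: 'curl -sf http://deploy-host:8080/app/health', returnStatus: true) == 0
    }
}
```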
