How to avoid scheduling/starting multiple runs of a Jenkins job at the same time

We are moving our build system over from Hudson to Jenkins, and also to declarative pipelines stored in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new builds of that project were scheduled, which makes sense. In Jenkins, however, I observe that e.g. 5 instances of a job get started at the same time, triggered by various upstream or SCM change events. They have all even sort of started: one of them is actually running on the build node and the rest are waiting in "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all as there are no further changes, and all of this takes a huge amount of time.
The declarative pipeline script in SCM starts with the node declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
I guess the actual problem is that Jenkins starts these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in SCM and specified another build node to run the thing on? Anyway, how do I avoid this? This is probably something obvious, as googling does not reveal any similar complaints.

For the record, answering myself. It looks like the best solution is to define a separate trigger job which is itself triggered by SCM changes. It should do nothing else, only check out the needed svn repos (with depthOption: 'empty' for space and speed). The job needs to be bound to run on the same agent as the main job.
The main job is triggered only by this first job, not by SCM changes. Now if the main job is building for an hour and there are 10 svn commits during that time, Jenkins will schedule 10 builds of the trigger job. They all wait in the queue while the agent is busy. When the agent becomes available, they all run through quickly and trigger the main job. The main job is triggered only once; for that, one must ensure its grace/quiet period is longer than the trigger job's run time.
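A minimal sketch of such a trigger job as a declarative pipeline, assuming the Subversion plugin's checkout step; the repository URL, polling schedule, and main job name below are placeholders:

pipeline {
    agent { label 'BuildWin6' }              // bound to the same agent as the main job
    triggers { pollSCM('H/5 * * * *') }      // react to SCM changes (example schedule)
    stages {
        stage('Trigger main job') {
            steps {
                // Empty-depth checkout keeps this job fast and its workspace tiny
                checkout([$class: 'SubversionSCM',
                          locations: [[remote: 'https://svn.example.com/project/trunk',
                                       local: '.',
                                       depthOption: 'empty']]])
                // Kick the main job without waiting for it to finish
                build job: 'main-job', wait: false
            }
        }
    }
}

The main job can then declare options { quietPeriod(300) } (or the equivalent setting in its configuration), with a value larger than this trigger job's run time, so the queued triggers collapse into a single build.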

Related

How to trigger Jenkins slave clean-up before every job (not just workspace)?

What I'd like to do:
Before/after any build allocates a slave: reset the slave to a pristine state (e.g. restore/delete files, kill rogue processes, etc.; in other words, run some script). I know how to do this in one job; I don't know how to have Jenkins trigger it for every job of every type without modifying them all.
Unsatisfactory Ways I can think of accomplishing this:
1. Enforce a pre/post step in every AbstractProject (old-style jobs). This is only scalable if using the Job DSL Plugin to generate all jobs.
2. Enforce a pre/post step in every PipelineJob. This is only scalable if using a Jenkins shared library (e.g. a myBuild() wrapper that encapsulates these steps; see the sketch after this list).
3. Make use of the Global Post Script plugin and run a Groovy script after every build. This works, but the script runs after the slave has been released, so another build may have grabbed it already (it's too late). Figuring out which nodes were allocated during the build is also fairly complicated.
4. Switch all slaves to a one-shot type (i.e. Docker) that requires no clean-up. This doesn't work for my use cases, but may work for someone else.
5. Periodically run a job that uses a System Groovy Script to edit all other jobs and add a "clean-up" pre-build step if not present. (This will not work for Pipeline jobs.)
(I currently do #1 and #2)
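As an illustration of option 2, a shared-library step might look roughly like this (myBuild is the hypothetical name from above; the agent label and clean-up script are placeholders):

// vars/myBuild.groovy in the shared library
def call(Closure body) {
    node('build-agent') {
        stage('Reset agent') {
            // restore/delete files, kill rogue processes, etc. before the real work starts
            sh './reset-agent.sh'
        }
        body()
    }
}

A Jenkinsfile then wraps its own stages in myBuild { ... }, so every job that goes through the wrapper gets the reset step without repeating it.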
Ideal Solution
Theoretical options:
A plugin to hook into some event (e.g. WorkspaceListener.beforeUse()) and execute something then (unfortunately, WorkspaceListener does not apply to Pipeline jobs). This should trigger right before a slave is used. It gets a little complicated when a slave has multiple executors (mine don't).
A plugin to enforce execution of some steps, similar to "Execute builders from job X", but in every job (this also doesn't work for Pipeline jobs).
Assuming I'm using Swarm for all slaves, modify the swarm client to handle this logic (perform a task when the slave becomes idle). A poor man's way would be to make the swarm client run in "one-shot" mode, in a bash loop.
Question:
What am I overlooking? Is there a better way?

Jenkins multibranch pipelines waiting for other executors instead of completing post actions

I'm running a single-node Jenkins instance on a project where I want to clean up closed PRs intermittently. The system can keep up just fine with the average PR submission rate; however, when we re-run the scan repository against the project the system encounters a ton of parallel build congestion.
When we run scan repository, the multibranch pipeline fires off the separate stages, and those generally complete sequentially without issue and without hitting timeouts. However, the priority of declarative post actions seems lower than that of the other stages (no special plugins are installed that would cause that, AFAIK). The behavior exhibited is that parallel builds start running and impact the total run time for any one branch or pull request, such that all of the builds might indicate, say, 60 or 90 minute build times instead of the usual 6-10 minutes, even when the only task remaining is filling out the checkstyle reports or whatever minor notification tasks there are.
Is there a way to dedicate a single executor thread on the master node to declarative post actions, so that one branch or PR can be run from end to end without being stuck waiting for available executors that have suddenly been picked to start a different PR or branch and run the earlier (and computationally expensive) stages like linting code and running unit tests, and without ultimately hitting a timeout?
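For reference, the declarative post actions in question have roughly this shape; the concrete steps below are placeholders rather than anything from the original pipeline:

pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                // placeholder for the expensive work: linting, unit tests, etc.
                sh './build-and-test.sh'
            }
        }
    }
    post {
        always {
            // lightweight reporting/notification work that still needs an executor,
            // e.g. publishing checkstyle or test reports
            junit allowEmptyResults: true, testResults: '**/build/test-results/**/*.xml'
            archiveArtifacts artifacts: 'reports/**', allowEmptyArchive: true
        }
    }
}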

Jenkins: how do I automatically restart a triggered build

I have one Jenkins job that triggers another job via "Trigger/call builds on other projects."
The triggered downstream job sometimes fails due to environmental reasons. I'd like to be able to restart the triggered job multiple times until it passes.
More specifically, I have a job which does the following:
1. Triggers a downstream job to configure my test environment. This process is sensitive to environmental issues and may fail. I'd like this to restart multiple times over a period of about an hour or two until it succeeds.
2. Triggers another job to run tests in the configured environment. This should not restart multiple times, because any failure here should be inspected.
I've tried using Naginator for step 1 above (the configuration step). The triggered job does indeed re-run until it passes. Naginator looks so promising, but I'm disappointed to find that when the first execution of the job fails, the upstream job fails immediately, even though a subsequent rebuild of the triggered job passes. I need the upstream job to block until the downstream set of jobs passes (or finally fails) via Naginator.
Can someone help me understand what my options are to accomplish this? Can I configure things differently for the upstream job so it relates to the Naginator-managed job better? I'm not wedded to Naginator and am open to other plugins or options.
In case it's helpful, my organization is currently using Jenkins 1.609.3, which is a few years old. I'd consider upgrading if that leads to a solution.
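For what it's worth, on a newer Jenkins with Pipeline available, the same upstream logic can be expressed with the built-in retry and timeout steps instead of Naginator; this is only a sketch, and the job names are placeholders:

pipeline {
    agent any
    stages {
        stage('Configure test environment') {
            steps {
                // keep retrying the flaky setup job for up to ~2 hours
                timeout(time: 2, unit: 'HOURS') {
                    retry(10) {
                        build job: 'configure-test-environment'
                    }
                }
            }
        }
        stage('Run tests') {
            steps {
                // no retry here: any failure should be inspected
                build job: 'run-tests-in-environment'
            }
        }
    }
}

Because the build step waits for the downstream job by default, the upstream run blocks until the setup job finally passes (or the retries are exhausted), which is the behavior Naginator does not provide here.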

Jenkins Git SCM polling stops while the job being polled for is running

We have a job in a Jenkins environment which is triggered based on changes found in the Git source code repository.
While the job is running, the Git polling log shows nothing; until the job finishes execution, the polling log has nothing in it.
It only shows the log after the job completes. Another note: the "enable concurrent builds" option is not set, to make sure only one build runs at a time.
I would like to understand whether it is known behavior on the Jenkins front to halt polling while the job is running, and whether it depends on the concurrent builds option being enabled or not.
I had a similar problem and discovered this: https://issues.jenkins-ci.org/browse/JENKINS-7423
It looks like it's related to the polls requiring a workspace in order to perform the checkout. You can manually kick off new builds and they will pick up SCM changes.

In Jenkins, if the next triggered build is in a pending state, how do I abort the running build and start running the next pending build?

In Jenkins, if one build is currently running and the next one is in a pending state, what should I do so that the running one gets aborted and the next pending one starts running, and so on?
I have to do this for a few projects, and each project has a few jobs in it. I tried to save the build number as an env variable in a text file (build_number.txt) and use that number to abort the previously triggered build, but making a build_number.txt file for each job does not look efficient, and then I would have to create many build_number files for each job in every project.
Can anyone please suggest a better approach?
Thanks
Based on the comments, if sending too many emails is the actual problem, you can use Poll SCM to poll once every 15 minutes or so, or even specify a quiet period for the job. This ensures that a build is taken at most once in 15 minutes. Users should test locally before they commit. But if Jenkins itself is used for verifying the commits, I don't see anything wrong in sending an email when the build fails; after all, they are supposed to know that, even if they fixed it in a later update, intentionally or unintentionally.
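As a sketch of that suggestion in pipeline form (the polling schedule and quiet period values are just examples):

// Poll roughly every 15 minutes and hold each new build in a quiet period,
// so a burst of commits produces a single build rather than one per commit.
pipeline {
    agent any
    triggers { pollSCM('H/15 * * * *') }
    options { quietPeriod(300) }   // seconds to wait before the queued build actually starts
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
}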
But if you still want to abort a running job when there are updates, you can try the following. Let's call the job to be aborted JOB A:
Create another job that listens for the same updates as the job that needs to be aborted.
Add a build step that executes a Groovy script.
In the Groovy script, use the Jenkins APIs to check whether JOB A is running. If yes, again use the APIs to abort it.
Jenkins APIs are available here
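A minimal sketch of that Groovy step, assuming the Groovy plugin's "Execute system Groovy script" build step is used (the job name JOB A is the placeholder from above):

import hudson.model.Result
import jenkins.model.Jenkins

// Abort the currently running build of JOB A, if any, so the newer queued build can run.
def job = Jenkins.instance.getItemByFullName('JOB A')
def lastBuild = job?.lastBuild
if (lastBuild?.isBuilding()) {
    lastBuild.executor?.interrupt(Result.ABORTED)
}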
