How to execute only 1 job build at a time in Jenkins

I have only 1 job in Jenkins. This job gets built multiple times. I want Jenkins to execute only one build at a time and keep the remaining builds (to be executed next) in the build queue. I have reduced the number of build executors from 2 to 1, but some part of the next build still gets executed alongside the current build. How can I achieve this?

So I searched and learnt how Jenkins actually works: even if your build is sitting in the build queue waiting for an executor, its building state is still reported as TRUE. So only one build is actually executed at a time, but all the other builds in the queue also appear to be in the building phase while they wait for an executor.

Related

How to avoid scheduling/starting multiple runs of a Jenkins job at the same time

We are moving our build system over from Hudson to Jenkins, and also to declarative pipelines in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new jobs were scheduled for that project, which makes perfect sense. In Jenkins, however, I observe e.g. 5 instances of a job started at the same time, triggered by various upstream or SCM change events. They have all even kind of started: one of them is actually running on the build node and the rest are waiting in "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all as there are no more changes, and this all takes a huge amount of time.
The declarative pipeline script in SCM starts with the node declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
I guess the actual problem is that Jenkins starts to run these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in the SCM and specified another build node to run the thing on? Anyway, how to avoid this? This is probably something obvious as googling does not reveal any similar complaints.
For the record, answering myself. It looks like the best solution is to define another trigger job which is itself triggered by SCM changes. It should do nothing else, only check out the needed svn repos (with depthOption: 'empty' for space and speed). The job needs to be bound to run on the same agent as the main job.
The main job is triggered only by the first job, not by SCM changes. Now if the main job builds for an hour and there are 10 svn commits during that time, Jenkins will schedule 10 trigger-job builds. They all wait in the queue while the agent is busy. When the agent becomes available, they all run through quickly and trigger the main job. The main job is triggered only once; for that, one must ensure its grace/quiet period is larger than the trigger job's run time.
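A minimal sketch of the trigger job described above, as a declarative pipeline (the label, repository URL, job name, and quiet period are hypothetical; the build step's quietPeriod makes a burst of triggers collapse into one main-job build):

```groovy
// Trigger job: fires on SCM changes, checks out an empty working copy,
// and schedules the main job without waiting for it.
pipeline {
    agent { label 'BuildWin6' }
    triggers { pollSCM('H/5 * * * *') }   // watch SVN for commits
    stages {
        stage('Trigger') {
            steps {
                checkout([$class: 'SubversionSCM',
                          locations: [[remote: 'https://svn.example.com/repo/trunk',
                                       depthOption: 'empty']]])
                // wait: false returns immediately; the quiet period on the
                // main job merges the 10 queued triggers into a single build.
                build job: 'main-job', wait: false, quietPeriod: 600
            }
        }
    }
}
```

The main job itself carries no SCM trigger; it only runs when this job schedules it.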

Queuing the entire Jenkins pipeline

I have three jobs in my Jenkins pipeline. The requirement is to queue the pipeline, i.e., it should run as-is for the first build; for a second build started before the first has completed, the flow should be: finish all 3 jobs of the first build, then proceed with the second build.
PS: Is it possible through Build Blocker plugin? If so, how?

Jenkins - How to pause queued job's runs and let a new build take priority

Jenkins, any version. I have two versions: 1.642.3 and 2.32.3.
I have a Jenkins job jobA. Let's assume this job deploys an artifact to a target deploy server.
It takes 2 parameters, artifact name and target deploy server.
Execute concurrent builds is currently DISABLED, i.e. not check-marked.
Assume I launched multiple builds of this job manually, via the Jenkins CLI, or via the REST API (i.e. via some automation/integration or a parent upstream job calling this job).
I now see there is one Jenkins build in progress and all other N runs are in "queued" mode.
Let's assume I have close to 100+ such builds in the queue (ready to be launched as soon as the in-progress one completes). I'm trying to see if there's a way to PAUSE the existing queued builds (PS: I do NOT want to cancel them), launch a new build (which I want to deploy urgently), and once that's done, UNPAUSE the queued builds, so that I don't have to cancel all of them and re-submit/re-build them by remembering which parameters were passed for artifact name and target deploy server.
My 3 Conditions:
1) The server where this job runs is a single Jenkins master/slave machine holding credentials that can't be taken to other machines, i.e. I can't replicate the source Jenkins machine (where the job runs as a slave) and thus can't use a pool of slaves.
2) This job also creates some runtime folders/files at a common location on the source machine, which I don't want to get overwritten by concurrent/parallel builds if I enable "Execute concurrent builds". I know the workspace for concurrent builds is individual to each run, but that doesn't help if the job creates a common folder/file during its run.
3) I don't want to create a copy of this job :)
In one sentence: is it possible to PAUSE the existing queued builds (or some of them) so that I can launch a new build or make another one take priority as the next build, and then UNPAUSE the paused ones so they resume as originally launched, without having to relaunch them?
You can use the Priority Sorter Plugin for this.
Add a new string parameter to the job, for example BUILD_PRIORITY, and set its default value to 2. Then, in the Job Priorities menu, select the Use Priority from Build Parameter priority strategy and put that parameter name there.
Now you can run 100+ builds with the default BUILD_PRIORITY value (2), and if you need to launch a new build urgently, just set that parameter to 1 and it will be the first build in the queue.
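A minimal sketch of the parameterized job (the stage content is hypothetical; the mapping from the parameter to queue priority is configured in the Priority Sorter plugin's Job Priorities screen, not in the Jenkinsfile):

```groovy
pipeline {
    agent any
    parameters {
        // Priority Sorter reads this value: 1 = urgent, 2 = normal.
        string(name: 'BUILD_PRIORITY', defaultValue: '2',
               description: 'Queue priority for the Priority Sorter plugin')
        string(name: 'ARTIFACT_NAME', defaultValue: '',
               description: 'Artifact to deploy')
        string(name: 'TARGET_SERVER', defaultValue: '',
               description: 'Target deploy server')
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${params.ARTIFACT_NAME} to ${params.TARGET_SERVER}"
            }
        }
    }
}
```

The urgent build is then launched with BUILD_PRIORITY=1 while the 100+ queued builds keep their default value of 2.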

How to have all jobs of a build be executed exclusively on the same node?

I have a Jenkins server with half a dozen builds. Each of these builds is composed of a parent job that triggers anywhere between 4 and 6 smaller jobs that run in parallel. I use the EC2 plugin to keep the number of active slaves in line with the number of queued builds. Or in other words, slaves are coming and going all the time. Each slave has 7 executors (parent job + max(4, 6)).
It is absolutely crucial that all jobs of a build are executed on the same machine. I also cannot allow any jobs from build A to execute on a machine that has jobs from build B running.
What I'm looking for is a way that prevents Jenkins from using any inactive executors of a node as long as any jobs from a previous build are still active on it.
I've spent the day experimenting with a combination of the Throttle Concurrent Builds Plugin and the NodeLabel Parameter Plugin. Unfortunately, there seems to be a bug somewhere that causes throttled builds to not contribute to the Load Statistics of a slave. This in turn means that throttled build will never trigger Jenkins to spin up additional slaves. Obviously this is totally unacceptable.
You can try using "This build is parameterized", passing $NODE_NAME as a parameter between the builds, and then using it in "Restrict where this project can be run".
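A sketch of that idea for a pipeline parent job (job names and the label are hypothetical): the parent forwards the node it landed on to each child, so all children run on the same machine.

```groovy
// Parent job: fan out the child jobs, pinning them all to this node.
pipeline {
    agent { label 'ec2-builder' }
    stages {
        stage('Fan out') {
            steps {
                script {
                    // Build a map of parallel branches, one per child job,
                    // each receiving the parent's node name as a parameter.
                    def branches = ['child-a', 'child-b', 'child-c'].collectEntries { name ->
                        [(name): {
                            build job: name,
                                  parameters: [string(name: 'TARGET_NODE',
                                                      value: env.NODE_NAME)]
                        }]
                    }
                    parallel branches
                }
            }
        }
    }
}
```

Each child then uses `agent { label "${params.TARGET_NODE}" }` (or, for freestyle jobs, the NodeLabel parameter in "Restrict where this project can be run").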

In Jenkins, if the next triggered build is pending, how to abort the running build and start the next pending build?

In Jenkins, if one build is currently running and the next one is pending, what should I do so that the running one gets aborted, the next pending one starts running, and so on?
I have to do this for a few projects, and each project has a few jobs in it. I tried saving the build number as an env variable in a text file (build_number.txt) and using that number to abort the previously triggered build, but creating a build_number.txt file for each job doesn't look efficient, as I would end up with many such files across every job in every project.
Can anyone please suggest a better approach?
Thanks
Based on the comments, if sending too many emails is the actual problem, you can use Poll SCM to poll once every 15 minutes or so, or even specify a quiet period for the job. This ensures that a build is taken at most once in 15 minutes. Users should test locally before they commit. But if Jenkins itself is used for verifying the commits, I don't see anything wrong in sending an email if the build fails. After all, they are supposed to know the build failed, even if they fixed it in a later update, intentionally or unintentionally.
But if you still want to abort a running job when there are updates, you can try the following. Let's call the job to be aborted JOB A:
1) Create another job that listens for the same updates as JOB A.
2) Add a build step that executes a Groovy script.
3) In the Groovy script, use the Jenkins APIs to check whether JOB A is running. If yes, use the APIs again to abort it.
The Jenkins APIs are available here
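A sketch of the script from step 3 (the job name JOB-A is hypothetical; this assumes a system Groovy build step, e.g. the Groovy plugin's "Execute system Groovy script", since it must run inside the Jenkins JVM to reach these APIs):

```groovy
import jenkins.model.Jenkins

// Look up JOB A and, if its latest build is still running, abort it
// so the newer pending build can take its place.
def job = Jenkins.instance.getItemByFullName('JOB-A')
def lastBuild = job?.getLastBuild()
if (lastBuild != null && lastBuild.isBuilding()) {
    lastBuild.doStop()   // same effect as clicking the red "abort" button
}
```

This avoids the build_number.txt bookkeeping entirely: the current running build is found through the API rather than a file per job.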
