Jenkins executors configuration mismatch

Recently I've seen some strange behavior with Jenkins: it is configured to use only 5 executors (on the master), but despite that configuration there are many more jobs running on the master.
My build queue has a few hundred jobs waiting, and they are being dispatched correctly.
But I can't understand why there are so many running jobs on my master if only 5 executors are configured?
Running Jenkins ver. 2.89.1

I guess you are using Pipeline jobs. All the pipeline code that is not inside a node block is executed in a Jenkins master thread and shown in the UI as an executor, usually called a lightweight (or flyweight) executor. These do not consume a real executor on a node.
This is why your pipelines should not do any heavy work other than orchestrating the build. Any real build work (calls to build tools, computation- or resource-demanding tasks) should be done inside a node block.
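As an illustrative sketch (the label, repository URL and build command are assumptions, not from the question), keep the orchestration outside node and the heavy lifting inside it:
// Everything outside a node block runs on the master's flyweight executor
def gitUrl = 'https://example.com/myapp.git' // hypothetical repository

node('linux') { // the node block allocates a real (counted) executor on an agent
    git url: gitUrl
    sh 'mvn -B clean verify' // heavy work: build tools, tests, etc.
}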

Related

Why do declarative pipelines need to run on master if there are build executors available?

I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the number of build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs then wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
which means they aren't really executed on the master node.
My questions are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the Master node to get a Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restrictions plugin is still blocking it (I haven't used it before, so I cannot comment).
Also, certain pipeline actions are delegated back to the Master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the Master.
Unfortunately this cannot be changed, as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
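If you do keep a couple of executors on the controller for the two deploy jobs, pinning them there with a label is the usual approach. A minimal sketch (the stage name and deploy script are assumptions):
pipeline {
    agent { label 'master' } // the built-in node; labelled 'built-in' on newer Jenkins versions
    stages {
        stage('Deploy build agents') {
            steps {
                sh './deploy-jenkins-nodes.sh' // hypothetical deploy script
            }
        }
    }
}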

Why is the build executor status showing two jobs for one pipeline job?

I am using a Groovy pipeline script for a build job, so my Jenkins pipeline looks like this:
node {
    git url: 'myurl.git'
    load 'mydir/myfile.groovy'
}
It's working as expected, but in the build executor status it is shown as two running jobs.
Why is one job shown as two jobs with the same name?
Is there something I have missed telling Jenkins about pipeline jobs?
I can't find a better documentation source than this README (issue JENKINS-35710 also has some information), but the short of it is that the Groovy pipeline executes on the master (on a flyweight executor), while node blocks run on an allocated executor.
Here is a relevant snippet taken from the linked documentation:
[...]
Why are there two executors consumed by one Pipeline build?
Every Pipeline build itself runs on the master, using a flyweight executor — an uncounted slot that is assumed to not take any significant computational power.
This executor represents the actual Groovy script, which is almost always idle, waiting for a step to complete.
Flyweight executors are always available.

How to run jobs in a multibranch project sequentially instead of in parallel

I have configured a multibranch pipeline project in Jenkins. This project runs integration tests on all my feature branches (git). Each job in the pipeline project creates an instance of my webapp (starting Tomcat and other dependencies). Because of port binding issues, this results in many broken jobs.
Can I throttle the builds in the multibranch pipeline project so that the jobs for each feature branch run sequentially instead of in parallel?
Or is there a more elegant solution?
Edit:
Situation and problem:
I want to have a multibranch pipeline project in Jenkins (because I have many feature branches in git)
The jobs which are created from the multibranch pipeline (one for each feature branch in git) run in parallel
SCM polling happens at midnight (there are new commits on x branches, so the related jobs start at midnight)
Every job starts one instance of my webapp (and other dependencies), which binds to some ports
The problem is that many of these jobs can start at midnight. Every job will try to start an instance of my webapp. The first job can start the webapp without any problem. The second job cannot start the webapp because the ports are already taken by the first instance.
I don't want to configure a new port binding for each feature branch in my git repository. I need a solution to throttle the builds in the multibranch pipeline, so that only one "feature" job can run concurrently.
From what I've read in other answers, the disableConcurrentBuilds option only prevents multiple builds of the same branch.
If you want only one build running at a time, period, go to the Nodes/Build Executor configuration for the specific VM that your app runs on, drop the number of executors to 1, and configure the node labels so that only jobs from your multibranch pipeline can run on that VM.
My project has strict memory, licensing and storage constraints, so with this setup all the jobs on the master and feature branches start, but only one can run at a time, until the executor becomes available.
The most elegant solution would be to make your integration tests able to run concurrently.
One way to do that is to use an embedded Tomcat with a dynamic port. That way each job instance would run Tomcat on a different port.
This is also a better solution than relying on an external server.
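A minimal sketch of that idea (assuming the test runner accepts the port as a system property, and that one executor corresponds to one running webapp instance):
node {
    // EXECUTOR_NUMBER is unique per executor on a given agent, so deriving the
    // port from it avoids collisions between parallel branch builds on that agent
    def tomcatPort = 8080 + (env.EXECUTOR_NUMBER as int)
    git url: 'https://example.com/mywebapp.git' // hypothetical repository
    sh "mvn verify -Dtomcat.http.port=${tomcatPort}" // hypothetical property name
}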
If this is too much work, you can always use the following code in your Jenkinsfile:
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // continue your pipeline ...
}
The solution comes from this SO answer.
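In a declarative Jenkinsfile the same setting goes in an options block. A sketch (the stage contents are placeholders):
pipeline {
    agent any
    options {
        // Declarative counterpart of properties([disableConcurrentBuilds()])
        disableConcurrentBuilds()
    }
    stages {
        stage('Integration tests') {
            steps {
                sh './run-integration-tests.sh' // hypothetical test script
            }
        }
    }
}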

Is it possible to make Jenkins create workers from attached clouds faster?

I have an instance of Jenkins that uses the Mesos plugin. Nearly all of my jobs get triggered via Mesos tasks. I would like to make worker generation a bit more aggressive.
The current issue is that, for the Mesos plugin, I have all of the jobs marking the Mesos tasks as one-time-usage slaves, and when a build is in progress on one of these slaves, Jenkins forces any queued jobs to wait for a potential executor on these slaves instead of spinning up new instances.
Based on the logs, it also seems like Jenkins has a timer that periodically checks whether any slaves should be spun up based on the number of queued jobs / excess workload. Is it possible to decrease the polling interval for that process?
From the Mesos Jenkins Plugin readme: over-provisioning flags
By default, Jenkins spawns slaves conservatively. Say, if there are 2 builds in the queue, it won't spawn 2 executors immediately. It will spawn one executor and wait for some time for the first executor to be freed before deciding to spawn the second executor. Jenkins makes sure every executor it spawns is utilized to the maximum. If you want to override this behavior and spawn an executor for each build in the queue immediately without waiting, you can use these flags during Jenkins startup:
-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
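For example, when launching the WAR directly (a sketch; how JVM options are passed depends on your service wrapper, init script or container image):
java -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -jar jenkins.war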

How to have all jobs of a build be executed exclusively on the same node?

I have a Jenkins server with half a dozen builds. Each of these builds is composed of a parent job that triggers anywhere between 4 and 6 smaller jobs that run in parallel. I use the EC2 plugin to keep the number of active slaves in line with the number of queued builds. Or in other words, slaves are coming and going all the time. Each slave has 7 executors (parent job + max(4, 6)).
It is absolutely crucial that all jobs of a build are executed on the same machine. I also cannot allow any jobs from build A to execute on a machine that has jobs from build B running.
What I'm looking for is a way that prevents Jenkins from using any inactive executors of a node as long as any jobs from a previous build are still active on it.
I've spent the day experimenting with a combination of the Throttle Concurrent Builds Plugin and the NodeLabel Parameter Plugin. Unfortunately, there seems to be a bug somewhere that causes throttled builds to not contribute to the Load Statistics of a slave. This in turn means that throttled builds will never trigger Jenkins to spin up additional slaves. Obviously this is totally unacceptable.
You can try using "This build is parameterized", pass $NODE_NAME as a parameter between the builds, and then use it in "Restrict where this project can be run".
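In pipeline form, the same idea could look roughly like this (a sketch: the label, child job names and the TARGET_NODE parameter are assumptions, and each child job would use TARGET_NODE in its label restriction):
node('build-pool') { // hypothetical label for the EC2 slaves
    def thisNode = env.NODE_NAME
    parallel(
        unit: { build job: 'child-unit-tests', parameters: [string(name: 'TARGET_NODE', value: thisNode)] },
        api:  { build job: 'child-api-tests',  parameters: [string(name: 'TARGET_NODE', value: thisNode)] }
    )
}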
