Jenkins Per Slave Pipeline Build Enforcement

I'm trying to find a way to ensure that an entire pipeline completes on a specific executor without allowing other jobs to be run on that executor:
My pipeline essentially looks like:
Build -> Deploy -> API testing -> Selenium testing
As we have multiple teams, all running multiple parallel pipelines, I want to ensure that, on a per-slave basis, all builds in the pipeline complete before any others begin.
Is anyone aware of a plugin that does this?

You can look into the Locks and Latches plugin, which can help you enforce this by assigning a lock to the relevant pipeline.
I have never tried your setup, but it might work.
Also, consider restricting the number of executors on the slave to 1, so only a single pipeline can "fit in".
I hope this helps.
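If the jobs are (or become) Pipeline jobs, one way to get that behaviour is to run the whole chain inside a single node block in a scripted Jenkinsfile, combined with the single-executor slave suggested above. The label and script names below are placeholders, so treat this as a sketch rather than a drop-in solution:
// Sketch: every stage runs inside one node block, so the executor is
// held for the entire run; with only one executor on the slave, no
// other build can start there until the whole chain has finished.
node('team-slave') {
    stage('Build')            { sh './build.sh' }
    stage('Deploy')           { sh './deploy.sh' }
    stage('API testing')      { sh './run-api-tests.sh' }
    stage('Selenium testing') { sh './run-selenium-tests.sh' }
}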

Related

Why do declarative pipelines need to run on the master if there are build executors available?

I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs are then waiting for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
This means they aren't really executed on the master node.
My questions are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the Master node to get the Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restrictions plugin is still blocking it (I haven't used it before, so I cannot comment).
Also, certain pipeline actions are delegated back to the Master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the Master.
Unfortunately, this cannot be changed, as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.

Execute Build Jobs/Pipelines not on Master but only on Build Agent

Following Jenkins best practices, I want to avoid Build Jobs/Pipelines being executed on my Jenkins Master.
To do so, I've installed the Job Restrictions Plugin, using it to configure the Master to run only some Maintenance Pipelines.
The problem is that now the Build Pipelines that are configured to run on specific Agents are not executed anymore. I see that the Build Queue continuously grows, and the Pipelines are not run. I think this behaviour could be related to the Flyweight Executors of the Master.
So, the question is: how can I execute just a small subset of Maintenance Pipelines on the Master and, at the same time, execute Build Pipelines only on specific Agents?
You can configure the master node to only be used when explicitly named. Just click the master node, go to Configure, and change "Use this node as much as possible" to "Only build jobs with label expressions matching this node".
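With that setting in place, only jobs that explicitly ask for the master's label will land on it. A minimal declarative sketch for one of the maintenance pipelines could look like the following, assuming the master/built-in node carries the label 'master' (adjust the label to whatever your controller actually uses):
pipeline {
    // Explicitly pin this maintenance job to the master node; all other
    // jobs use their own agent labels and never queue for the master.
    agent { label 'master' }
    stages {
        stage('Maintenance') {
            steps {
                echo 'Running a maintenance task on the master node'
            }
        }
    }
}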
I found the solution that perfectly fits my needs here.
To quickly sum up the solution, I was able to exclude all the user builds from the Master and run on it only the Jobs/Pipelines of a specific Jenkins folder (IuA in my case), configuring the Job Restrictions Plugin in the following way:
In order to better understand the logic behind this solution, I recommend you take a look at the link that I posted above.

Jenkins: Run only one build for a job on each node

We have a project where we have several Jenkins jobs:
One type of job that runs delivery (A),
one that does just compilation and unit tests (B)
and
one that runs integration tests, static code analysis et cetera (C).
We run on four Jenkins nodes (master + three slaves), and our jobs are a mix of declarative pipeline jobs and manually clicked in Jenkins-jobs.
We only want to run one integration test build per node at a time. However, we want to run as many deliveries (A) and code quality (B) builds as there are executors.
Until now, the Throttle Concurrent Builds plugin (https://github.com/jenkinsci/throttle-concurrent-builds-plugin) has satisfied our needs. However, this plugin does not support declarative pipeline builds, nor does it seem to be actively updated.
The Lockable resources plugin (https://github.com/jenkinsci/lockable-resources-plugin) seems promising, but we haven't found any way to lock the entire build with a resource-name dynamically set. That is, when we start a C build, we want it to lock "resource_{name of server}".
This plugin allows setting an entire-build lock in the options directive, but we haven't figured out how to evaluate an environment variable there.
Any suggestions would be highly appreciated!
So the workaround on our side was to rewrite the pipeline script from declarative to scripted syntax. Then the Throttle Concurrent Builds plugin works like a charm.
When the bug https://issues.jenkins-ci.org/browse/JENKINS-49173 is fixed, the plugin will work with declarative pipelines as well.
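For reference, once the job is in scripted syntax, the Lockable Resources plugin can also be used with a dynamically built resource name, because env.NODE_NAME is available inside the node block. The following is only a sketch (label and script name are placeholders), and note that the executor is held while the build waits for the lock:
node('integration') {
    // Lock a per-server resource so that only one integration-test
    // build runs against this node's application server at a time.
    lock("resource_${env.NODE_NAME}") {
        stage('Integration tests') {
            sh './run-integration-tests.sh'
        }
    }
}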

How can I ensure that only one of a kind of Jenkins job is run?

I have several integration tests within my Jenkins jobs. They run on several application servers, and I want to make sure that only one integration test job is run at the same time on one application server.
I would need something like a tag or variable within my jobs that creates a group of jobs, and then configure the logic so that within that group, only one job may run at the same time.
Could I use the Exclusion plugin for that? Does anyone have experience with it?
Use the Throttle Concurrent Builds Plugin. It replaces the Locks and Latches plugin, and provides the capability to restrict the number of jobs running for specific labels.
For example: if you create a project category 'Integration Test Server A' and tie jobs to it with a maximum concurrent count of 1, and a second category 'Integration Test Server B' and tie other jobs to it, each category will only run a single concurrent build (assuming you've set a max job count of 1), and the other jobs in that category will queue until the 'lock' has cleared.
Using this method, you don't have to restrict the number of executors available on any specific Jenkins instance, and can easily add further slaves in the future without having to reconfigure all your jobs.
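If some of those jobs are scripted pipelines, newer versions of the plugin also expose a throttle step that wraps the part of the build you want counted against a category; the category still has to be defined first in the global Throttle configuration. A rough sketch, with placeholder label and script names:
// Sketch: everything inside throttle() counts against the named
// category, which is capped at one concurrent build in the global
// Throttle Concurrent Builds configuration.
throttle(['Integration Test Server A']) {
    node('integration-a') {
        stage('Integration tests') {
            sh './run-integration-tests.sh'
        }
    }
}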
If I understand you right, you have a pool of application servers and it doesn't matter on which server your tests run; they only need to be the only test running on that server.
I haven't seen a plugin that can do that. However, you can easily get around it. You need to configure a slave for each application server (1 slave = 1 app server), assign the same label to all slaves, and give every slave only one executor. Then you assign the jobs that run the integration tests to run on that label. Jenkins will then assign the jobs to the next available slave (or node) that has that label.
Bear in mind that you can have more than one slave running on the same piece of hardware, and even a master and a slave can coexist on the same server.
Did you check the below parameter under Jenkins -> Manage Jenkins -> Configure System?
# of executors
The above parameter helps you restrict the number of jobs to be executed at a time.
A Jenkins executor is one of the basic building blocks which allow a build to run on a node/agent (e.g. build server). Think of an executor as a single “process ID”, or as the basic unit of resource that Jenkins executes on your machine to run a build. Please see Jenkins Terminology for more details regarding executors, nodes/agents, as well as other foundational pieces of Jenkins.
You can find information on how to set the number of Jenkins executors for a given agent on the Remoting Best Practices page, section Number of executors.
Source - https://support.cloudbees.com/hc/en-us/articles/216456477-What-is-a-Jenkins-Executor-and-how-can-I-best-utilize-my-executors

Build Pipeline Plugin & Manual Deployment With Parameter

Let's say I have this situation. I have three jobs. Job number one has two manually triggered downstream jobs (deploy to test, deploy to prod for example). Something like this:
I want the deployment jobs (test-job-2, test-job-3) to require a password before they are triggered. How can I solve this with Jenkins?
The only option supported by the Build Pipeline Plugin right now is to have a manually triggered downstream job. But this job starts right after you click on it. I would like to require the user to manually enter some parameters (a password, for example).
Is there some workaround? I was thinking of using the Promoted Builds Plugin, so the deployment jobs would run in a "dry run mode" - just checking that we have ssh access to the server and some other basic stuff. Then, in order to deploy, you would have to promote the build.
This approach isn't very nice though. Build pipeline and promoted builds plugins don't interact with each other very well.
This is not exactly what you want, but I guess it would somehow solve your problem.
View Job Filters
Using this feature in tandem with a security feature such as standard matrix-based security can help you create a view that will show different jobs depending on who is logged in.
I use different Jenkins servers to "complete the pipeline", using the Build Publisher job to publish the last part of the pipeline to the other Jenkins; I then pick it up from there. The operations team has access to the "prod" Jenkins system, and developers have access to the "dev" system.
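For completeness, if the deployment steps are ever migrated to Pipeline jobs, a manual gate that asks for a value before deploying can be expressed with the input step. This is only a sketch and assumes the password parameter type is available on your instance; note that the step merely collects the value, so the deploy script itself would still have to verify it:
pipeline {
    agent any
    stages {
        stage('Deploy to prod') {
            steps {
                script {
                    // Hold the build until a user confirms and types a
                    // password (assumes the password parameter type).
                    def deployPassword = input(
                        message: 'Deploy to production?',
                        parameters: [password(name: 'DEPLOY_PASSWORD',
                                              description: 'Deployment password')]
                    )
                    // Placeholder deployment step; pass the entered value
                    // on to whatever actually checks it.
                    echo 'Password received, starting deployment'
                }
            }
        }
    }
}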
