Jenkins - How to install a set of plugins on an agent if I want a slave to do a specific task

In Jenkins, I'm trying to set up Controller and Agent nodes.
I have two questions here:
1. How do I install a set of plugins on an agent?
2. Can I have different agents with different plugins, so that I can assign an agent (based on the plugins installed) to a specific build?

Did you figure this out?
Plugins are installed on the master, but job steps invoke the plugin on the agent (slave) where the job runs. You can have many agents, but if you run a job that invokes a Windows batch command on a Linux node, for example, the job will simply fail.
Use labels on the nodes and on your jobs to restrict where a job can run. Search http://plugins.jenkins.io for "label" for some helpful plugins to manage labels. Prefer capability/feature labels over tying jobs to hostnames.
Generally, a plugin has a global configuration plus job-specific parameters to configure. Node-by-node plugin configuration is unlikely, but that would be plugin-specific in any case.
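To make the label approach concrete, here is a minimal declarative Jenkinsfile sketch; the 'linux && docker' label expression and the build command are only examples, use whatever capability labels you assign to your nodes:
pipeline {
    // Ask for any agent that carries both example capability labels.
    agent { label 'linux && docker' }
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build step
            }
        }
    }
}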

Related

Understanding Jenkins plugins and agent

I was trying to understand Jenkins agents. This page asks to first create a Jenkins Docker agent, but it doesn't say where to execute these steps.
Q1. Should we be executing these steps on the node or machine which we want to designate as the agent?
The next step asks to set up an agent through the Jenkins UI:
Q2. The above is nothing but the Jenkins controller UI, right?
But the above UI does not seem to accept the IP address of the agent node on which we started the Docker agent.
Q3. Does the Jenkins controller automatically discover running agents reachable on the network?
Q4. What exactly are Jenkins plugins in relation to agents? The Jenkins glossary defines a plugin as "an extension to Jenkins functionality provided separately from Jenkins Core", but that does not explain much about its nature or functionality. This page also explains plugin installation and management on the controller, but doesn't explain the exact nature of their functionality.
Q4.1. Do plugins run jobs on agent nodes? For example, does the Android Emulator plugin installed on the controller install and run the Android emulator on an available agent?
Q4.2. If the answer to Q4.1 is yes, does every plugin need a corresponding process to be installed on the agent so that the agent can carry out the functionality specified in the plugin on the controller?
PS: I'm a noob in Jenkins and DevOps overall and am just trying to wrap my head around Jenkins.

Why declarative pipelines need to run on master if there are build executors available?

I'm using the recent Jenkins version 2.286, and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs are waiting in the master queue for a free slot. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
Which means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the master node to get a Jenkinsfile for that job. While this doesn't use an executor on the master, perhaps the Job Restrictions plugin is still blocking it (I haven't used that plugin, so I cannot comment).
Also, certain pipeline actions are delegated back to the master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the master.
Unfortunately this cannot be changed, as the master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
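For the two deploy jobs themselves, a minimal sketch is to pin them to the controller with its self-label (on Jenkins 2.286 that label is typically master; newer versions use built-in, and the deploy script name below is a placeholder):
pipeline {
    // Deploy job pinned to the controller via its self-label;
    // adjust 'master' to 'built-in' on newer Jenkins versions.
    agent { label 'master' }
    stages {
        stage('Deploy Jenkins nodes') {
            steps {
                sh './deploy-nodes.sh'   // placeholder for the real deploy step
            }
        }
    }
}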

Preventing a build from running on the same node if it failed previously in Jenkins

Any plugins or default behavior in Jenkins that I could configure to force a build to be done on a different node if the previous build on a node was a failure?
To answer my own question: it appears there are not many plugins that give you control over which slaves jobs run on, and none that do this.
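A rough scripted-pipeline workaround (not a plugin) is to record the node name yourself and exclude it in the next build's label expression; the 'linux' base label and the use of the build description as storage are assumptions:
def prev = currentBuild.previousBuild
def exclude = ''
// If the previous build failed, exclude the node it recorded in its description.
if (prev != null && prev.result == 'FAILURE' && prev.description) {
    exclude = " && !${prev.description}"
}
node("linux${exclude}") {
    currentBuild.description = env.NODE_NAME   // remember where this build ran
    echo "Building on ${env.NODE_NAME}"
}
Note this sketch assumes node names contain no spaces, since they are used directly in the label expression.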

How to run Jobs in a Multibranch Project sequentially instead of in parallel

I have configured a multibranch pipeline project in Jenkins. This project runs integration tests on all my feature branches (git). Each job in the pipeline project creates an instance of my webapp (starting Tomcat and other dependencies). Because of port binding issues this results in many broken jobs.
Can I throttle the builds in the multibranch pipeline project so that the jobs for each feature branch run sequentially instead of in parallel?
Or is there a more elegant solution?
Edit:
Situation and problem:
I want to have a multibranch pipeline project in Jenkins (because I have many feature branches in git).
The jobs which are created from the multibranch pipeline (one for each feature branch in git) run in parallel.
SCM polling happens at midnight (commits on x branches are new, so the related jobs start at midnight).
Every job starts one instance of my webapp (and other dependencies), which binds to some ports.
The problem is that many of these jobs can start at midnight. Every job will try to start an instance of my webapp. The first job can start the webapp without any problem. The second job cannot start the webapp because the ports are already taken by the first instance.
I don't want to configure a new port binding for each feature branch in my git repository. I need a solution to throttle the builds in the multibranch pipeline so that only one "feature" job can run at a time.
From what I've read in other answers, the disableConcurrentBuilds option only prevents multiple builds of the same branch.
If you want only one build running at a time, period, go to your node/build executor configuration for the specific VM that your app is running on, drop the number of executors to 1, and configure the node labels so that only jobs from your multibranch pipeline can run on that VM.
My project has strict memory, licensing and storage constraints, so with this setup all the jobs on the master and feature branches start, but only one can run at a time until the executor becomes available.
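A minimal Jenkinsfile matching that setup might look like this (the 'webapp-host' label and test script are assumptions; the single-executor limit itself is configured on the node, not in the pipeline):
pipeline {
    // All branch jobs request the same single-executor node,
    // so they queue up and run one at a time.
    agent { label 'webapp-host' }
    stages {
        stage('Integration tests') {
            steps {
                sh './run-integration-tests.sh'   // placeholder test command
            }
        }
    }
}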
The most elegant solution would be to make your integration tests able to run concurrently.
One approach is to use an embedded Tomcat with a dynamic port; that way each job instance runs its Tomcat on a different port.
This is also a better solution than relying on an external server.
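One rough sketch of the dynamic-port idea is to derive the port from the EXECUTOR_NUMBER variable Jenkins sets for each build slot; the startup script and base port below are made up:
node {
    // Each executor slot gets its own port, so concurrent branch builds
    // on the same machine don't collide (8080 is an arbitrary base port).
    def port = 8080 + (env.EXECUTOR_NUMBER as Integer)
    sh "./start-webapp.sh --port=${port}"   // placeholder startup script
}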
If this is too much work, you can always use the following code in your "jenkinsfile" pipeline:
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // continue your pipeline ...
}
The solution comes from this SO answer.

Jenkins: how to test the slaves

I am creating a list of Jenkins jobs for sanity test of our Jenkins build environment. I want to create layers of jobs. The first layer of jobs will check the environment, e.g. if all slaves are up, the 2nd layer then can check the integration to other tools such as GitHub, TFS, SonarQube, then the 3rd layer can run some typical build projects. This sanity test can also be used to verify the environment after any major changes to the Jenkins servers.
We have about 10 slaves created on two servers, one Windows and one Linux. I know I can create a job to run on a specific slave and therefore test whether that slave is online, but this way I would need to create 10 jobs just to test all the slaves. Is there a better approach to check whether all slaves are online?
One option is to use Jenkins Groovy scripting for a task like this. The Groovy plugin provides the Jenkins Script Console (a useful way to experiment) and the ability to run groovy scripts as build steps. If you're going to use the Script Console for periodic maintenance, you'll also want the Scriptler plugin which allows you to manage the scripts that you run.
From Manage Jenkins -> Script Console, you can write a groovy script that iterates through the slaves and checks whether they are online:
import jenkins.model.Jenkins

// Iterate over all configured agents and report their executor count and status.
for (node in Jenkins.instance.nodes) {
    println "${node.name}, ${node.numExecutors}"
    def computer = node.toComputer()
    println "Online: ${computer?.online}, ${computer?.connectTime} (${computer?.offlineCauseReason})"
}
Once you have the basic checks worked out, you can create either a standalone script in Scriptler, or a special build to run these checks periodically.
It often takes some iteration to figure out the right set of properties to examine. As I describe in another answer, you can write functions to introspect the objects available to scripting. And so, with some trial and error, you can develop a script that performs the checks you want to run.
