Run Job on either node from a pool of dynamic nodes - jenkins

I have many different jobs. Each job requires exclusive access to a physical device during its execution. There are five such identical devices, i.e. each job can use any of the five devices for its execution.
I'd like to be able to trigger 20 jobs and have Jenkins provision five agents and execute five jobs in parallel until the queue is emptied.
I've set up five dynamic agent templates. Each template has the IP of a different physical device as an environment variable, and each template is limited to an Instance Capacity of 1. That should take care of the exclusive access.
First I gave each template the same label, 'my5devices', and set the jobs to use this label:
agent { label 'my5devices' }
Yet Jenkins only ever provisioned one agent; in the queue I could see that all the jobs were waiting for this single dynamic agent to have a free executor.
I observed the same behavior when I gave each template a different label and changed the job to:
agent { label 'device1 || device2 || device3 || device4 || device5' }
It seems Jenkins never provisions more than one agent and therefore only ever runs the jobs one after the other.
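For reference, a minimal sketch of what such a Jenkinsfile could look like (DEVICE_IP stands in for the per-template environment variable mentioned above, and the test command is a placeholder):

pipeline {
    // any of the five agent templates may provision an agent for this job
    agent { label 'my5devices' }
    stages {
        stage('Run on device') {
            steps {
                // DEVICE_IP is injected by the agent template, so each agent
                // talks to exactly one physical device
                sh 'run-device-tests --target "$DEVICE_IP"'
            }
        }
    }
}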

Related

Only allow one job on a machine

The way we're using Jenkins requires us to have two nodes defined for each machine. One Jenkins node runs as a normal user (called Normal), and the other runs as the administrator (called Admin). So they show up as two separate nodes, even though they exist on the same slave machine.
But, we're running into a concurrency problem. Because our job switches between the two nodes, there is a possibility of another job (Job B) being assigned to (for example) the Normal node, while the Admin node is working on its part of (e.g.) Job A.
Is there a way to tell Jenkins that if either the Normal node or the Admin node of a machine is being used, then it should NOT give the other node some other job?
To elaborate on this question--we have a test suite that we currently run serially. All of our Jenkins masters have multiple slaves, so naturally we would like to take advantage of parallelization, so the suite doesn't spend 2 hours using one machine while the others sit idle. So it's not really a matter of ensuring only one job runs at once; it really is a matter of telling Jenkins not to use a node when its partner node is busy.
The issue is not related to two nodes being on the same machine, or to one being privileged and the other not; it's a matter of blocking one job from running while the other is still running.
I trust you are using labels to restrict what jobs can run on what nodes.
You can use the Build Blocker plugin to block the job from running while others are running. There are other plugin options which may work for you as well.
You can also use the Parameterized Trigger plugin to in-line the execution of the other job. It can be run as a build step or a post-build step.
You can also restrict the number of executors on a given node via ${JENKINS_URL}/computer/myNode/configure | # of executors, so you don't run multiple jobs on the same node, if that's an issue.
Here's the way I solved this problem:
Set the number of executors on each slave node to 1.
Force my job to take an executor for the whole length of the job.
Specifically, in the groovy script that we use for all our jobs, at the very top, after we find which two (admin and normal, running on the same slave) nodes we need, we use the following:
node(myNormalNode)
{
    //all the rest of the job, including:
    node(myAdminNode)
    {
        //other commands for the admin node
    }
    //back to commands for the normal node
    node('master')
    {
        //code to run on master
    }
    //and so forth
}
This makes Jenkins not assign any other jobs to this computer until the first job is done.

How to increase maximum concurrent jobs?

In my newly installed Jenkins, I have four jobs. I can only run two concurrently. If I trigger the build of a third job, it is set in the queue and triggered once one of the first two finishes.
I know my server can handle more than two concurrent jobs at a time. How can I increase this default threshold of two?
If it means anything, these are not build-a-deployable package kind of jobs but environment prep jobs that instantiate various DBs. So the jobs simply invoke a python script on the Jenkins server, which is the same script across multiple jobs but each job invokes it with different input params. The jobs are 100% independent of one another and do not share any resource except the script.
Go to Manage Jenkins --> Configure System, then change the "# of executors" setting.
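If you prefer to script it, the same setting can also be changed from the script console (Manage Jenkins --> Script Console); a minimal sketch, with 4 as an example value:

import jenkins.model.Jenkins

// set the number of executors on the built-in (master) node to 4
Jenkins.instance.setNumExecutors(4)
Jenkins.instance.save()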

Deactivate jenkins job while other is running

Is it possible to block a certain jenkins job while another one is running as some sort of condition?
I would like job "A" to be disabled while job "B" is running, and once job "B" is done, then job "A" should be available again.
I have read that it is possible to block upstream jobs when the jobs are part of a flow and the flow is running, but I would like to know if it is possible for 2 completely independent jobs.
Use the Build Blocker Jenkins plugin.
You can block A in job B's config. As long as 'A' is running, 'B' will not run.
You can block both A & B in B's config; thus B will run only if no other As or Bs are running.
Additionally, you can block B in A's config, so they will block each other.
*Job names are case sensitive.
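For illustration, blocking A in B's config means checking "Block build if certain jobs are running" in B's configuration and listing something like the following in the blocking jobs field (one job name or regular expression per line; these names are just examples):

A
A-.*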
You can use the Throttle Concurrent Builds Plugin to do this.
You're going to want to:
Create a category for the two jobs in the Jenkins global configuration.
In each build, check the Throttle Concurrent Builds option.
Choose "Throttle this project as part of one or more categories".
Set Maximum Total Concurrent Builds (and Maximum Concurrent Builds per Node, if applicable) to 1.
Check the category box to mark the job as part of the category.
This will prevent either job from running while the other one runs, so if Job A is running then B won't start, and if Job B is running then A won't start.
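If A and B happen to be Pipeline jobs, the same plugin also offers a throttle step; a rough sketch under that assumption (the category name and shell script are examples):

// only one build across all jobs throttled by this category runs at a time
throttle(['my-exclusive-category']) {
    node {
        // the work that must not overlap with the other job
        sh './do-the-work.sh'
    }
}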

node specific locks on Jenkins

I have around three Jenkins slaves that are configured to run the same job, allowing only one concurrent run on each slave. Each of these slaves is connected to embedded hardware that we run the job on. The total duration of the job is around 2 hours. The first 1 hour 50 minutes are spent just compiling and configuring the slave, and only the last 10 minutes actually use the embedded device. So basically I am looking for something that I can lock on for the last 10 minutes. This would allow us to run multiple concurrent builds on the same slave.
Locks and Latches locks are shared across nodes.
What I am looking for is a node-specific lock.
If you can separate the problematic section from the compilation process, you can just create another job to handle the last 10 minutes and call it using the Parameterized Trigger plugin. This job will run one instance at a time and will act as a native blocker for the run. That way, you can configure concurrent executions and throttling (if needed) on the main job, and create a "gate" to the problematic section.
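If the main job is (or becomes) a Pipeline job, a rough sketch of the same split using the built-in build step instead of the Parameterized Trigger plugin could look like this (label, job and script names are made up):

// the long compile/configure phase may run concurrently on any slave
node('embedded-slaves') {
    sh './compile-and-configure.sh'   // roughly the first 1 hour 50 minutes
    // hand the last ~10 minutes to a separate job that allows only one
    // concurrent run and therefore acts as the "gate" to the hardware
    build job: 'flash-and-test-on-device',
          parameters: [string(name: 'BUILD_DIR', value: env.WORKSPACE)]
}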

Run Jenkins job immediately

I have a very lightweight job that should be executed immediately when it is triggered, rather than waiting hours for current jobs to finish.
As I understand it, a flyweight task is what I want. It will create an ephemeral executor, just for that task.
How can I make a job be run as flyweight?
I have recently had the same problem. My company has a lot of jenkins projects and some have more precedence over others, and we limit the number of executors to only 4.
Therefore, we decided to create some slaves, instead of always building on the master. Create a slave node that only builds your "very lightweight job".
Go to Manage Jenkins -> Manage nodes -> New node -> Dumb slave.
Then configure your slave node to your liking. Now configure the "very lightweight job". Make sure that "This build is parameterized" is checked, then Add parameter -> Node.
Then select the slave node that you just created. There are a lot of configuration options, such as which node to use by default, but I think you can customize that to your liking.
Try out the FlyWeightProject plugin. It is an extension of the Freestyle project type that runs as a flyweight task.
AFAIU the issue is that all your executors are occupied when it comes to running this high priority job.
What about:
Establishing another slave node (in a VM, for instance) with an appropriate number of executors, in case there are more of these high priority jobs
Assigning a label like high-priority to this new node
Restricting where this high priority job can be run to this high-priority label
Assigning a label like long-running to all other nodes
Restricting where all other jobs can be run to this long-running label
Another possibility is to:
Configure the new node's Usage: Only build jobs with label restrictions matching this node
Restrict where the high priority job can be run to the new node
This avoids having to create and assign labels to all jobs, as mentioned above, but is less flexible for future adaptations and extensions.
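For what it's worth, if the high priority job is a Pipeline job, the label restriction from the first variant is just the label argument to node (a sketch; the echo is a placeholder):

// runs only on agents carrying the high-priority label
node('high-priority') {
    echo 'lightweight job starts here without queuing behind the long-running builds'
}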
