Assign "cost" to a Jenkins slave? - jenkins

I have a Jenkins master setup with several slaves in our primary data center and one slave in our DR/BC (disaster recovery/business continuity) data center.
I would like jobs to run on the DR slave regularly to ensure it stays up to date with the required software and doesn't get stale, but since our DR center is geographically distant from the resources used in the builds & tests (SAN, DB, etc.) jobs take 4x - 10x longer to run. This is fine for a DR scenario, but painful for everyday life.
It appears that Jenkins sorts the slaves alphabetically for selecting which to run a job on, which is unfortunate because our machine naming convention is based on datacenter location, and the DR slave is always picked first.
Is there a way to specify how Jenkins picks a slave? Or a way to assign a "cost" (like a routing cost) to a slave, so that it is picked less often?

The solution I have settled on:
Configure the DR slave's Availability as "Take this slave on-line when in demand and off-line when idle".
Create a "BuildAll" job that launches all of the integration builds at the same time.
Schedule BuildAll to run repeatedly at 3am (0-10 3 * * *, i.e. once a minute from 03:00 to 03:10).
This forces the DR slave to come online and run random jobs several times, which should show whether that slave has fallen behind in any required software, patches, etc. A rough pipeline equivalent is sketched below.
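For reference, a minimal declarative-pipeline sketch of such a BuildAll job. The label dr-slave and the downstream job names integration-a/integration-b are hypothetical placeholders, not names from the original setup:

pipeline {
    agent { label 'dr-slave' }           // hypothetical label on the DR machine
    triggers { cron('0-10 3 * * *') }    // once a minute from 03:00 to 03:10
    stages {
        stage('BuildAll') {
            steps {
                // fire the integration builds without waiting for results
                build job: 'integration-a', wait: false
                build job: 'integration-b', wait: false
            }
        }
    }
}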

Related

Only allow one job on a machine

The way we're using Jenkins requires us to have two nodes defined for each machine. One Jenkins node runs as a normal user (called Normal), and the other runs as the administrator (called Admin). So they show up as two separate nodes, even though they exist on the same slave machine.
But, we're running into a concurrency problem. Because our job switches between the two nodes, there is a possibility of another job (Job B) being assigned to (for example) the Normal node, while the Admin node is working on its part of (e.g.) Job A.
Is there a way to tell Jenkins that if either the Normal node or the Admin node of a machine is being used, then it should NOT give the other node some other job?
To elaborate on this question: we have a test suite that we currently run serially. All of our Jenkins masters have multiple slaves, so naturally we would like to take advantage of parallelization so the suite doesn't spend 2 hours using one machine while the others sit idle. So it's not really a matter of ensuring only one job runs at once; it really is a matter of telling Jenkins not to use a node when its partner node is busy.
The issue is not related to two nodes on the same machine or one privileged or not; it's a matter of blocking one job from running while the other is still running.
I trust you are using labels to restrict which jobs can run on which nodes.
You can use the Build Blocker plugin to block the job from running while the others are running. There are other plugin options that may work for you as well.
You can also use the Parameterized Trigger plugin to in-line the execution of the other job; it can run as a build step or a post-build step.
You can also restrict the number of executors on a given node via ${JENKINS_URL}/computer/myNode/configure | # of executors, so you don't run multiple jobs on the same node if that's an issue.
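For the in-lining option, a rough pipeline-syntax equivalent looks like this (the downstream job name and parameter are hypothetical):

// trigger another job in-line and block until it finishes
build job: 'admin-part',
      parameters: [string(name: 'TARGET_NODE', value: 'myAdminNode')],
      wait: true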
Here's the way I solved this problem:
Set the number of executors on each slave node to 1.
Force my job to take an executor for the whole length of the job.
Specifically, in the Groovy script that we use for all our jobs, at the very top, after we find which two nodes (admin and normal, running on the same slave) we need, we use the following:
node(myNormalNode)
{
    // all the rest of the job, including:
    node(myAdminNode)
    {
        // other commands for the admin node
    }
    // back to commands for the normal node
    node('master')
    {
        // code to run on master
    }
    // and so forth
}
This prevents Jenkins from assigning any other jobs to this computer until the first job is done.

Jenkins master and slave node configuration

I am new to Jenkins and have configured master-slave nodes, but I need help configuring the number of executors on each slave node.
Currently, I have configured 100 executors on each slave node.
How many executors can I configure on each slave node, and what factors (memory, CPU, etc.) do I need to consider when increasing the number of executors?
What is the maximum number of executors I can configure on each server?
Well, it totally depends on your usage. There are multiple factors, such as how much CPU and memory is available, what kinds of builds will run and how they execute, how frequently these builds should run, etc. But I can clearly say that 100 is too big a number. I would suggest going with 20 executors first (if builds run frequently and the machine has a fair amount of CPU and memory), observing whether that number causes any issues, and then increasing it accordingly.
Here is a very nice article on this: https://www.avantica.com/blog/jenkins-balance-load-master-slave-setup
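If you'd rather not click through each node's configure page, a minimal Script Console sketch for setting the executor count; the node name slave-01 is a hypothetical placeholder:

import jenkins.model.Jenkins

// set the executor count on one node and persist the change
def node = Jenkins.instance.getNode('slave-01')   // hypothetical node name
node.setNumExecutors(20)
Jenkins.instance.save()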

Jenkins grouping of nodes via label on demand

I have a scenario where I have N numbers of nodes and N number of tests, and these tests will be distributed to the nodes. My nodes have a label Windows.
Here's an example:
I have a pipeline job that manages the distribution of the tests to the nodes. I set my pipeline job to run 10 tests on 10 VMs having the label Windows, and it runs smoothly. However, one of my requirements is to run that pipeline job concurrently. The problem: if the first run of the job has 10 tests on VMs 1-10, and another run has 5 tests meant for VMs 11-15, then, because both runs use the Windows label, Jenkins might assign tests meant for VMs 11-15 to VMs 1-10, or vice versa.
The solution I came up with is to dynamically change the label of the VMs from one of the jobs to a unique label that will only be used for that job. Unfortunately, I still don't know how to do that.
Basically, I just want to logically group my nodes via label on demand in my pipeline script.
I searched all over the internet, yet I still wasn't able to find a solution that fits my needs.
Kindly help me with this. Also, I am open to using a different approach.
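For illustration, the relabeling idea could look roughly like this as a Script Console / system Groovy snippet. The node names are hypothetical, and it assumes permission to modify node configuration:

import jenkins.model.Jenkins

// append a unique, per-run label to each reserved node
def uniqueLabel = "win-run-${System.currentTimeMillis()}"
['vm-11', 'vm-12', 'vm-13', 'vm-14', 'vm-15'].each { name ->
    def node = Jenkins.instance.getNode(name)      // hypothetical node names
    node.setLabelString("${node.labelString} ${uniqueLabel}".trim())
}
Jenkins.instance.save()

// tests in this run can now target node(uniqueLabel) exclusively;
// remember to strip the label again once the run finishes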

Randomise slave load on Mesos

I'm trying to solve a problem with Mesos. I have three build servers for Jenkins, and Jenkins schedules jobs on them through Mesos.
For now, Mesos loads one agent (slave) as hard as possible, but I want it to spread jobs across all agents.
As I see it, it's better to run three jobs on three agents than on one.
Is it possible to randomise job scheduling?
Alternatively, I have this scenario: two large servers and one mini. I want to schedule jobs on the mini by default, and if it doesn't have enough resources, proceed to the large servers. How can I achieve this goal? Is it possible to set a priority for agents (slaves) to specify on which agent I want a job to run first?
The Mesos plugin for Jenkins attempts to build on the most recently built slave (see this method). This means that once it builds on a machine, as long as that machine still has spare resources available, it will schedule additional jobs on that machine until it is full. Right now it looks like that behavior isn't configurable (I have filed it as a feature request).

How many Remote Nodes can Jenkins manage

How many remote nodes can Jenkins manage? Are there any limitations/memory issues?
What is more effective:
1) 100 nodes with 1 executor per node?
2) 5 nodes with 20 executors per node?
Thanks.
As far as I know, there is no limit on the number of nodes one can have, although your system might eventually say enough is enough. You can hit limits such as the number of processes per user (we got this issue recently, not with Jenkins but with another application, where RAM and disk space were fine but the system stopped responding and we started getting "cannot fork()" errors) or the total number of open files. A few such limits are configurable, but raising them may not be allowed or feasible.
If resources (in your case, nodes) are not a constraint, which process wouldn't like to run wild? :) In practice, you generally won't have the flexibility to opt for the first option. In the second case, where you have 5 nodes with 20 executors, all you have to make sure of is not to tie jobs to a particular node unless you have a compelling reason.
Some slaves are faster, while others are slow. Some slaves are closer (network-wise) to the master, others are far away. Doing good build distribution is therefore a challenge. Currently, Jenkins employs the following strategy:
If a project is configured to stick to one computer, that's always honored.
Jenkins tries to build a project on the same computer where it was previously built.
Jenkins tries to move long builds to slaves, because the amount of network interaction between a master and a slave tends to be logarithmic to the duration of a build (in other words, even if project A takes twice as long to build as project B, it won't require double the network transfer). So this strategy reduces the network overhead.
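The first rule is the one you can lean on directly: pinning a job to a label overrides the default placement. A minimal pipeline sketch, with a hypothetical label name:

// restrict this build to nodes carrying the 'fast-local' label
node('fast-local') {
    checkout scm
    sh './build.sh'
}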
You should also have a look at these links:
https://wiki.jenkins-ci.org/display/JENKINS/Least+Load+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin