Query about Jenkins in Linux - jenkins

I am having a problem with executors in Jenkins.
Can anyone please tell me what an executor in Jenkins is?
Also, please explain how it is used in practice.

From: Jenkins User Documentation Home - Glossary
Executor: A slot for execution of work defined by a Pipeline or Project on a Node. A Node may have zero or more Executors configured which corresponds to how many concurrent Projects or Pipelines are able to execute on that Node.
An Executor does "the work" of executing the job steps. In our configuration, we have many nodes, each one corresponding to a VM host / server. We configure each node with one executor per core. That lets us run one job per core, which is generally a good performance balance: it gives us the ability to run n jobs in parallel on an n-core VM. There are no hard rules about the ratio; it really depends on what your jobs do and where the performance bottlenecks are.
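If you prefer to script that setting rather than use the node configuration page, a minimal Script Console sketch (assuming admin access via Manage Jenkins -> Script Console; changing it through the UI works just as well):

    // Hedged sketch: set the built-in node's executor count to the host's core count.
    import jenkins.model.Jenkins

    int cores = Runtime.getRuntime().availableProcessors()
    Jenkins.instance.setNumExecutors(cores)
    Jenkins.instance.save()

For an agent, the same "# of executors" field is on that node's configuration page.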

Related

How to run Jenkins builds alternately on agent nodes?

Let's say I have a job A and also an agent configured. I want to run build 1 of job A on the master and build 2 of job A on the agent node.
Is there an option to achieve that?
OR
Is there a way for my job to look at the controller and, if it already finds a build running there, start the next build on the agent?
Are you intending to run in parallel or just to alternate? (It's not a good idea to run jobs on the master; you could instead configure a node that runs on the same host as the "master".) It seems you want to run in parallel and have restricted yourself to one executor each on the master and the agent (you can have more, in which case any advice may be moot).
Nevertheless, Jenkins' allocation of queued jobs to executors is "sticky": it tries to run a job where it last ran, unless that node is unavailable. This can lead to some nodes being overloaded, so the M, A, M, A alternating pattern is unnatural for Jenkins.
There are plugins that might help, such as Least Load and Scoring Load Balancer, but they may not give you strict alternation.
Perhaps an approach would be to restrict your job with a label and add a post-build Groovy step that, on success, moves the label to the other node for the next run; or use two labels and have the job modify its own label assignment to point at the other node.
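As a rough illustration of that label-moving idea (a hedged sketch only, e.g. via the Groovy Postbuild plugin or a system Groovy step; it is shown between two agents, since the built-in node is configured differently, and the node names 'agent-1'/'agent-2' and the label 'job-a-next' are made up for the example):

    // Hedged sketch: move the scheduling label to the other node after a successful build.
    import jenkins.model.Jenkins

    def label = 'job-a-next'                        // the job is restricted to this label
    def a = Jenkins.get().getNode('agent-1')        // hypothetical node names
    def b = Jenkins.get().getNode('agent-2')

    // Whichever node currently carries the label gives it up; the other one receives it,
    // so the next build is scheduled on the other node.
    def (from, to) = a.labelString.contains(label) ? [a, b] : [b, a]
    from.setLabelString(from.labelString.replace(label, '').trim())
    to.setLabelString((to.labelString + ' ' + label).trim())
    Jenkins.get().save()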

Worker node doesn't trigger Jenkins job in queue even when available

This question may look similar to other questions, but I have tried everything and nothing worked for me; that's why I'm asking a specific question for my case.
I am running Jenkins jobs using the Pull Request Builder in a master - worker setup. I have 2 workers with 2 executors each. The master doesn't have any executors. I have 2 freestyle jobs, A and B. My plan is to run jobs A and B concurrently (whenever a PR is opened/modified) on a worker node, but I cannot run job A/B concurrently on a node. Currently, by default, job A is tied to one worker node and job B is tied to the other worker node. All other jobs sit in the queue, which delays my test execution.
I looked at different plugins - Node and Label Parameter, distributed builds, job restrictions. I tried labelling the jobs and nodes, but the jobs didn't trigger. So I'm not sure what the problem is, as I don't see any errors in the logs; maybe I'm not using the plugins properly. Can someone please let me know a good way of dealing with my situation?

Jenkins - How to reserve an executor for (a) specific job(s)

We have a Jenkins server with 8 executors and 20 jobs. 15 of those jobs take approximately 2 hours to finish while the remaining 5 take only 15 minutes. I would like to reserve 1 executor (or 2) to run those 5 small jobs only and restrict other jobs to run on the other executors. Note: I don't have any slaves, just 8 executors on master Jenkins process.
I'm new to Jenkins, so I just wonder: is there any way I can do that? Thank you.
As I understand it, Kiddo uses the master with 8 executors. What you can do is add a new slave which runs on the master host; let's call it slave-master. That is, you will have the master with 6 executors, with usage set to use it as much as possible, and then slave-master, with usage restricted to only the short builds. So on your server you will have two Jenkins processes running: one is the Jenkins master itself, and the other is the slave-master agent.
For info on how to connect slaves, go to https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
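If you want to script the creation of that slave-master node instead of clicking through the nodes UI, a rough Script Console sketch (the node name, remote directory and label below are assumptions; the executor count and the usage mode are then set on the node's configuration page):

    // Hedged sketch: create an inbound (JNLP) agent intended to run on the same host as the master.
    import hudson.slaves.DumbSlave
    import hudson.slaves.JNLPLauncher
    import jenkins.model.Jenkins

    def node = new DumbSlave('slave-master', '/var/jenkins/slave-master', new JNLPLauncher())
    node.setLabelString('short-jobs')   // the 5 quick jobs would be restricted to this label
    Jenkins.get().addNode(node)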
Adding to @StephenKing's answer, you also have to specify the label name in each job's configuration (the "Restrict where this project can be run" option).
I'm a bit late, but I think it would be much easier to restrict how many concurrent "slow" jobs can run than to try to reserve executors. This is simple to do with the Lockable Resources plugin: https://wiki.jenkins.io/display/JENKINS/Lockable+Resources+Plugin
Simply add as many resources as the number of slow jobs you want to allow to run at once (6 or 7) and give them all the same label. Modify the job configurations to lock a resource (by label, with quantity 1) before the job can execute. If all the resources are already locked, the job will wait until one is freed.
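For Pipeline jobs, the same idea looks roughly like this with the plugin's lock step (a hedged sketch; the label 'slow-job-slot' and the script name are example names, and the resources carrying that label must already be defined in the Lockable Resources section of the global configuration):

    // Hedged sketch: at most as many of these blocks run concurrently as there are
    // resources labelled 'slow-job-slot'; the rest wait for a resource to be freed.
    pipeline {
        agent any
        stages {
            stage('Slow integration tests') {
                steps {
                    lock(label: 'slow-job-slot', quantity: 1) {
                        sh './run-slow-tests.sh'   // hypothetical test script
                    }
                }
            }
        }
    }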
In the slave configuration, you can set the Usage mode to "Only build jobs with label expressions matching this node".
Then, only jobs matching a given label (e.g. job-group-whatever) will be executed on this slave.
I had the same issue. I installed multiple agents on the same slave machine and it works fine.
The nodes' remote root directories should be different.
I run each agent as a Windows service.

How to configure Jenkins for distributed load using multiple JMeter servers

I use JMeter to generate a huge load against my web server. Some slave machines act as JMeter servers, and another one acts as the JMeter master that coordinates the load and collects statistics from the slaves.
Now I'm trying to integrate this system into CI (Jenkins).
Here is how I do it now. I have two separate Jenkins jobs: one of them prepares all the slaves by running jmeter-server, and the other one runs the JMeter master itself. All is fine with the second part: I successfully generate traffic and collect statistics. The issue is with the first job. I have a huge set of slaves that can be rebooted at any time, so I can't run the job that starts jmeter-server once and forget about it; I need to run it every time before the JMeter master.
But in this case, on some machines (those that were not rebooted) I end up with multiple copies of the Java process (duplicate jmeter-server instances).
So, I'm looking for a mechanism to start jmeter-server on slave nodes in a proper way.
Any ideas appreciated.
Thank you in advance!
Read this:
https://dzone.com/articles/distributed-performance
It combines:
JMeter
Maven Lazery JMeter plugin
Jenkins
All you have to do for the JMeter slaves is start them from Jenkins using jmeter-server.sh; you might want to tweak the port if you have two slaves on the same host.
Then, from the controller, you reference those host machines (in this case the default port is used):
remote_hosts=test-server-1.nerdability.com,test-server-2.nerdability.com,test-server-3.nerdability.com
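A rough Jenkinsfile-style sketch of the "prepare the slaves" job (everything here is an assumption for illustration: the agent label 'jmeter-slave', the install path, and the log file; JENKINS_NODE_COOKIE=dontKillMe is the usual trick to keep Jenkins from killing the background jmeter-server when the build finishes):

    // Hedged sketch: restart jmeter-server on a JMeter slave before the master job runs.
    pipeline {
        agent { label 'jmeter-slave' }   // hypothetical label; run once per slave (e.g. in parallel)
        stages {
            stage('Restart jmeter-server') {
                steps {
                    sh '''
                        # Kill any stale jmeter-server left over from a previous run, then start a fresh one.
                        pkill -f jmeter-server || true
                        JENKINS_NODE_COOKIE=dontKillMe nohup /opt/jmeter/bin/jmeter-server.sh > jmeter-server.log 2>&1 &
                    '''
                }
            }
        }
    }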

How can I ensure that only one of a kind of Jenkins job is run?

I have several integration tests within my Jenkins jobs. They run on several application servers, and I want to make sure that only one integration test job is run at the same time on one application server.
I would need something like a tag or variable within my jobs that creates a group of jobs, and then a way to configure that within this group only one job may run at a time.
Could I use the Exclusion plugin for that? Does anyone have experience with it?
Use the Throttle Concurrent Builds Plugin. It replaces the Locks and Latches plugin, and provides the capability to restrict the number of jobs running for specific labels.
For example: you create a project category 'Integration Test Server A' and tie jobs to it with a maximum concurrent count of 1, and a second category 'Integration Test Server B' and tie other jobs to it. Each category will then only run a single concurrent build (assuming you've set a max job count of 1), and the other jobs in that category will queue until the 'lock' has cleared.
Using this method, you don't have to restrict the number of executors available on any specific Jenkins instance, and can easily add further slaves in the future without having to reconfigure all your jobs.
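If you are using Pipeline jobs rather than freestyle jobs, the plugin also provides a throttle step; a hedged sketch (the category 'Integration Test Server A' is assumed to be defined, with a maximum concurrent count of 1, in the plugin's global configuration, and the script name is hypothetical):

    // Hedged sketch using the Throttle Concurrent Builds 'throttle' step.
    throttle(['Integration Test Server A']) {
        node {
            // Only one build in this category runs at a time; the rest wait in the queue.
            sh './run-integration-tests.sh'   // hypothetical test script
        }
    }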
If I understand you right, you have a pool of application servers and it doesn't matter which server your tests run on; each test just needs to be the only test on that server.
I haven't seen a plugin that can do exactly that. However, you can easily work around it. You configure a slave for each application server (1 slave = 1 app server), assign the same label to all of the slaves, and give every slave only one executor. Then you assign the jobs that run the integration tests to that label. Jenkins will then assign the jobs to the next available slave (or node) that has that label.
Bear in mind that you can have more than one slave running on the same piece of hardware, and even a master and a slave can coexist on the same server.
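In Pipeline form, tying the test jobs to that shared label looks roughly like this (a hedged sketch; the label 'app-server' and the script name are assumptions, and each slave is assumed to have a single executor as described above):

    // Hedged sketch: with one executor per app-server slave and all slaves sharing the
    // 'app-server' label, at most one such build runs on any given server at a time.
    pipeline {
        agent { label 'app-server' }   // hypothetical shared label
        stages {
            stage('Integration tests') {
                steps {
                    sh './run-integration-tests.sh'   // hypothetical test script
                }
            }
        }
    }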
Did you check the parameter below under Jenkins -> Manage Jenkins -> Configure System?
# of executors
This parameter lets you restrict the number of jobs that can execute at a time.
A Jenkins executor is one of the basic building blocks which allow a build to run on a node/agent (e.g. build server). Think of an executor as a single “process ID”, or as the basic unit of resource that Jenkins executes on your machine to run a build. Please see Jenkins Terminology for more details regarding executors, nodes/agents, as well as other foundational pieces of Jenkins.
You can find information on how to set the number of Jenkins executors for a given agent on the Remoting Best Practices page, section Number of executors.
Source - https://support.cloudbees.com/hc/en-us/articles/216456477-What-is-a-Jenkins-Executor-and-how-can-I-best-utilize-my-executors
