I am new to Jenkins and have configured master-slave nodes as shown below, but I need help configuring the number of executors on each of the slave nodes.
Currently, I have configured 100 executors on each slave node.
How many executors can I configure on each slave node, and what factors (RAM, CPU, etc.) need to be taken into consideration when increasing the number of executors?
What is the maximum number of executors I can configure on each server?
Well, it totally depends on your usage. There are multiple factors, such as how much CPU and memory is available, how the builds execute, what kind of builds they are, how frequently they run, etc. But I can clearly say that 100 is too big a number. I would suggest starting with 20 executors (if builds run frequently and you have a fair amount of CPU and memory), observing whether that number causes any issues, and then increasing it accordingly.
Here is a very nice article on this; check it out: https://www.avantica.com/blog/jenkins-balance-load-master-slave-setup#:~:text=Jobs%20are%20built%20using%20executors,to%20build%20two%20different%20tasks.
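To make that reasoning a bit more concrete, here is a rough Python sketch of the sizing heuristic; the per-build RAM/CPU figures and the headroom factor are assumptions you would replace with measurements from your own builds:

```python
def suggested_executors(total_ram_gb, cores,
                        ram_per_build_gb=2.0, cores_per_build=1.0, headroom=0.25):
    """Rough heuristic: each executor needs enough CPU and RAM for one typical
    build, with some headroom left for the agent process and the OS."""
    usable_ram = total_ram_gb * (1 - headroom)
    usable_cores = cores * (1 - headroom)
    by_ram = int(usable_ram // ram_per_build_gb)
    by_cpu = int(usable_cores // cores_per_build)
    return max(1, min(by_ram, by_cpu))   # the scarcer resource wins

# Example: a 16 GB / 8-core slave running builds that need ~2 GB and ~1 core each
print(suggested_executors(total_ram_gb=16, cores=8))   # -> 6
```

With numbers in that range, 100 executors per slave would only make sense on an unusually large machine, which matches the advice to start much lower and observe.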
Related
Is there a way to horizontally scale Jenkins? We have a requirement where more and more jobs will be added and built every day. I understand we can distribute jobs to slaves and the master should only be used for orchestration. However, the master's memory is also limited, and we can't keep adding jobs, since the master has to maintain job details, history, etc.
What if we have some 10,000+ jobs running in parallel daily? Will we be able to scale the Jenkins master?
I have a multi-configuration job that uses a large amount of VMs for testing.
The axes are something like:
30 VM slaves, 5 configurations, 5 different configurations
I would not like to run these sequentially, as the jobs would take forever. However, the default number of simultaneous runs uses up enough resources that I am getting random failures and disconnects.
Is there a way to specify the maximum number of simultaneous tests within this single running job?
I think you have to use a matrix job to trigger the builds of a separate job that does the real build. Then you can use the Throttle Concurrent Builds Plugin to limit the number of parallel executions of the job started by the matrix.
For a multi-project configuration:
First, you need to create a throttle category. In this case, the name is qa-aut, and I am limiting the number of executions to 2 for both concurrent builds and concurrent builds per node. The node will have 4 executors available.
In your job configuration, make sure you don't run the multi-project job sequentially.
Set up build throttling by selecting "Throttle this project as part of one or more categories", "Multi-Project Throttle Category" (qa-aut), and "Throttle Matrix configuration builds". You can leave the rest of the values blank.
Make sure your node/master has enough executors available. In this case, the master will have 4 executors available.
Execute your multi-project job.
Instead of using 4 executors (all of the available ones), you will see that it uses only 2 executors (2 threads), as specified in the category.
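To see why only two configurations run at once even though four executors are free, here is a small Python sketch that mimics the throttle category with a semaphore. It is only an illustration of the behaviour; the real limiting is done by the Throttle Concurrent Builds Plugin, and the job names and sleep time are made up:

```python
import threading
import time

EXECUTORS = 4        # executors available on the node/master
THROTTLE_LIMIT = 2   # maximum concurrent builds allowed by the qa-aut category

throttle = threading.Semaphore(THROTTLE_LIMIT)

def run_configuration(name):
    with throttle:                    # only THROTTLE_LIMIT configurations pass this point at once
        print(f"{name} started")
        time.sleep(1)                 # stand-in for the real build work
        print(f"{name} finished")

print(f"{EXECUTORS} executors available, throttled to {THROTTLE_LIMIT} concurrent builds")
threads = [threading.Thread(target=run_configuration, args=(f"config-{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```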
I am trying to solve a problem with Mesos. I have three build servers for Jenkins, and Jenkins schedules jobs on them through Mesos.
For now, Mesos loads one agent (slave) as hard as possible, but I want it to spread jobs across all agents.
As I see it, it's better to run three jobs on three agents than on one.
Is it possible to randomise job scheduling?
Alternatively, I have this scenario: two large servers and one mini. I want to schedule jobs on the mini by default and, if it doesn't have enough resources, fall back to the large servers. How can I achieve this? Is it possible to set priorities for agents (slaves), to specify on which agent I want a job to run first?
The Mesos plugin for Jenkins attempts to build on the most recently built slave (see this method). This means that once it has built on that machine, as long as the machine still has spare resources available, it will keep scheduling additional jobs on that machine until it is full. Right now it looks like that behaviour isn't optional (I have filed it as a feature request).
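Roughly speaking, the selection behaviour described above can be pictured like this Python sketch (simplified, not the plugin's actual code; the field names are invented for illustration):

```python
def pick_agent(agents, required_cpu, required_mem):
    """Mimic the behaviour described above: among agents with enough spare
    resources, prefer the one that built most recently."""
    candidates = [a for a in agents
                  if a["free_cpu"] >= required_cpu and a["free_mem"] >= required_mem]
    if not candidates:
        return None  # no offer fits; the job stays queued
    return max(candidates, key=lambda a: a["last_build_time"])

agents = [
    {"name": "agent-1", "free_cpu": 4, "free_mem": 8,  "last_build_time": 1700000300},
    {"name": "agent-2", "free_cpu": 8, "free_mem": 16, "last_build_time": 1700000100},
    {"name": "agent-3", "free_cpu": 8, "free_mem": 16, "last_build_time": 1700000000},
]
print(pick_agent(agents, required_cpu=2, required_mem=4)["name"])  # -> agent-1
```

Because the most recently used agent keeps winning as long as it has spare resources, one agent fills up before the others see any work, which is exactly the packing behaviour you are observing.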
Is there any difference between creating two slaves, or one slave with two executors, on the same Windows server?
Yes, there is a difference: It's about memory consumption and effort of maintenance/administration.
Starting a slave on a system starts a (main) process. This process costs (private) main memory to run and connects to the master.
Each executor is a sub-process of the main process.
It is therefore apparent that running two executors on one slave costs less memory in total than running two slaves (with one executor each), as the memory consumption of the main process would be incurred twice:
2 * Main Processes + 2 * Executors > 1 * Main Process + 2 * Executors
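Plugging some assumed numbers into that comparison makes the difference concrete; the per-process figures below are illustrative only, not measured values:

```python
# Illustrative figures only: replace with what your own agent process actually uses.
MAIN_PROCESS_MB = 300    # assumed footprint of one slave's main (agent) process
EXECUTOR_MB = 50         # assumed bookkeeping overhead per executor (build memory excluded)

two_slaves_one_executor_each = 2 * MAIN_PROCESS_MB + 2 * EXECUTOR_MB   # 700 MB
one_slave_two_executors      = 1 * MAIN_PROCESS_MB + 2 * EXECUTOR_MB   # 400 MB

print(two_slaves_one_executor_each, ">", one_slave_two_executors)
```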
Moreover, administrating a slave is somewhat more effort than just an executor: whilst an executor has virtually nothing to worry about, there are numerous things to configure for a slave. Additionally, the capabilities of the two slaves would be the same anyhow (they run on the same OS, as you said), so there is little added value in assigning them different labels.
In short, if there are no other boundary conditions that make me do it differently, I would always prefer running two executors on one slave, as this is easier to administer and saves some memory.
A slave is a "machine". An executor is an "OS process" on the slave.
So ideally we always add executors - they do the work and can run in parallel - and the simple theoretical answer to your question is "2 executors on one slave".
In practice we need to add slaves in several use cases:
We need more resources (more cpu, more memory, more "machines")
We need a different setup (different OSes, different hardware)
We have global resources that would create a conflict for executors on the same machine (e.g., a shared browser for a UI testing process)
Make the decision based on your use case.
One benefit that immediately comes to my mind for running one executor on a given node is preventing conflicts between processes run at the same time.
On the other hand, you could prevent job conflicts using existing Jenkins plugins, e.g. Heavy Job or Build Blocker.
How many remote nodes can Jenkins manage? Are there any limitations/memory issues?
What is more effective:
1) 100 nodes with 1 executor per node, or
2) 5 nodes with 20 executors per node?
Thanks.
As far as I know, there is no limit on the number of nodes one can have, although your system might feel like saying "enough is enough!" You can run into issues such as the number of processes per user (we hit this recently, not with Jenkins but with another application: RAM and disk space were fine, but the system stopped responding and we started getting "cannot fork()" errors), the total number of open files, etc. A few such issues might still be configurable, but raising the limits may not be allowed or feasible.
If the resource (in your case, nodes) is not a constraint, which process wouldn't like to run wild? :) In practice, you generally won't have the flexibility to opt for the first option. In the second case, where you have 5 nodes with 20 executors each, all you have to make sure of is not to tie jobs to a particular node unless you have a compelling reason.
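If you want to see where those OS-level ceilings sit on a Linux node (the fork() and open-file limits mentioned above), a quick check with Python's standard `resource` module might look like this (Unix-only):

```python
import resource

def show_limit(name, res):
    soft, hard = resource.getrlimit(res)
    fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else v
    print(f"{name}: soft={fmt(soft)}, hard={fmt(hard)}")

# Limits most likely to bite when a node runs many executors / build processes
show_limit("max user processes (fork limit)", resource.RLIMIT_NPROC)
show_limit("max open files", resource.RLIMIT_NOFILE)
```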
Some slaves are faster, while others are slower. Some slaves are closer (network-wise) to the master; others are far away. So doing good build distribution is a challenge. Currently, Jenkins employs the following strategy:
If a project is configured to stick to one computer, that's always honored.
Jenkins tries to build a project on the same computer it was previously built on.
Jenkins tries to move long builds to slaves, because the amount of network interaction between a master and a slave tends to be logarithmic in the duration of a build (in other words, even if project A takes twice as long to build as project B, it won't require double the network transfer). So this strategy reduces the network overhead.
You should also have a look at these links:
https://wiki.jenkins-ci.org/display/JENKINS/Least+Load+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin