I have a multi-configuration job that uses a large number of VMs for testing.
The axes are something like:
30 VM slaves × 5 different configurations
I would prefer not to run these sequentially, as the job would take forever. However, the default number of simultaneous runs uses up enough resources that I am getting random failures and disconnects.
Is there a way to specify the maximum number of simultaneous tests within this single running job?
I think you have to use a matrix job to trigger builds of a separate job that does the real build. Then
you can use the Throttle Concurrent Builds Plugin to limit the number of parallel executions of the job started by the matrix.
For a multi-configuration project:
First, create a throttle category. In this case the category is named qa-aut, and it limits both concurrent builds and concurrent builds per node to 2. The node will have 4 executors available.
In your job configuration, make sure the multi-configuration project does not run sequentially:
Set up build throttling by selecting "Throttle this project as part of one or more categories", the "Multi-Project Throttle Category" (qa-aut), and "Throttle Matrix configuration builds". The rest of the values can be left blank.
Make sure your node/master has enough executors available. In this case, the master has 4 executors available.
Execute your multi-configuration job.
Instead of using all 4 available executors, you will see that it uses only 2 executors (2 threads), as specified in the category.
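If you need the same limit from a scripted Pipeline rather than a freestyle matrix job, the plugin also exposes a throttle step. This is only a sketch: the shell command is a hypothetical placeholder, and it assumes the qa-aut category from the steps above has already been created in Manage Jenkins → Configure System.

```groovy
// Sketch: scripted-pipeline form of the same category limit.
// Assumes a throttle category named 'qa-aut' (max 2 concurrent)
// already exists under Manage Jenkins -> Configure System.
throttle(['qa-aut']) {
    node {
        // hypothetical placeholder for the real test step
        sh './run-qa-tests.sh'
    }
}
```

If more than two builds enter this step at once, the extra ones wait in the queue until a slot in the category frees up.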
Related
Is there a way to globally limit the total number of concurrently running jobs?
I know that I can throttle the number of concurrent runs per job/node, but I need a way to set this globally for all jobs, without having to go through and modify each job.
For example: a total of 100 runs means that no more than 100 builds can be running concurrently.
The number of runs that can execute concurrently on a Jenkins server is the number of executors currently available on the server. Therefore, to limit the number of concurrent executions, you can simply limit the number of executors.
Each static node (agent), including the master itself, can be configured with a specific number of executors, which is the number of jobs that can run concurrently on that agent. If you use a cloud plugin to provision agents dynamically, almost all cloud plugins have built-in support for limiting the number of provisioned instances and setting the number of executors each agent will have.
Therefore, by controlling the available executors, you can limit the total number of concurrently running jobs as needed.
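As a concrete illustration, executor counts for static agents can also be adjusted in bulk from the Jenkins script console. This is a sketch only; the count of 2 is an arbitrary example, and it assumes your agents are ordinary Slave-based nodes (which expose a setter for the executor count).

```groovy
import jenkins.model.Jenkins

// Script-console sketch: cap global concurrency by limiting
// executors on every static agent. The value 2 is an example.
def jenkins = Jenkins.instance
jenkins.nodes.each { agent ->
    // Slave-based agents expose setNumExecutors
    agent.setNumExecutors(2)
}
jenkins.save()
```

With, say, 10 agents at 2 executors each, no more than 20 builds can ever run at once, regardless of per-job settings.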
I would like to maximize my agent usage, and I've got various types of agents and various needs for the jobs.
We have two types of agents:
Virtual machines, which have only one executor
Physical machines, which have five executors
We have three general types of jobs:
Automated user interface tests that interact with the desktop
Product performance (timing) tests
Functional regression tests
Here are the criteria:
The performance tests must have exclusive access to the physical agents
The UI tests are timing-sensitive, so they should also have exclusive access to any agent they run on
The functional regression tests can be run anywhere, on any number of executors
I can make use of the "Job Weight" plugin, which causes a particular build of a job to consume a specific number of executors. I can also make use of the "Throttle Concurrent Builds" plugin, which can limit the number of concurrently running builds per node. However, I can't find a combination that works.
Example 1:
UI tests throttled to one build per agent
Performance tests given a job weight of five
Problem 1:
Functional tests can run on the same agent as a UI test.
Example 2:
UI tests and functional tests throttled to one build per agent (sharing a throttle category)
Performance tests given a weight of five
Problem 2:
Functional tests are now limited to one executor per agent, and therefore do not make full use of the physical agents
Example 3:
Set Performance and UI tests to job weight of five
Problem 3:
UI tests will no longer use the virtual machine agents.
If the "Job Weight" plugin had a "Max" setting (which would just use all of the executors on the agent), that would make this problem go away. I could then set the UI and Performance tests to have a job weight of "Max" and be done with it.
Any suggestions on how to get these criteria to fit together with the current limitations of Jenkins and its plugins?
In the end, this is what we did. All agents were given five executors each.
The job weights were changed around a bit:
Functional tests given a weight of one
Performance tests given a weight of five
UI tests given a weight of five
We then used the throttle plugin and created two categories that throttle according to labels:
1_per_any_agent limits any jobs associated with that category to run only one job on the agent at a time.
1_per_vm_agent limits any jobs associated with that category to run only one job on a VM agent at a time, but isn't limited for physical agents.
We applied the 1_per_any_agent throttle category to the performance and UI tests and applied the 1_per_vm_agent throttle category to the functional tests.
Now, functional tests can have five builds running simultaneously on physical agents, but are limited to only one on virtual agents. They will not run when performance or UI tests are running because those jobs require a weight of five.
In my newly installed Jenkins, I have four jobs. I can only run two concurrently. If I trigger the build of a third job, it is set in the queue and triggered once one of the first two finishes.
I know my server can handle more than two concurrent jobs at a time. How can I increase this default threshold of two?
If it means anything, these are not build-a-deployable package kind of jobs but environment prep jobs that instantiate various DBs. So the jobs simply invoke a python script on the Jenkins server, which is the same script across multiple jobs but each job invokes it with different input params. The jobs are 100% independent of one another and do not share any resource except the script.
Go to Manage Jenkins --> Configure System and change the "# of executors" setting.
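If you prefer, the same setting can be changed from the script console; a minimal sketch (the value 5 is arbitrary):

```groovy
import jenkins.model.Jenkins

// Script-console sketch: raise the master's executor count,
// equivalent to the "# of executors" field in Configure System.
Jenkins.instance.setNumExecutors(5)
Jenkins.instance.save()
```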
I have three Jenkins slaves that are configured to run the same job, allowing only one concurrent run on each slave. Each of these slaves is connected to an embedded device that we run the job on. The total duration of the job is around 2 hours: the first 1 hour 50 minutes is spent compiling and configuring the slave, and only the last 10 minutes use the embedded device. So I am looking for something I can lock on for just those last 10 minutes, which would allow us to run multiple concurrent builds on the same slave.
Locks and Latches locks are shared across nodes.
What I am looking for is a node-specific lock.
If you can separate the problematic section from the compilation process, you can create another job to handle the last 10 minutes and call it using the Parameterized Trigger Plugin. That job will run one instance at a time and will act as a natural blocker for the run. That way, you can configure concurrent execution and throttling (if needed) on the main job, and create a "gate" around the problematic section.
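In Pipeline terms, the suggested "gate" pattern looks roughly like the following sketch. The job name flash-device, the make command, and the ARTIFACT parameter are all hypothetical; the downstream job would be configured to disallow concurrent builds so that it serializes access to the device.

```groovy
// Sketch of the "gate" pattern in a scripted pipeline.
// 'flash-device' is a hypothetical downstream job configured to
// NOT run concurrently, so it serializes device access.
node {
    // ~1h50m of compilation; safe to run concurrently on the slave
    sh 'make all'   // hypothetical build step

    // The last 10 minutes: hand off to the single-instance job
    build job: 'flash-device',
          parameters: [string(name: 'ARTIFACT', value: env.BUILD_TAG)],
          wait: true
}
```

The main job can then allow as many concurrent builds as the slave has executors, while the short device-bound step still runs strictly one at a time.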
We are running Jenkins with lots of jobs. At the moment these jobs are loosely grouped using "master jobs", which do nothing but start all jobs of one group. However, when one of these master jobs runs, it starts around 10 other jobs at once. Depending on the duration of these jobs and the number of build processors (currently 6), Jenkins is blocked for a long time (up to an hour). Moreover, these jobs are not really suited to such massive parallelization.
To solve this, I'm looking for a way (a plugin) to group some jobs and start them in parallel, but limit the build processors used by that group to a fixed number (e.g. 2). It would then be possible to run one group of jobs that compile Java projects in parallel with another group that installs test databases.
I tried the Build Flow Plugin, but it isn't quite right: you must manually separate the jobs into the sub-groups that run in parallel, and if a job in one sub-group fails, the following jobs of that group are not started.
So, maybe someone knows a Jenkins plugin that fits better? Thanks a lot in advance!
Frank
Throttle Concurrent Builds Plugin
Create a category, e.g. my-group.
Add all the jobs to this group.
Set "Maximum Total Concurrent Builds" and "Maximum Concurrent Builds Per Node".
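A master job driving such a group might look like this as a scripted pipeline. The job names are hypothetical, and each downstream job is assumed to carry the my-group throttle category, so only the configured number actually run at once while the rest queue:

```groovy
// Sketch: a "master job" that fires the whole group in parallel.
// The 'my-group' throttle category on the downstream jobs limits
// how many run simultaneously; the others wait in the queue.
def groupJobs = ['compile-java-a', 'compile-java-b', 'install-test-db']  // hypothetical names
def branches = [:]
groupJobs.each { jobName ->
    branches[jobName] = {
        // propagate: false keeps one failure from aborting the others,
        // addressing the Build Flow issue described above
        build job: jobName, wait: true, propagate: false
    }
}
parallel branches
```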