We are running Jenkins with lots of jobs. At the moment these jobs are loosely grouped by using "master jobs", which do nothing but start all jobs of one group. The problem: when one of these master jobs runs, it starts around 10 other jobs at once. Depending on the duration of these jobs and the number of build processors (currently 6), Jenkins is blocked for quite a while (up to an hour). On top of that, these jobs are not really suited to such massive parallelization.
To solve this, I'm looking for a way (a plugin) to group jobs and start them in parallel, while limiting the build processors used by the jobs of one group to a fixed number (e.g. 2). It would then be possible to run a group of jobs that compile Java projects and, in parallel, another group of jobs that installs test databases.
I tried the Build Flow plugin, but it's not really the right fit: you have to split the jobs manually into the sub-groups that run in parallel, and if a job in one sub-group fails, the following jobs of that group are not started.
So, maybe someone knows a Jenkins plugin that fits better? Thanks a lot in advance!
Frank
Throttle Concurrent Builds Plugin
Create a category, e.g. my-group.
Add all the jobs of the group to this category.
Set Maximum Total Concurrent Builds and Maximum Concurrent Builds Per Node.
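For Pipeline jobs, recent versions of the plugin also expose a throttle step. A minimal sketch, assuming the my-group category from above has already been defined under Manage Jenkins -> Configure System:

```groovy
// Scripted Pipeline sketch: everything inside the throttle step counts
// against the concurrency caps of the 'my-group' category, so no more
// than the configured number of builds grab an executor at once.
throttle(['my-group']) {
    node {
        echo 'this build counts against the my-group concurrency limits'
    }
}
```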
I have a multi-configuration job that uses a large number of VMs for testing.
The axes are something like:
30 VM slaves, 5 configurations, 5 different configurations
I would not like to run these sequentially, as the jobs would take forever. However, the default number of simultaneous runs is using up enough resources that I am getting random failures and disconnects.
Is there a way to specify the maximum number of simultaneous tests within this single running job?
I think you have to use the matrix job only to trigger builds of a separate job that does the real build. Then you can use the Throttle Concurrent Builds Plugin to limit the number of parallel executions of the job started by the matrix, as in the sketch below.
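As an illustration, a driver Pipeline could fan out one downstream build per VM. The job name real-build and the TARGET_VM parameter are hypothetical; real-build is assumed to carry the throttle category, so only a limited number of these downstream builds execute at the same time:

```groovy
// Hypothetical driver: trigger 'real-build' once per VM, in parallel.
// The throttle category on 'real-build' caps how many run concurrently;
// the rest simply wait in the queue.
def branches = [:]
for (int i = 1; i <= 30; i++) {
    def vm = "vm-${i}".toString()
    branches[vm] = {
        build job: 'real-build',
              parameters: [string(name: 'TARGET_VM', value: vm)]
    }
}
parallel branches
```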
For a multi-configuration (matrix) project:
First you need to create a throttle category. In this example the category is named qa-aut, and both Maximum Total Concurrent Builds and Maximum Concurrent Builds Per Node are limited to 2. The node will have 4 executors available.
In your job configuration, make sure the matrix configurations are not run sequentially.
Set up throttling builds by selecting "Throttle this project as part of one or more categories", "Multi-Project Throttle Category" (qa-aut) and "Throttle Matrix configuration builds". You can leave the rest of the values blank.
Make sure your node/master has enough executors available; in this example the master has 4.
Execute your multi-configuration job.
Instead of using all 4 available executors, you will see the job using only 2 (2 threads), as specified in the category.
In my newly installed Jenkins, I have four jobs. I can only run two concurrently. If I trigger the build of a third job, it is set in the queue and triggered once one of the first two finishes.
I know my server can handle more than two concurrent jobs at a time. How can I increase this default threshold of two?
If it means anything, these are not build-a-deployable package kind of jobs but environment prep jobs that instantiate various DBs. So the jobs simply invoke a python script on the Jenkins server, which is the same script across multiple jobs but each job invokes it with different input params. The jobs are 100% independent of one another and do not share any resource except the script.
Go to Manage Jenkins --> Configure System and change "# of executors".
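If you prefer to script it, the same setting can be changed from Manage Jenkins -> Script Console. The value 4 below is just an example; size it to what your server can actually handle:

```groovy
// Script Console equivalent of editing '# of executors' on the
// built-in node under Configure System.
import jenkins.model.Jenkins

Jenkins.get().setNumExecutors(4)
Jenkins.get().save()
```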
I'm trying to solve a scheduling problem with Mesos. I have three build servers for Jenkins, and Jenkins schedules jobs on them through Mesos.
At the moment Mesos loads one agent (slave) as hard as possible, but I want it to spread jobs across all agents.
As I see it, it's better to run three jobs on three agents than on one.
Is it possible to randomise job scheduling?
Alternatively, consider this scenario: two large servers and one small one. I want to schedule jobs on the small one by default and fall back to the large servers when it doesn't have enough resources. How can I achieve this? Is it possible to set priorities for agents (slaves), to specify which agent a job should run on first?
The Mesos plugin for Jenkins attempts to build on the most recently built slave (see this method). This means that once it has built on a machine, it will keep scheduling additional jobs on that machine, as long as it still has spare resources, until it is full. Right now this behaviour is not configurable (I have filed it as a feature request).
I have three Jenkins slaves configured to run the same job, allowing only one concurrent run per slave. Each slave is connected to an embedded hardware device that the job targets. The total duration of the job is around 2 hours: the first 1 hour 50 minutes is spent compiling and configuring the slave, and only the last 10 minutes actually use the embedded device. So I am basically looking for something I can lock on for just those last 10 minutes; that would allow us to run multiple concurrent builds on the same slave.
Locks from the Locks and Latches plugin are shared across nodes.
What I am looking for is a node-specific lock.
If you can separate the problematic section from the compilation process, you can create another job to handle the last 10 minutes and call it using the Parameterized Trigger Plugin. That job will run one instance at a time and act as a natural blocker. That way you can allow concurrent executions (and throttling, if needed) on the main job and use the downstream job as a "gate" to the problematic section; a Pipeline sketch of the idea follows.
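The same "gate" idea expressed as a Pipeline, with hypothetical job and parameter names. The compilation runs concurrently; the hardware-bound section is delegated to a downstream job that is configured to forbid concurrent builds, so it naturally serializes access to the device:

```groovy
// Main job: the long compile/configure phase may overlap with other
// builds on the same slave. Only the delegated 10-minute device
// section is serialized, via the non-concurrent 'device-gate' job.
node('embedded') {
    stage('Compile and configure') {
        echo 'about 1h50m of work that can safely overlap'
    }
    stage('Device section') {
        // wait: true blocks this build until the gate job finishes.
        build job: 'device-gate',
              parameters: [string(name: 'SOURCE_NODE', value: env.NODE_NAME)],
              wait: true
    }
}
```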
I want to use a lock in a workflow job in order to prevent jobs from running at the same time on the same node.
I want to use the functionality of the Locks and Latches plugin to control the parallel execution of jobs: when Job A starts building on a specific node, Job B should wait until A is done, and only then run.
How can I achieve that? Or is there another solution (in case locks are not supported in workflow jobs)?
Thank you.
What exactly are you trying to prevent? The easiest way would be to give each node only 1 executor. If you do this, the node will only ever run one job at a time. Note that some fly-weight tasks may still run, but these are generally insignificant (polling the remote SCM repository and the like).
If you just mean within the same workflow, you can use the parallel step to split off parallel sections and then combine the results; see the sketch below.
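A minimal Scripted Pipeline sketch of the parallel step. The lock step here comes from the Lockable Resources plugin (an assumption on my part, since Locks and Latches has no Pipeline support), and shared-resource is an illustrative resource name:

```groovy
// Two branches run in parallel, but the sections that must not
// overlap are serialized by acquiring the same named lock.
node {
    parallel(
        jobA: {
            lock('shared-resource') {
                echo 'work for A that must not overlap with B'
            }
        },
        jobB: {
            lock('shared-resource') {
                echo 'work for B that must not overlap with A'
            }
        }
    )
}
```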