I have a Jenkins multibranch pipeline that uses scripted pipeline Jenkinsfiles.
In earlier versions (pre 2.0, I believe) of the Throttle Concurrent Builds plugin, I was able to set my job properties to throttle concurrent builds that used identical parameters. In version 2.0 and later of this plugin, that functionality looks to be lost. As far as I can tell, I can only throttle jobs based on static categories that are defined globally. I need a large number of jobs to run concurrently, but only if no job with the same parameters is already running on that branch.
How can I throttle jobs with identical parameters in scripted multibranch pipelines? Do I have to use the locks functionality to create a mutex? Can the locks handle ~500 unique locks at a time? It looks like these locks are listed in the global configuration, which will be very noisy.
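One way to get per-parameter mutual exclusion is the Lockable Resources plugin's lock step with a resource name derived from the branch and parameters. A minimal sketch, assuming a parameter named CONFIG and a placeholder build script (both hypothetical; adapt the key to your own parameter set):

```groovy
// Scripted pipeline sketch: builds that share the same branch + parameter
// combination serialize on the same lock; different combinations run freely.
def mutexKey = "${env.BRANCH_NAME}-${params.CONFIG}"

lock(resource: mutexKey) {
    node {
        checkout scm
        sh './run-tests.sh'   // placeholder for the real build steps
    }
}
```

Resources named this way are created on demand when the lock step runs, so you should not have to declare ~500 of them up front in the global configuration, although held locks are still visible on the Lockable Resources page.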
Related
I want to run one job multiple times (each build with different parameters) on 2 executors.
I want to execute them based on their build priority value.
Unfortunately the Priority Sorter plugin doesn't help in my case: it doesn't sort correctly, and my builds are executed based on the timestamp at which they were added to the queue instead of their priority.*
I believe this priority mechanism should be applied earlier, at the queue level.
How to achieve that?
* I tested this on the newest Jenkins version and the newest Priority Sorter plugin version.
I think what you should try is the Accelerated Build Now plugin. This plugin allows Jenkins users to launch a build right away, even if the queue is long. It prioritises human-launched jobs and brings them to the top of the queue.
The Priority Sorter I am using (v3.6.0, on Jenkins v2.73.3) will not see a job unless you have enabled "Execute concurrent builds if necessary".
So, allow parallel builds for that job, and perhaps decrease the number of executors to 1. See if that works. If not, you can try the Throttle Concurrent Builds plugin, which allows you to assign as many executors as you want to a specific job.
Here is a patch for Priority Queue plugin. For your case it could be this patch.
I have a multi-configuration job that uses a large amount of VMs for testing.
The Axis are something like:
30 VM slaves, 5 configurations, 5 different configurations
I would not like to run these sequentially, as the jobs would take forever. However, the default number of simultaneous runs is using up enough resources that I am getting random failures and disconnects.
Is there a way to specify the maximum number of simultaneous tests within this single running job?
I think you have to use a matrix job to trigger the builds of a separate job that does the real build. Then you can use the Throttle Concurrent Builds plugin to limit the number of parallel executions of the job started by the matrix.
For a multi-configuration project:
First, create a throttle category. In this case the name is qa-aut, and I am limiting both concurrent builds and concurrent builds per node to 2. The node will have 4 executors available.
In your job configuration, make sure you don't run the multi-configuration project sequentially.
Set up throttling: select "Throttle this project as part of one or more categories", choose the "Multi-Project Throttle Category" (qa-aut), and check "Throttle Matrix configuration builds". You can leave the rest of the values blank.
Make sure your node/master has enough executors available. In this case, the master has 4 executors available.
Execute your multi-configuration job.
Instead of using all 4 available executors, you will see it uses only the 2 executors (2 threads) specified in the category.
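If you drive the same builds from a pipeline rather than a matrix job, recent versions of the Throttle Concurrent Builds plugin also expose a throttle step that applies an existing category. A sketch, assuming the qa-aut category has already been defined in the global configuration and using a placeholder test command:

```groovy
// Scripted pipeline sketch: every node block inside throttle() counts
// against the limits of the 'qa-aut' category, so at most 2 of these
// 4 parallel branches occupy executors at any one time.
throttle(['qa-aut']) {
    def branches = [:]
    for (int i = 0; i < 4; i++) {
        def n = i
        branches["test-${n}"] = {
            node {
                sh "./run-suite.sh ${n}"   // placeholder test command
            }
        }
    }
    parallel branches
}
```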
We have different jobs running on our Jenkins. Some jobs are heavy and consume a lot of CPU and RAM, some are not. So I would like a plugin that lets me set a weight for those jobs, just like https://wiki.jenkins-ci.org/display/JENKINS/Heavy+Job+Plugin.
But we are using Jenkins Pipeline, which the Heavy Job plugin does not support (see https://issues.jenkins-ci.org/browse/JENKINS-41940). Is there any equivalent for pipeline jobs?
Not as flexible as a dynamic weight, but to avoid overload you can create several executors with different labels (such as many with the label light and only a few with the label heavy) and then use node to target those labels.
This is not a solution to the packing problem; it only prevents too many heavy-class jobs from running at the same time.
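In a scripted pipeline that might look like the following. A sketch: the light and heavy labels and the shell commands are placeholders for your own labels and build steps:

```groovy
// Lightweight work targets the larger pool of 'light' executors...
node('light') {
    sh './run-unit-tests.sh'
}

// ...while resource-hungry steps compete for the few 'heavy' executors,
// which caps how many heavy builds can run at the same time.
node('heavy') {
    sh './run-integration-tests.sh'
}
```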
I am fairly new to Jenkins pipeline and am considering migrating an existing Jenkins batch to use pipeline script.
This may be an obvious question to those in the know, but I have not been able to find any discussion of it anywhere. If you have a fairly complex set of jobs, say a few hundred, is it best practice to end up with one job with a fairly large script, or with a small number of jobs (say 5 to 10), probably parameterized, with smaller pipeline scripts that call each other?
Having one huge job has the severe disadvantage that you cannot easily execute the single stages anymore. On the other hand, splitting everything into different jobs has the disadvantage that many of the nice pipeline features (shared variables, shared code) cannot be used anymore. I do not think that there is a unique answer to this.
Have a look at the following two related questions:
Jenkins Build Pipeline - Restart At Stage
Run Parts of a Pipeline as Separate Job
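For the "small number of parameterized jobs" approach, one pipeline can trigger another with the build step and wait for its result. A minimal sketch; the job name and parameter are hypothetical:

```groovy
node {
    stage('Build') {
        // Delegate the real work to a separate parameterized job and
        // block until it finishes, then inspect its result.
        def downstream = build job: 'compile-and-package',
            parameters: [string(name: 'TARGET', value: 'release')]
        echo "Downstream build result: ${downstream.result}"
    }
}
```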
I have in Jenkins a set of jobs A1, A2, ... that can be executed concurrently, as well as a job B that must never be executed concurrently with any job Ai. All these jobs run on the same node (the jobs Ai use a pool of executors that, for reasons that can't be helped, occasionally have to be shepherded by job B). Can I enforce this in Jenkins?
The concept is similar to that of a shared mutex; the jobs Ai require shared-level access to the pool, while the job B requires exclusive-level access.
I'm looking at the Throttle Concurrent Builds plugin, but from the options it provides, it appears to have only one level of access. I could make B never run concurrently with any Ai, but only by making all the Ai mutually exclusive as well.
Is there a way to achieve shared-mutex-like behavior, either with this plugin or otherwise?
There's the Block queued job plugin:
A plugin for blocking/unblocking a job in the queue based on conditions configured in that job.
There's the Build Blocker Plugin:
This plugin keeps a job in the queue if the name of at least one currently running job matches one of the given regular expressions.
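For the shared-mutex case above, one possible Build Blocker setup, assuming the jobs are literally named B, A1, A2, ... (adjust the regular expressions to your real job names): in job B's "Blocking jobs" field, block on every Ai:

```
^A[0-9]+$
```

and in each job Ai's "Blocking jobs" field, block only on B:

```
^B$
```

Because the Ai only block on B and not on each other, they can still run concurrently among themselves, while B waits for all of them and they all wait for B.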