How to use Throttle Concurrent Builds in Jenkins Declarative Pipelines - jenkins

I have Jenkins declarative pipelines for a few different repos that trigger a database refresh, and unit tests that depend on the database. These Jenkins jobs are triggered from pull requests in GitHub.
To avoid resource collisions, I need to prevent these jobs from running at the same time -- both within each project, and across projects.
The "Throttle Concurrent Builds" plugin seems to be built for this.
I have installed the plugin and configured a category like so:
And I added the "throttle" option to the Jenkinsfile in one of the repositories whose builds should be throttled:
pipeline {
    agent any
    options {
        throttle(['ci_database_build'])
    }
    stages {
        stage('Build') {
            parallel {
                stage('Build source') {
                    steps {
                        // etc etc...
However, this does not seem to be preventing two jobs from executing at once. As evidence, here are two jobs (both containing the above Jenkinsfile change) executing at the same time:
What am I missing?

The following in the options block should work:
options {
    throttleJobProperty(
        categories: ['ci_database_build'],
        throttleEnabled: true,
        throttleOption: 'category'
    )
}
The full syntax can be seen here: https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/68
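For context, a complete Jenkinsfile using this option might look like the following sketch; the category name ci_database_build matches the one configured in the plugin settings above, and the stage contents are placeholders:

```groovy
pipeline {
    agent any
    options {
        // Throttle by category so builds of *all* jobs sharing this
        // category queue behind each other, not just builds of this job
        throttleJobProperty(
            categories: ['ci_database_build'],
            throttleEnabled: true,
            throttleOption: 'category'
        )
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...' // placeholder for the real work
            }
        }
    }
}
```

The maximum concurrent builds per category are set in the plugin's global configuration, not in the Jenkinsfile.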

Related

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline which is not executing parallel jobs and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 is telling:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins distributed
builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node block inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that matches 4 agents, yet the whole pipeline always executes on a single agent (which is what I want). I would have expected it to run stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent directive should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step with node using the same agent, there won't be any effect except that a new executor will be allocated to the step encapsulated by node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
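To illustrate the difference, a declarative pipeline can pin all stages to one label by default while overriding the agent for a single stage; this is a minimal sketch, and the labels and commands here are placeholders:

```groovy
pipeline {
    agent { label 'linux' }          // default agent for every stage
    stages {
        stage('Build') {
            steps {
                sh 'make'            // runs on a 'linux' node
            }
        }
        stage('Package') {
            agent { label 'docker' } // this stage alone runs on a different node
            steps {
                sh 'make package'
            }
        }
    }
}
```

Without the per-stage agent directive, every stage runs on the node allocated at the top level, which is why the pipeline in the question stays on one agent.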

How to prevent concurrent builds of different branches in the same repository within a bitbucket organization job style?

I have a BitBucket organization job configured in my Jenkins which scans the whole organization every 20 minutes; if it identifies a commit in any of the organization's repositories, it triggers an automatic build.
Sometimes more than one branch changes at around the same time, which causes Jenkins to trigger more than one build of the same project.
One of these projects must never allow concurrent builds, because it uses resources that are locked while a build runs. Other branches whose commits trigger builds always fail, because their main resource is locked by the first build.
I'm aware of the Throttle Concurrent Builds plugin, and it looks perfect for freestyle/pipeline jobs, but with organization scanning I cannot configure anything on the repositories under the organization, just the organization itself. The same goes for the Hudson Locks and Latches plugin.
Does anyone know a solution?
I had a similar problem and wanted to make sure that each branch of my multibranch pipeline could only execute one build at a time. Here's what I added to my pipeline script:
pipeline {
    agent any
    options {
        disableConcurrentBuilds() // each branch has 1 job running at a time
    }
    ...
    ...
}
https://jenkins.io/doc/book/pipeline/syntax/#options
[Update 09/30/2017]
You may also want to check out the lock & milestone steps of Declarative Pipeline.
Lock
Rather than attempt to limit the number of concurrent builds of a job using the stage, we now rely on the "Lockable Resources" plugin and the lock step to control this. The lock step limits concurrency to a single build and it provides much greater flexibility in designating where the concurrency is limited.
stage('Build') {
    doSomething()
    lock('myResource') {
        echo "locked build"
    }
}
Milestone
The milestone step is the last piece of the puzzle to replace functionality originally intended for stage and adds even more control for handling concurrent builds of a job. The lock step limits the number of builds running concurrently in a section of your Pipeline while the milestone step ensures that older builds of a job will not overwrite a newer build.
stage('Build') {
    milestone()
    echo "Building"
}
stage('Test') {
    milestone()
    echo "Testing"
}
https://jenkins.io/blog/2016/10/16/stage-lock-milestone/
https://jenkins.io/doc/pipeline/steps/pipeline-milestone-step/
https://jenkins.io/doc/pipeline/steps/lockable-resources/
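For the cross-branch case described in the question (a shared resource that only one build may use at a time), disableConcurrentBuilds() can be combined with the lock step from the Lockable Resources plugin. This is a sketch, and the resource name 'shared-db' is a hypothetical placeholder:

```groovy
pipeline {
    agent any
    options {
        disableConcurrentBuilds() // one build per branch at a time
    }
    stages {
        stage('Deploy') {
            steps {
                // 'shared-db' is a placeholder resource name; builds of any
                // branch (or any job) locking the same name queue here
                // instead of failing on the busy resource
                lock(resource: 'shared-db') {
                    sh './deploy.sh' // placeholder for the real work
                }
            }
        }
    }
}
```

Because the lock lives in the Jenkinsfile rather than in job configuration, it works even when the jobs are generated by organization scanning.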

Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?

Here is an example of declarative pipeline where the agent is set for the pipeline but not set in the individual stages:
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
Documentation I've found about scripted pipeline makes it clear that a single workspace is used within a single node block, but multiple node blocks might be allocated multiple workspaces; therefore, if you want to be sure of what's in the workspace between steps, you need to stash between those steps, use the External Workspace plugin, etc.
I had a hard time finding documentation about workspace guarantees for declarative pipeline. What guarantees about workspaces exist for this example?
I believe I encountered two stages executing in different workspaces while testing a similar pipeline, but I'm not sure that's what was happening. I'd really like to avoid stashing my checkout before my build step or using the External Workspace plugin, so I was hoping there is a way to force all my stages to run in one workspace, on one node.
The Pipeline code presented should create only a single workspace and run all stages in it. Unless you add an agent directive to one of your stages, it will not use another node or workspace.
btw, checkout scm happens automatically at the beginning of the Pipeline with Declarative so you don't need to explicitly call that out.
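If that implicit checkout is not wanted (for example, when the pipeline only orchestrates other jobs), it can be suppressed; this is a minimal sketch using the standard skipDefaultCheckout option:

```groovy
pipeline {
    agent { node { label 'linux' } }
    options {
        skipDefaultCheckout() // suppress the automatic 'checkout scm'
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm // now the only checkout in the build
            }
        }
    }
}
```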
I'm 70% sure -- based on anecdotal evidence -- that you will always get the same workspace on the same node across different stages of a declarative pipeline if you specify a node at the top level and never override it, the way you're doing.
I reserve the right to adjust my confidence level as I receive feedback on this answer. :D
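One way to check this empirically is to print the node and workspace in each stage; assuming the same top-level agent as in the question, both stages should report identical values when no per-stage agent is declared:

```groovy
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('First') {
            steps {
                echo "node=${env.NODE_NAME} workspace=${env.WORKSPACE}"
            }
        }
        stage('Second') {
            steps {
                // expected to match the values printed in 'First'
                echo "node=${env.NODE_NAME} workspace=${env.WORKSPACE}"
            }
        }
    }
}
```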

Jenkins Pipeline Plugin execute two pipelines in parallel and make downstream pipeline wait

We are using Jenkins and the Pipeline plugin for CI/CD. We have two pipelines that we need to run parallel, and there is a downstream pipeline which should trigger ONLY if the two upstream pipelines both finish and are successful.
P1--
| -- P3
P2--
Basically P3 should run only when P1 and P2 are finished and successful and not depend on just one of them.
Is there a way to achieve this? We are using 2.5 version of the plugin.
Since stages only run if previous stages run successfully, and since you can execute other pipelines via build, and since there is a magical instruction called parallel, I think this might do it:
pipeline {
    agent { label 'docker' }
    stages {
        stage("build_p1_and_p2_in_parallel") {
            steps {
                parallel p1: {
                    build 'p1'
                }, p2: {
                    build 'p2'
                }
            }
        }
        stage("build_p3_if_p1_and_p2_succeeded") {
            steps {
                build 'p3'
            }
        }
    }
}
Use the "Snippet Generator" embedded in your jenkins instance to figure out what the argument to build should be. If it's another pipeline at the same level as the top level Jenkinsfile, you could just reference it by job name. Caveat: I've used parallel, but never build within parallel, but it seems like it should work.
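On newer Pipeline versions, the same shape can also be written with the declarative parallel directive (nested stages) instead of the parallel step; this is a sketch assuming the same job names as above:

```groovy
pipeline {
    agent { label 'docker' }
    stages {
        stage('build_p1_and_p2_in_parallel') {
            // declarative parallel: child stages run concurrently,
            // and the enclosing stage fails if either child fails
            parallel {
                stage('p1') {
                    steps { build 'p1' }
                }
                stage('p2') {
                    steps { build 'p2' }
                }
            }
        }
        stage('build_p3_if_p1_and_p2_succeeded') {
            steps { build 'p3' } // only reached if both builds above succeeded
        }
    }
}
```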
You can try wrapping the pipeline jobs with the MultiJob plugin, which can implement the logic you require as two jobs inside a phase.

Visualizing build steps in Jenkins pipeline

In my Jenkins pipeline, I trigger several other jobs using the build step and pass some parameters to it. I'm having issues visualizing the different jobs I've triggered in addition to my pipeline. I have set up the Jenkins Delivery Pipeline plugin but the documentation for it is extremely vague and I have only been able to visualize the steps within my pipeline, despite tagging the jobs with both a stage and task name.
Example:
I have two jobs in Jenkins as pipelines/workflow jobs with the following pipeline script:
Job Foo:
stage('Building') {
    println 'Triggering job'
    build 'Bar'
}
Job Bar:
node('master') {
    stage('Child job stage') {
        println 'Doing stuff in child job'
    }
}
When visualizing this with the Delivery Pipeline plugin, I only get this:
How do I make it also show the stage in job Bar in a separate box?
Unfortunately, this use case is currently not supported in the Delivery Pipeline plugin version 1.0.0. The delivery pipeline plugin views for Jenkins pipelines are only rendering what is contained within one pipeline definition at this point. This feature request is tracked in JENKINS-43679.
