How to prevent concurrent builds of different branches in the same repository within a Bitbucket organization job? - jenkins

I have a Bitbucket organization job in my Jenkins, configured to scan the whole organization every 20 minutes; if it identifies a commit in any of the organization's repositories, it triggers an automatic build.
Sometimes more than one branch changes at around the same time, and this causes Jenkins to trigger more than one build of the same project.
One of these projects should never allow concurrent builds, as it uses resources that are locked while a build runs. Builds of other branches where commits are pushed still trigger, but they always fail because their main resource is locked by the first instance of the build.
I'm aware of the Throttle Concurrent Builds plugin, and it looks perfect for freestyle/pipeline jobs, but in the case of organization scanning I cannot configure anything on the repositories under the organization, only on the organization itself. The same goes for the Locks and Latches plugin.
Does anyone know of a solution?

I had a similar problem and wanted to make sure that each branch of my multibranch pipeline could only execute one build at a time. Here's what I added to my pipeline script:
pipeline {
    agent any
    options {
        disableConcurrentBuilds() // each branch has 1 job running at a time
    }
    ...
    ...
}
https://jenkins.io/doc/book/pipeline/syntax/#options
[Update 09/30/2017]
You may also want to check out the lock & milestone steps of Declarative Pipeline.
Lock
Rather than attempting to limit the number of concurrent builds of a job using the stage step, we now rely on the "Lockable Resources" plugin and the lock step to control this. The lock step limits concurrency to a single build, and it provides much greater flexibility in designating where the concurrency is limited.
stage('Build') {
    doSomething()
    lock('myResource') {
        echo "locked build"
    }
}
Milestone
The milestone step is the last piece of the puzzle to replace functionality originally intended for stage and adds even more control for handling concurrent builds of a job. The lock step limits the number of builds running concurrently in a section of your Pipeline while the milestone step ensures that older builds of a job will not overwrite a newer build.
stage('Build') {
    milestone()
    echo "Building"
}
stage('Test') {
    milestone()
    echo "Testing"
}
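For the original question (a single resource shared by builds of all branches of the same repository), the two steps can be combined. A minimal scripted sketch, where the resource name 'myResource' and the deploy command are assumed:
stage('Deploy') {
    // abort older builds of this job that pass this point after a newer one
    milestone()
    // Lockable Resources locks are global, so this serializes builds from
    // any branch that contend for the same shared resource
    lock('myResource') {
        sh './deploy.sh'
    }
}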
https://jenkins.io/blog/2016/10/16/stage-lock-milestone/
https://jenkins.io/doc/pipeline/steps/pipeline-milestone-step/
https://jenkins.io/doc/pipeline/steps/lockable-resources/

Related

How to schedule a Jenkins job with a delay when files are checked in

Currently I'm using Serena DIMENSIONS as configuration management with Jenkins for continuous integration.
Once developers check in new files to the project folder in Serena, the Jenkins job (which detects changes in Serena DIMENSIONS, downloads the changed files and builds the software) needs to be triggered with a 15-minute delay (the delay is required so that the check-in of all necessary files can complete).
Can you please suggest a solution?
With Jenkins Pipeline you can create a stage that uses the sleep step. For example:
pipeline {
    agent none
    stages {
        stage('Wait') {
            agent { label 'wait-node' }
            steps {
                sleep time: 15, unit: "MINUTES"
            }
        }
    }
}
There's one drawback - your executor is blocked for the entire wait time. To work around this in an elegant way you can define a dedicated node (wait-node) with a large enough number of executors to handle the waiting stages (note: other stages may run on different nodes). This way the actual build executors aren't blocked and you can see all waiting jobs on the Jenkins dashboard.
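Alternatively, in a scripted pipeline a sleep placed outside any node {} block runs on a lightweight (flyweight) executor and holds no agent executor at all. A minimal sketch, where the node label and the build command are assumed:
node('build-node') {
    checkout scm // detect and fetch the change first
}
// no agent executor is held here; the wait runs on a flyweight executor
sleep time: 15, unit: 'MINUTES'
node('build-node') {
    sh './build.sh' // download the changed files and build after the delay
}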

Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor

I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline, with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
- trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new executor on a node matching 'master'".
- trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define the agent in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but that seems not to be the case. Which is lucky for this use case...
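For illustration, a minimal sketch of that layout (the stage contents and the 'run-tests' job name follow the question; the build command is assumed):
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'master' }
            steps {
                sh './build.sh'
            }
        }
        stage('AcceptanceTest') {
            // no agent: the build step waits on a lightweight executor,
            // so no heavyweight executor is blocked during the tests
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}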
I don't think that there's another way for declarative pipeline.
On the other hand, for scripted pipeline you could execute this outside of node {}; it would then just hold one (flyweight) executor on the master while releasing the one on the slave:
stage("some") {
build job: 'test'
node {
...
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"

How to "Scan Repository Now" from a Jenkinsfile

I can call another Jenkins job using the build command. Is there a way I can tell another job to do a branch scan?
A multibranch pipeline job has a UI button "Scan Repository Now". When you press this button, it will do a checkout of the configured SCM repository and detect all the branches and create subjobs for each branch.
I have a multibranch pipeline job for which I have selected the "Suppress automatic SCM triggering" option because I only want it to run when I call it from another job. Because this option is selected, the multibranch pipeline doesn't automatically detect when new branches are added to the repository. (If I click "Scan Repository Now" in the UI it will detect them.)
Essentially I have a multibranch pipeline job and I want to call it from another multibranch pipeline job that uses the same git repository.
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        //scanScm "../my-other-multibranch-job" <--- scanScm is a fake command I made up
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
I get an error on that build line, because the target multibranch pipeline job does not yet know that BRANCH_NAME exists. I need a way to trigger an SCM re-scan in the target job from this current job.
Similar to what you figured out yourself, I can contribute my optimization that actually waits until the scan has finished (but it is subject to Script Security approval):
// Helper functions to trigger branch indexing for a certain multibranch project.
// The permissions that this needs are pretty evil.. but there's currently no other choice
//
// Required permissions:
// - method jenkins.model.Jenkins getItemByFullName java.lang.String
// - staticMethod jenkins.model.Jenkins getInstance
//
// See:
// https://github.com/jenkinsci/pipeline-build-step-plugin/blob/3ff14391fe27c8ee9ccea9ba1977131fe3b26dbe/src/main/java/org/jenkinsci/plugins/workflow/support/steps/build/BuildTriggerStepExecution.java#L66
// https://stackoverflow.com/questions/41579229/triggering-branch-indexing-on-multibranch-pipelines-jenkins-git
void scanMultiBranchAndWaitForJob(String multibranchProject, String branch) {
    String job = "${multibranchProject}/${branch}"
    // the `build` step does not support waiting for branch indexing (ComputedFolder job type),
    // so we need some black magic to poll and wait until the expected job appears
    build job: multibranchProject, wait: false
    echo "Waiting for job '${job}' to appear..."
    while (Jenkins.instance.getItemByFullName(job) == null || Jenkins.instance.getItemByFullName(job).isDisabled()) {
        sleep 3
    }
}
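A usage sketch for the helper above (the job name is assumed):
// trigger the scan, wait for the branch job to appear, then build it
scanMultiBranchAndWaitForJob('my-other-multibranch-job', env.BRANCH_NAME)
build job: "my-other-multibranch-job/${env.BRANCH_NAME}", wait: true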
Ended up figuring this out shortly after posting the question. Calling build against the base multibranch pipeline job as opposed to a branch causes it to re-scan. The solution to my above snippet would have ended up looking something like...
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        build job: "../my-other-multibranch-job", wait: false, propagate: false // scan for branches
        sleep 2 // scanning takes time
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
The wait: false is important because otherwise you get "ERROR: Waiting for non-job items is not supported". The multibranch "parent" job is closer to a folder than a job, but it's a folder that supports the build command, and it does so by scanning the SCM.
But solving this just led to another problem: with wait: false we have no way of knowing when the SCM scan has finished. If you have a large repository (or you're short on Jenkins agents), the branch won't get discovered until after the second build command has already failed because the branch doesn't exist yet. You could bump the sleep time even higher, but that doesn't scale.
Fortunately, it turns out that manually initiating the SCM scan isn't even needed if you have GitHub webhooks set up for your Jenkins. The branch will be discovered more or less instantly, so for my purposes this is solved another way. The reason I was running into it is that we don't have webhooks set up in our dev Jenkins, but once I move this code to prod it will work fine.
If you're trying to use Job DSL to set up multibranch jobs that call multibranch jobs and you don't have webhooks or something equivalent, the better path is probably to abandon multibranch for your second tier of jobs and use Job DSL to create folders and manage the branch jobs yourself, as sketched below.
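A minimal Job DSL sketch of that second-tier layout (the folder name, repository URL, and branch list are all assumed):
// create a plain folder plus one pipeline job per branch,
// instead of a multibranch job that needs scanning
folder('second-tier')
['develop', 'release'].each { branchName ->
    pipelineJob("second-tier/${branchName}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://example.com/my-repo.git') }
                        branch(branchName)
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}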

How do you tie two different build pipelines together in Jenkins?

I have a build pipeline in Jenkins that builds and deploys the back-end components, which expose a REST API. I have another build pipeline that builds and deploys the front-end components, which call the back-end components. The back-end and front-end components live in separate Git repositories.
The build job of each pipeline is kicked off when a commit occurs in its respective Git repository.
I would like to run automated functional tests at the end of each build pipeline. But how do I know that both pipelines are finished and the functional tests should run? Is there a way to link the two pipelines together?
One approach is to use the Locks and Latches plugin and give each of the jobs in each pipeline its own lock, e.g. Pipeline-A and Pipeline-B; the job that runs the tests is then configured to obtain the locks on both Pipeline-A and Pipeline-B. This both prevents the test job from running if any part of either pipeline is running, and blocks any changes on the pipelines while the tests are running.
If you'd only like to lock on the deploy jobs, you can use the same approach but configure only the deploy jobs with the locks; this will allow normal builds to run as usual, while deploy jobs queue up while the tests run.
Assumptions:
- any deploy job triggers a test execution.
A second approach is to have your job pipelines set up such that, before performing a deployment, they trigger a single job in the following layout:
EndOfPipelineA -> SystemDeploymentController
EndOfPipelineB -> SystemDeploymentController
SystemDeploymentController -> DeployAppOne
SystemDeploymentController -> DeployAppTwo
DeployAppTwo -> TestExecution
DeployAppOne -> TestExecution
Then you use the Join plugin to run the TestExecution job only when both deployments are complete AND successful.
The second approach allows you to:
- conditionally control the execution of the tests depending on the success of the deployments,
- have a single job that lets you redeploy your whole system if you make any changes to the system it runs on, and then run the tests automatically,
- potentially make use of the Promotions plugin to highlight "good configurations" where both apps worked well together.
However, it is a bit trickier to manage.
Although this is an old question, you might consider restructuring your build pipeline using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
You can use the build step for this. Let's say you have two pipelines named parent and child. In the parent pipeline you can define:
pipeline {
    agent any
    stages {
        stage ('call-child-pipeline') {
            steps {
                build job: 'child'
            }
        }
    }
}
you can also pass some parameters to the child pipeline:
stage ('call-child-pipeline') {
    steps {
        build job: 'child', parameters: [string(name: 'my_param', value: "my_value")]
    }
}
if you don't want to wait until the child pipeline is finished, add wait: false, e.g.
build job: 'child', wait: false
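Conversely, if you want to wait for the child but inspect its result yourself instead of failing the parent immediately, the build step's return value can be used. A small sketch (the 'child' job is the one from above):
stage ('call-child-pipeline') {
    steps {
        script {
            // propagate: false keeps the parent from failing automatically
            def childBuild = build job: 'child', wait: true, propagate: false
            echo "child pipeline finished with result: ${childBuild.result}"
        }
    }
}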

How do I make a Jenkins job start after multiple simultaneous upstream jobs succeed?

In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only if all branches of that fork succeed (or 'joining' the fork back together).
Jenkins has a "Build after other projects are built" option, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know of a plugin that does what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (The Build Flow plugin) was made popular a year or so after this question, which is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage 'build'
...

// deploy
stage 'deploy'
...

// run tests in parallel
stage 'test'
parallel 'functional': {
    ...
}, 'performance': {
    ...
}

// promote artifacts
stage 'promote'
...
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
In the upstream job (here, build), create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
archive the build.tag artifact.
record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
Configure it to get promoted when the promotion job is successful.
Setting up the downstream jobs
I use 2 parameters, PARENT_JOB_NAME and PARENT_BUILD_NUMBER
Copy the artifacts from the upstream build job using the Copy Artifact Plugin
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
Setting up the downstream promotion job
Do the same as above, to establish the upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]

// then deploy
build("deploy")

// then your qualifying tests
parallel (
    { build("functional tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) },
    { build("performance tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) }
)

// if nothing failed till now...
build("promotion",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)

// knock yourself out...
build("more expensive QA tests",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
good luck.
There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that fires when the downstream jobs are completed, and specify the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file, to ensure the ORIGIN of the fingerprint is properly set.
The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs, they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
Jenkins recently announced first class support for workflow.
I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.
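As an illustration, the diamond from the question can be expressed today with declarative parallel stages. A minimal sketch, with all stage contents assumed:
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh './build.sh' } }
        stage('Deploy') { steps { sh './deploy.sh' } }
        stage('Test') {
            // both branches must succeed before the pipeline continues
            parallel {
                stage('Functional') { steps { sh './run-functional-tests.sh' } }
                stage('Performance') { steps { sh './run-performance-tests.sh' } }
            }
        }
        // the join: runs only after both parallel test stages pass
        stage('Promote') { steps { sh './promote.sh' } }
    }
}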
The answers by jason & dnozay are good enough, but in case someone is looking for an easy way, just use the JobFanIn plugin.
This diamond dependency build pipeline could be configured with the DepBuilder plugin. DepBuilder uses its own domain-specific language, which in this case would look like:
_BUILD {
    // define the maximum duration of the build (4 hours)
    maxDuration: 04:00
}

// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page.
If any of the upstream jobs didn't succeed, the build will be automatically aborted. The abort behavior can be tweaked on a per-job basis; for more info see the DepBuilder documentation.
