How do I make a Jenkins job start after multiple simultaneous upstream jobs succeed? - jenkins

In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only if all branches of that fork succeed (or of 'joining' the fork back together).
Jenkins has a "Build after other projects are built" button, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (the Build Flow plugin) gained popularity a year or so after this question and is a much better answer. For what it's worth, if people ask me this question today, I recommend that instead.

Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage 'build'
...
// deploy
stage 'deploy'
...
// run tests in parallel
stage 'test'
parallel 'functional': {
    ...
}, 'performance': {
    ...
}
// promote artifacts
stage 'promote'
...
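For a slightly fuller picture, here is a minimal sketch of the same flow in the block-style scripted Pipeline syntax; the shell commands and script names are placeholders, not part of the original example:
node {
    stage('build') {
        sh './build.sh'        // placeholder build step
    }
    stage('deploy') {
        sh './deploy.sh'       // placeholder deploy step
    }
}
stage('test') {
    // fork: both branches run concurrently on their own executors; parallel
    // only returns (i.e. "joins") when both finish, and fails if either fails
    parallel(
        functional: { node { sh './run-functional-tests.sh' } },
        performance: { node { sh './run-performance-tests.sh' } }
    )
}
stage('promote') {
    node {
        sh './promote.sh'      // only reached if nothing above failed
    }
}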
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
In the upstream job (here, build), create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
Archive the build.tag artifact.
Record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
Configure the job to get promoted when the promotion job is successful.
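For illustration only, the same upstream setup expressed as a Pipeline script might look roughly like this (the original answer configures a freestyle job; archiveArtifacts with fingerprint: true is the pipeline counterpart of the archive and fingerprint post-build actions):
node {
    sh 'echo "${BUILD_TAG}" > build.tag'                        // create the unique artifact
    archiveArtifacts artifacts: 'build.tag', fingerprint: true  // archive it and record its fingerprint
}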
Setting up the downstream jobs
I use two parameters, PARENT_JOB_NAME and PARENT_BUILD_NUMBER.
Copy the artifacts from the upstream build job using the Copy Artifact Plugin:
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
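If a downstream job were written as a Pipeline script instead of a freestyle job, the equivalent copy step might look roughly like this (assuming the copyArtifacts step from the Copy Artifact Plugin and the two parameters above defined as job parameters):
node {
    copyArtifacts projectName: params.PARENT_JOB_NAME,
                  selector: specific(params.PARENT_BUILD_NUMBER),
                  filter: 'build.tag',
                  fingerprintArtifacts: true
    // run the functional or performance tests here
}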
Setting up the downstream promotion job
Do the same as above to establish the upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]
// then deploy
build("deploy")
// then your qualifying tests
parallel (
    { build("functional tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) },
    { build("performance tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) }
)
// if nothing failed till now...
build("promotion",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
// knock yourself out...
build("more expensive QA tests",
      PARENT_BUILD_NUMBER: parent_build_number,
      PARENT_JOB_NAME: parent_job_name)
good luck.

There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion, post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that runs when the downstream jobs are completed, and specify the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file to ensure the ORIGIN of the fingerprint is properly set.

The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs, they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
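If you happen to manage jobs with the Job DSL plugin, a MultiJob with phases can be sketched roughly like this (job names are placeholders; configuring the phases through the UI works just as well):
multiJob('parent') {
    steps {
        phase('tests') {
            phaseJob('functional-tests')
            phaseJob('performance-tests')
        }
        phase('promote') {
            phaseJob('promotion')
        }
    }
}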

Jenkins recently announced first class support for workflow.

I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.

The answers by jason & dnozay are good enough. But in case someone is looking for an easy way, just use the JobFanIn plugin.

This diamond dependency build pipeline can be configured with the DepBuilder plugin. DepBuilder uses its own domain-specific language, which in this case would look like:
_BUILD {
    // define the maximum duration of the build (4 hours)
    maxDuration: 04:00
}
// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page:
If any of the upstream jobs didn't succeed, the build will be automatically aborted. The abort behavior can be tweaked on a per-job basis; for more info, see the DepBuilder documentation.

Related

Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor

I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline, with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new agent on a node matching master".
trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but it seems not to be the case. Which is lucky for this use case...
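For illustration, a minimal sketch of that layout (the build stage contents are placeholders, not from the original pipeline):
pipeline {
    agent none                        // no executor is held for the whole pipeline
    stages {
        stage('Build') {
            agent { label 'master' }  // an executor is allocated only for this stage
            steps {
                sh './build.sh'       // placeholder build step
            }
        }
        stage('AcceptanceTest') {
            // no agent here, so no executor is blocked while waiting for 'run-tests'
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}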
I don't think that there's another way for declarative pipeline.
On the other hand, for scripted pipeline you could execute this outside of node {} and it would just hold onto one executor on master, releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...
Related question: Jenkis - Trigger another pipeline job in same machine - without creating new "Executor"

Jenkins multibranch pipeline only for subfolder

I have a git monorepo with different apps. Currently I have a single Jenkinsfile in the root folder that contains the pipeline for all apps. It is very time consuming to execute the full pipeline for all apps when a commit changes only one app.
We use a GitFlow-like approach to branching, so Multibranch Pipeline jobs in Jenkins are a perfect fit for our project.
I'm looking for a way to have several jobs in Jenkins, each one triggered only when the code of the appropriate application is changed.
Perfect solution for me looks like this:
I have several Multibranch Pipeline jobs in Jenkins. Each one looks for changes only to a given directory and its subdirectories. Each one uses its own Jenkinsfile. The jobs poll git every X minutes and, if there are changes to the appropriate directories in existing branches, initiate a build; if there are new branches with changes to the appropriate directories, they also initiate a build.
What stops me from this implementation
I'm missing a way to define that commits to certain folders must be ignored during the scan executed by the Multibranch pipeline. The "Additional behaviour" section for a Multibranch pipeline doesn't have a "Polling ignores commits to certain paths" option, while Pipeline or Freestyle jobs have it. But I want to use a Multibranch pipeline.
The solution described here doesn't work for me because if there is a new branch with changes only to "project1", then whenever the Multibranch pipeline for "project2" is triggered it will discover this new branch anyway and build it. This means that for every new branch each of my Multibranch pipelines will be executed at least once, no matter whether there were changes to the appropriate code or not.
I would appreciate any help or suggestions on how I can implement a few Multibranch pipelines watching the same git repository but triggered only when the appropriate pieces of code change.
This can be accomplished by using the Multibranch build strategy extension plugin. With this plugin, you can define a rule where the build only initiates when the changes belong to a sub-directory.
Install the plugin
On the Multibranch pipeline configuration, add a Build strategy
Select Build included regions strategy
Put a sub-folder in the field, such as subfolder/**
This way the changes will still be discovered, but they won't initiate a build if they don't belong to a certain set of files or folders.
This is the best approach I'm aware of so far. But I think the best way would be a case where the changes don't even get discovered.
Edit: Gerrit Code Review plugin configuration
In case you're using the Gerrit Code Review plugin, you can also prevent new changes from being discovered by using a custom query:
I solved this by creating a project that builds other projects depending on the files changed. For example, from your repo root:
/Jenkinsfile
#!/usr/bin/env groovy
pipeline {
    agent any
    options {
        timestamps()
    }
    triggers {
        bitbucketPush()
    }
    stages {
        stage('Build project A') {
            when {
                changeset "project-a/**"
            }
            steps {
                build 'project-a'
            }
        }
        stage('Build project B') {
            when {
                changeset "project-b/**"
            }
            steps {
                build 'project-b'
            }
        }
    }
}
You would then have other Pipeline projects with their own Jenkinsfile (i.e., project-a/Jenkinsfile).
I know that this post is quite old, but I solved this problem by changing the "include branches" parameter for SVN repositories (this can possibly also be done using the "Filter by name (with wildcards)" property for git repos). Instead of supplying only the actual branch name, I also included the subfolder. So instead of only supplying "trunk", I used "trunk/subfolder". This limits scanning to only that specific directory. Note that I have not yet fully tested this solution.

How to prevent concurrent builds of different branches in the same repository within a bitbucket organization job style?

I have a BitBucket organization job configured in my Jenkins, which is configured to scan the whole organization every 20 minutes and if it identifies a commit in any of the organization's repositories it triggers an automatic build.
Sometimes more than one branch is changed at a certain time, and this causes Jenkins to trigger more than one build of the same project.
One of these projects should never allow concurrent builds, as it uses resources which are locked while a build runs; this causes builds for other branches where commits are pushed to trigger, but they always fail because their main resource is locked by the first instance of the build.
I'm aware of the Throttle Builds plugin and it looks perfect for freestyle/pipeline jobs, but in the case of organization scanning I cannot configure anything in the repositories under the organization, just the organization itself; the same goes for the Hudson Locks and Latches plugin.
Anyone knows any solution?
I had a similar problem and wanted to make sure that each branch of my multibranch pipeline could only execute one build at a time. Here's what I added to my pipeline script:
pipeline {
    agent any
    options {
        disableConcurrentBuilds() // each branch has 1 job running at a time
    }
    ...
    ...
}
https://jenkins.io/doc/book/pipeline/syntax/#options
[Update 09/30/2017]
You may also want to check out the lock & milestone steps of Declarative Pipeline.
Lock
Rather than attempt to limit the number of concurrent builds of a job using the stage, we now rely on the "Lockable Resources" plugin and the lock step to control this. The lock step limits concurrency to a single build and it provides much greater flexibility in designating where the concurrency is limited.
stage('Build') {
    doSomething()
    lock('myResource') {
        echo "locked build"
    }
}
Milestone
The milestone step is the last piece of the puzzle to replace functionality originally intended for stage and adds even more control for handling concurrent builds of a job. The lock step limits the number of builds running concurrently in a section of your Pipeline while the milestone step ensures that older builds of a job will not overwrite a newer build.
stage('Build') {
    milestone()
    echo "Building"
}
stage('Test') {
    milestone()
    echo "Testing"
}
https://jenkins.io/blog/2016/10/16/stage-lock-milestone/
https://jenkins.io/doc/pipeline/steps/pipeline-milestone-step/
https://jenkins.io/doc/pipeline/steps/lockable-resources/

Jenkins how to create pipeline manual step

Prior to Jenkins 2, I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That worked great, but I want to use the new Jenkins pipeline, generated from a groovy script (Jenkinsfile), to create a manual step.
So far I have come up with the input Jenkins step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    //deploy things
}
But this waits for user input, and while it does the build is shown as not completed. I could add a timeout to the input, but this won't allow me to pick/trigger a build and deploy it later on:
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins pipeline as I had with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities, IMO. It's definitely hard to provide due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into the new workspace. This allows the second pipeline to pick up where the previous one left off. Of course there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
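As a rough, untested sketch of that idea (the job and file names here are illustrative, not from the original answer):
// First pipeline: tar up the workspace and archive it
node {
    sh 'tar -czf workspace.tar.gz --exclude=workspace.tar.gz .'
    archiveArtifacts artifacts: 'workspace.tar.gz'
}
// Second pipeline (started later, e.g. from the view): fetch the archive and resume
node {
    copyArtifacts projectName: 'build-pipeline', filter: 'workspace.tar.gz', selector: lastSuccessful()
    sh 'tar -xzf workspace.tar.gz'
    // deploy things
}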
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md

How do you tie two different build pipelines together in Jenkins?

I have a build pipeline in Jenkins that builds and deploys the back-end components, which expose a REST API. I have another build pipeline that builds and deploys the front-end components, which call the back-end components. The back-end and front-end components live in separate Git repositories.
The build job of each pipeline is kicked off when a commit occurs in its respective Git repository.
I would like to run automated functional tests at the end of each build pipeline. But how do I know that both pipelines are finished and the functional tests should run? Can the two pipelines be linked together?
One approach is to use the Locks and Latches plugin and give each of the jobs in each pipeline its own lock, e.g. Pipeline-A and Pipeline-B; the job that runs the tests is then configured to obtain the locks on both Pipeline-A and Pipeline-B. This both prevents the test job from running if any part of either pipeline is running, and blocks any changes on the pipelines whilst the tests are running.
If you'd only like to lock on the deploy jobs, you can use the same approach but only configure the deploy jobs with the locks; this will allow normal builds to run as normal, but deploy jobs queue up whilst the tests run.
Assumptions:
Any Deploy jobs are triggering a test execution
A second approach is to have your job pipelines set up such that, before performing a deployment, they trigger a single job in the following layout:
EndOfPipelineA -> SystemDeploymentController
EndOfPipelineB -> SystemDeploymentController
SystemDeploymentController -> DeployAppOne
SystemDeploymentController -> DeployAppTwo
DeployAppTwo -> TestExecution
DeployAppOne -> TestExecution
Then you use the Join plugin to only run the TestExecution job when both the deployments are complete AND successful.
The second approach allows you to:
conditionally control the execution of the tests depending on the success of the deployments,
have a single job that'll let you redeploy your whole system if you make any changes to the system it runs on, AND then run tests automatically,
potentially make use of the Promotions plugin to highlight "good configurations" where both apps worked well together.
However it is a bit trickier to manage.
Although this is an old question, you might consider restructuring your build pipeline using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
You can use the build step for this. Let's say you have pipelines named parent and child. In the parent pipeline you can define:
pipeline {
    agent any
    stages {
        stage ('call-child-pipeline') {
            steps {
                build job: 'child'
            }
        }
    }
}
you can also pass some parameters to the child pipeline:
stage ('call-child-pipeline') {
    steps {
        build job: 'child', parameters: [string(name: 'my_param', value: "my_value")]
    }
}
If you don't want to wait until the child pipeline is finished, add wait: false, e.g.:
build job: 'child', wait: false
