Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor

I am trying to set up various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, the acceptance tests and test data (much of which is shared) for all products are checked into the same repository, which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline, with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product.)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with "agent { label 'master' }", but that seems to mean "allocate a new agent on a node matching master".
trigger the job with the "wait: false" argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?

I seem to have solved this by adding "agent none" at the top of my main pipeline and moving "agent { label 'master' }" into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but it seems that is not the case, which is lucky for this use case.
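A minimal sketch of that layout (the stage bodies here are illustrative placeholders):
pipeline {
    agent none                       // no executor is held for the whole run
    stages {
        stage('Build') {
            agent { label 'master' } // this stage allocates its own executor
            steps {
                echo 'build the product here'
            }
        }
        stage('AcceptanceTest') {
            // no agent block: waiting on 'run-tests' does not tie up an executor
            steps {
                build job: 'run-tests', wait: true
            }
        }
    }
}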

I don't think there's another way for declarative pipeline.
On the other hand, for scripted pipeline you could execute the build step outside of node {}; it would then just hold onto one executor on the master, releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...

Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"

Related

Jenkinsfile how to mimic 2 separate test and build jobs

In the old configuration we had 2 jobs, test and build.
The build ran after test had run successfully, but we could manually trigger build if we want to skip the tests.
After we switched to multiple pipelines using a Jenkinsfile, we had to put those 2 build jobs into the same file:
stage('Running tests'){
    ...
}
stage('Build'){
    ...
}
So now the build step is only triggered after the tests run successfully, and we cannot manually trigger the build without commenting out the test steps and committing to the repository.
I am wondering if there is a better approach/practice for utilising the Jenkinsfile to overcome this limitation?
Using pipeline and a Jenkinsfile is becoming the standard and preferred way of running jobs on Jenkins nowadays, so using a Jenkinsfile is certainly the way to go.
One way to solve the problem is to make the job parameterized:
// Set the parameter properties; this will be done on the first run so that we can trigger with parameters manually
properties([parameters([booleanParam(defaultValue: true, description: 'Testing will be done if this is checked', name: 'DO_TEST')])])
stage('Running tests'){
    // Putting the check inside of the stage step so that we don't confuse the stage view
    if (params['DO_TEST']) {
        ...
    }
}
stage('Build'){
    ...
}
The first time the job runs, it will add a parameter to the job. After that we can trigger manually and select whether tests should run. The default value will be used when it's triggered by SCM.
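For comparison, the same idea can be written in declarative syntax with a parameters block and a when condition; a minimal sketch with placeholder steps:
pipeline {
    agent any
    parameters {
        booleanParam(name: 'DO_TEST', defaultValue: true,
                     description: 'Testing will be done if this is checked')
    }
    stages {
        stage('Running tests') {
            when { expression { params.DO_TEST } }  // stage is skipped when unchecked
            steps {
                echo 'run the tests here'
            }
        }
        stage('Build') {
            steps {
                echo 'run the build here'
            }
        }
    }
}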

Jenkins how to create pipeline manual step

Prior to Jenkins 2 I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That works great, but I want to use the new Jenkins pipeline, generated from a groovy script (Jenkinsfile), to create the manual step.
So far I have come up with the input Jenkins step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    //deploy things
}
But this waits for user input, and until it is given the build is not marked as completed. I could add a timeout to the input, but this won't allow me to pick a build and trigger its deployment later on.
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins pipeline as I had with the Build Pipeline Plugin?
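For reference, the timeout variant mentioned above would look roughly like this (the duration is arbitrary):
stage 'deployment'
// keeping input outside node {} means no executor is held while waiting;
// the timeout aborts the wait instead of blocking forever
timeout(time: 1, unit: 'DAYS') {
    input 'Do you approve deployment?'
}
node {
    //deploy things
}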
This is a huge gap in the Jenkins Pipeline capabilities, IMO. It is definitely hard to provide, due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into the new workspace. This allows the second pipeline to pick up where the previous one left off. Of course there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
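A rough sketch of that archive/copy hand-off (the job name 'pipeline-one' is illustrative, and copyArtifacts comes from the Copy Artifact plugin):
// Pipeline 1: package the whole workspace and archive it as an artifact
node {
    // ... build steps ...
    sh 'tar --exclude=workspace.tar.gz -czf workspace.tar.gz .'
    archiveArtifacts artifacts: 'workspace.tar.gz'
}

// Pipeline 2 (a separate job): fetch the artifact and unpack it
node {
    copyArtifacts projectName: 'pipeline-one', filter: 'workspace.tar.gz'
    sh 'tar -xzf workspace.tar.gz'
    // ... deployment picks up where pipeline 1 left off ...
}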
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md

Jenkins Pipeline and Promotions

Given a build job with an implemented promotion cycle, i.e. Dev -> QA -> Performance -> Production:
What is the correct way to migrate this cycle into a pipeline? It looks rather clean/structured to call each of the above-mentioned jobs. Yet how can I query the build ID (to be able to call the deployment job)? Or have I totally misunderstood the pipeline concept?
You might consider multiple solutions:
Trigger each job sequentially
Just call each job sequentially using the build step:
node() {
    stage "Dev"
    build job: 'Dev'
    stage "QA"
    build job: 'QA'
    // Your other promotion cycles...
}
It is easy to use and will probably be compliant with your existing setup, but I'm not a big fan of this solution because the actual output of your pipeline stages (Dev, QA, etc.) will really live in the dedicated jobs (Dev job, QA job) instead of directly inside your pipeline. Your pipeline becomes an empty shell just calling other jobs...
Call pipeline functions instead of jobs
Define a pipeline function for each of your promotion cycles (preferably in an external file) and then call each function sequentially. Example:
node {
    git 'http://urlToYourGit/projectContainingYourFunctions'
    cycles = load 'promotions-cycles.groovy'
    stage "Dev"
    cycles.dev()
    stage "QA"
    cycles.qa()
    // Your other promotion cycle calls...
}
The biggest advantage is that your promotion cycle code is committed to your Git repository, and all your stages' output is actually part of your pipeline output, which is great for easy debugging.
Plus you can easily apply conditions based on the success/failure of your functions (e.g. if your QA stage fails you don't want to go any further).
Note that both solutions should allow you to launch your promotions cycles in parallel if needed and to pass parameters to either your jobs or functions.
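For reference, a minimal sketch of what the loaded promotions-cycles.groovy could contain (the function names match the calls above; the bodies are placeholders):
// promotions-cycles.groovy -- fetched and loaded by the pipeline above
def dev() {
    // build and deploy to the Dev environment (placeholder)
    echo 'deploying to Dev'
}

def qa() {
    // deploy to QA and run its test suite (placeholder)
    echo 'deploying to QA'
}

// 'load' returns this script object so the caller can invoke its methods
return this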
It is better to call each build in separate pipeline stages. Something like this:
stage "Dev"
node{
build job: 'Dev', parameters:
[
[$class: 'StringParameterValue', name: 'param', value: "param"],
];
}
stage "QA"
node{
build job: 'QA'
}
etc...
To repeat this process you can use the retry option or an endless loop in Groovy.
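For instance, wrapping a build call in the retry step (a sketch; the retry count is arbitrary):
stage "QA"
node {
    retry(3) {
        // re-runs the QA job up to 3 times if it fails
        build job: 'QA'
    }
}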

How do you tie two different build pipelines together in Jenkins?

I have a build pipeline in Jenkins that builds and deploys the back-end components, which expose a REST API. I have another build pipeline that builds and deploys the front-end components, which call the back-end components. The back-end and front-end components live in separate Git repositories.
The build job of each pipeline is kicked off when a commit occurs in each respective Git repository.
I would like to run automated functional tests at the end of each build pipeline. But how do I know that both pipelines are finished and that the functional tests should run? Can I link the two pipelines together?
One approach is to use the Locks and Latches plugin and give the jobs of each pipeline their own lock, e.g. Pipeline-A and Pipeline-B; the job that runs the tests is then configured to obtain the locks on both Pipeline-A and Pipeline-B. This both prevents the test job from running if any part of either pipeline is running, and blocks any changes on the pipelines while the tests are running.
If you'd only like to lock on the deploy jobs, you can use the same approach but only configure the deploy jobs with the locks; this will allow normal builds to run as normal, but deploy jobs queue up while the tests run.
Assumption: any deploy job triggers a test execution.
A second approach is to have your job pipelines set up such that before performing a deployment they trigger a single job, in the following layout:
EndOfPipelineA -> SystemDeploymentController
EndOfPipelineB -> SystemDeploymentController
SystemDeploymentController -> DeployAppOne
SystemDeploymentController -> DeployAppTwo
DeployAppTwo -> TestExecution
DeployAppOne -> TestExecution
Then you use the Join plugin to only run the TestExecution job when both the deployments are complete AND successful.
The second approach allows you to:
conditionally control the test execution depending on the success of the deployments,
have a single job that lets you redeploy your whole system if you make any changes to the system it runs on, and then run tests automatically,
potentially make use of the Promotions plugin to highlight "good configurations" where both apps worked well together.
However, it is a bit trickier to manage.
Although this is an old question, you might consider restructuring your build pipeline using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
You can use the build step for this. Let's say you have pipelines named parent and child. In the parent pipeline you can define:
pipeline {
    agent any
    stages {
        stage ('call-child-pipeline') {
            steps {
                build job: 'child'
            }
        }
    }
}
You can also pass parameters to the child pipeline:
stage ('call-child-pipeline') {
    steps {
        build job: 'child', parameters: [string(name: 'my_param', value: "my_value")]
    }
}
If you don't want to wait until the child pipeline is finished, add wait: false, e.g.:
build job: 'child', wait: false
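Conversely, when the parent does wait (the default), the build step returns a handle to the downstream run, so the child's outcome can be reported in the parent; a sketch:
stage ('call-child-pipeline') {
    steps {
        script {
            // propagate: false keeps the parent from failing automatically,
            // so we can inspect and report the child's result ourselves
            def child = build job: 'child', propagate: false
            echo "child #${child.number} finished with status ${child.result}"
        }
    }
}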

How do I make a Jenkins job start after multiple simultaneous upstream jobs succeed?

In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only if all branches of that fork succeed (or of 'joining' the fork back together).
Jenkins has a "Build after other projects are built" button, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (The Build Flow plugin) was made popular a year or so after this question, which is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage 'build'
...
// deploy
stage 'deploy'
...
// run tests in parallel
stage 'test'
parallel 'functional': {
    ...
}, 'performance': {
    ...
}
// promote artifacts
stage 'promote'
...
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
In the upstream job (here, build), create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
archive the build.tag artifact.
record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
Configure it to get promoted when the promotion job is successful.
Setting up the downstream jobs
I use 2 parameters, PARENT_JOB_NAME and PARENT_BUILD_NUMBER.
Copy the artifacts from the upstream build job using the Copy Artifact Plugin:
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
Setting up the downstream promotion job
Do the same as above to establish the upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions, like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]
// then deploy
build("deploy")
// then your qualifying tests
parallel (
    { build("functional tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) },
    { build("performance tests",
            PARENT_BUILD_NUMBER: parent_build_number,
            PARENT_JOB_NAME: parent_job_name) }
)
// if nothing failed till now...
build("promotion",
    PARENT_BUILD_NUMBER: parent_build_number,
    PARENT_JOB_NAME: parent_job_name)
// knock yourself out...
build("more expensive QA tests",
    PARENT_BUILD_NUMBER: parent_build_number,
    PARENT_JOB_NAME: parent_job_name)
good luck.
There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that fires when the downstream jobs are completed, and specify the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file, to ensure the ORIGIN of the fingerprint is properly set.
The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
Jenkins recently announced first class support for workflow.
I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.
Answers by jason & dnozay are good enough. But in case someone is looking for easy way just use JobFanIn plugin.
This diamond dependency build pipeline could be configured with the DepBuilder plugin. DepBuilder uses its own domain specific language, which in this case would look like:
_BUILD {
    // define the maximum duration of the build (4 hours)
    maxDuration: 04:00
}
// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page.
If any of the upstream jobs didn't succeed, the build will be automatically aborted. Abort behavior can be tweaked on a per-job basis; for more info, see the DepBuilder documentation.
