Jenkins pipeline: return value of build step - jenkins

In this integration pipeline in Jenkins, I am triggering different builds in parallel using the build step, as follows:
stage('trigger all builds') {
    parallel {
        stage('componentA') {
            steps {
                script {
                    def myjob = build job: 'componentA', propagate: true, wait: true
                }
            }
        }
        stage('componentB') {
            steps {
                script {
                    def myjob = build job: 'componentB', propagate: true, wait: true
                }
            }
        }
    }
}
I would like to access the return value of the build step, so that I can know in my Groovy scripts which job name and build number were triggered.
I have found in the examples that the object returned has getters like getProjectName() or getNumber() that I can use for this.
But how do I know the exact class of the returned object and the list of methods I can call on it? This seems to be missing from the Pipeline documentation. I am asking for this case in particular, but generally speaking, how can I know the class of the returned object and its documentation?

The step documentation is generated based on some files that are bundled with the plugin, which sometimes isn't enough. One easy way would be to just print out the class of the result object by calling getClass:
def myjob=build job: 'componentB', propagate: true, wait: true
echo "${myjob.getClass()}"
This output would tell you that the result (in this case) is a org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper which has published Javadoc.
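For instance, once you know the result is a RunWrapper, you can call the getters listed in that Javadoc directly (a minimal sketch; 'componentA' is the job name from the question):
script {
    def myjob = build job: 'componentA', propagate: true, wait: true
    echo "Triggered ${myjob.getProjectName()} #${myjob.getNumber()}"     // job name and build number
    echo "Result: ${myjob.getResult()}, URL: ${myjob.getAbsoluteUrl()}"  // downstream result and link
}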
For other cases, I usually have to dive into the Jenkins source code. Here is my general strategy:
Figure out which plugin the step comes from either by the step documentation, jenkins.io steps reference, or just searching the internet
From the plugin site, go to the source code repository
Search for the String literal of the step name, and find the step type that returns it. In this case, it looks to be coming from the BuildTriggerStep class, which extends AbstractStepImpl
@Override
public String getFunctionName() {
    return "build";
}
Look at the nested DescriptorImpl to see what execution class is returned
public DescriptorImpl() {
    super(BuildTriggerStepExecution.class);
}
Go to BuildTriggerStepExecution and look at the execution body in the start() method
Reading over the workflow step README shows that something should call context.onSuccess(value) to return a result. There is one place in that file, but that is only on the "no-wait" case, which always returns immediately and is null (source).
if (step.getWait()) {
    return false;
} else {
    getContext().onSuccess(null);
    return true;
}
Ok, so it isn't completing in the step execution, so it must be somewhere else. We can also search the repository for onSuccess and see what else might trigger it from this plugin. We find that a RunListener implementation handles setting the result asynchronously for the step execution if it has been configured that way:
for (BuildTriggerAction.Trigger trigger : BuildTriggerAction.triggersFor(run)) {
    LOGGER.log(Level.FINE, "completing {0} for {1}", new Object[] {run, trigger.context});
    if (!trigger.propagate || run.getResult() == Result.SUCCESS) {
        if (trigger.interruption == null) {
            trigger.context.onSuccess(new RunWrapper(run, false));
        } else {
            trigger.context.onFailure(trigger.interruption);
        }
    } else {
        trigger.context.onFailure(new AbortException(run.getFullDisplayName() + " completed with status " + run.getResult() + " (propagate: false to ignore)"));
    }
}
run.getActions().removeAll(run.getActions(BuildTriggerAction.class));
The call trigger.context.onSuccess(new RunWrapper(run, false)) is where the RunWrapper result comes from.

The result of the downstream job is given in the result attribute of the returned object.
I recommend using propagate: false to get control over how the result of the downstream job affects the current build.
Example:
pipeline {
    [...]
    stages {
        stage('Dummy') {
            steps {
                echo "Hello world #1"
            }
        }
        stage('Fails') {
            steps {
                script {
                    downstream = build job: 'Pipeline Test 2', propagate: false
                    if (downstream.result != 'SUCCESS') {
                        unstable(message: "Downstream job result is ${downstream.result}")
                    }
                }
            }
        }
    }
    [...]
}
In this example, the current build is set to UNSTABLE whenever the downstream build has not been successful.
The result can be: SUCCESS, FAILURE, UNSTABLE, or ABORTED.
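If you need to react differently to an aborted downstream build, a sketch along the same lines (using the same job name as in the example above) could look like this:
script {
    def downstream = build job: 'Pipeline Test 2', propagate: false
    if (downstream.result == 'ABORTED') {
        error "Downstream job was aborted"                                      // fail the current build
    } else if (downstream.result != 'SUCCESS') {
        unstable(message: "Downstream job finished with ${downstream.result}")  // mark current build unstable
    }
}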
For your other question, see "Is there a built-in function to print all the current properties and values of an object?" and "Groovy / Grails: how to determine a data type?".

I got the class name from the build log:
13:20:52 org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper@5fe8d20f
and then I found the documentation at https://javadoc.jenkins.io/; you can get all you need from this page:
link: https://javadoc.jenkins.io/plugin/workflow-support/org/jenkinsci/plugins/workflow/support/steps/build/RunWrapper.html

Related

Build job on Jenkins pipeline returns a null object when wait is set to false

I have a pipeline in Jenkins whose first stage is about triggering another job with the build function. I want to have the RunWrapper object back from the build function, but it returns a null object when the wait parameter is set to false.
Basically, in this case the build function returns the correct object type:
stage('Trigger Another Job With Propagate') {
    steps {
        script {
            triggeredJob = build(job: 'Tiggered_Job', wait: true)
            echo "${triggeredJob.getClass()}"
            // correct object type
        }
    }
}
But when we don't want to wait for the job to finish before stepping into the next stages, a null object is returned:
stage('Trigger Another Job With Propagate') {
    steps {
        script {
            triggeredJob = build(job: 'Tiggered_Job', wait: false)
            echo "${triggeredJob.getClass()}"
            // null object type
        }
    }
}
I think that behavior is not normal; we should be able to use the object in any case.

How to call a Jenkinsfile or a Jenkins Job from another Jenkinsfile?

In a scripted pipeline written in Groovy, I have two Jenkinsfiles, namely Jenkinsfile1 and Jenkinsfile2.
Is it possible to call Jenkinsfile2 from Jenkinsfile1?
Let's say the following is my Jenkinsfile1:
#!groovy
stage('My build') {
    node('my_build_node') {
        def some_output = true
        if (some_output) {
            // How to call Jenkinsfile2 here?
        }
    }
}
How do I call Jenkinsfile2 above when the output has a value which is not empty?
Or is it possible to call another Jenkins job which uses Jenkinsfile2?
Your question wasn't quite clear to me. If you just want to load and evaluate some piece of Groovy code into yours, you can use load() (as @JoseAO previously stated). Apart from his example, if your file (Jenkinsfile2.groovy) has a call() method, you can use it directly, like this:
node('master') {
    pieceOfCode = load 'Jenkinsfile2.groovy'
    pieceOfCode()
    pieceOfCode.bla()
}
Now, if you want to trigger another job, you can use the build() step, even if you're not using a declarative pipeline. The thing is that the pipeline you're calling must be created in Jenkins, because build() takes the job name as a parameter, not the pipeline filename. Here's an example of how to call a job named pipeline2:
node('master') {
    build 'pipeline2'
}
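If the downstream job defines parameters, you can pass them in the same call (a sketch; the parameter names here are hypothetical):
node('master') {
    build job: 'pipeline2', parameters: [
        string(name: 'ENVIRONMENT', value: 'staging'),  // hypothetical string parameter
        booleanParam(name: 'RUN_TESTS', value: true)    // hypothetical boolean parameter
    ]
}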
Now, as for your question "How do I call Jenkinsfile2 above when output has a value which is not empty?", if I understood correctly, you're trying to run some shell command, and if its output is not empty, you'll load the Jenkinsfile/pipeline. Here's how to achieve that:
// Method #1
node('master') {
    try {
        sh 'my-command-goes-here'
        build 'pipeline2' // if you're trying to call another job
        // If you're trying to load and evaluate a piece of code
        pieceOfCode = load 'Jenkinsfile2.groovy'
        pieceOfCode()
        pieceOfCode.bla()
    }
    catch (Exception e) {
        print("${e}")
    }
}
// Method #2
node('master') {
    def commandResult = sh script: 'my-command-goes-here', returnStdout: true
    if (commandResult.length() != 0) {
        build 'pipeline2' // if you're trying to call another job
        // If you're trying to load and evaluate a piece of code
        pieceOfCode = load 'Jenkinsfile2.groovy'
        pieceOfCode()
        pieceOfCode.bla()
    }
    else {
        print('Something went bad with the command.')
    }
}
Best regards.
For example, your Jenkinsfile2 is my "pipeline2.groovy":
def pipeline2 = load (env.PATH_PIPELINE2 + '/pipeline2.groovy')
pipeline2.method()

How to disable triggers from branch indexing but still allow SCM triggering in multibranch jobs

When using a Jenkins Multibranch Pipeline job, if you choose Suppress Automatic SCM trigger in the job, it will stop the job from building after indexing branches (great functionality).
However, for some reason this ALSO kills the ability to trigger the build from SCM events!
Is there any way to stop builds from triggering after branch discovery (branch indexing) but still build normally by SCM events?
You can always add logic to your pipeline to abort on branch indexing causes. For example:
boolean isBranchIndexingCause() {
    def isBranchIndexing = false
    if (!currentBuild.rawBuild) {
        return true
    }
    currentBuild.rawBuild.getCauses().each { cause ->
        if (cause instanceof jenkins.branch.BranchIndexingCause) {
            isBranchIndexing = true
        }
    }
    return isBranchIndexing
}
Adjust the logic to suit your use-case.
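For example, in a scripted pipeline you could skip the real work when that helper returns true (a minimal sketch; the stage content is hypothetical):
node {
    if (isBranchIndexingCause()) {
        echo 'Build was triggered by branch indexing; skipping.'
        return
    }
    stage('Build') {
        echo 'Running the real build steps here'
    }
}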
EDIT: The Pipeline Syntax > Global Variables Reference embedded within the Jenkins UI (e.g. <jenkins url>/job/<pipeline job name>/pipeline-syntax/globals) has information about the currentBuild global variable, which led to some javadocs:
The currentBuild variable, which is of type RunWrapper, may be used to refer to the currently running build. It has the following readable properties:
...
rawBuild:
a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS
...
See also: jenkins.branch.BranchIndexingCause
I know this post is very old, but maybe someone still has this issue. First, you need to install the basic-branch-build-strategies plugin.
If you are using Jenkins Job DSL:
buildStrategies {
    buildAllBranches {
        strategies {
            skipInitialBuildOnFirstBranchIndexing()
        }
    }
}
That's not what this functionality is for; see https://issues.jenkins-ci.org/browse/JENKINS-41980:
Suppressing SCM Triggers should suppress all builds triggered by a
detected change in the SCM irrespective of how the change was detected
There is a feature request for such functionality in the Jenkins backlog: JENKINS-63673 Allow configuring in NoTriggerBranchProperty which builds should be suppressed. I created a pull request today, so there is a chance it will be a part of the Branch API plugin in the future. In the meantime you may use the custom version (see the build).
How to use the JobDSL plugin to configure it automatically:
multibranchPipelineJob {
    // ...
    branchSources {
        branchSource {
            source {
                // ...
            }
            strategy {
                allBranchesSame {
                    props {
                        suppressAutomaticTriggering {
                            strategyId(2)
                        }
                    }
                }
            }
        }
    }
    // ...
}
From what I understand, this happens because the pipeline definition is not read when you set "Suppress Automatic SCM trigger".
And so, all triggers (SCM, upstream, ...) that you declared in the pipeline won't be known by Jenkins until you run the job a first time.
So if you don't want builds to be triggered by branch indexing, set the option "Suppress Automatic SCM trigger".
If you want your pipeline to be known by Jenkins, so that it can react to your triggers, you should not set "Suppress Automatic SCM trigger".
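To make this concrete: triggers declared in the Jenkinsfile, such as the one below, are only registered with Jenkins after the pipeline has executed at least once, which is why suppressing that first indexing-triggered run also delays them (a sketch; the cron spec is arbitrary):
pipeline {
    agent any
    triggers {
        // Only registered with Jenkins after the job has run once
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}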
We can modify the branch-api-plugin accordingly: https://github.com/jenkinsci/branch-api-plugin
Here, src/main/java/jenkins/branch/OverrideIndexTriggersJobProperty.java is the file that decides which tasks get a build triggered.
Here I have modified the function so that feature branches are not triggered but will still be added to the list.
By default it evaluates the checkbox Suppress Automatic SCM trigger:
@Extension
public static class Dispatcher extends Queue.QueueDecisionHandler {

    private static final Logger LOGGER = Logger.getLogger(Dispatcher.class.getName());

    @SuppressWarnings("rawtypes") // untypable
    @Override
    public boolean shouldSchedule(Queue.Task p, List<Action> actions) {
        LOGGER.log(Level.INFO, "[TARUN_DEBUG] TASK NAME : " + p.getName());
        if (p.getName().startsWith("feature") || p.getName().startsWith("bugfix")) {
            return false;
        } else if (p.getName().startsWith("release") || p.getName().equals("master") || p.getName().startsWith("develop") || p.getName().startsWith("part of") || p.getName().startsWith("PR-")) {
            // these branches fall through to the default handling below
        } else {
            LOGGER.log(Level.INFO, "[TARUN_DEBUG] NOT TRIGGERED " + p.getName());
            return false;
        }
        for (Action action : actions) {
            if (action instanceof CauseAction) {
                for (Cause c : ((CauseAction) action).getCauses()) {
                    if (c instanceof BranchIndexingCause) {
                        if (p instanceof Job) {
                            Job<?,?> j = (Job) p;
                            OverrideIndexTriggersJobProperty overrideProp = j.getProperty(OverrideIndexTriggersJobProperty.class);
                            if (overrideProp != null) {
                                return overrideProp.getEnableTriggers();
                            } else {
                                return true;
                            }
                        }
                    }
                }
            }
        }
        return true;
    }
}
To skip a build triggered by branch indexing you can place the following code snippet in your pipeline. No extra libraries are required.
when {
    // Run pipeline/stage only if not triggered by branch indexing.
    not {
        triggeredBy 'BranchIndexingCause'
    }
}
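For placement, the when block goes inside a stage of a declarative pipeline, for example (a sketch; the stage name and step are placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            when {
                // Skip this stage when the build was caused by branch indexing
                not { triggeredBy 'BranchIndexingCause' }
            }
            steps {
                echo 'Not triggered by branch indexing'
            }
        }
    }
}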

Is there a way to programmatically inject post actions in declarative pipeline

I need to share some code between several stages, which would also need to add post actions. To do so, I thought about putting everything in a method, which will be called from:
pipeline {
    stages {
        stage('Some') {
            steps {
                script { commonCode() }
            }
        }
    }
}
However, I'm not sure how I could install post actions from commonCode. The documentation does not mention a thing. Looking at the code implies that this DSL is basically just playing with a hash map, but I don't know whether it would be possible to access it from the method and modify it on the fly.
Basically I would like to do something like this in commonCode:
if (something) {
    attachPostAction('always', { ... })
} else {
    attachPostAction('failure', { ... })
}
The only thing that works so far is that in commonCode I do:
try {
    ...
    onSuccess()
} catch (e) {
    onError()
} finally {
    onAlways()
}
But I was wondering if there is a more elegant way...
Now that I better understand the question (I hope)...
This is a pretty interesting idea--generate your post actions on the fly in previous stages.
It turns out to be really easy. I tried one option (success) that stores various closures in a list, then iterates through the list and runs all the closures in the post action. Then I did another (failure) where I just saved a single closure in a variable and ran that. Both work well.
Below is the code that does this. Uncomment the error line to simulate a failed build.
def postSuccess = []
def postFailure

pipeline {
    agent any
    stages {
        stage('Success') {
            steps {
                script {
                    println "Configure Success Post Steps"
                    postSuccess[0] = { echo "This is a successful build" }
                    postSuccess[1] = {
                        echo "Running multiple steps"
                        sh "ls -latr"
                    }
                }
            }
        }
        stage('Failure') {
            steps {
                script {
                    println "Configure Failure Post Steps"
                    postFailure = {
                        echo "This build failed"
                        echo "Running multiple steps for failure"
                        sh """
                            whoami
                            pwd
                        """
                    }
                }
                // error "Simulate a failed build" // uncomment this line to make the build fail
            }
        }
    } // stages
    post {
        success {
            echo "SUCCESS"
            script {
                for (def my_closure in postSuccess) {
                    my_closure()
                }
            }
        }
        failure {
            echo "FAILURE!"
            script {
                postFailure()
            }
        }
    }
} // pipeline
You can use regular Groovy scripting outside of the pipeline block. While I haven't tried it, you should be able to define a method outside of there and then call it from inside the pipeline. But method calls can't be used as steps directly; you would need to wrap them in a script step. Post actions take the same steps as steps{} blocks, so if you can use it in steps, you can use it in the post sections. You will need to watch scoping carefully or you will end up trying to sort out why things are null in some places.
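A minimal sketch of that idea (the method name and body are hypothetical):
// Plain Groovy method defined outside the pipeline block
def notify(String status) {
    echo "Build finished with status: ${status}"
}

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Method calls must be wrapped in a script step
                script { notify('RUNNING') }
            }
        }
    }
    post {
        always {
            script { notify(currentBuild.currentResult) }
        }
    }
}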
You can also use a shared library. You could define a step in the shared library and then use it like any other step in a steps{} block or one of the post blocks.
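For instance, a shared-library step lives in the library's vars/ directory and is called by its file name (the step name commonCode and the library name my-shared-library here are hypothetical):
// vars/commonCode.groovy in the shared library
def call() {
    echo 'Shared code that can run in steps {} or in post {} blocks'
}

// Jenkinsfile using the library step
@Library('my-shared-library') _
pipeline {
    agent any
    stages {
        stage('Some') {
            steps { commonCode() }
        }
    }
    post {
        always { commonCode() }
    }
}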

Jenkins Declarative Pipeline detect first run and fail when choice parameters present

I often write Declarative Pipeline jobs where I setup parameters such as "choice". The first time I run the job, it blindly executes using the first value in the list. I don't want that to happen. I want to detect this case and not continue the job if a user didn't select a real value.
I thought about using "SELECT_VALUE" as the first item in the list and then fail the job if that is the value. I know I can use a 'when' condition on each stage, but I'd rather not have to copy that expression to each stage in the pipeline. I'd like to fail the whole job with one check up front.
I don't like the UI for 'input' tasks because the controls are hidden until you hover over a running stage.
What is the best way to validate arguments with a Declarative Pipeline? Is there a better way to detect when the job is run for the first time and stop?
I've been trying to figure this out myself and it looks like the pipeline runs with a fully populated parameters list.
So, the answer for your choice option is to make the first item a placeholder value like "select" and have your code use when to check for it.
For example
def paramset = true
pipeline {
    agent any
    parameters {
        choice(choices: ['select', 'test', 'proof', 'prod'], name: 'ENVI')
    }
    stages {
        stage('check') {
            when { expression { return params.ENVI == 'select' } }
            steps {
                script {
                    echo "Missing parameters"
                    paramset = false
                }
            }
        }
        stage('step 1') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 1"
                }
            }
        }
        stage('step 2') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 2"
                }
            }
        }
        stage('step 3') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 3"
                }
            }
        }
    }
}
