Jenkins - How to get and use upstream info in downstream

An upstream job called "A" runs first. On success of A, the test cases run as the downstream project "B". While sending mail from B we have to include the upstream project details (upstream project name and build number) in the mail, so that we can easily map/correlate the test run with the respective upstream job.
The downstream project's dashboard displays the following details:
Started by upstream project Dev_RM_3.0_CI_Test build number 10
originally caused by:
I checked https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project but couldn't find anything about inheriting these details in the downstream job.
I created a sample job with the lines below to display the current job details:
echo $BUILD_NUMBER
echo $JOB_NAME
echo $BUILD_ID
But the output is
Building on master in workspace /var/lib/jenkins/workspace/env
[env] $ /bin/sh -xe /tmp/hudson970280339057643719.sh
+ echo 1
1
+ echo env
env
+ echo 1
1
Finished: SUCCESS
How can I inherit the upstream details in the downstream job?
And how do I get the current job's details?

The message that you refer to in your question, for example "Started by upstream project "Chained/1-First" build number 34", is available in the Jenkins Cause.
Jenkins keeps the upstream build info in its Cause object. If you are using the build DSL or Pipelines you can get it in Groovy. Alternatively you can curl the job URL and use jq to extract the Cause.
For example, curl http://localhost:8080/job/Chained/job/2-Second/17/api/json returns (excerpt):
"_class": "org.jenkinsci.plugins.workflow.job.WorkflowRun",
"actions": [{
"_class": "hudson.model.CauseAction",
"causes": [{
"_class": "hudson.model.Cause$UpstreamCause",
"shortDescription": "Started by upstream project \"Chained/1-First\" build number 34",
"upstreamBuild": 34,
"upstreamProject": "Chained/1-First",
"upstreamUrl": "job/Chained/job/1-First/"
}]
}
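To extract just the upstream fields from that JSON, a minimal jq sketch could look like the following (this assumes jq is installed on the agent and reuses the example URL above; adapt the URL to your own job):
node {
    // Query the downstream build's JSON API and pull the upstream cause out with jq.
    def upstreamInfo = sh(returnStdout: true, script: '''
        curl -s http://localhost:8080/job/Chained/job/2-Second/17/api/json |
          jq -r '.actions[]
                 | select(._class == "hudson.model.CauseAction")
                 | .causes[]
                 | select(._class == "hudson.model.Cause$UpstreamCause")
                 | .upstreamProject + " #" + (.upstreamBuild | tostring)'
    ''').trim()
    echo upstreamInfo   // e.g. Chained/1-First #34
}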
Or from the pipeline for example:
node() {
    stage('downstream') {
        def upstream = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
        echo upstream?.shortDescription
    }
}
You can get a bunch of information out of the Cause, pending the necessary script approvals or a global shared step. You will get null if a different cause triggered the build, e.g. an SCM commit or a user.

You can pass the upstream variables to the downstream job via build parameters, and then access them in the downstream job using things such as ${MyParameter1} and ${MyParameter2}.
You would need to:
1. Add build parameters to the downstream job. For example, a string parameter named "ParentJobName".
2. Add a post-build "Trigger downstream parameterized builds on other projects" action to the upstream job.
3. Add something like "Current build parameters" or "Predefined parameters" to step 2 and pass in whatever you need. For example:
ParentJobName=${JOB_NAME}
4. Access the parameters as you would other build variables, e.g. ${ParentJobName}.
You should be able to pass in the basic stuff that way. Anything more complicated and you will probably be better off using a plugin like the Copy Artifact Plugin to copy files, or using the Jenkins API in a system Groovy step to get/modify the upstream build, etc.
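If both jobs are Pipeline jobs rather than freestyle jobs, a rough equivalent (the parameter names here are only examples) is to hand the upstream identity over explicitly when triggering the downstream job:
// In the upstream Jenkinsfile: pass the upstream job name and build number to job B.
build job: 'B', parameters: [
    string(name: 'ParentJobName', value: env.JOB_NAME),
    string(name: 'ParentBuildNumber', value: env.BUILD_NUMBER)
]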

You can simply use params.variableName in your downstream job to retrieve the parameters passed from your upstream job. Your downstream job does not necessarily need to be a parameterized job itself.
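For example, assuming the upstream job passed a string parameter named ParentJobName (an illustrative name, not something Jenkins sets for you), the downstream Jenkinsfile could read it like this:
echo "Triggered by upstream job ${params.ParentJobName}"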

Extending @razboy's answer:
This is a good way if Cause cannot be whitelisted in the sandbox. I forgot about the Jenkins API and used the current build's console output to look for a string about the trigger cause. You can fetch data from the API as @razboy does, or get the current console output and grep it if you only need something simple. The Jenkins API is more flexible for more complex logic. To get API help, append /api to your build URL: <jenkins_url>/job/<jobName>/<buildNumber>/api
def buildUrl = env.BUILD_URL
sh "wget $buildUrl -O currentConsole.txt"
def statusCode = sh returnStatus: true, script: 'grep -q "Started by upstream project" currentConsole.txt'
boolean startedByUpstream = (statusCode == 0)

MeowRude's answer helped me. To recap it, in the upstream job:
build job: 'mail-test', parameters: [[$class: 'StringParameterValue', name: 'VERSION_NUMBER', value: '1.0.0.0']]
And in the downstream job:
echo "${params.VERSION_NUMBER}"

You may have to have certain plugins installed, but
def causes = currentBuild.getBuildCauses()
will return an ArrayList of objects that will most likely provide the necessary details, for example upstreamProject for the full project name and upstreamBuild for the build number. Then you can correlate results between up- and downstream builds easily.
Source: link to pipeline-examples in razboy's comment above
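For instance, a minimal sketch in a downstream Pipeline (it only prints something when an upstream cause is actually present):
// Filter the build causes down to upstream causes only.
def upstreamCauses = currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')
if (upstreamCauses) {
    echo "Triggered by ${upstreamCauses[0].upstreamProject} #${upstreamCauses[0].upstreamBuild}"
}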

Related

How to get build number as a parameter for downstream job from upstream job with declarative pipeline code

I have two jobs: one is a CI job and the other is a CD job. I want the CI build number to be used in the CD job. Can you please help me with a declarative pipeline script to get the build number as a parameter? Here the CI job is calling the CD job.
Jenkins already provides a simple means to access the number of the current build using env.BUILD_NUMBER. So if you wanted to pass the build number of CI to the downstream job CD, you could do
build([
    job: 'CD',
    parameters: [
        string(name: 'MAIN_BUILD_NUMBER', value: "${env.BUILD_NUMBER}")
    ]
])
Then in the CD job, declare a parameter like this:
parameters {
    string(defaultValue: null, description: 'Build No', name: 'MAIN_BUILD_NUMBER')
}
You should then be able to use ${env.MAIN_BUILD_NUMBER} anywhere in your CD job's Jenkinsfile.
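For example, a stage in the CD Jenkinsfile could then reference it like this (the stage name and message are only illustrative):
stage('Report upstream build') {
    steps {
        echo "Deploying the artifacts produced by CI build #${env.MAIN_BUILD_NUMBER}"
    }
}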

Jenkins: how to stop downstream projects when upstream is aborted

I have an upstream project in Jenkins which calls some downstream projects in sequence with the "Trigger/call builds on other projects" plugin.
How can I automatically abort a build of any downstream project when I abort the upstream project's build?
If the upstream build is aborted, the downstream build keeps running, and I want a different behaviour.
Thanks.
As mentioned in my comment, you can have downstream jobs listen to upstream jobs instead of having upstream jobs trigger downstream jobs. When it comes to parameters, the following groovy code example can be used to retrieve them:
def up_stream_cause = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
if (up_stream_cause != null) {
    def up_stream_run = up_stream_cause.upstreamRun
    def parameters_action = up_stream_run.getAction(hudson.model.ParametersAction)
    def parameters = parameters_action.getParameters()
}
Alternatively, you can of course simply build the downstream build during the upstream build using the following groovy code:
build job: 'job_name',
    parameters: [
        [$class: 'StringParameterValue', name: 'parameter', value: 'value']
    ]
Both of those solutions allow you to not trigger a downstream build when your upstream build fails, aborts or is unstable.
You can review:
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=36603009#content/view/36603009
Pipeline jobs can be stopped by sending an HTTP POST request to URL endpoints of a build:
<build URL>/stop - aborts a Pipeline.
<build URL>/term - forcibly terminates a build (should only be used if stop does not work).
<build URL>/kill - hard-kills a Pipeline. This is the most destructive way to stop a Pipeline and should only be used as a last resort.
This last option is quite effective.
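As a rough illustration only, an upstream script could abort a known downstream build by POSTing to that endpoint. The job path, build number and credentials ID below are placeholders, and with an API token no CSRF crumb is required:
// Sketch: abort downstream build #42 of 'downstream-job' via its /stop endpoint.
withCredentials([usernameColonPassword(credentialsId: 'jenkins-api-token', variable: 'AUTH')]) {
    sh 'curl -s -X POST -u "$AUTH" "$JENKINS_URL/job/downstream-job/42/stop"'
}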

How to control downstream pipeline's interactive input in upstream pipeline

I have an upstream pipeline which is calling another downstream pipeline
build job: "/org/projectA/master",
parameters: [[$class: 'StringParameterValue', name: 'variable', value: 'value']],
wait: true
In my downstream pipeline, there is a step that asks for approval:
input "Deploy to prod?"
Currently the job is paused in the downstream pipeline waiting for approval, but my main job (the upstream pipeline) just waits for the sub-pipeline to finish and doesn't show any message for the approver. So is it possible to display the interactive input in my main pipeline? Then the approver wouldn't need to click through to the sub-pipeline to check its status.
BTW, I cannot move the input to the main pipeline, because there are other steps after it in the sub-pipeline.
Thanks in advance for any suggestion
I really wouldn't recommend it, but there's a way via Jenkins Remote API -
Jenkins input pipeline step filled via POST with CSRF - howto?
curl -X POST -H "Jenkins-Crumb:${JENKINS_CRUMB}" -d json='{"parameter": {"name": "${PARAMETER_NAME}", "value": "${PARAMETER_VALUE}"}}' -d proceed='${SUBMIT_CAPTION}' '${JENKINS_URL}/job/${JOB_NAME}/${BUILD_ID}/input/${INPUT_ID}/submit'
The question would be how would you run this? A new input in the upstream job? Run when?
It might be more useful to divide the downstream job into two and run the actual deploy only when the user accepts the input in the upstream job.

How to pass output from one pipeline to another in jenkins

I'm new to Jenkins and I've been given the simple task of passing the output from one pipeline to another.
Let's say the first pipeline has a script that says echo HelloWorld; how would I pass this output to another pipeline so it displays the same thing?
I've looked at parameterized triggers and a couple of other answers, but I was hoping someone could lay out the step-by-step procedure for me.
If you want to implement it purely with Jenkins pipeline code - what I do is have an orchestrator pipeline job that builds all the pipeline jobs in my process, waits for them to complete then gets the build number:
Orchestrator job
def result = build job: 'jobA'
def buildNumber = result.getNumber()
echo "jobA build number : ${buildNumber}"
In each job, say 'jobA', I arrange to write the output to a known file (a properties file, for example), which is then archived:
jobA
writeFile encoding: 'utf-8', file: 'results.properties', text: 'a=123\r\nb=foo'
archiveArtifacts 'results.properties'
Then after the build of each job like jobA, use the build number and use the Copy Artifacts plugin to get the file back into your orchestrator job and process it however you want:
Orchestrator job
step([$class : 'CopyArtifact',
      filter : 'results.properties',
      flatten : true,
      projectName: 'jobA',
      selector : [$class : 'SpecificBuildSelector',
                  buildNumber: buildNumber.toString()]])
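With the Pipeline Utility Steps plugin the orchestrator could then parse the copied file, for example (a small sketch; the keys match the jobA example above):
// Parse the copied properties file and use its values in the orchestrator.
def props = readProperties file: 'results.properties'
echo "a=${props['a']}, b=${props['b']}"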
You will find these plugins useful to look at:
Copy Artifact Plugin
Pipeline Utility Steps Plugin
If you are chaining jobs instead of using an orchestrator - say jobA builds jobB builds jobC etc - then you can use a similar method. CopyArtifacts can copy from the upstream job or you can pass parameters with the build number and name of the upstream job. I chose to use an orchestrator job after changing from chained jobs because I need some jobs to be built in parallel.

add build parameter in jenkins build schedule

I have a Jenkins job. I want to build the job at a specific time with a build parameter.
I want to do this by using the Build periodically option.
I have input like this:
*/1 * * * * Parameter1
If I do this, Jenkins shows an error.
Is this possible without using any plugin?
If not, then which plugin would be better?
Alternatively, is there a way to give parameters here in the schedule?
My actual requirement is like this:
build in the morning using one parameter
build in the evening using another parameter.
Basically, with the 'Build periodically' option you can't schedule a Jenkins job with parameters.
However, to schedule a job at different times with different parameters, you can use the Parameterized Scheduler plugin
(Manage Jenkins -> Manage Plugins -> search for "Parameterized Scheduler").
Examples:
# Parameter1
H/15 * * * * %Parameter1
# Parameter2
H/30 * * * * %Parameter2
Remember that your parameters have to be set up already, because the plugin is only visible for jobs with parameters.
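For the morning/evening requirement from the question, the plugin's schedule field could then look roughly like this (the parameter name, values and times are placeholders; the plugin expects name=value pairs after the %):
# morning build
0 8 * * * %Parameter1=morningValue
# evening build
0 18 * * * %Parameter1=eveningValue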
The Node and Label parameter plugin can help since it allows you to select individual nodes assuming your different servers qa1 and qa2 are already configured. Hope that clarifies things for you.
With the native Jenkins crontab, it's not possible.
But it should be possible with this plugin:
https://github.com/jwmach1/parameterized-scheduler
You have to fork the repo and build this plugin + do a manual installation.
This tutorial explains how to build a custom plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial
(Setting Up Environment + Building a Plugin)
Maybe not exactly what you want, but it is an interesting hack I found, so I decided to share it. You can programmatically set parameters of a Jenkins job depending on the environment.
# check if the job was triggered by a timer
if env | grep -qE '^BUILD_CAUSE=TIMERTRIGGER$' ; then
    # your logic here, utilise the power of bash
    if [ $(date +"%H") -eq 16 ] ; then PARAM=VALUE_1 ; fi
    if [ $(date +"%H") -eq 17 ] ; then PARAM=VALUE_2 ; fi
fi
Without plugins, you can try cloning the job, and creating a build schedule with different parameter values. I.e. you might have job_morning and job_evening.
See How do I clone a job in jenkins?
Scheduling a parameterized job is possible if there is a default value.
Here is an example using a Jenkinsfile. Suppose that in your pipeline script you have a parameter testUserName defined:
pipeline {
    agent any
    parameters {
        string(name: 'testUserName', defaultValue: 'defaultTestUser',
               description: 'Username to use for test scenarios')
    }
    stages {
        stage('Run tests') {
            steps {
                sh "mvn verify --batch-mode -Dtest.user=${params.testUserName}"
            }
        }
    }
}
When you press the "Build Now" button, the job will run for the first time without asking for params (the defaultValue will be used). After downloading and processing the Jenkinsfile during that first run, the button name changes to "Build with Parameters". You click the button and type another user (not the default one defined in the Jenkinsfile). The problem is that the value you typed will not persist between job runs; it always resets to the defaultValue.
To prevent the value from resetting between job runs, replace
defaultValue: 'defaultTestUser'
with
defaultValue: params.testUserName ?: 'defaultTestUser'
Now the job will always run with the value previously specified on "Build with Parameters". I found this solution on dev.to.
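In context, only the defaultValue line of the parameters block from the pipeline above changes:
parameters {
    string(name: 'testUserName',
           defaultValue: params.testUserName ?: 'defaultTestUser',
           description: 'Username to use for test scenarios')
}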
