We have a Jenkins job that uses a declarative pipeline.
This job can be triggered by several other builds.
In the declarative pipeline, how can I find out which build triggered the pipeline?
Code sample below:
pipeline {
    agent any
    stages {
        stage('find upstream job') {
            steps {
                script {
                    def causes = currentBuild.rawBuild.getCauses()
                    for (cause in causes) {
                        if (cause.class.toString().contains("UpstreamCause")) {
                            println "This job was caused by job " + cause.upstreamProject
                        } else {
                            println "Root cause : " + cause.toString()
                        }
                    }
                }
            }
        }
    }
}
You can check the job's REST API to get extra information, like below:
{
  "_class" : "org.jenkinsci.plugins.workflow.job.WorkflowRun",
  "actions" : [
    {
      "_class" : "hudson.model.ParametersAction",
      "parameters" : [
      ]
    },
    {
      "_class" : "hudson.model.CauseAction",
      "causes" : [
        {
          "_class" : "hudson.model.Cause$UpstreamCause",
          "shortDescription" : "Started by upstream project \"larrycai-sto-46908390\" build number 7",
          "upstreamBuild" : 7,
          "upstreamProject" : "larrycai-sto-46908390",
          "upstreamUrl" : "job/larrycai-sto-46908390/"
        }
      ]
    },
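For example, one way to fetch that JSON from a pipeline (a sketch; the Jenkins URL, job name, and build number placeholders are assumptions, and you may need to add authentication for your instance):

// Fetch the build's metadata, including its causes, from the REST API
def json = sh(
    script: 'curl -s "https://<JENKINS_URL>/job/<JOB_NAME>/<BUILD_NUMBER>/api/json?pretty=true"',
    returnStdout: true
).trim()
echo json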
Reference:
https://jenkins.io/doc/pipeline/examples/#get-build-cause
Get Jenkins upstream jobs
I realize that this is a couple of years old, but the previous response required some additional security setup in my Jenkins instance. After a bit of research, I found that there was a new feature request completed in 11/2018 that addresses this need and exposes build causes in currentBuild. Here is a little lib I wrote that returns the cause, with the string "JOB/" prepended if the build was triggered by another build:
def call(body) {
    // Default to DEBUG = false when no configuration closure is passed in
    if (body == null) { body = { DEBUG = false } }
    def myParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = myParams
    body()
    def causes = currentBuild.getBuildCauses()
    if (myParams.DEBUG) {
        echo "causes count: " + causes.size().toString()
        echo "causes text : " + causes.toString()
    }
    for (cause in causes) {
        if (cause._class.toString().contains("UpstreamCause")) {
            return "JOB/" + cause.upstreamProject
        } else {
            return cause.toString()
        }
    }
}
To use this, I place it in a library in a file named "buildCause.groovy". Then I reference the library at the top of my Jenkinsfile:
library identifier: 'lib@master', retriever: modernSCM(
    [$class: 'GitSCMSource', remote: '<LIBRARY_REPO_URL>',
     credentialsId: '<LIBRARY_REPO_CRED_ID>', includes: '*'])
Then I can call it as needed within my pipeline:
def cause = buildCause()
echo cause
if (!cause.contains('JOB/')) {
    echo "started by user"
} else {
    echo "triggered by job"
}
Larry's answer didn't quite work for me.
But after I modified it slightly with the help of these docs, this version works:
def causes = currentBuild.getBuildCauses()
for (cause in causes) {
    if (cause._class.toString().contains("UpstreamCause")) {
        println "This job was caused by job " + cause.upstreamProject
    } else {
        println "Root cause : " + cause.toString()
    }
}
P.S. Daniel's answer actually mentions this method, but there is so much clutter that I only noticed it after I had written my own solution.
Related
Hi, my Jenkinsfile code is as follows. I am basically trying to call a Python script and execute it, and I have defined some variables in my code. When I try to run it, it gives a "no such property" error at the beginning, and I can't find the reason behind it.
I would really appreciate any suggestions on this.
import groovy.json.*

pipeline {
    agent {
        label 'test'
    }
    parameters {
        choice(choices: '''\
env1
env2'''
            , description: 'Environment to deploy', name: 'vpc-stack')
        choice(choices: '''\
node1
node2'''
            , description: '(choose )', name: 'stack')
    }
    stages {
        stage('Tooling') {
            steps {
                script {
                    // set up terraform
                    def tfHome = tool name: 'Terraform 0.12.24'
                    env.PATH = "${tfHome}:${env.PATH}"
                    env.TFHOME = "${tfHome}"
                }
            }
        }
        stage('Build all modules') {
            steps {
                wrap([$class: 'BuildUser']) {
                    // build all modules
                    script {
                        if (params.refresh) {
                            echo "Jenkins refresh!"
                            currentBuild.result = 'ABORTED'
                            error('Jenkinsfile refresh! Aborting any real runs!')
                        }
                        sh(script: """pwd""")
                        def status_code = sh(script: """PYTHONUNBUFFERED=1 python3 scripts/test/test_script.py /$vpc-stack""", returnStatus: true)
                        if (status_code == 0) {
                            currentBuild.result = 'SUCCESS'
                        }
                        if (status_code == 1) {
                            currentBuild.result = 'FAILURE'
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'cleaning workspace'
            step([$class: 'WsCleanup'])
        }
    }
}
And this code is giving me the following error:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: vpc for class
Any suggestions on what can be done to resolve this?
Use another name for the choice parameter, one without the dash sign (-), e.g. vpc_stack or vpcstack, and replace the variable name in the Python call. Inside a Groovy string, $vpc-stack is read as the variable vpc followed by the literal -stack, which is why the error complains about the property vpc.
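A minimal sketch of the fix (the name vpc_stack is just an example):

parameters {
    choice(choices: '''\
env1
env2'''
        , description: 'Environment to deploy', name: 'vpc_stack')
}
...
// Reference the renamed parameter explicitly through params
def status_code = sh(
    script: """PYTHONUNBUFFERED=1 python3 scripts/test/test_script.py ${params.vpc_stack}""",
    returnStatus: true
)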
I have written a Jenkinsfile script which determines whether documents or code were updated in the current GitHub commit and starts the stages accordingly. If only documents are updated, I don't run the code testing stage again.
So if the previous build failed and the current Git commit only updates documents, the code testing stage will not run. I want a method/way to know which stage failed during the last Jenkins build so I can decide what to run in the current build.
For example, if the code testing stage failed in the previous build, I'll need to run the code testing stage for this build; otherwise I can just run the document zipping stage.
As a workaround to get the failed stages from a Jenkins build, the function below can be used. I could not find a simpler way to do it. Note that this code has to run outside the Groovy sandbox, or you need to whitelist a lot of Jenkins method signatures (which is not recommended). The Blue Ocean plugin also has to be installed.
import com.cloudbees.groovy.cps.NonCPS
import io.jenkins.blueocean.rest.impl.pipeline.PipelineNodeGraphVisitor
import io.jenkins.blueocean.rest.impl.pipeline.FlowNodeWrapper
import org.jenkinsci.plugins.workflow.flow.FlowExecution
import org.jenkinsci.plugins.workflow.graph.FlowNode
import org.jenkinsci.plugins.workflow.job.WorkflowRun

@NonCPS
List getFailedStages(WorkflowRun run) {
    List failedStages = []
    FlowExecution exec = run.getExecution()
    PipelineNodeGraphVisitor visitor = new PipelineNodeGraphVisitor(run)
    def flowNodes = visitor.getPipelineNodes()
    for (node in flowNodes) {
        if (node.getType() != FlowNodeWrapper.NodeType.STAGE) { continue }
        String nodeName = node.getDisplayName()
        def nodeResult = node.getStatus().getResult()
        println String.format('{"displayName": "%s", "result": "%s"}',
                nodeName, nodeResult)
        def resultSuccess = io.jenkins.blueocean.rest.model.BlueRun$BlueRunResult.SUCCESS
        if (nodeResult != resultSuccess) {
            failedStages.add(nodeName)
        }
    }
    return failedStages
}

// Ex. Get last build of "test_job"
WorkflowRun run = Jenkins.instance.getItemByFullName("test_job")._getRuns()[0]
failedStages = getFailedStages(run)
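To tie this back to the question, a hypothetical follow-up (the stage name "Code testing" is only a placeholder for whatever your stage is called):

// Re-run code testing in the current build if it failed in the previous one
if (failedStages.contains("Code testing")) {
    echo "Code testing failed last time, so run it again even for doc-only commits"
}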
I think this could fit: use buildVariables from the previous build, timeout/input in case you need to change something, and try/catch to record each stage's status. Code example:
// yourJob
// with try/catch block
pipeline {
    agent any
    stages {
        stage("STAGE 1") {
            // For the initial run, every stage executes
            when { expression { params.stageOne == "FAILURE" } }
            steps {
                script {
                    try {
                        // make thing
                    } catch (Exception e) {
                        // set through env so the calling job can read it via buildVariables
                        env.stageOneStatus = "FAILURE"
                    }
                }
            }
        }
        stage("STAGE 2") {
            when { expression { params.stageTwo == "FAILURE" } }
            steps {
                script {
                    try {
                        // make thing
                    } catch (Exception e) {
                        env.stageTwoStatus = "FAILURE"
                    }
                }
            }
        }
        stage("STAGE 3") {
            when { expression { params.stageThree == "FAILURE" } }
            steps {
                script {
                    try {
                        // make thing
                    } catch (Exception e) {
                        env.stageThreeStatus = "FAILURE"
                    }
                }
            }
        }
    }
}
// Checking JOB
def pJob

pipeline {
    agent any
    stages {
        // Run the job, inheriting variables from its build
        stage("Inheriting job") {
            steps {
                script {
                    pJob = build(job: "yourJob", parameters: [
                        [$class: 'StringParameterValue', name: 'stageOne', value: 'FAILURE'],
                        [$class: 'StringParameterValue', name: 'stageTwo', value: 'FAILURE'],
                        [$class: 'StringParameterValue', name: 'stageThree', value: 'FAILURE']
                    ], propagate: false)
                    if (pJob.result == 'FAILURE') {
                        error("${pJob.projectName} FAILED")
                    }
                }
            }
        }
        // Wait for a fix, then re-run the job
        stage('Wait for fix') {
            steps {
                timeout(time: 24, unit: 'HOURS') {
                    input "Ready to rerun?"
                }
            }
        }
        // Re-run the job after changes in the code
        stage("Re-run Job") {
            steps {
                script {
                    build(
                        job: "yourJob",
                        parameters: [
                            [$class: 'StringParameterValue', name: 'stageOne', value: pJob.buildVariables.stageOneStatus],
                            [$class: 'StringParameterValue', name: 'stageTwo', value: pJob.buildVariables.stageTwoStatus],
                            [$class: 'StringParameterValue', name: 'stageThree', value: pJob.buildVariables.stageThreeStatus]
                        ]
                    )
                }
            }
        }
    }
}
pipeline {
    agent any
    stages {
        stage("foo") {
            steps {
                script {
                    env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
                        parameters: [choice(name: 'RELEASE_SCOPE', choices: 'patch\nminor\nmajor',
                            description: 'What is the release scope?')]
                }
                echo "${env.RELEASE_SCOPE}"
            }
        }
    }
}
In the above code, the choices are hardcoded (patch\nminor\nmajor). My requirement is to provide the choice values in the dropdown dynamically.
I get the values by calling an API - a list of artifact (.zip) file names from Artifactory.
Also, the above example requests input during the build, but I want a "Build with parameters" instead.
Please suggest/help on this.
Depending on how you get the data from the API, there will be different options. For example, let's imagine that you get the data as a List of Strings (let's call it releaseScope); in that case your code would be the following:
...
script {
    def releaseScopeChoices = ''
    releaseScope.each {
        releaseScopeChoices += it + '\n'
    }
    parameters: [choice(name: 'RELEASE_SCOPE', choices: releaseScopeChoices, description: 'What is the release scope?')]
}
...
Hope it will help.
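For the choices to show up under "Build with parameters" rather than as an input prompt during the run, the list has to be registered as a job property. A minimal sketch, assuming releaseScope has already been populated from your Artifactory API call (hardcoded here for illustration):

// releaseScope is assumed to come from your API call
def releaseScope = ['app-1.0.zip', 'app-1.1.zip', 'app-1.2.zip']

properties([
    parameters([
        choice(name: 'RELEASE_SCOPE',
               choices: releaseScope.join('\n'),
               description: 'What is the release scope?')
    ])
])

pipeline {
    agent any
    stages {
        stage('Release') {
            steps {
                echo "Selected: ${params.RELEASE_SCOPE}"
            }
        }
    }
}

Note that, as with the other approaches below, the parameter list is only updated after a run, so the first build still shows the previous set of choices.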
This is a cut-down version of what we use. We separate things into shared libraries, but I have consolidated a bit here to make it easier.
The Jenkinsfile looks something like this:
#!groovy
@Library('shared') _

def imageList = pipelineChoices.artifactoryArtifactSearchList(repoName, env.BRANCH_NAME)
imageList.add(0, 'build')

properties([
    buildDiscarder(logRotator(numToKeepStr: '20')),
    parameters([
        choice(name: 'ARTIFACT_NAME', choices: imageList.join('\n'), description: '')
    ])
])
The shared library that looks at Artifactory is pretty simple.
Essentially it makes a GET request (providing auth creds on it), then filters/splits the result to whittle it down to the desired values, and returns the list to the Jenkinsfile.
import com.cloudbees.groovy.cps.NonCPS
import groovy.json.JsonSlurper
import java.util.regex.Pattern
import java.util.regex.Matcher

List artifactoryArtifactSearchList(String repoKey, String artifact_name, String artifact_archive, String branchName) {
    // URL components
    String baseUrl = "https://org.jfrog.io/org/api/search/artifact"
    String url = baseUrl + "?name=${artifact_name}&repos=${repoKey}"
    Object responseJson = getRequest(url)
    String regexPattern = "(.+)${artifact_name}-(\\d+).(\\d+).(\\d+).${artifact_archive}\$"
    Pattern regex = ~regexPattern
    List<String> outlist = responseJson.results.findAll({ it['uri'].matches(regex) })
    List<String> artifactlist = []
    for (i in outlist) {
        artifactlist.add(i['uri'].tokenize('/')[-1])
    }
    return artifactlist.reverse()
}

// Artifactory GET request - consume in other methods
@NonCPS
Object getRequest(url_string) {
    URL url = url_string.toURL()
    // Open connection
    URLConnection connection = url.openConnection()
    connection.setRequestProperty("Authorization", basicAuthString())
    // Open input stream
    InputStream inputStream = connection.getInputStream()
    def json_data = new groovy.json.JsonSlurper().parseText(inputStream.text)
    // Close the stream
    inputStream.close()
    return json_data
}

// Build the Authorization header value - consume in other methods
@NonCPS
Object basicAuthString() {
    // Retrieve password
    String username = "artifactoryMachineUsername"
    String credid = "artifactoryApiKey"
    def apiKey = null
    def credentials_store = jenkins.model.Jenkins.instance.getExtensionList(
        'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
    )
    credentials_store[0].credentials.each { it ->
        if (it instanceof org.jenkinsci.plugins.plaincredentials.StringCredentials) {
            if (it.getId() == credid) {
                apiKey = it.getSecret()
            }
        }
    }
    // Create authorization header format using Base64 encoding
    String userpass = username + ":" + apiKey
    String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes())
    return basicAuth
}
I could achieve it without any plugin:
With Jenkins 2.249.2, using a declarative pipeline,
the following pattern prompts the user with a dynamic dropdown menu
(to choose a branch):
(The surrounding withCredentials block is optional, and required only if your script and Jenkins configuration do use credentials.)
node {
    withCredentials([[$class: 'UsernamePasswordMultiBinding',
                      credentialsId: 'user-credential-in-gitlab',
                      usernameVariable: 'GIT_USERNAME',
                      passwordVariable: 'GITLAB_ACCESS_TOKEN']]) {
        BRANCH_NAMES = sh(script: 'git ls-remote -h https://${GIT_USERNAME}:${GITLAB_ACCESS_TOKEN}@dns.name/gitlab/PROJS/PROJ.git | sed \'s/\\(.*\\)\\/\\(.*\\)/\\2/\' ', returnStdout: true).trim()
    }
}
pipeline {
    agent any
    parameters {
        choice(
            name: 'BranchName',
            choices: "${BRANCH_NAMES}",
            description: 'to refresh the list, go to configure, disable "this build has parameters", launch build (without parameters) to reload the list and stop it, then launch it again (with parameters)'
        )
    }
    stages {
        stage("Run Tests") {
            steps {
                sh "echo SUCCESS on ${BranchName}"
            }
        }
    }
}
The drawback is that one must refresh the Jenkins configuration and do a blank run for the list to be refreshed by the script.
Solution (not from me): this limitation can be made less annoying by using an additional parameter used specifically to refresh the values:
parameters {
    booleanParam(name: 'REFRESH_BRANCHES', defaultValue: false, description: 'refresh BRANCH_NAMES branch list and launch no step')
}
then within a stage:
stage('a stage') {
    when {
        expression {
            return !params.REFRESH_BRANCHES.toBoolean()
        }
    }
    ...
}
This is my solution:
def envList
def dockerId

node {
    envList = "defaultValue\n" + sh(script: 'kubectl get namespaces --no-headers -o custom-columns=":metadata.name"', returnStdout: true).trim()
}

pipeline {
    agent any
    parameters {
        choice(choices: "${envList}", name: 'DEPLOYMENT_ENVIRONMENT', description: 'please choose the environment you want to deploy?')
        booleanParam(name: 'SECURITY_SCAN', defaultValue: false, description: 'container vulnerability scan')
    }
    stages {
        // placeholder stage so the snippet is complete; replace with real deployment steps
        stage('Deploy') {
            steps {
                echo "Deploying to ${params.DEPLOYMENT_ENVIRONMENT}"
            }
        }
    }
}
The example Jenkinsfile below contains an AWS CLI command to get the list of Docker images from AWS ECR dynamically, but it can be replaced with your own command. The Active Choices plug-in is required.
Note! You need to approve the script specified in the parameters after the first run in "Manage Jenkins" -> "In-process Script Approval", or open the job configuration and save it to approve it automatically (might require administrator permissions).
properties([
    parameters([[
        $class: 'ChoiceParameter',
        choiceType: 'PT_SINGLE_SELECT',
        name: 'image',
        description: 'Docker image',
        filterLength: 1,
        filterable: false,
        script: [
            $class: 'GroovyScript',
            fallbackScript: [classpath: [], sandbox: false, script: 'return ["none"]'],
            script: [
                classpath: [],
                sandbox: false,
                script: '''\
                    def repository = "frontend"
                    def aws_ecr_cmd = "aws ecr list-images" +
                                      " --repository-name ${repository}" +
                                      " --filter tagStatus=TAGGED" +
                                      " --query imageIds[*].[imageTag]" +
                                      " --region us-east-1 --output text"
                    def aws_ecr_out = aws_ecr_cmd.execute() | "sort -V".execute()
                    def images = aws_ecr_out.text.tokenize().reverse()
                    return images
                '''.stripIndent()
            ]
        ]
    ]])
])

pipeline {
    agent any
    stages {
        stage('First stage') {
            steps {
                sh 'echo "${image}"'
            }
        }
    }
}
choiceArray = [ "patch" , "minor" , "major" ]
properties([
parameters([
choice(choices: choiceArray.collect { "$it\n" }.join(' ') ,
description: '',
name: 'SOME_CHOICE')
])
])
I'm trying to create a declarative pipeline which runs a number of jobs (configurable via a parameter) in parallel, but I'm having trouble with the parallel part.
Basically, for some reason the pipeline below generates the error
Nothing to execute within stage "Testing" @ line .., column ..
and I cannot figure out why, or how to solve it.
import groovy.transform.Field

@Field def mayFinish = false

def getJob() {
    return {
        lock("finiteResource") {
            waitUntil {
                script {
                    mayFinish
                }
            }
        }
    }
}

def getFinalJob() {
    return {
        waitUntil {
            script {
                try {
                    echo "Start Job"
                    sleep 3 // Replace with something that might fail.
                    echo "Finished running"
                    mayFinish = true
                    true
                } catch (Exception e) {
                    echo e.toString()
                    echo "Failed :("
                }
            }
        }
    }
}

def getJobs(def NUM_JOBS) {
    def jobs = [:]
    for (int i = 0; i < (NUM_JOBS as Integer); i++) {
        jobs["job${i}"] = getJob()
    }
    jobs["finalJob"] = getFinalJob()
    return jobs
}

pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    parameters {
        string(
            name: "NUM_JOBS",
            description: "Set how many jobs to run in parallel"
        )
    }
    stages {
        stage('Setup') {
            steps {
                echo "Setting it up..."
            }
        }
        stage('Testing') {
            steps {
                parallel getJobs(params.NUM_JOBS)
            }
        }
    }
}
I've seen plenty of examples doing this in the old pipeline, but not declarative.
Anyone know what I'm doing wrong?
At the moment, it doesn't seem possible to dynamically provide the parallel branches when using a Declarative Pipeline.
Even if you add a prior stage where, in a script block, you call getJobs() and add the result to the binding, the same error message is thrown.
In this case you'd have to fall back to using a Scripted Pipeline.
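A minimal sketch of that fallback, reusing the getJobs() helper from the question (NUM_JOBS being the same string parameter):

// Scripted Pipeline: parallel accepts a dynamically built map of branches
node {
    stage('Setup') {
        echo "Setting it up..."
    }
    stage('Testing') {
        parallel getJobs(params.NUM_JOBS)
    }
}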
Let's say we have the following Jenkinsfile:
stage name: "Cool stage"
sh 'whoami'
stage name: "Better stage"
def current_stage = getCurrentStageName()
echo "CONGRATULATIONS, you are on stage: $current_stage"
The question is how to implement getCurrentStageName(). I know that I can get access to the build's runtime state using currentBuild.rawBuild.
But how do I get the stage name from that point?
I need this for some customization in email notifications, so that I can always catch the failed stage name and include it in the email body.
You can now do this in a built-in manner, since Jenkins 2.3, like so:
steps {
    updateGitlabCommitStatus name: STAGE_NAME, state: 'running'
    echo "${STAGE_NAME}"
}
For more information see: https://issues.jenkins-ci.org/browse/JENKINS-44456
This should work from a pipeline shared library:
#!/usr/bin/env groovy
import hudson.model.Action
import org.jenkinsci.plugins.workflow.graph.FlowNode
import org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode
import org.jenkinsci.plugins.workflow.actions.LabelAction

def getStage(currentBuild) {
    def build = currentBuild.getRawBuild()
    def execution = build.getExecution()
    def executionHeads = execution.getCurrentHeads()
    def stepStartNode = getStepStartNode(executionHeads)
    if (stepStartNode) {
        return stepStartNode.getDisplayName()
    }
}

def getStepStartNode(List<FlowNode> flowNodes) {
    def currentFlowNode = null
    def labelAction = null
    for (FlowNode flowNode : flowNodes) {
        currentFlowNode = flowNode
        labelAction = false
        if (flowNode instanceof StepStartNode) {
            labelAction = hasLabelAction(flowNode)
        }
        if (labelAction) {
            return flowNode
        }
    }
    if (currentFlowNode == null) {
        return null
    }
    return getStepStartNode(currentFlowNode.getParents())
}

def hasLabelAction(FlowNode flowNode) {
    def actions = flowNode.getActions()
    for (Action action : actions) {
        if (action instanceof LabelAction) {
            return true
        }
    }
    return false
}

def call() {
    return getStage(currentBuild)
}
Example usage:
node {
    stage('Stage One') {
        echo getCurrentStage()
    }
    stage('Stage Two') {
        echo getCurrentStage()
    }
}
Aleks' workaround works fine; I just thought it worth sharing the code:
node ("docker") {
def sendOk = {
String stage -> slackSend color: 'good', message: stage + " completed, project - ${env.JOB_NAME}:1.0.${env.BUILD_NUMBER}"
}
def sendProblem = {
String stage, error -> slackSend color: 'danger', message: stage + " did not succeed, project - ${env.JOB_NAME}:1.0.${env.BUILD_NUMBER}, error: ${error}, Find details here: ${env.BUILD_URL}"
}
def exec = {
work, stageName ->
stage (stageName) {
try {
work.call();
sendOk(stageName)
}
catch(error) {
sendProblem(stageName, error)
throw error
}
}
}
exec({
git credentialsId: 'github-root', url: 'https://github.com/abc'
dir ('src') {
git credentialsId: 'github-root', url: 'https://github.com/abc-jenkins'
}
sh "chmod +x *.sh"
}, "pull")
exec({ sh "./Jenkinsfile-clean.sh \"1.0.${env.BUILD_NUMBER}\"" }, "clean")
exec({ sh "./Jenkinsfile-unit.sh \"1.0.${env.BUILD_NUMBER}\"" }, "unit")
exec({ sh "./Jenkinsfile-build.sh \"1.0.${env.BUILD_NUMBER}\"" }, "build")
exec({ sh "./Jenkinsfile-dockerize.sh \"1.0.${env.BUILD_NUMBER}\"" }, "dockerize")
exec({ sh "./Jenkinsfile-push.sh \"1.0.${env.BUILD_NUMBER}\"" }, "push")
exec({ sh "./Jenkinsfile-prod-like.sh \"1.0.${env.BUILD_NUMBER}\"" }, "swarm")
}
As a workaround, in the failure email I include a link to the Pipeline Steps page. This page clearly shows green and red balls for each step, making it easy for the email recipient to figure out not just the stage, but the step that failed.
In the following example email body, the flowGraphTable link points to Pipeline Steps:
def details = """<p>Job '${env.JOB_NAME}', build ${env.BUILD_NUMBER} result was ${buildStatus}.
Please scrutinize the build and take corrective action.</p>
<p>Quick links to the details:
<ul>
<li>${env.JOB_NAME} job main page</li>
<li>Build ${env.BUILD_NUMBER} main page</li>
<ul>
<li>Console output</li>
<li>Git changes</li>
<li>Pipeline steps.
This page will show you which step failed, and give you access
to the job workspace.</li>
</ul>
</ul></p>"""
This is an excerpt from my implementation of notifyBuild() that BitwiseMan of CloudBees presents in his article, Sending Notifications in Pipeline.
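For context, a minimal sketch of how such a body might be sent (it assumes the Email Extension plugin's emailext step, with buildStatus supplied by the caller as in that article):

def notifyBuild(String buildStatus = 'STARTED') {
    // Default to SUCCESS when the build result has not been set yet
    buildStatus = buildStatus ?: 'SUCCESS'
    def details = """...""" // the HTML body shown above
    emailext(
        subject: "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
        body: details,
        recipientProviders: [[$class: 'DevelopersRecipientProvider']]
    )
}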