Calling multiple downstream jobs from an upstream Jenkins pipeline job

I have two pipeline based jobs
Parent_Job (has string parameters project1 & project2)
@NonCPS
def invokeDeploy(map) {
    for (entry in map) {
        echo "Starting ${entry.key}"
        build job: 'Child_Job', parameters: [
            string(name: 'project', value: entry.key),
            string(name: 'version', value: entry.value)
        ], quietPeriod: 2, wait: true
        echo "Completed ${entry.key}"
    }
}
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    invokeDeploy(params)
                }
            }
        }
    }
}
Child_Job (has string parameters project & version)
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    echo "${params.project} --> ${params.version}"
                }
            }
        }
    }
}
Parent job output
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Starting project2
[Pipeline] build (Building Child_Job)
Scheduling project: Child_Job
Starting building: Child_Job #18
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I expected the downstream job to be called twice (once for project1 and once for project2), but it's invoked only once (for project2).
Is there something obviously wrong with this script?

It seems the problem is the wait: true option on the build job step. If you change it to wait: false, it executes twice. I tried it on this test pipeline:
@NonCPS
def invokeDeploy(map) {
    for (entry in map) {
        echo "Starting ${entry.key}"
        build job: 'pipeline', quietPeriod: 2, wait: false
        echo "Completed ${entry.key}"
    }
}
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def sampleMap = [first_job:'First', second_job:'Second']
                    invokeDeploy(sampleMap)
                }
            }
        }
    }
}
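For what it's worth, build (like echo) is a CPS-transformed pipeline step, and calling pipeline steps from a @NonCPS method is unsupported, which is likely why the loop stops after the first blocking call. A variant that keeps wait: true is to drop the annotation and iterate in a CPS-safe way (a sketch, assuming the same Child_Job and parameters as in the question):
def invokeDeploy(map) {
    // Copy the keys into a plain list first; iterating a live Map iterator
    // inside CPS-transformed code can break when the pipeline checkpoints.
    def keys = new ArrayList(map.keySet())
    for (int i = 0; i < keys.size(); i++) {
        def key = keys[i]
        echo "Starting ${key}"
        build job: 'Child_Job', parameters: [
            string(name: 'project', value: key),
            string(name: 'version', value: map[key])
        ], quietPeriod: 2, wait: true
        echo "Completed ${key}"
    }
}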

Related

Jenkins stage doesn't call custom method

I have a Jenkins pipeline that does some code linting through different environments. I have a linting method that I call based on what parameters are passed. However, during my build, the stage that calls the method does nothing and returns nothing. Everything appears to be sane to me. Below is my code, and the stages showing the null results.
Jenkinsfile:
IAMMap = [
    "west": [
        account: "XXXXXXXX",
    ],
    "east": [
        account: "YYYYYYYYY",
    ],
]
pipeline {
    options {
        ansiColor('xterm')
    }
    parameters {
        booleanParam(
            name: 'WEST',
            description: 'Whether to lint code from west account or not. Defaults to "false"',
            defaultValue: false
        )
        booleanParam(
            name: 'EAST',
            description: 'Whether to lint code from east account or not. Defaults to "true"',
            defaultValue: true
        )
        booleanParam(
            name: 'LINT',
            description: 'Whether to perform linting. This should always default to "true"',
            defaultValue: true
        )
    }
    environment {
        CODE_DIR = "/code"
    }
    stages {
        stage('Start Lint') {
            steps {
                script {
                    if (params.WEST && params.LINT) {
                        codeLint("west")
                    }
                    if (params.EAST && params.LINT) {
                        codeLint("east")
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs disableDeferredWipeout: true, deleteDirs: true
        }
    }
}
def codeLint(account) {
    return {
        stage('Code Lint') {
            dir(env.CODE_DIR) {
                withAWS(IAMMap[account]) {
                    sh script: "./lint.sh"
                }
            }
        }
    }
}
Results:
15:00:20 [Pipeline] { (Start Lint)
15:00:20 [Pipeline] script
15:00:20 [Pipeline] {
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // script
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // stage
15:00:20 [Pipeline] stage
15:00:20 [Pipeline] { (Declarative: Post Actions)
15:00:20 [Pipeline] cleanWs
15:00:20 [WS-CLEANUP] Deleting project workspace...
15:00:20 [WS-CLEANUP] Deferred wipeout is disabled by the job configuration...
15:00:20 [WS-CLEANUP] done
As you can see, nothing gets executed. I assure you I am checking the required parameters when running Build with Parameters in the console. As far as I know, this is the correct syntax for a declarative pipeline.
Don't return the Stage, just execute it within the codeLint function.
def codeLint(account) {
    stage('Code Lint') {
        dir(env.CODE_DIR) {
            withAWS(IAMMap[account]) {
                sh script: "./lint.sh"
            }
        }
    }
}
Or, since the stage is returned as a closure, you can run it explicitly. This may need script approval.
codeLint("west").run()

Jenkins Pipeline with conditional "When" expression of choice parameters

I'm new to Groovy and I'm not able to figure out what's wrong here.
Depending on the choice of input, I expect the script to execute either the stage 'Hello' or 'Bye', but it skips both. I mostly followed this Jenkins pipeline conditional stage using "When" for choice parameters, but still can't figure it out.
How can I use those choice parameters correctly?
pipeline {
    agent any
    stages {
        stage('Init') {
            steps('Log-in') {
                echo 'Log-in'
            }
        }
        stage('Manual Step') {
            input {
                message "Hello or Goodbye?"
                ok "Say!"
                parameters {
                    choice(choices: ['Hello', 'Bye'], description: 'Users Choice', name: 'CHOICE')
                }
            }
            steps('Input') {
                echo "choice: ${CHOICE}"
                echo "choice params.: " + params.CHOICE // null
                echo "choice env: " + env.CHOICE // Hello
            }
        }
        stage('Hello') {
            when { expression { env.CHOICE == 'Hello' } }
            steps('Execute') {
                echo 'Say Hello'
            }
        }
        stage('Bye') {
            when { expression { env.CHOICE == 'Bye' } }
            steps('Execute') {
                echo 'Say Bye'
            }
        }
    }
}
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] echo
Log-in
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Manual Step)
[Pipeline] input
Input requested
Approved by Admin
[Pipeline] withEnv
[Pipeline] {
[Pipeline] echo
choice: Hello
[Pipeline] echo
choice params.: null
[Pipeline] echo
choice env: Hello
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Hello)
Stage "Hello" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Bye)
Stage "Bye" skipped due to when conditional
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
From the docs:
Any parameters provided as part of the input submission will be available in the environment for the rest of the stage.
This means that your parameter CHOICE does not exist in the other stages. If you want a parameter that's available in all stages, you can define it outside of the stages, e.g.:
pipeline {
    agent any
    parameters {
        choice(choices: ['Hello', 'Bye'], description: 'Users Choice', name: 'CHOICE')
    }
    stages {
        stage('Init') {
            steps('Log-in') {
                echo 'Log-in'
            }
        }
        stage('Manual Step') {
            steps('Input') {
                echo "choice: ${CHOICE}"
                echo "choice params.: " + params.CHOICE
                echo "choice env: " + env.CHOICE
            }
        }
        stage('Hello') {
            when {
                expression { env.CHOICE == 'Hello' }
            }
            steps('Execute') {
                echo 'Say Hello'
            }
        }
        stage('Bye') {
            when {
                expression { env.CHOICE == 'Bye' }
            }
            steps('Execute') {
                echo 'Say Bye'
            }
        }
    }
}
This will behave as expected. The difference is that the job won't ask you for input; instead, you provide the desired parameters before pressing Build.
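If you do want to keep the interactive prompt, one workaround (a sketch, not from the docs quoted above) is to call the input step from a script block and copy the result into env, which remains visible to the later stages:
stage('Manual Step') {
    steps {
        script {
            // With a single parameter, the input step returns that value directly.
            env.CHOICE = input(
                message: 'Hello or Goodbye?',
                ok: 'Say!',
                parameters: [choice(choices: ['Hello', 'Bye'], description: 'Users Choice', name: 'CHOICE')]
            )
        }
    }
}
The when expressions on the 'Hello' and 'Bye' stages then work unchanged, since they already read env.CHOICE.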

Jenkins pipeline stuck on build job

I recently created a new Jenkins pipeline that mainly relies on other build jobs. The strange thing is, the first stage's job gets triggered, runs successfully, and finishes in the "SUCCESS" state, but the pipeline keeps loading forever after Scheduling project: "run-operation".
Any idea what mistake I made below?
UPDATE 1: removed params; advertiser & query are now hard-coded
pipeline {
    agent {
        node {
            label 'slave'
        }
    }
    stages {
        stage('1') {
            steps {
                script {
                    def buildResult = build job: 'run-operation', parameters: [
                        string(name: 'ADVERTISER', value: 'car'),
                        string(name: 'START_DATE', value: '2019-12-29'),
                        string(name: 'END_DATE', value: '2020-01-11'),
                        string(name: 'QUERY', value: 'weekly')
                    ]
                    def envVar = buildResult.getBuildVariables();
                }
            }
        }
        stage('2') {
            steps {
                script {
                    echo 'track the query operations from above job'
                    def trackResult = build job: 'track-operation', parameters: [
                        string(name: 'OPERATION_NAMES', value: envVar.operationList),
                    ]
                }
            }
        }
        stage('3') {
            steps {
                echo 'move flag'
            }
        }
        stage('callback') {
            steps {
                echo 'for each operation call back url'
            }
        }
    }
}
Console log (despite the job running, the pipeline doesn't seem to know; see log):
Started by user reusable
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/etl-pipeline/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (1)
[Pipeline] build (Building run-operation)
Scheduling project: run-operation
...

Jenkins pipeline: prevent a job from failing if upstream jobs do not exist

In my Jenkins I have a Groovy pipeline script which triggers multiple jobs afterward:
stage('Build other pipelines') {
    steps {
        build job: "customer-1/${URLEncoder.encode(BRANCH_NAME, "UTF-8")}", propagate: true, wait: false
        build job: "customer-2/${URLEncoder.encode(BRANCH_NAME, "UTF-8")}", propagate: true, wait: false
        build job: "customer-3/${URLEncoder.encode(BRANCH_NAME, "UTF-8")}", propagate: true, wait: false
    }
}
Now, I develop on a feature-branch e.g. feature/ISSUE-123 just for customer 2, so the jobs customer-1/ISSUE-123 and customer-3/ISSUE-123 do not exist. How can I tell Jenkins not to fail in this case?
Consider extracting a new method called safeTriggerJob that wraps the build step in a try-catch block; catching the exception lets the pipeline continue running.
pipeline {
    agent any
    stages {
        stage("Test") {
            steps {
                safeTriggerJob job: "job2", propagate: true, wait: false
            }
        }
    }
}
void safeTriggerJob(Map params) {
    try {
        build(params)
    } catch (Exception e) {
        echo "WARNING: ${e.message}"
    }
}
Output:
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /home/wololock/.jenkins/workspace/sandbox-pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] build
[Pipeline] echo
WARNING: No item named job2 found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Alternatively, instead of extracting a dedicated method, you could put the try-catch directly inside the steps block, but in that case you would need to wrap it in a script block, something like:
pipeline {
    agent any
    stages {
        stage("Test") {
            steps {
                script {
                    try {
                        build job: "job2", propagate: true, wait: false
                    } catch (Exception e) {
                        echo "WARNING: ${e.message}"
                    }
                    // The next build inside its own try-catch here, etc.
                }
            }
        }
    }
}
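Applied to the question's three customer jobs, the wrapper also composes naturally with a loop (a sketch reusing the job naming from the question):
script {
    def branch = URLEncoder.encode(BRANCH_NAME, "UTF-8")
    for (customer in ['customer-1', 'customer-2', 'customer-3']) {
        safeTriggerJob job: "${customer}/${branch}", propagate: true, wait: false
    }
}
Note that propagate only takes effect when wait is true; with wait: false the step returns as soon as the build is scheduled, so propagate: true is effectively a no-op here.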

How to run multiple stages on the same node with declarative Jenkins pipeline?

Goal
Run multiple stages of a declarative Jenkins pipeline on the same node.
Setup
This is just a minimal example to show the problem. There are two Windows nodes, "windows-slave1" and "windows-slave2", both labeled "windows".
NOTE: My real Jenkinsfile cannot use a global agent because there are groups of stages that require to run on different nodes (e.g. Windows vs. Linux).
Expected Behaviour
Jenkins selects one of the nodes in "Stage 1" based on the label and uses the same node in "Stage 2" because the variable windowsNode was updated to the node selected in "Stage 1".
Actual Behaviour
"Stage 2" sometimes runs on the same and sometimes on a different node than "Stage 1". See the output below.
Jenkinsfile
#!groovy
windowsNode = 'windows'
pipeline {
    agent none
    stages {
        stage('Stage 1') {
            agent {
                label windowsNode
            }
            steps {
                script {
                    // all subsequent steps should be run on the same windows node
                    windowsNode = NODE_NAME
                }
                echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
            }
        }
        stage('Stage 2') {
            agent {
                label windowsNode
            }
            steps {
                echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
            }
        }
    }
}
Output
[Pipeline] stage
[Pipeline] { (Stage 1)
[Pipeline] node
Running on windows-slave2 in C:\Jenkins\workspace\test-agent-allocation#2
[Pipeline] {
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] echo
windowsNode: windows-slave2, NODE_NAME: windows-slave2
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Stage 2)
[Pipeline] node
Running on windows-slave1 in C:\Jenkins\workspace\test-agent-allocation
[Pipeline] {
[Pipeline] echo
windowsNode: windows-slave2, NODE_NAME: windows-slave1
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS
Any ideas what's wrong with the setup? I guess it's how the Jenkinsfile is parsed and executed.
Other suggestions? Maybe there is a Jenkins API to select a node based on the "windows" label when setting windowsNode initially.
Since version 1.3 of Declarative Pipeline plugin, this is officially supported.
It's officially called "Sequential Stages".
pipeline {
    agent none
    stages {
        stage("check code style") {
            agent {
                docker "code-style-check-image"
            }
            steps {
                sh "./check-code-style.sh"
            }
        }
        stage("build and test the project") {
            agent {
                docker "build-tools-image"
            }
            stages {
                stage("build") {
                    steps {
                        sh "./build.sh"
                    }
                }
                stage("test") {
                    steps {
                        sh "./test.sh"
                    }
                }
            }
        }
    }
}
Official announcement here: https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/
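Mapped onto the question's setup, nesting both stages under a single agent pins them to one node, making the windowsNode bookkeeping unnecessary. A sketch (stage names and echoes are illustrative):
pipeline {
    agent none
    stages {
        stage('On one Windows node') {
            // One agent allocation is shared by all nested stages,
            // so they are guaranteed to run on the same node.
            agent { label 'windows' }
            stages {
                stage('Stage 1') {
                    steps {
                        echo "running on ${NODE_NAME}"
                    }
                }
                stage('Stage 2') {
                    steps {
                        echo "still on ${NODE_NAME}"
                    }
                }
            }
        }
    }
}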
You could define stages inside a script block. Those stages are, in effect, sub-stages of a parent stage running on a given agent. That was the approach I had to use in a use case similar to yours.
#!groovy
windowsNode = 'windows'
pipeline {
    agent none
    stages {
        stage('Stage A') {
            agent {
                label windowsNode
            }
            steps {
                script {
                    stage('Stage 1') {
                        windowsNode = NODE_NAME
                        echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
                    }
                    stage('Stage 2') {
                        echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
                    }
                }
            }
        }
    }
}
I have found that this works as you would expect:
#!groovy
windowsNode = 'windows'
pipeline {
    agent none
    stages {
        stage('Stage 1') {
            steps {
                node(windowsNode) {
                    script {
                        // all subsequent steps should be run on the same windows node
                        windowsNode = NODE_NAME
                    }
                    echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
                }
            }
        }
        stage('Stage 2') {
            steps {
                node(windowsNode) {
                    echo "windowsNode: $windowsNode, NODE_NAME: $NODE_NAME"
                }
            }
        }
    }
}
Replace agent none with agent any.
