Jenkins pipeline - build a job with default input value from another job

I have a Jenkins Pipeline called pipeline1, described by the following Jenkinsfile:
node()
{
    def server
    def action

    stage("Configuration")
    {
        def userInput = input(
            id: 'userInput', message: 'Let\'s configure this run !', parameters: [
                choice(choices: "server1\nserver2", name: "server", description: "Which server do you want to use ?"),
                choice(choices: "stop\nstart\nrestart", name: "action", description: "What do you want to do ?")
            ]);
        server = userInput.server;
        action = userInput.action;
    }
    stage("Run")
    {
        echo(server)
        echo(action)
    }
}
It asks the user to choose some inputs in the Configuration stage and just echoes them in the Run stage.
I would like to trigger this job from another one and automatically fill in the input to avoid any human action.
I've tried to use the same syntax we use to build a parameterized job and came up with something like this:
node()
{
    stage("Run")
    {
        build job: 'pipeline1', wait: true, parameters: [
            [$class: 'StringParameterValue', name: 'userInput.server', value: "server1"],
            [$class: 'StringParameterValue', name: 'userInput.action', value: "stop"]
        ]
    }
}
But it doesn't work. It does trigger the pipeline1 job, but it waits for the user to fill in the input...
EDIT: I would like to keep the input feature in pipeline1 instead of turning it into a standard parameterized job.
Do you have any idea how to achieve this?
Thanks a lot.

OK, so I have a complete answer for you. Use properties in pipeline1 and an if/else block:
properties([
    parameters([
        choice(name: 'manually',
               description: 'Do you wish to use a user input?',
               choices: 'No\nYes')
    ])
])

node()
{
    def server
    def action

    stage("Configuration") {
        if (params.useIn == 'Yes' || params.manually == 'Yes') {
            def userInput = input(
                id: 'userInput', message: 'Let\'s configure this run !', parameters: [
                    choice(choices: "server1\nserver2", name: "server", description: "Which server do you want to use ?"),
                    choice(choices: "stop\nstart\nrestart", name: "action", description: "What do you want to do ?")]
            );
            server = userInput.server;
            action = userInput.action;
        } else {
            server = params.server
            action = params.action
        }
    }
    stage("Run")
    {
        echo(server)
        echo(action)
    }
}
Job 2 with a few small changes:
node()
{
    stage("Run")
    {
        build job: 'pipeline1', wait: true, parameters: [
            [$class: 'StringParameterValue', name: 'useIn', value: "No"],
            [$class: 'StringParameterValue', name: 'server', value: "server1"],
            [$class: 'StringParameterValue', name: 'action', value: "start"]
        ]
    }
}
If you want to run job2 and still go through the user input, just change useIn to Yes (see the sketch after the console logs below). So at this point you can run the pipeline1 job directly, with the user input:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Configuration)
[Pipeline] input
Input requested
Approved by 3sky
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run)
[Pipeline] echo
server2
[Pipeline] echo
restart
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
or trigger it from job2, without the user input:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Configuration)
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Run)
[Pipeline] echo
server1
[Pipeline] echo
start
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
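
For completeness, here is a minimal sketch (my addition, not from the original answer) of the job2 variant that keeps the user input enabled: only the useIn value changes, and the server and action parameters can be dropped, since pipeline1 will ask for them through its input step anyway.
node()
{
    stage("Run")
    {
        // Sketch only: useIn is "Yes", so pipeline1 pauses on its input step
        // and waits for a human, exactly like a direct run.
        build job: 'pipeline1', wait: true, parameters: [
            [$class: 'StringParameterValue', name: 'useIn', value: "Yes"]
        ]
    }
}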

Related

Jenkins stage doesn't call custom method

I have a Jenkins pipeline that does some code linting through different environments. I have a linting method that I call based on what parameters are passed. However, during my build, the stage that calls the method does nothing and returns nothing. Everything appears to be sane to me. Below is my code, and the stages showing the null results.
Jenkinsfile:
IAMMap = [
    "west": [
        account: "XXXXXXXX",
    ],
    "east": [
        account: "YYYYYYYYY",
    ],
]

pipeline {
    options {
        ansiColor('xterm')
    }
    parameters {
        booleanParam(
            name: 'WEST',
            description: 'Whether to lint code from west account or not. Defaults to "false"',
            defaultValue: false
        )
        booleanParam(
            name: 'EAST',
            description: 'Whether to lint code from east account or not. Defaults to "false"',
            defaultValue: true
        )
        booleanParam(
            name: 'LINT',
            description: 'Whether to perform linting. This should always default to "true"',
            defaultValue: true
        )
    }
    environment {
        CODE_DIR = "/code"
    }
    stages {
        stage('Start Lint') {
            steps {
                script {
                    if (params.WEST && params.LINT) {
                        codeLint("west")
                    }
                    if (params.EAST && params.LINT) {
                        codeLint("east")
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs disableDeferredWipeout: true, deleteDirs: true
        }
    }
}

def codeLint(account) {
    return {
        stage('Code Lint') {
            dir(env.CODE_DIR) {
                withAWS(IAMMap[account]) {
                    sh script: "./lint.sh"
                }
            }
        }
    }
}
Results:
15:00:20 [Pipeline] { (Start Lint)
15:00:20 [Pipeline] script
15:00:20 [Pipeline] {
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // script
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // stage
15:00:20 [Pipeline] stage
15:00:20 [Pipeline] { (Declarative: Post Actions)
15:00:20 [Pipeline] cleanWs
15:00:20 [WS-CLEANUP] Deleting project workspace...
15:00:20 [WS-CLEANUP] Deferred wipeout is disabled by the job configuration...
15:00:20 [WS-CLEANUP] done
As you can see, nothing gets executed. I assure you I am checking the required parameters when running Build with Parameters in the console. As far as I know, this is the correct syntax for a declarative pipeline.
Don't return the Stage, just execute it within the codeLint function.
def codeLint(account) {
    stage('Code Lint') {
        dir(env.CODE_DIR) {
            withAWS(IAMMap[account]) {
                sh script: "./lint.sh"
            }
        }
    }
}
Or, once the Stage is returned, you can run it. This may need script approval.
codeLint("west").run()

Queue more than 2 jobs in a Jenkins pipeline

I need to be able to queue more than 2 jobs in a Jenkins pipeline.
In https://stackoverflow.com/a/24918670/8369030 it is suggested to use the Random String Parameter Plugin, however I cannot find any documentation on how to use it.
Alternatively, I tried to do it with a random value as shown in https://stackoverflow.com/a/67110959/8369030, however this only seems to work in a stage but not in a parameter. Specifically, I always get null as the default value when doing this:
pipeline {
    environment {
        max = 50
        random_num = "${Math.abs(new Random().nextInt(max+1))}"
    }
    parameters {
        string(name: 'JOB_ID', defaultValue: "${env.random_num}",
               description: "Enter a random value to allow more than 2 jobs in the queue")
    }
But this does not solve the problem with the queue, because the parameter is only updated after execution, not before.
max = 50
random_num = "${Math.abs(new Random().nextInt(max+1))}"
println(random_num)

pipeline {
    agent any
    parameters {
        string(name: 'rn', defaultValue: random_num,
               description: "Enter a random value to allow more than 2 jobs in the queue")
    }
    stages {
        stage('Randon number') {
            steps {
                println(random_num)
                println(rn)
            }
        }
    }
}
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] echo
31
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/test2
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Randon number)
[Pipeline] echo
31
[Pipeline] echo
25
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Solution: (the original post included only a screenshot here)
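
The screenshot is not reproduced, but the usual workaround follows the same idea as the random value above: Jenkins merges queued builds of the same job that have identical parameters, so giving every triggered build a unique parameter value keeps them from being collapsed. A minimal sketch of an upstream trigger doing that (job and parameter names are placeholders, not from the original post):
// Sketch only: each call passes a different rn value, so the queued builds
// are not treated as duplicates and all of them stay in the queue.
node {
    stage('Enqueue') {
        for (int i = 0; i < 5; i++) {
            def rn = "${Math.abs(new Random().nextInt(50 + 1))}"
            build job: 'test2', wait: false, parameters: [
                string(name: 'rn', value: rn)
            ]
        }
    }
}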

Jenkins pipeline stuck on build job

I recently created a new Jenkins pipeline that mainly relies on other build jobs. The strange thing is, the first stage's job gets triggered, runs successfully, and finishes in the "SUCCESS" state, but the pipeline keeps loading forever after "Scheduling project: run-operation".
Any idea what mistake I made below?
UPDATE 1: removed the params, hard-coded advertiser & query
pipeline {
    agent {
        node {
            label 'slave'
        }
    }
    stages {
        stage('1') {
            steps {
                script {
                    def buildResult = build job: 'run-operation', parameters: [
                        string(name: 'ADVERTISER', value: 'car'),
                        string(name: 'START_DATE', value: '2019-12-29'),
                        string(name: 'END_DATE', value: '2020-01-11'),
                        string(name: 'QUERY', value: 'weekly')
                    ]
                    def envVar = buildResult.getBuildVariables();
                }
            }
        }
        stage('2') {
            steps {
                script {
                    echo 'track the query operations from above job'
                    def trackResult = build job: 'track-operation', parameters: [
                        string(name: 'OPERATION_NAMES', value: envVar.operationList),
                    ]
                }
            }
        }
        stage('3') {
            steps {
                echo 'move flag'
            }
        }
        stage('callback') {
            steps {
                echo 'for each operation call back url'
            }
        }
    }
}
Console log (even though the job was running, the pipeline doesn't seem to know; see the log):
Started by user reusable
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/jobs/etl-pipeline/workspace
[Pipeline] {
[Pipeline] stage
[Pipeline] { (1)
[Pipeline] build (Building run-operation)
Scheduling project: run-operation
...
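
One thing worth noting about the pipeline above, independent of the hang: envVar is declared with def inside stage 1's script block, so stage 2 cannot see it and would fail on envVar.operationList. A minimal sketch (my addition, with the downstream parameters omitted) of sharing the downstream job's build variables across stages:
// Sketch only: declare the holder outside the pipeline block so every
// stage's script block can read what stage '1' stored in it.
def envVar

pipeline {
    agent any
    stages {
        stage('1') {
            steps {
                script {
                    def buildResult = build job: 'run-operation', wait: true
                    envVar = buildResult.getBuildVariables()
                }
            }
        }
        stage('2') {
            steps {
                script {
                    build job: 'track-operation', parameters: [
                        string(name: 'OPERATION_NAMES', value: envVar.operationList)
                    ]
                }
            }
        }
    }
}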

How to pass parameters from a Groovy map script to a Jenkins pipeline

Simply put, I need to write a Jenkinsfile pipeline that contains choices, and these choices come from another Groovy script that provides a map. In this map I can add an environment name and three values: DB, APP, and WEB. So when I start building a new job, I get a build parameter with choices to pick the environment name (DEV, QA, UAT), and according to this choice it will pass the map/list of the three IPs for DB, APP, and WEB so I can use these values during the build.
/env.groovy
def DEV = [ DB : 10.0.0.5 , APP : 10.0.0.10 , WEB : 10.0.0.15 ]
def UAT = [ DB : 10.0.0.20 , APP : 10.0.0.25 , WEB : 10.0.0.30 ]
def QA = [ DB : 10.0.0.35 , APP : 10.0.0.40 , WEB : 10.0.0.45 ]
I want to take these values from the env.groovy file and pass them to a choice parameter in the Jenkinsfile, so I get a drop-down menu with (DEV - UAT - QA)
before I click build.
I do not want to add these values inside the Jenkinsfile; I need to add them to a separate Groovy script (env.groovy).
This is for a new pipeline Jenkinsfile which is running on an Ubuntu Jenkins server.
String[] env = env()

pipeline {
    agent any
    properties([
        parameters([
            choice(
                choices: env,
                description: 'Select the environment',
                name: 'ENV',
            )
        ])
    ])
    stages {
        stage('deploy') {
            steps {
                sh "echo ${ENV}"
            }
        }
    }
}
I'm expecting to echo the list of environment values DB, APP, and WEB, so I can be sure I can pass these values to another shell script that I will add later for the deployment. But I don't get this menu in the first place, and I get this error instead:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: The ‘properties’ section has been renamed as of version 0.8. Use ‘options’ instead. # line 5, column 5.
properties([
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:133)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:126)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:561)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:522)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:320)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
IMHO there is no way to use a variable as a map's name directly, but we have the switch statement.
Try this one:
def map

pipeline {
    agent any
    parameters {
        choice( name: 'env', choices: ['DEV', 'UAT', 'QA'], description: "Choose ENV?" )
    }
    stages {
        // I'm a bit lazy, in your case use a regular file :)
        stage('create file') {
            steps {
                script {
                    sh "echo \"DEV = [ DB : '10.0.0.5' , APP : '10.0.0.10' , WEB : '10.0.0.15' ]\" > env.groovy"
                    sh "echo \"UAT = [ DB : '10.0.0.20' , APP : '10.0.0.25' , WEB : '10.0.0.30' ]\" >> env.groovy"
                    sh "echo \"QA = [ DB : '10.0.0.35' , APP : '10.0.0.40' , WEB : '10.0.0.45' ]\" >> env.groovy"
                }
            }
        }
        stage('switch time') {
            steps {
                script {
                    load "$JENKINS_HOME/workspace/box1/env.groovy"
                    switch (params.env) {
                        case 'DEV':
                            map = DEV
                            break
                        case 'UAT':
                            map = UAT
                            break
                        case 'QA':
                            map = QA
                            break
                        default:
                            map = []
                            break
                    }
                }
            }
        }
        stage('deploy') {
            steps {
                println map.DB
                println map.APP
                println map.WEB
            }
        }
    }
}
Expected output:
Started by user 3sky
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in /app/jenkins/home/workspace/box1
[Pipeline] {
[Pipeline] stage
[Pipeline] { (create file)
[Pipeline] script
[Pipeline] {
[Pipeline] sh
[box1] Running shell script
+ echo 'DEV = [ DB : '\''10.0.0.5'\'' , APP : '\''10.0.0.10'\'' , WEB : '\''10.0.0.15'\'' ]'
[Pipeline] sh
[box1] Running shell script
+ echo 'UAT = [ DB : '\''10.0.0.20'\'' , APP : '\''10.0.0.25'\'' , WEB : '\''10.0.0.30'\'' ]'
[Pipeline] sh
[box1] Running shell script
+ echo 'QA = [ DB : '\''10.0.0.35'\'' , APP : '\''10.0.0.40'\'' , WEB : '\''10.0.0.45'\'' ]'
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (switch time)
[Pipeline] script
[Pipeline] {
[Pipeline] load
[Pipeline] { (/app/jenkins/home/workspace/box1/env.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deploy)
[Pipeline] echo
10.0.0.5
[Pipeline] echo
10.0.0.10
[Pipeline] echo
10.0.0.15
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
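
An alternative sketch, in case you would rather avoid the switch entirely: env.groovy can build one map keyed by environment name and return it, so the lookup becomes a plain index operation. The file contents and the workspace path below are my assumptions, not the original setup:
// env.groovy (sketch): return a single nested map instead of three
// top-level variables.
return [
    DEV: [ DB: '10.0.0.5',  APP: '10.0.0.10', WEB: '10.0.0.15' ],
    UAT: [ DB: '10.0.0.20', APP: '10.0.0.25', WEB: '10.0.0.30' ],
    QA:  [ DB: '10.0.0.35', APP: '10.0.0.40', WEB: '10.0.0.45' ]
]

// Jenkinsfile usage (sketch): load returns whatever the script returns,
// so the selected environment is just a key lookup with a fallback.
script {
    def envs = load "${env.WORKSPACE}/env.groovy"
    map = envs[params.env] ?: [:]
}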

Calling multiple downstream jobs from an upstream Jenkins pipeline job

I have two pipeline based jobs
Parent_Job (has string parameters project1 & project2)
@NonCPS
def invokeDeploy(map) {
    for (entry in map) {
        echo "Starting ${entry.key}"
        build job: 'Child_Job', parameters: [
            string(name: 'project', value: entry.key),
            string(name: 'version', value: entry.value)
        ], quietPeriod: 2, wait: true
        echo "Completed ${entry.key}"
    }
}

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    invokeDeploy(params)
                }
            }
        }
    }
}
Child_Job (has string parameters project & version)
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    echo "${params.project} --> ${params.version}"
                }
            }
        }
    }
}
Parent job output
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Starting project2
[Pipeline] build (Building Child_Job)
Scheduling project: Child_Job
Starting building: Child_Job #18
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I expected the downstream job to be called twice (for project1 and project2), but it's invoked only once (for project2).
Is there something obviously wrong with this script?
It seems that the problem is with the wait: true option enabled for the build job step. If you change it to wait: false, it will execute 2 times. I tried it on this test pipeline:
@NonCPS
def invokeDeploy(map) {
    for (entry in map) {
        echo "Starting ${entry.key}"
        build job: 'pipeline', quietPeriod: 2, wait: false
        echo "Completed ${entry.key}"
    }
}

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def sampleMap = [first_job:'First', second_job:'Second']
                    invokeDeploy(sampleMap)
                }
            }
        }
    }
}
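
A possible root cause worth adding: Pipeline steps such as build and echo are not supposed to be called from a @NonCPS-annotated method, and doing so can produce exactly this kind of silent early exit. Here is a sketch of my own variation (not the original answer) that drops the annotation, so the loop runs as regular Pipeline code and wait: true can be kept:
// Sketch only: no @NonCPS, so the build step is safe inside the loop and
// the parent still waits for each child build. Iterating over a snapshot
// of the keys avoids the non-serializable Map iterator issue in CPS code.
def invokeDeploy(map) {
    def keys = map.keySet() as List
    for (int i = 0; i < keys.size(); i++) {
        def key = keys[i]
        echo "Starting ${key}"
        build job: 'Child_Job', parameters: [
            string(name: 'project', value: key),
            string(name: 'version', value: map[key])
        ], quietPeriod: 2, wait: true
        echo "Completed ${key}"
    }
}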
