I have a parameterized Jenkins Pipeline with a default value, and I'm trying to pass that param as a script argument, but it doesn't seem to pass anything. Here is the script:
pipeline {
agent any
stages {
stage('Building') {
steps {
build job: 'myProject', parameters: [string(name: 'configuration', value: '${configuration}')]
}
}
stage('Doing stuff') {
steps {
sh "~/scripts/myScript ${configuration}"
}
}
}
}
It seems to work for the build step but not for the script: it returns an error saying I have no argument.
I tried to get it with ${configuration}, ${params.configuration} and $configuration.
What is the right way to access a param and pass it correctly to a script?
Thanks.
Actually, you are using the build step to pass a parameter to the Jenkins job 'myProject':
build job: 'myProject', parameters: [string(name: 'configuration', value: '${configuration}')]
If you want to use a parameter in this job, you need to declare it in a "parameters" block:
pipeline {
agent any
parameters {
string(defaultValue: '', description: '', name: 'configuration')
}
stages {
stage('Doing stuff') {
steps {
sh "~/scripts/myScript ${configuration}"
}
}
}
}
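A note on the original build step as well: single-quoted Groovy strings are not interpolated, so value: '${configuration}' passes the literal text ${configuration} downstream. A minimal corrected sketch, reusing the job and parameter names from the question:

build job: 'myProject',
    parameters: [string(name: 'configuration', value: params.configuration)]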
I have a situation where the build number of one job has to be passed to another job and the next job will use that as a parameter.
stages {
stage('Build Job1') {
steps {
script {
build job: "001_job"
build_001 = env.BUILD_NUMBER // pseudocode: what I want is the build number of 001_job
echo env.BUILD_NUMBER // this echoes the build number of this job, not of 001_job
}
}
}
stage('job_002') {
steps {
script {
build job: "job_002", parameters: [string(name: "${PAYLOAD_PARAM}", value: "$build_001")]
}
}
}
}
}
I figured out a way to do this.
You need a global environment variable, and then assign it the build number returned by the build step, as in the solution below:
pipeline {
environment {
BUILD_NUM = ''
}
agent {
label 'master'
}
stages {
stage('Build Job1') {
steps {
script {
def build_job = build job: "001_job"
def build_num1 = build_job.getNumber()
BUILD_NUM = "${build_num1}"
echo BUILD_NUM // build number of 001_job
}
}
}
stage('job_002') {
steps {
script {
build job: "job_002", parameters: [string(name: "${PAYLOAD_PARAM}", value: BUILD_NUM))]
}
}
}
}
}
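As an aside, the build step (with its default wait: true) returns a RunWrapper object, so the downstream build number can also be taken straight from the return value rather than via env.BUILD_NUMBER. A minimal sketch, reusing the job name from above:

script {
    def run = build job: '001_job'          // RunWrapper for the downstream run
    env.BUILD_NUM = run.number.toString()   // equivalent to run.getNumber()
}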
I have a Jenkins Job, configured as a Scripted Jenkins Pipeline, which:
Checks out the code from GitHub
merges in developer changes
builds a debug image
it is then supposed to split into 3 separate parallel processes - one of which builds the release version of the code and unit tests it.
The other 2 processes are supposed to be identical, with the debug image being flashed onto a target and various tests running.
The targets are identified in Jenkins as slave_1 and slave_2 and are both allocated the label 131_ci_targets
I am using 'parallel' to trigger the release build, and the multiple instances of the test job. I will post a (slightly redacted) copy of my Scripted pipeline below for full reference, but for the question I have tried all 3 of the following options.
Using a single build call with LabelParameterValue and allNodesMatchingLabel set to true. Here, TEST_TARGETS is the label 131_ci_targets.
parallel_steps = [:]
parallel_steps["release"] = { // Release build and test steps
}
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'LabelParameterValue',
name: 'RUN_NODE', label: "${TEST_TARGETS}",
allNodesMatchingLabel: true,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
stage('Parallel'){
parallel parallel_steps
} // Parallel
Using a single build call with NodeParameterValue and a list of all nodes. Here TEST_TARGETS is again the label, while test_nodes is a list of 2 strings: [slave_1, slave_2].
parallel_steps = [:]
parallel_steps["release"] = { // Release build and test steps
}
test_nodes = hostNames("${TEST_TARGETS}")
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: test_nodes,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
stage('Parallel'){
parallel parallel_steps
} // Parallel
3: Using multiple stages, each with a single build call with NodeParameterValue and a list containing only one slave id.
test_nodes is the list of strings [slave_1, slave_2]; the first call passes slave_1 and the second slave_2.
for ( tn in test_nodes ) {
parallel_steps["${tn}"] = {
stage("${tn}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [tn],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
} // ${tn}
}
All of the above will trigger only a single run of 'Trial_Test_Pipe', on slave_2, assuming that both slave_1 and slave_2 are defined, online, and have available executors.
The Trial_Test_Pipe job is another Jenkins Pipeline job, and has the checkbox "Do not allow concurrent builds" unchecked.
Any thoughts on:
Why the job will only trigger one of the runs, not both?
What the correct solution may be?
For reference, here is my full(ish) scripted Jenkins job:
import hudson.model.*
import hudson.EnvVars
import groovy.json.JsonSlurperClassic
import groovy.json.JsonBuilder
import groovy.json.JsonOutput
import java.net.URL
def BUILD_SLAVE=""
// clean the workspace before starting the build process
def clean_before_build() {
bat label:'',
script: '''cd %GITHUB_REPO_PATH%
git status
git clean -x -d -f
'''
}
// Routine to build the firmware
// Can build Debug or Release depending on the environment variables
def build_the_firmware() {
return // stubbed out for this post; the real build script is below
def batch_script = """
REM *** Build script here
echo "... Build script here ..."
"""
bat label:'',
script: batch_script
}
// Copy the hex files out of the Build folder and into the Jenkins workspace
def copy_hex_files_to_workspace() {
return // stubbed out for this post; the real copy script is below
def batch_script = """
REM *** Copy HEX file to workspace:
echo "... Copy HEX file to workspace ..."
"""
bat label:'',
script: batch_script
}
// Updated from stackOverflow answer: https://stackoverflow.com/a/54145233/1589770
@NonCPS
def hostNames(label) {
nodes = []
jenkins.model.Jenkins.instance.computers.each { c ->
if ( c.isOnline() ){
labels = c.node.labelString
labels.split(' ').each { l ->
if (l == label) {
nodes.add(c.node.selfLabel.name)
}
}
}
}
return nodes
}
try {
node('Build_Slave') {
BUILD_SLAVE = "${env.NODE_NAME}"
echo "build_slave=${BUILD_SLAVE}"
stage('Checkout Repo') {
// Set a description on the build history to make for easy identification
currentBuild.setDescription("Pull Request: ${PULL_REQUEST_NUMBER} \n${TARGET_BRANCH}")
echo "... checking out dev code from our repo ..."
} // Checkout Repo
stage ('Merge PR') {
// Merge the base branch into the target for test
echo "... Merge the base branch into the target for test ..."
} // Merge PR
stage('Build Debug') {
withEnv(['LIB_MODE=Debug', 'IMG_MODE=Debug', 'OUT_FOLDER=Debug']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Debug
stage('Post Build') {
if (currentBuild.resultIsWorseOrEqualTo("UNSTABLE")) {
echo "... Send a mail to the Admins and the Devs ..."
}
} // Post Build
} // node
parallel_steps = [:]
parallel_steps["release"] = {
node("${BUILD_SLAVE}") {
stage('Build Release') {
withEnv(['LIB_MODE=Release', 'IMG_MODE=Release', 'OUT_FOLDER=build\\Release']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Release
stage('Unit Tests') {
echo "... do Unit Tests here ..."
}
}
} // release
test_nodes = hostNames("${TEST_TARGETS}")
if (true) { // Option 1: LabelParameterValue with allNodesMatchingLabel
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'LabelParameterValue',
name: 'RUN_NODE', label: "${TEST_TARGETS}",
allNodesMatchingLabel: true,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
} else if (false) { // Option 2: NodeParameterValue with the full node list
parallel_steps["${TEST_TARGETS}"] = {
stage("${TEST_TARGETS}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: test_nodes,
nodeEligibility: [$class: 'AllNodeEligibility']]]
}
} // ${TEST_TARGETS}
} else { // Option 3: one build call per node
for ( tn in test_nodes ) {
parallel_steps["${tn}"] = {
stage("${tn}") {
echo "test_nodes: ${test_nodes}"
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: "${TARGET_BRANCH}"),
string(name: 'FRAMEWORK_VERSION', value: "${FRAMEWORK_VERSION}"),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [tn],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
} // ${tn}
}
}
stage('Parallel'){
parallel parallel_steps
} // Parallel
} // try
catch (Exception ex) {
if ( manager.logContains(".*Merge conflict in .*") ) {
manager.addWarningBadge("Pull Request ${PULL_REQUEST_NUMBER} Experienced Git Merge Conflicts.")
manager.createSummary("warning.gif").appendText("<h2>Experienced Git Merge Conflicts!</h2>", false, false, false, "red")
}
echo "... Send a mail to the Admins and the Devs ..."
throw ex
}
So ... I have a solution for this ... as in, I understand what to do, and why the closest of the above options still wasn't working.
The winner is Option 3. The reason it wasn't working is that the code inside the closure (the stage part) isn't evaluated until the stage is actually run. As a result, the strings aren't expanded until then, and since tn has reached its final value, slave_2, by that point, that's the value used in both parallel branches.
In the Jenkins examples here ... https://jenkins.io/doc/pipeline/examples/#parallel-from-grep ... the closures are returned from a function, transformIntoStep, and by doing this I was able to force early evaluation of the strings and so get parallel steps running on both slaves.
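In plain Groovy terms: all iterations of the for loop share the one tn variable, so every closure sees whatever tn holds when it finally runs. A minimal sketch of the problem and the fix (hypothetical node names):

def steps = [:]
for (tn in ['slave_1', 'slave_2']) {
    steps[tn] = { echo "running on ${tn}" } // both closures capture the same tn
}

// Fix: build each closure in a function, so the value is bound per call
def makeStep(String nodeId) {
    return { echo "running on ${nodeId}" }
}
for (tn in ['slave_1', 'slave_2']) {
    steps[tn] = makeStep(tn)
}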
If you're here looking for answers, I hope this helps. If you are, and it has, please feel free to give me an uptick. Cheers :)
My final scripted Jenkinsfile looks something like this:
import hudson.model.*
import hudson.EnvVars
import groovy.json.JsonSlurperClassic
import groovy.json.JsonBuilder
import groovy.json.JsonOutput
import java.net.URL
BUILD_SLAVE=""
parallel_steps = [:]
// clean the workspace before starting the build process
def clean_before_build() {
bat label:'',
script: '''cd %GITHUB_REPO_PATH%
git status
git clean -x -d -f
'''
}
// Routine to build the firmware
// Can build Debug or Release depending on the environment variables
def build_the_firmware() {
def batch_script = """
REM *** Build script here
echo "... Build script here ..."
"""
bat label:'',
script: batch_script
}
// Copy the hex files out of the Build folder and into the Jenkins workspace
def copy_hex_files_to_workspace() {
def batch_script = """
REM *** Copy HEX file to workspace:
echo "... Copy HEX file to workspace ..."
"""
bat label:'',
script: batch_script
}
// Updated from stackOverflow answer: https://stackoverflow.com/a/54145233/1589770
@NonCPS
def hostNames(label) {
nodes = []
jenkins.model.Jenkins.instance.computers.each { c ->
if ( c.isOnline() ){
labels = c.node.labelString
labels.split(' ').each { l ->
if (l == label) {
nodes.add(c.node.selfLabel.name)
}
}
}
}
return nodes
}
def transformTestStep(nodeId) {
return {
stage(nodeId) {
build job: 'Trial_Test_Pipe',
parameters: [string(name: 'TARGET_BRANCH', value: TARGET_BRANCH),
string(name: 'FRAMEWORK_VERSION', value: FRAMEWORK_VERSION),
[$class: 'NodeParameterValue',
name: 'RUN_NODE', labels: [nodeId],
nodeEligibility: [$class: 'IgnoreOfflineNodeEligibility']]],
wait: false
}
}
}
def transformReleaseStep(build_slave) {
return {
node(build_slave) {
stage('Build Release') {
withEnv(['LIB_MODE=Release', 'IMG_MODE=Release', 'OUT_FOLDER=build\\Release']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Release
stage('Unit Tests') {
echo "... do Unit Tests here ..."
}
}
}
}
try {
node('Build_Slave') {
BUILD_SLAVE = "${env.NODE_NAME}"
echo "build_slave=${BUILD_SLAVE}"
parallel_steps["release"] = transformReleaseStep(BUILD_SLAVE)
test_nodes = hostNames("${TEST_TARGETS}")
for ( tn in test_nodes ) {
parallel_steps[tn] = transformTestStep(tn)
}
stage('Checkout Repo') {
// Set a description on the build history to make for easy identification
currentBuild.setDescription("Pull Request: ${PULL_REQUEST_NUMBER} \n${TARGET_BRANCH}")
echo "... checking out dev code from our repo ..."
} // Checkout Repo
stage ('Merge PR') {
// Merge the base branch into the target for test
echo "... Merge the base branch into the target for test ..."
} // Merge PR
stage('Build Debug') {
withEnv(['LIB_MODE=Debug', 'IMG_MODE=Debug', 'OUT_FOLDER=Debug']){
clean_before_build()
build_the_firmware()
copy_hex_files_to_workspace()
archiveArtifacts "${LIB_MODE}\\*.hex, ${LIB_MODE}\\*.map"
}
} // Build Debug
stage('Post Build') {
if (currentBuild.resultIsWorseOrEqualTo("UNSTABLE")) {
echo "... Send a mail to the Admins and the Devs ..."
}
} // Post Build
} // node
stage('Parallel'){
parallel parallel_steps
} // Parallel
} // try
catch (Exception ex) {
if ( manager.logContains(".*Merge conflict in .*") ) {
manager.addWarningBadge("Pull Request ${PULL_REQUEST_NUMBER} Experienced Git Merge Conflicts.")
manager.createSummary("warning.gif").appendText("<h2>Experienced Git Merge Conflicts!</h2>", false, false, false, "red")
}
echo "... Send a mail to the Admins and the Devs ..."
throw ex
}
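One design note on the final version: the test jobs are launched with wait: false, so the build step returns immediately (and returns null rather than a RunWrapper in that case), meaning failures in Trial_Test_Pipe will not affect the result of this job.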
I need to clean up some Kubernetes namespaces (hello_namespace, second, my_namespace1, my_namespace45, my_namespace44, for example), and I do it with a Jenkins job.
I read the namespaces I need to clean up with kubectl, and then I want to fire a job to delete each one. My code should be something like this:
pipeline {
agent { label 'master' }
stages {
stage('Clean e2e') {
steps {
script {
sh "kubectl get namespace |egrep 'my_namespace[0-9]+'|cut -f1 -d ' '>result.txt"
def output=readFile('result.txt').trim()
}
}
}
}
}
The output of this code will be the variable $output with the values:
my_namespace1
my_namespace45
my_namespace44
The namespaces are separated by newlines. Now I want to fire a job with each namespace as a parameter; how can I do that? (My problem is reading the file and firing an independent job for each namespace.) In pseudocode:
while (output.nextLine()) { callJob }
The job call should be something like:
build job: 'Delete temp Stage', parameters:
[string(name: 'Stage', value: "${env.stage_name}")]
I already got it :)
#!groovy
pipeline {
agent { label 'master' }
stages {
stage('Clean up stages') {
steps {
script {
sh '(kubectl get namespace |egrep "namespace[0-9]+"|cut -f1 -d " "|while read i;do echo -n $i";" ; done;)>result.txt'
def stages = readFile('result.txt').trim().split(';')
for (ns in stages) { // 'ns' rather than 'stage', so the loop variable does not shadow the stage step
if (ns?.trim()) {
echo "deleting stage: $ns"
build job: 'Delete temp Stage', parameters:
[string(name: 'Stage', value: "$ns")]
}
}
}
}
}
}
}
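For what it's worth, the temp file can be avoided by capturing stdout directly. A minimal alternative sketch, under the same assumptions (job 'Delete temp Stage', parameter 'Stage'):

script {
    // egrep -o prints only the matched namespace names, one per line;
    // '|| true' keeps the step from failing when nothing matches
    def out = sh(returnStdout: true,
                 script: "kubectl get namespace | egrep -o 'namespace[0-9]+' || true").trim()
    out.readLines().each { ns ->
        build job: 'Delete temp Stage', parameters: [string(name: 'Stage', value: ns)]
    }
}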
I am trying to create a pipeline with multiple triggers - cron, pull request opened, and manual - and I want to visualize it accordingly.
This is my code with no triggers applied:
pipeline {
agent any
environment {
CI = 'true'
TRIGGER = 'PULL_REQUEST'
}
stages {
stage('Trigger')
{
parallel{
stage('Daily') {
when {
environment name: 'TRIGGER', value: 'DAILY'
}
steps {
sh 'echo "daily build"'
}
}
stage('Pull Request') {
when {
environment name: 'TRIGGER', value: 'PULL_REQUEST'
}
steps {
sh 'echo "pull request trigger"'
}
}
stage('Manual') {
when {
environment name: 'TRIGGER', value: 'MANUAL'
}
steps {
sh 'echo "Manual"'
}
}
}
}
stage('Build') {
steps {
sh 'echo "build step"'
}
}
stage('Test') {
steps {
sh 'echo "test step"'
}
}
stage('Deliver for development') {
when {
environment name: 'CI', value: 'true'
}
steps {
sh 'echo "delivery for development"'
input message: 'Finished using the web site? (Click "Proceed" to continue)'
sh 'echo proceeded'
}
}
stage('Deploy for production') {
when {
environment name: 'CI', value: 'false'
}
steps {
sh 'echo "deploy for production"'
input message: 'Finished using the web site? (Click "Proceed" to continue)'
sh 'echo proceeded'
}
}
}
}
I am able to achieve this with the above code:
Now, my objective is to put the triggers in those parallel stages - Daily, Pull Request and Manual. However, I am not able to do so, as the triggers directive applies to the complete pipeline and seems to be allowed only once in it. How can I achieve this effect?
These 3 triggers would set some environment variables, which further stages would use to modify the pipeline flow depending on the type of trigger.
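One way to get that effect (a hedged sketch, not a tested solution): declare the cron trigger once at pipeline level, then derive TRIGGER in a first stage from currentBuild.getBuildCauses(); the when conditions from the parallel stages above then work unchanged.

pipeline {
    agent any
    triggers { cron('H 2 * * *') } // assumption: one daily schedule
    stages {
        stage('Detect trigger') {
            steps {
                script {
                    // getBuildCauses() lists the causes of the current run
                    def causes = currentBuild.getBuildCauses()
                    if (causes.any { it._class?.contains('TimerTrigger') }) {
                        env.TRIGGER = 'DAILY'
                    } else if (causes.any { it._class?.contains('UserIdCause') }) {
                        env.TRIGGER = 'MANUAL'
                    } else {
                        env.TRIGGER = 'PULL_REQUEST' // e.g. a webhook/SCM cause
                    }
                }
            }
        }
        // ... the parallel Daily / Pull Request / Manual stages and the rest
        // of the pipeline above follow here, keyed off TRIGGER ...
    }
}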