I have a job configured in Jenkins that has 4-5 choice parameters. Until now we would do "Build with Parameters", select one of the values, and run the job.
Now a new requirement has come in where the same job has to be triggered with each of these parameter values, one by one.
I am quite new to Jenkins and could not find an exact solution for this requirement. Looking for some help here.
Thanks.
You could use a Pipeline to trigger it:
node {
    try {
        stage('1st Parameter') {
            build job: 'target_job_name_here', parameters: [
                string(name: 'parameter_1', value: 'Parameter1-value')
            ]
        }
    } catch (err) {
        echo "1st Parameter fail"
    }
    try {
        stage('2nd Parameter') {
            build job: 'target_job_name_here', parameters: [
                string(name: 'parameter_2', value: 'Parameter2-value')
            ]
        }
    } catch (err) {
        echo "2nd Parameter fail"
    }
    try {
        stage('3rd Parameter') {
            build job: 'target_job_name_here', parameters: [
                string(name: 'parameter_3', value: 'Parameter3-value')
            ]
        }
    } catch (err) {
        echo "3rd Parameter fail"
    }
}
Not sure if that would help?
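If the values are known up front, you could also loop over them instead of repeating the stage block three times. A minimal sketch under that assumption (the job name, parameter name and values are placeholders, as above):

node {
    // Hypothetical list of the choice values to run through one by one
    def choiceValues = ['Parameter1-value', 'Parameter2-value', 'Parameter3-value']

    choiceValues.each { val ->
        stage("Run with ${val}") {
            try {
                // Trigger the same downstream job once per value
                build job: 'target_job_name_here', parameters: [
                    string(name: 'parameter_1', value: val)
                ]
            } catch (err) {
                echo "Run with ${val} failed"
            }
        }
    }
}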
I have a pipeline with some of the details shown below
pipeline {
    agent any
    parameters {
        booleanParam(name: 'RERUN', defaultValue: false, description: 'Run Failed Tests')
    }
    stages {
        stage('Run tests') {
            steps {
                runTest()
            }
        }
    }
    post {
        always {
            reRun()
        }
    }
}
def reRun() {
    if ("SUCCESS".equals(currentBuild.result)) {
        echo "LAST BUILD WAS SUCCESS"
    } else if ("UNSTABLE".equals(currentBuild.result)) {
        echo "LAST BUILD WAS UNSTABLE"
    }
}
but I want that, after the "Run tests" stage executes, if some tests fail, the pipeline is re-run with the parameter RERUN set to true instead of false. How can I replay it via script instead of using plugins?
I wasn't able to find how to re-run with parameters in my search; if someone could help me I would be grateful.
First of all, you can use the post section to determine whether the build was unstable:
post {
    unstable {
        echo "..."
    }
}
Then you could just trigger the same job with the new parameter like this:
build job: 'your-project-name', parameters: [[$class: 'BooleanParameterValue', name: 'RERUN', value: Boolean.valueOf("true")]]
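Putting the two together, a minimal sketch (the job name is a placeholder; this assumes re-triggering the job from its own post section is acceptable in your setup):

post {
    unstable {
        // Re-trigger this same job with RERUN=true; wait: false avoids blocking the current build
        build job: 'your-project-name',
              parameters: [booleanParam(name: 'RERUN', value: true)],
              wait: false
    }
}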
To explain the issue, consider that I have 2 Jenkins jobs.
Job1: PARAM_TEST1
It accepts a parameter called 'MYPARAM'.
Job2: PARAM_TEST2
It also accepts a parameter called 'MYPARAM'.
Sometimes I need to run these 2 jobs in sequence, so I created a separate pipeline job as shown below. It works just fine.
It also accepts a parameter called 'MYPARAM', which it simply passes to the build job steps.
pipeline {
    agent any
    stages {
        stage("PARAM 1") {
            steps {
                build job: 'PARAM_TEST1', parameters: [string(name: 'MYPARAM', value: "${params.MYPARAM}")]
            }
        }
        stage("PARAM 2") {
            steps {
                build job: 'PARAM_TEST2', parameters: [string(name: 'MYPARAM', value: "${params.MYPARAM}")]
            }
        }
    }
}
My question:
This example is simple. Actually I have 20 jobs. I do not want to repeat parameters: [string(name: 'MYPARAM', value: "${params.MYPARAM}")] in every single stage.
Is there any way to set the parameters for all the build job steps in one single place?
What you could do is place the common params at the pipeline level and add the job-specific ones to them in the stages:
pipeline {
    agent any
    parameters {
        string(name: 'PARAM1', description: 'Param 1?')
        string(name: 'PARAM2', description: 'Param 2?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params}"
                script {
                    def myparams = params + string(name: 'MYPARAM', value: "${params.MYPARAM}")
                    build job: 'downstream-pipeline-with-params', parameters: myparams
                }
            }
        }
    }
}
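If params + string(...) does not work on your Jenkins version (the build step expects a list of parameter values rather than a map), a common variant is to convert the current build's params map into that list once and reuse it everywhere. A sketch, with the downstream job names taken from the question:

script {
    // Turn this build's params map into the list form the build step expects
    def myparams = params.collect { string(name: it.key, value: String.valueOf(it.value)) }
    ['PARAM_TEST1', 'PARAM_TEST2'].each { jobName ->
        build job: jobName, parameters: myparams
    }
}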
I am new to Jenkins. I have spent the last few weeks creating jobs to execute chains of shell commands, but now that I am trying to find out how to chain jobs together, I have failed to find the answer I was looking for.
I have a CreateStack job, and if it fails somehow, I'd like to run DeleteStack to remove the stuff that CreateStack left behind while failing. If CreateStack does not fail, build the rest of the jobs.
Something like this:
b = build(job: "CreateStack", propagate: false, parameters: [string(name: 'TASVersion', value: "$TASVersion"), string(name: 'CloudID', value: "$CloudID"), string(name: 'StackName', value: "$StackName"), booleanParam(name: 'Swap partition required', value: true)]).result
if (b == 'FAILURE') {
    echo "CreateStack has failed. Running DeleteStack."
    build(job: "DeleteStack", parameters: [string(name: 'CloudID', value: "$CloudID"), string(name: 'StackName', value: "$StackName")])
} else {
    build job: 'TAS Deploy', parameters: [string(name: 'FT_NODE_IP', value: "$FT-NodeIP"), string(name: 'TASVersion', value: "RawTASVersion")]
}
Could somebody help me out with this, please?
Also, can I use variables in a pipeline script like this? I set the project to be parameterized and added the necessary choice parameters, e.g.: $StackName
You can try something like this in a scripted pipeline:
node {
    try {
        stage('CreateStack') {
            build(job: 'CreateStack', parameters: [<parameters>])
        }
        stage('OtherJobs') {
            // build the rest of the jobs
        }
    } catch (error) {
        build(job: 'DeleteStack', parameters: [<parameters>])
        currentBuild.result = "FAILURE"
        throw error
    } finally {
        build(job: 'LastJob', parameters: [<parameters>])
    }
}
Please note that the catch block is executed if any of the jobs fails. There you would have to implement a little additional logic.
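For example, a minimal sketch of that extra logic, using a flag so DeleteStack only runs when CreateStack itself was the job that failed (<parameters> stands for the same placeholders as above):

node {
    def createStackFailed = false
    try {
        stage('CreateStack') {
            try {
                build(job: 'CreateStack', parameters: [<parameters>])
            } catch (error) {
                createStackFailed = true
                throw error
            }
        }
        stage('OtherJobs') {
            // build the rest of the jobs
        }
    } catch (error) {
        if (createStackFailed) {
            // Clean up whatever CreateStack left behind
            echo "CreateStack has failed. Running DeleteStack."
            build(job: 'DeleteStack', parameters: [<parameters>])
        }
        currentBuild.result = 'FAILURE'
        throw error
    }
}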
I've tried using the following script, but all downstream jobs run on different nodes.
Any idea how I can get a random node and run all the downstream jobs on that same one?
#!/usr/bin/env groovy
pipeline {
    agent { label 'WindowsServer' }
    stages {
        stage("Get Dev Branch") {
            steps {
                script {
                    build(job: "GetDevBranchStep", parameters: [string(name: 'DevBranchName', value: "${params.CloudDevBranch}")])
                }
            }
        }
        stage("Get SA Branch") {
            steps {
                script {
                    build(job: "GetSABranchStep", parameters: [string(name: 'SABranchName', value: "${params.SABranch}")])
                }
            }
        }
        stage("Compile Models and Copy To Network Folder") {
            steps {
                script {
                    build(job: "CompileNewModelsAndCopyToNetwork", parameters: [string(name: 'DevBranchName', value: "${params.CloudDevBranch}"), string(name: 'SABranchName', value: "${params.SABranch}"), string(name: 'GetSAStepJobName', value: "GetSABranchStep"), string(name: 'GetDevRepoJobName', value: "GetDevBranchStep"), string(name: 'NetworkFoderToCopyTo', value: "NetworkFolderAddress")])
                }
            }
        }
    }
}
Provide the downstream jobs with ${NODE_NAME} as an additional parameter.
In the downstream job's agent section you can then use:
agent { label "${params.NODE_NAME}" }
(Meanwhile, I have not found a way to inject the upstream job's parameters into the downstream job without actually passing them one by one as input parameters.)
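On the upstream side that could look like the sketch below, shown against the first stage from the question (this assumes NODE_NAME is declared as a string parameter on the downstream job):

stage("Get Dev Branch") {
    steps {
        script {
            // Forward the node this pipeline runs on, so the downstream job pins itself to it
            build(job: "GetDevBranchStep", parameters: [
                string(name: 'DevBranchName', value: "${params.CloudDevBranch}"),
                string(name: 'NODE_NAME', value: "${env.NODE_NAME}")
            ])
        }
    }
}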
I have a Jenkins Job DSL seed job that calls out to a couple of pipeline jobs e.g.
pipelineJob("job1") {
definition {
cps {
script(readFileFromWorkspace('job1.groovy'))
}
parameters {
choiceParam('ENV', ['dev', 'prod'], 'Build Environment')
}
}
}
pipelineJob("job2") {
definition {
cps {
script(readFileFromWorkspace('job2.groovy'))
}
parameters {
choiceParam('ENV', ['dev', 'prod'], 'Build Environment')
}
}
}
job1.groovy and job2.groovy are standard Jenkinsfile style pipelines.
I want to pass a couple of common maps into these jobs. These contain things that may vary between environments, e.g. target servers and credential names.
Something like:
def SERVERS_MAP = [
    'prod': [
        'prod-server1',
        'prod-server2',
    ],
    'dev': [
        'dev-server1',
        'dev-server2',
    ],
]
Can I define a map in my seed job that I can then pass and access as a map in my pipeline jobs?
I've come up with a hacky workaround using the pipeline-utility-steps plugin.
Essentially I pass my data maps around as JSON.
So my seed job might contain:
def SERVERS_MAP = '''
{
    "prod": [
        "prod-server1",
        "prod-server2"
    ],
    "dev": [
        "dev-server1",
        "dev-server2"
    ]
}
'''
pipelineJob("job1") {
definition {
cps {
script(readFileFromWorkspace('job1.groovy'))
}
parameters {
choiceParam('ENV', ['dev', 'prod'], 'Build Environment')
stringParam('SERVERS_MAP', "${SERVERS_MAP}", "")
}
}
}
and my pipeline would contain something like:
def serversMap = readJSON text: SERVERS_MAP
def targetServers = serversMap["${ENV}"]
targetServers.each { server ->
    echo server
}
I could also extract these variables into a JSON file and read them from there.
Although it works, it feels wrong somehow.
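For the file-based variant mentioned above, a minimal sketch (assuming a servers.json file sits next to the seed script in the seed job's workspace):

pipelineJob("job1") {
    parameters {
        choiceParam('ENV', ['dev', 'prod'], 'Build Environment')
        // readFileFromWorkspace returns the file contents as a string
        stringParam('SERVERS_MAP', readFileFromWorkspace('servers.json'), "")
    }
    definition {
        cps {
            script(readFileFromWorkspace('job1.groovy'))
        }
    }
}

The pipeline side stays the same: readJSON text: SERVERS_MAP.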
You can use a string parameter to pass the Map value; the downstream job then reads it back as JSON.
UPSTREAM PIPELINE
timestamps {
    node("sse_lab_CI_076") { // ${execNode}
        currentBuild.description = "${env.NODE_NAME};"
        stage("-- regression execute --") {
            def test_map = """
            {
                "gerrit_patchset_commit": "aad5fce",
                "build_cpu_x86_ubuntu": [
                    "centos_compatible_build_test",
                    "gdb_compatible_build_test",
                    "visual_profiler_compatible_build_test"
                ]
            }
            """
            build(job: 'tops_regression_down',
                  parameters: [
                      string(name: 'UPSTREAM_JOB_NAME', value: "${env.JOB_BASE_NAME}"),
                      string(name: 'UPSTREAM_BUILD_NUM', value: "${env.BUILD_NUMBER}"),
                      string(name: 'MAP_PARAM', value: "${test_map}")
                  ],
                  propagate: true,
                  wait: true)
        }
    }
}
DOWNSTREAM PIPELINE
timestamps {
    node("sse_lab_inspur_076") { // ${execNode}
        currentBuild.description = "${env.NODE_NAME};"
        stage('--in precondition--') {
            dir('./') {
                cleanWs()
                println("hello world")
                println("${env.MAP_PARAM}")
                Map result_json = readJSON(text: "${env.MAP_PARAM}")
                println(result_json)
            }
        }
    }
}
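For env.MAP_PARAM to be populated in the downstream scripted pipeline, the downstream job has to declare MAP_PARAM as a string parameter. A minimal sketch of declaring it from the downstream script itself (parameter names as in the upstream call above):

properties([
    parameters([
        string(name: 'UPSTREAM_JOB_NAME', defaultValue: ''),
        string(name: 'UPSTREAM_BUILD_NUM', defaultValue: ''),
        // JSON payload passed in by the upstream build step
        string(name: 'MAP_PARAM', defaultValue: '{}')
    ])
])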