I want to trigger several different pipeline jobs, depending on the input parameters of a Controller Pipeline job.
Within this job I build the names of the other pipelines I want to trigger from a list returned by a Python script.
node {
    stage('Get_Clusters_to_Build') {
        copyArtifacts filter: params.file_name_var_mapping, fingerprintArtifacts: true, projectName: 'UpdateConfig', selector: lastSuccessful()
        script {
            cmd_string = 'determine_ci_builds --jobname ' + env.JOB_NAME
            clusters = bat(script: cmd_string, returnStdout: true)
            output_array = clusters.split('\n')
            cluster_array = output_array[2].split(',')
        }
        echo "${clusters}"
    }

    jobs = Hudson.instance.getAllItems(AbstractProject.class)
    echo "$jobs"

    def builders = [:]
    for (i = 0; i < cluster_array.size(); i++) {
        def cluster = cluster_array[i]
        def job_to_build = "BuildCI_${cluster}".trim()
        echo "### branch${i}"
        echo "### ${job_to_build}"
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
    }
    parallel builders

    stage('TriggerTests') {
        echo "Done"
    }
}
My problem is that some of the jobs whose names I get from the Get_Clusters_to_Build stage may not exist. In that case they cannot be triggered and my job fails.
Now to my question: is there a way to get the names of all pipeline jobs, and how can I use them to check whether a build can be triggered?
I tried jobs = Hudson.instance.getAllItems(AbstractProject.class), but this only gives me the "normal" FreeStyleProject jobs.
I want to do something like this in the loop:
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    echo "### branch${i}"
    echo "### ${job_to_build}"
    // I only want this part to be executed if job_to_build is found in the jobs list, somehow like:
    if job_to_build in jobs: // I know, this is not proper groovy syntax
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
}
parallel builders
All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob, so you can get the names of all Pipeline jobs with the following function:
@NonCPS
def getPipelineJobNames() {
    Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.fullName
}
Then you can use it this way
//...
def jobs = getPipelineJobNames()
if (job_to_build in jobs) {
    //....
}
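Putting this together with the loop from the question, a rough, untested sketch (reusing cluster_array from the question and getPipelineJobNames() from above) could look like this:
def jobs = getPipelineJobNames()
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    // getPipelineJobNames() returns full names, so this assumes the BuildCI_* jobs live at the top level
    if (job_to_build in jobs) {
        builders[job_to_build] = {
            stage(job_to_build) {
                build job_to_build
            }
        }
    } else {
        echo "Skipping ${job_to_build}: no such pipeline job"
    }
}
parallel builders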
Try this syntax to get both standard and pipeline jobs:
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
As @Vitalii Vitrenko wrote, this works fine:
for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
    println job.fullName
}
Related
I have a Jenkins pipeline where I trigger a single job with different parameters. The number of parameters may also change, which changes the number of times the job needs to be triggered. This is why I have the build job in a for loop. Here's what the code looks like:
pipeline {
    stages {
        stage('Setup') {
            steps {
                script {
                    for (int i = 0; i < list_one.size(); i++) {
                        def index_i = i
                        for (int j = 0; j < list_two.size(); j++) {
                            def index_j = j
                            stage("${list_one[i]} ${list_two[j]}") {
                                sh "echo 'index_i: ${index_i}'"
                                sh "echo 'index_j: ${index_j}'"
                                build job: 'Downstream Job', parameters: [
                                    string(name: 'some_param', value: "${list_one[index_i]}")
                                ]
                            }
                        }
                    }
                }
            }
        }
    }
}
When I run this pipeline, it only runs the first iteration of both loops. However, when I remove the build job line, the pipeline runs for all the values in the lists. I'm perplexed by why this happens and would appreciate some assistance.
Or you can use something like propagate: false on the build step.
Is there a way to use "propagate=false" in a Jenkinsfile with declarative syntax directly for stage/step?
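propagate is an argument of the build step itself, so it can also be used from a declarative pipeline inside a steps/script block. A minimal sketch (the job and parameter names are just placeholders taken from the question, not tested):
pipeline {
    agent any
    stages {
        stage('Setup') {
            steps {
                script {
                    // propagate: false keeps a downstream failure from aborting this pipeline;
                    // the result can still be inspected on the returned object.
                    def downstream = build job: 'Downstream Job',
                        parameters: [string(name: 'some_param', value: 'example')],
                        propagate: false
                    echo "Downstream build finished with: ${downstream.result}"
                }
            }
        }
    }
}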
I am trying to create a pipeline in Jenkins which triggers the same job multiple times on different nodes (agents).
I have a "Create_Invoice" job in Jenkins, configured with "Execute concurrent builds if necessary".
If I click Build 10 times, it will run 10 times on different (available) agents/nodes.
Instead of clicking 10 times, I want to create a parallel pipeline.
I created something like below; it triggers the job, but only once.
What am I missing, or is it even possible to trigger the same job more than once at the same time from a pipeline?
Thank you in advance.
node {
    def notifyBuild = { String buildStatus ->
        // A build status of null means the build was successful
        buildStatus = buildStatus ?: 'SUCCESSFUL'
    }

    def tasks = [:]
    try {
        tasks["Test-1"] = {
            stage("Test-1") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        tasks["Test-2"] = {
            stage("Test-2") {
                b = build(job: "Create_Invoice", propagate: false).result
            }
        }
        parallel tasks
    } catch (e) {
        // If an exception was thrown, the build failed
        currentBuild.result = "FAILED"
        throw e
    } finally {
        notifyBuild(currentBuild.result)
    }
}
I had the same problem and solved it by passing different parameters to the same job. You should add parameters to your build steps, although you obviously don't need them. For example, I added a string parameter.
tasks["Test-1"] = {
    stage("Test-1") {
        b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "1")], propagate: false).result
    }
}
tasks["Test-2"] = {
    stage("Test-2") {
        b = build(job: "Create_Invoice", parameters: [string(name: "PARAM", value: "2")], propagate: false).result
    }
}
As long as the same parameters (or no parameters) are passed to the same job, the job is only triggered once.
See also this Jenkins issue, which describes the same problem:
https://issues.jenkins.io/browse/JENKINS-55748
I think you have to switch to a Declarative pipeline instead of a Scripted pipeline.
Declarative pipelines support parallel stages, which is what you want:
https://www.jenkins.io/blog/2017/09/25/declarative-1/
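For the concrete case in the question, a declarative sketch (untested, job name taken from the question) could look like the block below; note that the point from the other answer about identical queued builds being collapsed may still apply unless the parameters differ:
pipeline {
    agent any
    stages {
        stage('Invoices') {
            parallel {
                stage('Test-1') {
                    steps {
                        // propagate: false so a failing downstream build does not abort the other branch
                        build job: 'Create_Invoice', propagate: false
                    }
                }
                stage('Test-2') {
                    steps {
                        build job: 'Create_Invoice', propagate: false
                    }
                }
            }
        }
    }
}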
This example grabs the available agents from Jenkins and iterates over them, running the pipeline stages on every active agent.
With this approach you no longer need to invoke this job from an upstream job multiple times to build on different agents. The job itself manages everything and runs all the defined stages on every online node.
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        node(c.node.labelString) {
            stage('steps-one') {
                echo "Hello from Steps One"
            }
            stage('stage-two') {
                echo "Hello from Steps Two"
            }
        }
    } else {
        println "SKIP ${c.node.labelString} because the status is: ${c.node.toComputer().online}"
    }
}
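If the goal is to run those stages on all online agents at the same time rather than one after another, the same idea can be collected into a map and passed to parallel. A rough, untested sketch:
def agentStages = [:]
jenkins.model.Jenkins.instance.computers.each { c ->
    if (c.node.toComputer().online) {
        def label = c.node.labelString // each iteration gets its own label variable for the closure
        agentStages["run on ${label}"] = {
            node(label) {
                stage("steps-one on ${label}") {
                    echo "Hello from Steps One on ${label}"
                }
                stage("stage-two on ${label}") {
                    echo "Hello from Steps Two on ${label}"
                }
            }
        }
    } else {
        println "SKIP ${c.node.labelString} because it is offline"
    }
}
parallel agentStages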
I'm new to Jenkins, so I hope my terms are correct:
I have a Jenkins job that triggers another job. This second job tests a very long list of items (maybe 2000) that it gets from the trigger.
Because it's such a long list, I pass it to the second job in groups of 20.
Unfortunately, this run turned out to take an extremely long time, and I can't stop it.
No matter what I tried, stop/kill only stops the current group of 20 and then proceeds to the next group.
Waiting for it to finish, or doing this manually for each group, is not an option.
I guess the entire list was already passed to the second job, and it loads the next group whenever the current one ends.
What I tried:
Clicking the "stop" button next to the build on the trigger job and on the second job
Using the Purge Build Queue add-on
Using the following script in the script console:
def jobname = "Trigger Job"
def buildnum = 123
def job = Jenkins.instance.getItemByFullName(jobname)
for (build in job.builds) {
    if (buildnum == build.getNumber().toInteger()) {
        if (build.isBuilding()) {
            build.doStop();
            build.doKill();
        }
    }
}
Using the following script in the script console:
String job = 'Job name';
List<Integer> build_list = [];
def result = jenkins.model.Jenkins.instance.getItem(job).getBuilds().findAll {
    it.isBuilding() == true && (!build_list || build_list.contains(it.id.toInteger()))
}.each { it.doStop() }.collect { it.id };
println new groovy.json.JsonBuilder(result).toPrettyString();
This is the Groovy part of my code that splits the list into groups of 20. Maybe I should put the parallel part outside the sub-list loop?
Is there a better way to divide the list into sub-lists for future use?
stages {
    stage('Execute tests') {
        steps {
            script {
                // Limit the number of items to run
                def shortList = IDs.take(IDs.size()) // For testing purposes, can be removed if not needed
                println(Arrays.toString(shortList))
                // Divide the list of items into small, equal sub-lists
                def colList = shortList.toList().collate(20)
                for (subList in colList) {
                    testStepsForParallel = subList.collectEntries {
                        ["Testing on ${it}": {
                            catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                                stage(it) {
                                    def buildWrapper = build job: "Job name",
                                        parameters: [
                                            string(name: 'param1', value: it.trim()),
                                            string(name: 'param2', value: "")
                                        ],
                                        propagate: false
                                    remoteBuildResult = buildWrapper.result
                                    println("Remote build results: ${remoteBuildResult}")
                                    if (remoteBuildResult == "FAILURE") {
                                        currentBuild.result = "FAILURE"
                                    }
                                    catchError(stageResult: 'UNSTABLE') {
                                        copyArtifacts projectName: "Job name", selector: specific("${buildWrapper.number}")
                                    }
                                }
                            }
                        }]
                    }
                    parallel testStepsForParallel
                }
            }
        }
    }
}
Thanks for your help!
I don't know what else to do to stop this run.
I'm trying to iterate over the currently online Jenkins agents in order to execute commands on each one, but I'm not sure what I'm missing: creating parallel stages based on the available agents isn't working as expected.
This is the current code that I'm using:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    def jenkins = Jenkins.instance
                    def computers = jenkins.computers
                    agents = [:]
                    for (i in computers) {
                        // Printing the output works
                        echo "${i.displayName}"
                        if (i.hostName) {
                            // this doesn't work
                            agents["${i.displayName}"] = {
                                echo 'this would be executed'
                            }
                            // end
                        }
                    }
                    parallel agents
                }
            }
        }
    }
}
Question
I have a simple parallel pipeline (see code) which I use together with Jenkins 2.89.2. Additionally I use parameters and now want to be able to increase or decrease the number of deployVM A..Z stages automatically by providing a parameter before job execution.
How can I dynamically build my pipeline by providing a parameter?
Researched so far:
Jenkins pipeline script created dynamically - Not getting this to work with my Jenkins version
Can I create dynamically stages in a Jenkins pipeline? - Not working either
Code
The pseudo code of what I want - dynamic generation:
pipeline {
    agent any
    parameters {
        string(name: 'countTotal', defaultValue: '3')
    }
    stages {
        stage('deployVM') {
            def list = [:]
            for (int i = 0; i < countTotal.toInteger; i++) {
                list += stage("deployVM ${i}") {
                    steps {
                        script {
                            sh "echo p1; sleep 12s; echo phase${i}"
                        }
                    }
                }
            }
            failFast true
            parallel list
        }
    }
}
The code I have so far, which executes in parallel but is static:
pipeline {
    agent any
    stages {
        stage('deployVM') {
            failFast true
            parallel {
                stage('deployVM A') {
                    steps {
                        script {
                            sh "echo p1; sleep 12s; echo phase1"
                        }
                    }
                }
                stage('deployVM B') {
                    steps {
                        script {
                            sh "echo p1; sleep 20s; echo phase2"
                        }
                    }
                }
            }
        }
    }
}
Although the question assumes a declarative pipeline, I would suggest using a scripted pipeline because it is far more flexible.
Your task can be accomplished this way:
properties([
    parameters([
        string(name: 'countTotal', defaultValue: '3')
    ])
])

def stages = [failFast: true]
for (int i = 0; i < params.countTotal.toInteger(); i++) {
    def vmNumber = i // alias the loop variable so it can be referenced in the closure
    stages["deployVM ${vmNumber}"] = {
        stage("deployVM ${vmNumber}") {
            sh "echo p1; sleep 12s; echo phase${vmNumber}"
        }
    }
}

node() {
    parallel stages
}
Also take a look at the Snippet Generator, which lets you generate scripted pipeline code.
You can also achieve this using a Declarative pipeline.
See my answer HERE.
In the answer linked above I used Var.collectEntries, but a map can also be used.
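For illustration, here is roughly how that collectEntries idea can look inside a declarative script block (a sketch only, reusing the countTotal parameter from the question; not tested):
pipeline {
    agent any
    parameters {
        string(name: 'countTotal', defaultValue: '3')
    }
    stages {
        stage('deployVM') {
            steps {
                script {
                    // Build a map of branch name -> closure, then run the branches in parallel.
                    def deploys = (0..<params.countTotal.toInteger()).collectEntries { i ->
                        ["deployVM ${i}": {
                            sh "echo p1; sleep 12s; echo phase${i}"
                        }]
                    }
                    deploys.failFast = true
                    parallel deploys
                }
            }
        }
    }
}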
@Vitalii
I wrote a similar piece of code, but unfortunately all three looped elements show the last one. I'm not sure whether it has something to do with Groovy or the Jenkinsfile itself, and whether some closure/reference breaks with this usage.
My purpose is to distribute tasks to specific worker nodes.
def node_candidates = ["worker-1", "worker-2", "worker-3"]
def jobs = [:]
for (node_name in node_candidates) {
    jobs["run on $node_name"] = {     // good
        stage("run on $node_name") {  // all show the third
            node(node_name) {         // all show the third
                print "on $node_name"
                sh "hostname"
            }
        }
    }
}
parallel jobs
It works totally fine if I expand/unroll the loop instead of looping over it, like:
parallel worker_1: {
    stage("worker_1") {
        node("worker_1") {
            sh """hostname ; pwd """
            print "on worker_1"
        }
    }
}, worker_2: {
    stage("worker_2") {
        node("worker_2") {
            sh """hostname ; pwd """
            print "on worker_2"
        }
    }
}, worker_3: {
    stage("worker_3") {
        node("worker_3") {
            sh """hostname ; pwd """
            print "on worker_3"
        }
    }
}
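For reference, this is the same loop-variable capture problem that the scripted answer above works around with def vmNumber = i. Aliasing the node name inside the loop (or iterating with .each) usually fixes the all-show-the-last behaviour; a rough, untested sketch:
def node_candidates = ["worker-1", "worker-2", "worker-3"]
def jobs = [:]
for (node_name in node_candidates) {
    def current = node_name // alias the loop variable so each closure captures its own copy
    jobs["run on ${current}"] = {
        stage("run on ${current}") {
            node(current) {
                print "on ${current}"
                sh "hostname"
            }
        }
    }
}
parallel jobs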