How to trigger multiple jobs at once in Jenkins pipeline?

I've got a Jenkins job hierarchy looking something like this:
Collection/
    ParentJob
    Children/
        Foo
        Bar
        Baz
ParentJob has a few different configurations and the jobs in Children/ need to be built for each of those configurations. These configurations are basically just checking out different branches and building those branches. Additionally, part of each configuration of ParentJob has to be completed before the child jobs can run.
How can I trigger all the jobs in Children after the necessary parts of each ParentJob configuration are finished?
My first thought was to just put a build 'Children/*' step in ParentJob's pipeline, but it seems Jenkins does not support wildcards in that context.
I could explicitly list all the jobs, but that would be tedious (there are several dozen child jobs) and prone to breakage as child jobs are added or removed.
Ideally a solution would allow me to just set up a new child job without touching anything else and have it automatically triggered the next time ParentJob runs.

You could get a list of the child jobs and use a for loop to build them. I haven't tested this, but I see no reason why it would not work.
I structure my job folders in a similar fashion to take advantage of naming conventions and for role-based security.
import jenkins.model.*

def childJobNames = Jenkins.instance.getJobNames().findAll { it.contains("Collection/Children") }
for (def childJobName : childJobNames) {
    build job: childJobName, parameters: [
        string(name: 'Computer', value: manager.ComputerName)
    ], wait: false
}
http://javadoc.jenkins.io/index.html?hudson/model/Hudson.html

You need to use Jenkins Pipeline (formerly Workflow); then you can run a stage, then some stages in parallel, then a sequential set of stages, and so on. This StackOverflow question and answer seems to be a good reference.
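As a rough sketch of that shape (job names are placeholders, not taken from the question), a scripted pipeline can finish the shared setup first and then fan out to the children in parallel:

```groovy
node {
    stage('Setup') {
        // the part of ParentJob that must complete before the children run
    }
    stage('Trigger children') {
        // wait: false fires each child without holding this executor
        parallel(
            foo: { build job: 'Collection/Children/Foo', wait: false },
            bar: { build job: 'Collection/Children/Bar', wait: false },
            baz: { build job: 'Collection/Children/Baz', wait: false }
        )
    }
}
```

Combining this with the job-name lookup from the previous answer removes the need to list the children by hand.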

Related

Expected behavior of Jenkins' Node Label Plugin - not running on all nodes

I am trying to use the Node Label plugin, adding a Label parameter and selecting Run on all nodes matching label in a pipeline job.
But the job only runs on one of the nodes, even though the nodes are discoverable via Show nodes on the build page.
I have also tried the All Nodes for Label Factory option the same plugin provides, but that fails when I want more than one label in the job, as described here: https://issues.jenkins-ci.org/browse/JENKINS-59431 (including the latest comment).
After many hours spent on Google, I believe that with Label and "run on all", all of the nodes should run concurrently. The only difference from the examples I've seen online is that they are not pipeline jobs, so concurrent builds are a selectable option there, whereas pipeline jobs only offer Do not allow concurrent builds (which is not selected).
In case anyone else has this problem, I decided to answer my own question.
I made the trigger job a pipeline in which I build the downstream job with two parameters, looping over all the nodes with the label:
def nodeArray = nodesByLabel label: "${params.labeled}", offline: false
for (item in nodeArray) {
    build job: "DownstreamJob", parameters: [
        [$class: 'LabelParameterValue', name: 'node', label: "${item}"],
        string(name: "nodeToRunIn", value: "${item}")
    ], propagate: false, wait: false
}
And DownstreamJob starts with:
node(params.nodeToRunIn) {
    // build steps
}
It may not be the nicest solution (the nicest would have been the plugin working as expected), but it is currently working.

Calling one parametrized job (aka "template") from many other jobs - is this a feasible approach?

I have jobs like this:
template__build_docker
build_dockerA
build_dockerB
...
build_dockerX
template__build_docker is a parametrized job, like this:
properties([
    parameters([string(name: 'docker_name', trim: true)])
])
node {
    ... build the container - git clone, etc ...
}
each of the build_dockerA, build_dockerB... do this :
stage('call build template') {
    build job: 'template__build_docker', parameters: [string(name: 'docker_name', value: 'MyDockerImageA')]
}
I know this is a little clunky, and that with declarative pipelines I could use master pipelines (I did not look into the details), but this is what I am working with.
QUESTION: Is this a feasible approach, or are there concerns that would make it an improper way to achieve pipeline reuse / refactoring?
I already know of one issue: if I quickly start more build_dockerX jobs than the configured number of executors, I run into a deadlock - the jobs cannot start the template job since no executor is available.
Are there other gotchas like this?
Short answer is YES, but you should use shared libraries. Look at a simple example written here, or the one where a whole declarative pipeline is defined and used from a Jenkinsfile. I also remember a question I posted here on SO which could serve as an example.
As far as gotchas go, I think it is still impossible to load one Jenkins library from another. Other than that it's perfect.
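To illustrate the shared-library approach: a custom step can wrap the template logic so each caller passes only the image name. The library name, file path, and step name below are assumptions, not taken from the question:

```groovy
// vars/buildDocker.groovy in a shared library (hypothetical name)
// Wraps the container-build template so callers pass only the image name.
def call(String dockerName) {
    node {
        stage("Build ${dockerName}") {
            // git clone, docker build, etc., as in the template job
            echo "Building ${dockerName}"
        }
    }
}
```

Each build_dockerX Jenkinsfile then reduces to `@Library('my-lib') _` followed by `buildDocker('MyDockerImageA')`, and no second executor is consumed waiting on a downstream job, which avoids the deadlock mentioned above.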

Trigger Multibranch Job from another

I have a job in Jenkins and I need to trigger another one when it ends (if it ends right).
The second job is a multibranch, so I want to know if there's any way to, when triggering this job, pass the branch I want to. For example, if I start the first job in the branch develop, I need it to trigger the second one for the develop branch also.
Is there any way to achieve this?
Just think about the multibranch job being a folder containing the real jobs named after the available branches:
Using Pipeline Job
When using the pipeline build step you'll have to use something like:
build(job: 'JOB_NAME/BRANCH_NAME'). Of course you may use a variable to specify the branch name.
Using Freestyle Job
When triggering from a Freestyle job you most probably have to
Use the parameterized trigger plugin as the plain old downstream build plugin still has issues triggering pipeline jobs (at least the version we're using)
As job name use the same pattern as described above: JOB_NAME/BRANCH_NAME
It should be possible to use a job parameter to specify the branch name here as well, though I didn't try it.
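In pipeline form, forwarding the current branch might look like this (the downstream job name is a placeholder):

```groovy
// In a multibranch pipeline, env.BRANCH_NAME holds the branch being built,
// so the matching branch job of the downstream multibranch project can be
// addressed directly as JOB_NAME/BRANCH_NAME.
build job: "SecondJob/${env.BRANCH_NAME}", wait: true
```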
Yes, you can call the downstream job by adding the post-build step Trigger/Call builds on other projects (you may need to install the Parameterized Trigger Plugin). In its Parameters section you map variables of the current job onto parameters of the downstream job. The parameters multibranch_PARAM1 and multibranch_PARAM2 must also be defined in the downstream job.
Sometimes you want to call one or more subordinate multibranch jobs and have them build all of their branches, not just one. A script can retrieve the branch names and build them.
Because the script calls the Jenkins API, it should be in a shared library to avoid sandbox restrictions. The script should clear non-serializable references before calling the build step.
Shared library script jenkins-lib/vars/mbhelper.groovy:
def callMultibranchJob(String name) {
    def item = jenkins.model.Jenkins.get().getItemByFullName(name)
    def jobNames = item.allJobs.collect { it.fullName }
    item = null // CPS -- remove reference to non-serializable object
    for (jobName in jobNames) {
        build job: jobName
    }
}
Pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    library 'jenkins-lib'
                    mbhelper.callMultibranchJob 'multibranch-job-one'
                    mbhelper.callMultibranchJob 'multibranch-job-two'
                }
            }
        }
    }
}

How can I trigger a Jenkins job upon completion of a set of other jobs?

The simple case where you just have one job depending on the completion of a set of other jobs is easy: either use a multijob or use the build flow plugin with parallel { ... }. The case I am trying to solve is more general, for example:
JobA depends on JobX and JobZ
JobB depends on JobY and JobZ
SuperJob depends on JobA and JobB
I want each of these jobs to trigger as soon as, and only when their prerequisites complete.
It would appear that neither the build flow plugin, the join plugin, nor the job DSL plugin has a good mechanism for this. I can, of course, just start all my jobs and have them poll Jenkins, but that would be quite ugly.
Another dead end is the "Upstream job trigger". I want to trigger off a specific build of a job, not just any run of an upstream job.
update
One answer mentions the multijob plugin. It can indeed be used to solve this problem, but the scheduling and total build time is almost always worst case. For example, assume this dependency graph, with the build times as indicated:
left1 (1m)     right1 (55m)
    |              |
left2 (50m)    right2 (2m)
    |______________|
           |
          zip
With the multijob plugin, you get:
Phase 1: left1, right1  // done in 55m
Phase 2: left2, right2  // done in 50m
Phase 3: zip            // total time 105m
If I had a way to trigger the next job exactly when all prerequisites are done, then the total build time would be just 57m.
The answer here should explain how I can obtain that behavior, preferably without writing my own polling mechanism.
update 1 1/2
In the comments below, it was suggested I group the left tasks and the right tasks into single subtasks. Yes, that works in this example, but it is very hard to do in general, and automatically. For example, assume there is an additional dependency: right2 depends on left1. With the build times given, the optimal build time should not change, since left1 finishes long before right2 is launched; but without this timing knowledge, you can no longer lump left1 and left2 into the same group without running the risk of left1 not being available when right2 needs it.
update 2
It looks like there is no ready made answer here. It seems I am going to have to code up a system groovy script myself. See my own answer to the question.
update 3
We ended up forking the multijob plugin and writing new logic within. I hope we can publish it as a new plugin after some cleanup...
Since you added the jenkins-workflow tag, I guess that using the Jenkins Workflow Plugin is OK for you, so perhaps this Workflow script fits your needs:
node {
    parallel left: {
        build 'left1'
        build 'left2'
    }, right: {
        build 'right1'
        build 'right2'
    },
    failFast: true
    build 'zip'
}
This workflow will trigger zip as soon as both parallel branches finish.
As far as I can tell, there is no published solution to my problem, so I have to roll my own. The following system groovy script works, but can obviously use some enhancements. Specifically, I really miss a nice simple one page build status overview...
This gist implements my solution, including proper handling of job cancellations: https://gist.github.com/cg-soft/0ac60a9720662a417cfa
You can use Build other projects as a Post-build Action in the configuration of one of your parent jobs, which will trigger the second parent job on a successful build. When the second parent job completes, trigger your child job the same way.
The Multijob plugin can be used to build a hierarchy of jobs.
First select Multijob Project when creating a new item; then in the configuration you can add as many jobs as you want. You also need to specify a phase for each job.

Simple way to temporary exclude a job from running on a node in a label group

I want to be able to temporarily exclude a specific job from running on a node in a label group.
jobA, jobB, jobC are tied to run on label general
nodeA,nodeB,nodeC have the label general on them.
Let's say that jobA starts to fail consistently on nodeA.
The only solutions I see today are taking nodeA offline for all jobs, or reconfiguring many jobs or nodes, which is pretty time consuming. We are using Job DSL to configure the jobs, so changing the job configuration requires a check-in.
An ideal situation for us would be to have a configuration on the node:
Exclude job with name: jobA
Is there some easy way to configure that jobA should temporarily only run on nodeB and node C and that jobB/C should still run on all nodes in label general?
Create a parameterized job to run some job-dsl configuration. Make one of the parameters a "Choice" listing the job names that you might want to change.
Another parameter would select a label defining the node(s) you want to run the job on. (You can have more than one label on a node).
The job-dsl script then updates the job label.
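As an illustration of the relabeling step (not the Job DSL route itself), the same change can be made directly against the Jenkins API from a system Groovy script; `jobName` and `newLabel` are assumed parameter names, not from the answer:

```groovy
// Hypothetical sketch: reassign a job's label expression from a
// parameterized admin job. 'jobName' and 'newLabel' are assumed to be
// supplied as build parameters.
import jenkins.model.Jenkins

def j = Jenkins.get()
def job = j.getItemByFullName(jobName, hudson.model.AbstractProject)
job.setAssignedLabel(j.getLabel(newLabel)) // e.g. 'general && !nodeA'
job.save()
</imports></imports>
```

Running such a script requires administrator permissions, which is another reason to keep it in a dedicated admin job rather than in the affected pipelines.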
This groovy script will enable/disable all jobs in a folder:
// "State" job parameter (choice, DISABLED|ENABLED)
def disable = 'DISABLED'.equalsIgnoreCase(State)
// "Folder" job parameter (choice or free-text)
def targetFolderPath = Folder.trim()

def folder = findFolder(jenkins, targetFolderPath)
println "Setting all jobs in '${folder.name}' to disabled=${disable}"
for (job in folder.getAllJobs()) {
    job.disabled = disable
    println "updated job: ${job.name}"
}
I just came across the same issue: I want the job to run on nodes with label "labelA", but not on nodes with label "labelB".
We may try this:
node('labelA && !labelB') {
}
Refer to: https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node
You can also use the NodeLabel Parameter Plugin in jobA. With this plugin you can define the nodes on which the job is allowed to execute: just add a node parameter and select all nodes but nodeA.
https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin
For a simple quick exclude, what I think the original question refers to as "The only solutions that I see today are ... reconfigure ... jobs or nodes" see this other answer: https://stackoverflow.com/a/29611255/598656
To stop using a node with a given label, one strategy is to simply change the label. E.g. suppose the label is
BUILDER
changing the label to
-BUILDER
will preserve information for the administrator but any job using BUILDER as the label will not select that node.
To allow a job to run on the node, you can change the node selection to
BUILDER||-BUILDER
A useful paradigm when shuffling labels around.
NOTE that jobs may still select using the prior label for a period of time.
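In pipeline form, the permissive selection described above looks like this:

```groovy
// Matches nodes still labeled BUILDER as well as ones relabeled to -BUILDER
node('BUILDER || -BUILDER') {
    // build steps
}
```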
