Trigger Multibranch Job from another - Jenkins

I have a job in Jenkins and I need to trigger another one when it finishes (if it finishes successfully).
The second job is a multibranch job, so I want to know whether there's any way to pass the target branch when triggering it. For example, if I start the first job on the develop branch, I need it to trigger the second one for the develop branch as well.
Is there any way to achieve this?

Just think of the multibranch job as a folder containing the real jobs, named after the available branches:
Using Pipeline Job
When using the pipeline build step, you'll have to use something like build(job: 'JOB_NAME/BRANCH_NAME'). Of course, you may use a variable to specify the branch name.
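For example, a minimal sketch (untested) that triggers the same branch of another multibranch job; 'downstream-multibranch' is a placeholder name:
// env.BRANCH_NAME is set automatically in multibranch pipeline builds.
// Branch names containing slashes (e.g. feature/foo) must be URL-encoded
// in the job path (feature%2Ffoo).
build job: "downstream-multibranch/${URLEncoder.encode(env.BRANCH_NAME, 'UTF-8')}"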
Using Freestyle Job
When triggering from a Freestyle job, you most probably have to:
Use the Parameterized Trigger Plugin, as the plain old downstream build plugin still has issues triggering pipeline jobs (at least in the version we're using).
As the job name, use the same pattern as described above: JOB_NAME/BRANCH_NAME.
It should be possible to use a job parameter to specify the branch name here, though I haven't tried it.

Yes, you can call a downstream job by adding the post-build step "Trigger/call builds on other projects" (you may need to install the Parameterized Trigger Plugin),
where in the Parameters section you define variables for the downstream job, associated with variables from the current job.
The parameters multibranch_PARAM1 and multibranch_PARAM2 must also be configured in the downstream job.
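If the upstream job is itself a pipeline, a rough equivalent of that post-build step would be the following sketch (the downstream job path and parameter values are examples only):
build job: 'downstream-multibranch/develop', parameters: [
    string(name: 'multibranch_PARAM1', value: 'value1'),
    string(name: 'multibranch_PARAM2', value: 'value2')
]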

Sometimes you want to call one or more subordinate multibranch jobs and have them build all of their branches, not just one. A script can retrieve the branch names and build them.
Because the script calls the Jenkins API, it should be in a shared library to avoid sandbox restrictions. The script should clear non-serializable references before calling the build step.
Shared library script jenkins-lib/vars/mbhelper.groovy:
def callMultibranchJob(String name) {
    def item = jenkins.model.Jenkins.get().getItemByFullName(name)
    def jobNames = item.allJobs.collect { it.fullName }
    item = null // CPS -- remove reference to non-serializable object
    for (jobName in jobNames) {
        build job: jobName
    }
}
Pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    library 'jenkins-lib'
                    mbhelper.callMultibranchJob 'multibranch-job-one'
                    mbhelper.callMultibranchJob 'multibranch-job-two'
                }
            }
        }
    }
}

Jenkins Job DSL is not idempotent and triggers a branch scan every run

I use the Job DSL to create multibranchPipelineJob jobs.
In my Job DSL script I create some multibranchPipelineJobs. If I run the seed job, no matter whether the config changed or not, it triggers a branch scan for all the multibranchPipelineJobs. This must mean it's not idempotent and is just reapplying the config and saving it, causing Jenkins to trigger scans for the jobs. Is this expected? Is there a way to have the Job DSL check whether there are changes first, before just clobbering the whole thing every time?
I want to confirm the behavior I'm seeing is expected, or if I'm doing something wrong.
Normally this is not the case, and the Job DSL will not re-apply the configuration (though I have not used the DSL with multibranchPipelineJobs yet).
However, some plugins do silently modify the job configuration once a build has been performed. In those cases, running the DSL script will re-apply (or rather: restore) the job configuration according to the DSL spec.
You can trace such cases with the "Job Configuration History" plugin. Silent modification of the original job config will appear as modifications by "system" in the changelog.
I recently had the same issue. It is described here: https://issues.jenkins-ci.org/browse/JENKINS-43693
The solution is to make sure all your branch sources have an id specified that does not change. When Job DSL runs your definitions, it will generate a new ID if one is not specified. The multibranch pipelines then lose track of which branches have been built, hence the rebuilding. This ID must be unique across all jobs, so I usually use a variation of the job name itself.
For Bitbucket branch sources, it's specified like so:
multibranchPipelineJob(jobName) {
    branchSources {
        branchSource {
            source {
                bitbucket {
                    id 'some-unique-constant-id'
                    // etc.
                }
            }
        }
    }
}
For git, it is similar:
multibranchPipelineJob(jobName) {
    branchSources {
        branchSource {
            source {
                git {
                    id 'must-be-unique'
                }
            }
        }
    }
}
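Since the ID must be unique yet stable, one way to follow the advice above is to derive it from the job name itself; a sketch (the remote URL is an assumption):
multibranchPipelineJob(jobName) {
    branchSources {
        branchSource {
            source {
                git {
                    id "${jobName}-git" // unique per job, stable across seed runs
                    remote 'https://example.com/repo.git' // assumed remote
                }
            }
        }
    }
}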

Ensure Jenkins pipeline uses the same node for downstream jobs

Case:
I have 3 machines (A, B, C) used as slaves (sharing the same node label, e.g. 'build').
I have a pipeline which may trigger different downstream jobs, and I need to make sure that the job and all its downstream jobs use the same node (for sharing some files, etc.). How can I do that?
a) I could pass the node label to the downstream job, but I am not sure the downstream job will pick the same node (the parent job runs on slave 'A' and I pass the node label 'build' to the downstream job, but the downstream job might pick slave 'B').
b) Is there some way to get the slave the pipeline is running on at runtime, so I can pass that slave's name to the downstream job?
Or is there a better way to do this?
I advise you to try the NodeLabel Parameter Plugin.
Once it is installed, check the 'This project is parameterized' option and select 'Node' from the 'Add Parameter' drop-down.
It will populate all nodes in a drop-down when building the job with parameters.
It also has some other options which may help you.
Most important question to me would be: Why do they need to run on the very same node?
Anyway, one way to achieve this would be to retrieve the name of the node inside the node block of the first pipeline, like this (CAUTION: I was not able to verify the code below):
// Code for the upstream job
@NonCPS
def getNodeName(def context) {
    context.toComputer().name
}

def nodeName = 'undefined'
node('build') {
    // getContext(hudson.FilePath) returns the current workspace,
    // from which the computer (and thus the node name) can be derived
    nodeName = getNodeName(getContext(hudson.FilePath))
}
build job: 'downstream', parameters: [string(name: 'nodeName', value: nodeName)]
In the downstream job you use that string parameter as input to your node block. Of course, you should make sure that the downstream job is actually parameterized in the first place, with a string parameter named nodeName:
node(nodeName) {
    // do some stuff
}
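If you prefer to declare that parameter from the downstream pipeline itself rather than through the UI, a properties step can do it; a sketch (the default label 'build' is an assumption):
properties([
    parameters([
        string(name: 'nodeName', defaultValue: 'build', description: 'Node to run on')
    ])
])
node(params.nodeName) {
    // do some stuff
}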
Even with static agents, workspaces are eventually cleaned up, so don't rely on the existence of workspace files across builds.
Just archive whatever you need in the upstream job (using the archive step) and then use the Copy Artifact Plugin in downstream jobs to get what you need there. You'll probably need to parameterize the downstream jobs to pass them a reference to the upstream artifact(s) you need (there are plenty of selectors available in the Copy Artifact plugin that you can play with to achieve what you want).
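A minimal sketch of that pairing (job and parameter names are hypothetical; assumes the Copy Artifact plugin is installed):
// Upstream: archive the files and hand this build's number to the child.
archiveArtifacts artifacts: 'build/output/**'
build job: 'downstream', parameters: [
    string(name: 'UPSTREAM_BUILD', value: env.BUILD_NUMBER)
]

// Downstream: fetch exactly that upstream build's artifacts.
copyArtifacts projectName: 'upstream', selector: specific(params.UPSTREAM_BUILD)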
If you are triggering child jobs manually from a pipeline, then you can use syntax like this to pass a specific node label:
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester1']]
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester2']]
You should be able to get the name of the current node with ${env.NODE_NAME}.
Found at: How to trigger a jenkins build on specific node using pipeline plugin?
Reference docs: https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
But yes, if you want to manipulate files from this job in other jobs, then you will need to use, e.g., the aforementioned Copy Artifact plugin, because the jobs' workspaces are independent and each job will have different content.

Jenkins Job DSL Plugin: How to Modify Parameters on other jobs

I want to create a job in Jenkins which modifies an existing parameter on another job.
I'm using the Job DSL Plugin. The code I'm using is:
job('jobname') {
    using('jobname')
    parameters {
        choiceParam('PARAMETER1', ['newValue1', 'newValue2'], '')
    }
}
However, this only adds another parameter with the same name in the other job.
As an alternative, I'm trying to delete all parameters and start from scratch, but I haven't found a way to do that using Job DSL (not even with the configure block).
Another alternative would be to define the other job completely from scratch, but that would make the job too complicated, especially if I want to apply this change to many jobs at a time.
Is there a way to edit or delete lines in the config.xml file using the Job DSL plugin?

How to trigger multiple jobs at once in Jenkins pipeline?

I've got a Jenkins job hierarchy looking something like this:
Collection/
    ParentJob
    Children/
        Foo
        Bar
        Baz
ParentJob has a few different configurations and the jobs in Children/ need to be built for each of those configurations. These configurations are basically just checking out different branches and building those branches. Additionally, part of each configuration of ParentJob has to be completed before the child jobs can run.
How can I trigger all the jobs in Children after the necessary parts of each ParentJob configuration are finished?
My first thought was to just put a build 'Children/*' step in ParentJob's pipeline, but it seems Jenkins does not support wildcards in that context.
I could explicitly list all the jobs, but that would be tedious (there's several dozen child jobs) and prone to breakage as child jobs may be added or removed.
Ideally a solution would allow me to just set up a new child job without touching anything else and have it automatically triggered the next time ParentJob runs.
You could get a list of the child jobs and use a for loop to build them. I haven't tested this, but I see no reason why it would not work.
I structure my job folders in a similar fashion to take advantage of naming conventions and for role-based security.
import jenkins.model.*

def childJobNames = Jenkins.instance.getJobNames().findAll { it.contains("Collection/Children") }
for (def childJobName : childJobNames) {
    build job: childJobName, parameters: [
        string(name: 'Computer', value: manager.ComputerName)
    ],
    wait: false
}
http://javadoc.jenkins.io/index.html?hudson/model/Hudson.html
You need to use Jenkins Workflow or Pipeline; then you can run a stage, then some stages in parallel, then a sequential set of stages, and so on. This Stack Overflow question and answer seems to be a good reference.
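Combining the job lookup above with a parallel step might look like this sketch (untested; the Jenkins API call needs script approval or a shared library):
def childJobNames = jenkins.model.Jenkins.instance.getJobNames().findAll {
    it.startsWith('Collection/Children/')
}
def branches = [:]
for (name in childJobNames) {
    def jobName = name // capture for the closure
    branches[jobName] = { build job: jobName }
}
parallel branches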

How can I get the details of the project which triggered a specific job?

I am triggering a job (child job) on 'Server B' from a job (parent job) on 'Server A' through a Python script. I have 2-3 parent jobs, so I want to know which parent job triggered the child job. How can I find that out?
Can I pass the parent job's name to the child job?
Or
Can I get the parent's name directly from the child job (via an environment variable / Python scripts)?
Every build has an environment variable JOB_NAME. You can pass this as a string parameter to your child job.
The following description is provided in /env-vars.html:
JOB_NAME
Name of the project of this build, such as "foo" or "foo/bar". (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/})
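A sketch of passing it along when triggering the child job from a pipeline (the parameter name PARENT_JOB is just an example):
build job: 'child-job', parameters: [
    string(name: 'PARENT_JOB', value: env.JOB_NAME)
]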
Passing the job name as an environment variable, as mentioned by OltzU, may be the way to go, but that depends on how you are triggering the child job. If you are triggering the child job directly from the parent job using a post-build step, you can use something like the Parameterized Remote Trigger plugin to pass the job name along. If you are using a script in the parent job to fire off the child job, you can pass the job name along as a parameter.
If you can't pass the triggering job as a parameter, you can programmatically get the build trigger(s) using Groovy. Groovy is really the only language that fully integrates with the Jenkins API, so if you want to use another language (Python), you are stuck with the REST API or the Jenkins CLI, which are both limited in what they can give you (e.g. to my knowledge, neither can give you the triggering job).
If you want to use groovy to get the trigger job, you will need the Groovy Plugin, which you will run as build step in your child job. Here's a snippet of code to get the chain of upstream jobs that triggered your build. You may need to modify the code depending on the type of trigger that is used.
def getUpstreamProjectTriggers(causes) {
    def upstreamCauses = []
    for (cause in causes) {
        if (cause.class.toString().contains("UpstreamCause")) {
            upstreamCauses.add(cause.getUpstreamProject())
        }
    }
    return upstreamCauses
}

getUpstreamProjectTriggers(build.getCauses())
From here, if you want to use the triggering job in, say, a python script, you would need to use groovy to set the triggering job in an environment variable. This SO thread gives more info on that, or you can skip to my answer in that thread to see how I do it.
In the child Jenkinsfile this Groovy code will get the name of the triggering job:
String getTriggeringProjectName() {
    if (currentBuild.upstreamBuilds) {
        return currentBuild.upstreamBuilds[0].projectName
    } else {
        return ""
    }
}
currentBuild.upstreamBuilds is a list of RunWrapper objects.
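For example, the child build could simply log it (a sketch):
echo "Triggered by: ${getTriggeringProjectName() ?: 'manual or other cause'}"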
You could use an additional parameter for your child job.
