Jenkins: Run only if other job is clean

I have a full set of unit tests I'd like to run daily overnight in Jenkins, but only if my application has built correctly in another job. I DON'T want the unit tests to trigger throughout the day as commits are added to the application.
How do I configure this? To restate, there are two Jenkins jobs, A and B:
* A runs on each checkin, unless B is running, in which case it waits for B.
* B runs at midnight, IFF A is in a good state. If A is running, B waits for A.
I already have A set up as "A runs each checkin."

I assume you are using a Jenkins pipeline. There might be many ways to do this, but I would address it by adding a new stage to job B that checks the status of job A, plus a utility function to do the check.
stage('Check job A status') {
    // If A is running, B waits for A (here: up to 60 minutes).
    if (checkStatus() == "RUNNING") {
        timeout(time: 60, unit: 'MINUTES') {
            waitUntil {
                def status = checkStatus()
                return (status == "SUCCESS" || status == "FAILURE" || status == "UNSTABLE" || status == "ABORTED")
            }
        }
    }
    // Proceed with B only when A is in a good state.
    if (checkStatus() != "SUCCESS") {
        error('Stopping job B because job A is not successful.')
    }
}

def checkStatus() {
    // Query job A's last build via the JSON API (httpRequest comes from the
    // HTTP Request plugin; on a secured instance pass credentials as well).
    // Replace JOB_A_NAME with the actual name of job A. While a build is
    // still in progress the API reports "result": null, so map that to "RUNNING".
    def response = httpRequest "https://jenkins.example.com/job/JOB_A_NAME/lastBuild/api/json"
    def statusJson = new groovy.json.JsonSlurper().parseText(response.content)
    return statusJson['result'] ?: "RUNNING"
}

My answer is a bit late to the party here (sorry :-7), but this is a useful question that wasn't answered properly. Not your fault: it took me/us a few years to find the best ways of doing this (originally I had post-build Groovy and other scripts doing funky things like triggering other jobs). Jenkins actually has quite a flexible choice of methods for jobs that need to interact with one another.
There is a built-in "Post-build Action: Build other projects", and there are a couple of plugins which can be used. The built-in post-build action is probably the most suitable here, and the "Lockable Resources Plug-in" can be used to make the jobs mutually exclusive.
* SIMPLEST ANSWER: *
Install the Lockable Resources plugin, add a lockable resource named "build_or_test", and configure jobs A and B to lock on that resource.
Configure the build job A: add the Post-build Action "Build other projects",
building job B if job A is stable (see the sketch below).
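For a pipeline job B, a minimal sketch of this setup (assuming the Lockable Resources plugin with a resource named "build_or_test"; the declarative upstream trigger stands in for the post-build action on A, and the test script name is hypothetical):

pipeline {
    agent any
    options {
        // Jobs A and B both lock this resource, so they never run concurrently.
        lock('build_or_test')
    }
    triggers {
        // Run B only after a stable (successful) build of A.
        upstream(upstreamProjects: 'A', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Unit tests') {
            steps {
                sh './run-unit-tests.sh' // hypothetical test script
            }
        }
    }
}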
* LIST of useful built-ins and plugins: *
It is also useful to use the FSTrigger plugin: build jobs or other jobs may generate logs, image files, or test reports, and jobs can be triggered to run when these files or directories appear or are updated. Jobs in a remote Jenkins, or external to Jenkins entirely, can trigger jobs this way.
Built in Post-build Action:
Build other projects
* Trigger only if build is stable
* Trigger even if the build is unstable
* Trigger even if the build fails
BuildResultTrigger Plug-in -
This plugin makes it possible to monitor the build results of other jobs.
Similar to "Post-build Action: build other projects" only at top of job config as a trigger with cron schedule.
Filesystem Trigger Plug-in -
The plug-in makes it possible to monitor changes of a file or a set of files in a folder.
Parameterized Trigger Plug-in (which adds Post-build Action:
Trigger parameterized build on other projects)
Similar to "Post-build Action: build other projects but convenient to pass build information e.g. in parameters.ini style file or boolean or other params from one job to another.
Lockable Resources Plug-in
This plugin allows you to define external resources (such as printers, phones, computers) that can be locked by builds. If a build requires an external resource which is already locked, it will wait for the resource to be free.

Off the top of my head, I can't think of a way to do exactly what you want. But that might be because it is probably not the best way to handle it.
In job A, you should probably just not deploy/deliver the artifacts to the place where B will look unless the build is successful. Then B will always run against a successful build from A.
But without understanding your entire setup or environment, I can't really comment on what is "right". But maybe you need to rethink the problem?
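To illustrate, a minimal pipeline sketch of that idea, publishing artifacts only on success (the build script and artifact path are hypothetical):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh' // hypothetical build script
            }
        }
    }
    post {
        success {
            // Only successful builds publish their artifacts, so a downstream
            // job (e.g. via the Copy Artifact plugin) always sees a good build.
            archiveArtifacts artifacts: 'build/**'
        }
    }
}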

You can publish a "state" on completion of job A, say in a property file in your source code repo, or even in a DB.
This value can be a boolean: while job A is running, the value stays false, and it only becomes true once job A has built successfully.
Now, when job B gets triggered, it first checks whether the above value is true.
There seems to be no plugin to support this. Most plugins will trigger job B as soon as job A is done (i.e. they monitor the status of job A).
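A minimal sketch of job B's side, assuming job A writes a properties file such as /shared/state/jobA.properties containing jobA.success=true on success (readProperties comes from the Pipeline Utility Steps plugin; all paths and names are illustrative):

node {
    stage('Check job A state') {
        // Read the state that job A published on completion.
        def props = readProperties file: '/shared/state/jobA.properties'
        if (props['jobA.success'] != 'true') {
            error 'Stopping job B: job A is not in a good state.'
        }
    }
    stage('Run tests') {
        sh './run-tests.sh' // hypothetical test script
    }
}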

Related

Trigger Multibranch Job from another

I have a job in Jenkins and I need to trigger another one when it ends (if it ends successfully).
The second job is a multibranch job, so I want to know if there is any way to pass the branch I want when triggering it. For example, if I start the first job on the develop branch, I need it to trigger the second one for the develop branch as well.
Is there any way to achieve this?
Just think of the multibranch job as a folder containing the real jobs, named after the available branches:
Using Pipeline Job
When using the pipeline build step you'll have to use something like:
build(job: 'JOB_NAME/BRANCH_NAME'). Of course you may use a variable to specify the branch name.
Using Freestyle Job
When triggering from a Freestyle job you most probably have to:
* use the Parameterized Trigger plugin, as the plain old downstream build plugin still has issues triggering pipeline jobs (at least the version we're using);
* as the job name, use the same pattern as described above: JOB_NAME/BRANCH_NAME.
It should be possible to use a job parameter to specify the branch name here, though I haven't tried it.
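For the pipeline case, a minimal sketch (the downstream job name is hypothetical):

// Trigger the matching branch of a downstream multibranch job.
// Note: branch names containing slashes appear URL-encoded in the
// job path (e.g. feature%2Ffoo), so encode them if needed.
build job: "downstream-multibranch/${env.BRANCH_NAME}"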
Yes, you can call a downstream job by adding the post-build step "Trigger/Call builds on other projects" (you may need to install the Parameterized Trigger plugin), where in the Parameters section you define vars for the downstream job, associated with vars from the current job.
Also, multibranch_PARAM1 and multibranch_PARAM2 must be configured in the downstream job.
Sometimes you want to call one or more subordinate multibranch jobs and have them build all of their branches, not just one. A script can retrieve the branch names and build them.
Because the script calls the Jenkins API, it should be in a shared library to avoid sandbox restrictions. The script should clear non-serializable references before calling the build step.
Shared library script jenkins-lib/vars/mbhelper.groovy:
def callMultibranchJob(String name) {
    // Look up the multibranch project (a folder of per-branch jobs).
    def item = jenkins.model.Jenkins.get().getItemByFullName(name)
    def jobNames = item.allJobs.collect { it.fullName }
    item = null // CPS -- remove reference to non-serializable object
    for (jobName in jobNames) {
        build job: jobName
    }
}
Pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    library 'jenkins-lib'
                    mbhelper.callMultibranchJob 'multibranch-job-one'
                    mbhelper.callMultibranchJob 'multibranch-job-two'
                }
            }
        }
    }
}

How to trigger multiple jobs at once in Jenkins pipeline?

I've got a Jenkins job hierarchy looking something like this:
Collection/
    ParentJob
    Children/
        Foo
        Bar
        Baz
ParentJob has a few different configurations and the jobs in Children/ need to be built for each of those configurations. These configurations are basically just checking out different branches and building those branches. Additionally, part of each configuration of ParentJob has to be completed before the child jobs can run.
How can I trigger all the jobs in Children after the necessary parts of each ParentJob configuration are finished?
My first thought was to just put a build 'Children/*' step in ParentJob's pipeline, but it seems Jenkins does not support wildcards in that context.
I could explicitly list all the jobs, but that would be tedious (there's several dozen child jobs) and prone to breakage as child jobs may be added or removed.
Ideally a solution would allow me to just set up a new child job without touching anything else and have it automatically triggered the next time ParentJob runs.
You could get a list of the child jobs and use a for loop to build them. I haven't tested this, but I see no reason why it would not work.
I structure my job folders in a similar fashion to take advantage of naming conventions and for role-based security.
import jenkins.model.*

// Collect the full names of all jobs under the Children folder.
def childJobNames = Jenkins.instance.getJobNames().findAll { it.contains("Collection/Children") }
for (def childJobName : childJobNames)
{
    build job: childJobName, parameters: [
        string(name: 'Computer', value: manager.ComputerName)
    ],
    wait: false
}
http://javadoc.jenkins.io/index.html?hudson/model/Hudson.html
You need to use Jenkins Workflow or Pipeline; then you can run a stage, then some stages in parallel, then a sequential set of stages, etc. This StackOverflow question and answer seems to be a good reference.
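For illustration, a minimal scripted-pipeline sketch of that shape (all job and script names are hypothetical):

node {
    stage('Prepare') {
        sh './prepare.sh' // hypothetical setup step
    }
    stage('Fan out') {
        // Run the child jobs in parallel once the prerequisite stage is done.
        parallel(
            foo: { build 'Collection/Children/Foo' },
            bar: { build 'Collection/Children/Bar' }
        )
    }
    stage('Collect') {
        sh './collect-results.sh' // hypothetical
    }
}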

How can I trigger a Jenkins job upon completion of a set of other jobs?

The simple case where you just have one job depending on the completion of a set of other jobs is easy: either use a multijob or use the build flow plugin with parallel { ... }. The case I am trying to solve is more general, for example:
JobA depends on JobX and JobZ
JobB depends on JobY and JobZ
SuperJob depends on JobA and JobB
I want each of these jobs to trigger as soon as, and only when their prerequisites complete.
It would appear that neither the Build Flow plugin, nor the Join plugin, nor the Job DSL plugin has a good mechanism for this. I can, of course, just start all my jobs and have them poll Jenkins, but that would be quite ugly.
Another dead end is the "Upstream job trigger". I want to trigger off a specific build of a job, not just any run of an upstream job.
update
One answer mentions the multijob plugin. It can indeed be used to solve this problem, but the scheduling and total build time is almost always worst case. For example, assume this dependency graph, with the build times as indicated:
left1 (1m)    right1 (55m)
    |             |
left2 (50m)   right2 (2m)
    |_____________|
           |
          zip
With the multijob plugin, you get:
Phase 1:
    left1, right1   // done in 55m
Phase 2:
    left2, right2   // done in 50m
Phase 3:
    zip             // total time 105m
If I had a way to trigger the next job exactly when all prerequisites are done, then the total build time would be just 57m.
The answer here should explain how I can obtain that behavior, preferably without writing my own polling mechanism.
update 1 1/2
In the comments below, it was suggested I group the left tasks and the right tasks into a single subtask each. Yes, this can be done in this example, but it is very hard to do in general, and automatically. For example, assume there is an additional dependency: right2 depends on left1. With the build times given, the optimal build time should not change, since left1 is long done before right2 is launched. But without this knowledge, you can no longer lump left1 and left2 into the same group without running the risk of not having right1 available.
update 2
It looks like there is no ready made answer here. It seems I am going to have to code up a system groovy script myself. See my own answer to the question.
update 3
We ended up forking the multijob plugin and writing new logic within. I hope we can publish it as a new plugin after some cleanup...
Since you added the jenkins-workflow tag, I guess that using the Jenkins Workflow plugin is OK with you, so perhaps this Workflow script fits your needs:
node {
    parallel left: {
        build 'left1'
        build 'left2'
    }, right: {
        build 'right1'
        build 'right2'
    },
    failFast: true
    build 'zip'
}
This workflow will trigger zip as soon as both parallel branches finish.
As far as I can tell, there is no published solution to my problem, so I had to roll my own. The following system Groovy script works, but could obviously use some enhancements. Specifically, I really miss a nice simple one-page build status overview...
This gist implements my solution, including proper handling of job cancellations: https://gist.github.com/cg-soft/0ac60a9720662a417cfa
You can use "Build other projects" as a Post-build Action in the configuration of one of your parent jobs, which would trigger the second parent job on a successful build. When the second parent job completes, trigger your child job by the same method.
The Multijob plugin can be used to build a hierarchy of jobs.
First select Multijob Project under "New Item", and then in the configuration you can add as many jobs as you want. You also need to specify a phase for each job.

How to run the same job multiple times in parallel with Jenkins?

I'm testing Jenkins to see if it will fit our build and testing framework. I found that Jenkins and its available plugins fit most of our needs. Except that I can't seem to find help on how to do one particular type of task.
We are creating application for embedded devices. We have 100s of tests that need to be run on these devices. If we run all the tests on one device after a build then it will take several hours to get the results. However, if we run the tests on 100 of the devices in parallel then we can get results in much shorter time.
All the tests will have very similar starting point. A test script is called with IP address of device to run the test on and user name/pw. The script would do the necessary test on the device and report back pass/fail for each test item.
I think the long/painful way of doing this is writing 100 jobs in Jenkins, each calling a different test script directly (with the above parameters), and running these in parallel using available plugins. However, maintaining all these jobs will be very difficult in the long run.
So, the better way to do this would be to create a Job (let's call it child_tester) that can take parameters such as: test script name, IP address of device, user name/pw, etc. Then use another job (let's call it mother_tester) to call child_tester job 100 times with different IP addresses and run them in parallel. I would need some way of accumulating all the test results of each individual run of the child_tester jobs and report them back to mother_tester.
My question is there a plugin or any way of accomplishing this in Jenkins? I have looked into the information of the plugins called "Build Flow", "Parallel Test Executor", and "Parameterized Trigger". However, they don't seem to fit my needs.
I understand you've looked into the Build Flow plugin, but I'm not sure why you've dismissed it. Perhaps you can point out the holes in my proposal.
Assuming you have enough executors in your system to run jobs in parallel, I think that the Build Flow plugin and Build Flow Test Aggregator plugin can do what you want.
The Build Flow plugin supports running jobs in parallel. I don't see any reason why Build Flow could not schedule your "child" job to run in parallel with different parameters.
The Build Flow Test Aggregator grabs test results from the scheduled builds of a Build Flow job, so your "child" job will need to publish its own test results.
You will need to configure your "child" job so that it can run in parallel by checking the "Execute concurrent builds if necessary" in the job configuration.
Whatever set of slaves provide the connection to the embedded devices will need enough executors to run your jobs in parallel.
Update: with the simple Build Flow definition:
parallel (
    { build("dbacher flow child", VALUE: 1) },
    { build("dbacher flow child", VALUE: 2) },
    { build("dbacher flow child", VALUE: 3) },
    { build("dbacher flow child", VALUE: 4) }
)
I get the output:
parallel {
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Build dbacher flow child #5 started
    Build dbacher flow child #6 started
    Build dbacher flow child #7 started
    Build dbacher flow child #8 started
    dbacher flow child #6 completed
    dbacher flow child #7 completed
    dbacher flow child #5 completed
    dbacher flow child #8 completed
}
The job history shows that all four jobs are scheduled within seconds of each other. But the job build step contains an artificial delay (sleep) that would prevent any single build from completing that quickly.
Update 2: Here is an example of generating the list of parallel tasks dynamically from another data structure:
// create a closure to build the test job for each parameter value
def paramValues = (1..4)
def testJobs = []
for (param in paramValues) {
    // define the job parameters in the loop body (not inside the closure,
    // and not as the bare loop variable) so each closure gets its own copy
    def jobParams = [VALUE: param]
    def testJob = {
        // call build
        build(jobParams, "dbacher flow child")
    }
    println jobParams
    testJobs.add(testJob)
}
parallel(testJobs)
The list passed to parallel is a list of closures that call build with unique parameters. I had to make sure to define the job parameters outside of the closure to ensure the jobs would be scheduled separately.
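To illustrate the pitfall being avoided (a sketch, not from the original answer): Groovy closures capture variables by reference, and a for loop reuses a single loop variable, so capturing it directly would schedule every branch with the final value:

// Buggy sketch: all four closures share the loop variable "param",
// so every branch would build with VALUE: 4.
for (param in paramValues) {
    testJobs.add({ build("dbacher flow child", VALUE: param) })
}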
I cribbed the syntax from another answer and this thread on the Jenkins mailing list.
Please make sure that the number of executors under Manage Jenkins -> Manage Nodes is greater than the number of individual jobs in the MultiJob project.
By default I believe it is 2, so it needs to be increased.

Jenkins: How to put a new job for each scm-change into the build-queue?

I am using Jenkins for continuous integration.
I configured a job which polls the SCM for changes. I have one executor. When there is more than one SCM change while the executor is already working, still only one job is added to the queue, where I want it to queue more than one job.
I already tried making my job "parameterized" as a workaround, but as long as polling does not set any parameters¹ (not even the default ones²), this does not help either.
Is there any way to get for each scm-change a new build in the job-queue?
[1] https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build
[2] I tried to combine this scenario with https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Dynamic+Parameter+Plug-in
You can write a script with the Jenkins Adaptive Plugin to be triggered by SVN and create a new build regardless of what is currently running.
Another option would be to create two jobs, one that monitors SCM and one that runs the build. Every time there is an SCM change you have the first job add an instance of the second to the queue and complete immediately so that it can continue to poll.
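A sketch of that second option in pipeline terms (job names are hypothetical): the poller job only enqueues the real build and exits immediately. Note that Jenkins coalesces identical queued items, so passing a unique parameter, as the answer below also does, keeps each change as its own queued build:

node {
    // Enqueue the real build job without waiting, so this poller job finishes
    // immediately. The unique parameter prevents Jenkins from collapsing
    // multiple identical queued requests into a single one.
    build job: 'real-build', wait: false, parameters: [
        string(name: 'TRIGGER_ID', value: "${env.BUILD_NUMBER}-${System.currentTimeMillis()}")
    ]
}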
The described scenario is possible in Jenkins using a workaround which requires two steps:
[JobA_trigger] One Job which triggers another job 'externally', via curl or jenkins-cli.jar¹.
[JobA] The actual job which has to be a parametrized one.
In my setup, JobA_trigger polls SCM periodically. If there is a change, JobA is triggered via curl and the current dateTime is submitted². This 'external' triggering is necessary to submit parameters to JobA.
# JobA_trigger "execute shell"
curl "${JENKINS_URL}job/JobA/buildWithParameters?SVN_REVISION=$(date +%Y-%m-%d)%20$(date +%H:%M:%S)"
# SVN_REVISION, example (decoded): "2012-11-07 12:56:50" ("%20" is a url-encoded space)
JobA itself is parametrized and accepts a string param "SVN_REVISION". Additionally, I had to change the SVN URL to:
# Outer curly brackets are the syntax for SVN revision dates³ - omit them if working with a revision number.
https://svn.someaddress.com/trunk@{${SVN_REVISION}}
Using this workaround, for each SCM change there is a new run of JobA queued, with the related SVN revision/dateTime attached as a parameter, defining the software state tested by that run.
¹ https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
² I decided to have dateTime-bases updates instead of revision-based ones, as I have svn-externals which would be updated to HEAD each, if I would be working revision-based.
³ http://svnbook.red-bean.com/en/1.7/svn.tour.revs.specifiers.html#svn.tour.revs.dates
