I'm testing Jenkins to see if it will fit our build and testing framework. I found that Jenkins and its available plugins fit most of our needs, except that I can't seem to find help on how to do one particular type of task.
We are creating applications for embedded devices. We have hundreds of tests that need to be run on these devices. If we run all the tests on one device after a build, it will take several hours to get the results. However, if we run the tests on 100 devices in parallel, we can get results in a much shorter time.
All the tests have a very similar starting point: a test script is called with the IP address of the device to run on and a username/password. The script does the necessary testing on the device and reports back pass/fail for each test item.
I think the long/painful way of doing this is writing 100 jobs in Jenkins, each calling a different test script directly (with the above parameters), and running these in parallel using the available plugins. However, maintaining all these jobs would be very difficult in the long run.
So the better way would be to create a job (let's call it child_tester) that takes parameters such as the test script name, the IP address of the device, a username/password, etc. Then use another job (let's call it mother_tester) to call the child_tester job 100 times with different IP addresses and run them in parallel. I would need some way of accumulating the test results of each individual child_tester run and reporting them back to mother_tester.
My question: is there a plugin, or any other way of accomplishing this in Jenkins? I have looked into the plugins called "Build Flow", "Parallel Test Executor", and "Parameterized Trigger", but they don't seem to fit my needs.
I understand you've looked into the Build Flow plugin, but I'm not sure why you've dismissed it. Perhaps you can point out the holes in my proposal.
Assuming you have enough executors in your system to run jobs in parallel, I think that the Build Flow plugin and Build Flow Test Aggregator plugin can do what you want.
The Build Flow plugin supports running jobs in parallel. I don't see any reason why Build Flow could not schedule your "child" job to run in parallel with different parameters.
The Build Flow Test Aggregator grabs test results from the scheduled builds of a Build Flow job, so your "child" job will need to publish its own test results.
You will need to configure your "child" job so that it can run in parallel, by checking the "Execute concurrent builds if necessary" option in the job configuration.
Whatever set of slaves provides the connection to the embedded devices will need enough executors to run your jobs in parallel.
Update: with the simple Build Flow definition:
parallel (
    { build("dbacher flow child", VALUE: 1) },
    { build("dbacher flow child", VALUE: 2) },
    { build("dbacher flow child", VALUE: 3) },
    { build("dbacher flow child", VALUE: 4) }
)
I get the output:
parallel {
Schedule job dbacher flow child
Schedule job dbacher flow child
Schedule job dbacher flow child
Schedule job dbacher flow child
Build dbacher flow child #5 started
Build dbacher flow child #6 started
Build dbacher flow child #7 started
Build dbacher flow child #8 started
dbacher flow child #6 completed
dbacher flow child #7 completed
dbacher flow child #5 completed
dbacher flow child #8 completed
}
The job history shows that all four jobs are scheduled within seconds of each other. But the job build step contains an artificial delay (sleep) that would prevent any single build from completing that quickly.
Update 2: Here is an example of generating the list of parallel tasks dynamically from another data structure:
// create one closure per parameter value that will run the test job
def paramValues = (1..4)
def testJobs = []
for (param in paramValues) {
    def jobParams = [VALUE: param]   // declared inside the loop, so each closure captures its own copy
    def testJob = {
        // schedule the child job with this iteration's parameters
        build(jobParams, "dbacher flow child")
    }
    println jobParams
    testJobs.add(testJob)
}
parallel(testJobs)
The list passed to parallel is a list of closures that call build with unique parameters. I had to make sure to define the job parameters outside of the closure body (but inside the loop) to ensure each job would be scheduled with its own parameters.
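For contrast, here is a sketch of the variant that bites (my assumption about the failure mode, based on standard Groovy closure capture): if the map is declared outside the loop, every closure shares the same variable, and all four builds can end up scheduled with the last value:
// BROKEN sketch: all closures capture the same jobParams variable.
def testJobs = []
def jobParams                      // shared across iterations
for (param in (1..4)) {
    jobParams = [VALUE: param]     // reassigned each time around the loop
    testJobs.add({ build(jobParams, "dbacher flow child") })
}
// By the time parallel() invokes the closures, jobParams is [VALUE: 4] for all of them.
parallel(testJobs)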
I cribbed the syntax from another answer and this thread on the Jenkins mailing list.
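Note that Build Flow has since been deprecated in favor of Pipeline. If anyone lands here on a newer Jenkins, the same fan-out can be sketched in Scripted Pipeline; the job name, parameter names, and IPs below are hypothetical:
// Scripted Pipeline sketch of mother_tester: one parallel branch per device.
def deviceIps = ['10.0.0.1', '10.0.0.2', '10.0.0.3']   // assumed device list
def branches = [:]
for (ip in deviceIps) {
    def deviceIp = ip   // fresh variable per iteration, same capture rule as above
    branches["device-${deviceIp}"] = {
        build job: 'child_tester', parameters: [
            string(name: 'DEVICE_IP', value: deviceIp),
            string(name: 'TEST_SCRIPT', value: 'run_device_tests.sh')
        ]
    }
}
parallel branches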
Please make sure that the number of executors in the Manage Jenkins -> Manage Nodes settings is greater than the number of individual jobs in the MultiJob project.
By default it is 2, so you will likely need to increase it.
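If you prefer to script this, a minimal Script Console sketch (the count of 10 is an arbitrary example):
// Manage Jenkins -> Script Console: raise the built-in node's executor count
// so parallel MultiJob phases are not starved.
import jenkins.model.Jenkins
Jenkins.instance.setNumExecutors(10)   // choose a value >= your parallel job count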
Related
I have a full set of unit tests I'd like to run daily overnight in Jenkins, but only if my application has built correctly in another job. I DON'T want the unit tests to trigger throughout the day as commits are added to the application.
How do I configure this? To restate: there are two Jenkins jobs, A and B:
A runs each checkin, unless B is running, in which case it waits for B.
B runs at midnight, IFF A is in a good state. If A is running, B waits for A.
I already have A set up as "A runs each checkin."
I assume you are using Jenkins Pipeline. There might be many ways, but I would address this by adding a new stage in Job B that checks the status of Job A, along with a utility function to do the check.
stage('Check Job A status') {
    // If A is running, B waits for A (up to 60 minutes).
    if (checkStatus() == "RUNNING") {
        timeout(time: 60, unit: 'MINUTES') {
            waitUntil {
                def status = checkStatus()
                return (status == "SUCCESS" || status == "FAILURE" || status == "UNSTABLE" || status == "ABORTED")
            }
        }
    }
    // Proceed with B only when A is in a good state.
    if (checkStatus() != "SUCCESS") {
        error('Stopping Job B because Job A is not successful.')
    }
}

def checkStatus() {
    // Requires the HTTP Request plugin; replace jobAName with the real job name.
    def response = httpRequest "https://jenkins.example.com/job/${jobAName}/lastBuild/api/json"
    def statusJson = new groovy.json.JsonSlurper().parseText(response.getContent())
    // While a build is in progress, 'result' is null and 'building' is true.
    return statusJson['building'] ? "RUNNING" : statusJson['result']
}
My answer is a bit late to the party here (sorry :-7), but this is a useful question that wasn't answered properly. That's not anyone's fault: it took me/us a few years to find the best ways of doing this (originally I had some post-build Groovy and other scripts doing funky things like triggering other jobs). Jenkins actually has quite a flexible choice of methods for jobs that need to interact with one another.
There is a built-in "Post-build Action: Build other projects", and there are a couple of plugins which can be used. The "Post-build Action: Build other projects" is probably the most suitable, and the "Lockable Resources Plug-in" can be used to make the jobs mutually exclusive.
**SIMPLEST ANSWER:**
1. Install the Lockable Resources plugin, add a lockable resource named "build_or_test", and configure jobs A and B to lock on that resource.
2. Configure build job A: add "Post-build Action: Build other projects" and build job B if job A is stable.
**LIST of useful built-ins and plugins:**
It is also useful to use the FSTrigger plugin: build jobs or other jobs may generate logs, image files, or test reports, and jobs can be triggered to run when these files or directories appear or are updated. Jobs in a remote Jenkins, or even external to Jenkins, can trigger jobs using this method.
Built-in "Post-build Action: Build other projects", with the options:
* Trigger only if build is stable
* Trigger even if the build is unstable
* Trigger even if the build fails
BuildResultTrigger Plug-in:
This plugin makes it possible to monitor the build results of other jobs. Similar to "Post-build Action: Build other projects", only at the top of the job config, as a trigger with a cron schedule.
Filesystem Trigger Plug-in:
This plug-in makes it possible to monitor changes of a file or a set of files in a folder.
Parameterized Trigger Plug-in (which adds "Post-build Action: Trigger parameterized build on other projects"):
Similar to "Post-build Action: Build other projects", but convenient for passing build information (e.g. in a parameters.ini-style file, or boolean or other params) from one job to another.
Lockable Resources Plug-in:
This plugin allows you to define external resources (such as printers, phones, computers) that can be locked by builds. If a build requires an external resource which is already locked, it will wait for the resource to be free.
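For Pipeline jobs, the Lockable Resources plugin also provides a lock step; a minimal sketch, assuming a resource named "build_or_test" has already been defined under Manage Jenkins:
// Scripted Pipeline: only one build holding this lock runs at a time;
// any other build requesting it waits until the resource is free.
node {
    lock(resource: 'build_or_test') {
        checkout scm
        sh './build.sh'   // hypothetical build step
    }
}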
Off the top of my head, I can't think of a way to do exactly what you want. But that might be because it is probably not the best way to handle it.
In job A, you should probably just not deploy/deliver the artifacts to the place where B will look unless the build is successful. Then B will always run against a successful build from A.
But without understanding your entire setup or environment, I can't really comment on what is "right". But maybe you need to rethink the problem?
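One concrete way to realize "B always sees a good A" is the Copy Artifact plugin; a sketch, assuming Job A archives its artifacts:
// Job B: always test against the last *successful* Job A build,
// regardless of whether a newer Job A run failed.
node {
    copyArtifacts(projectName: 'JobA', selector: lastSuccessful())
    // ... run the unit tests against the copied artifacts ...
}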
You can publish a "state" on completion of Job A, say in a property file in your source code repo, or even in a DB.
This value can be a boolean: it stays false until Job A builds successfully.
Then, when Job B gets triggered, it first checks whether that value is true.
It seems there is no plugin to support this directly; most plugins will trigger Job B as soon as Job A is done (i.e. they monitor the status of Job A).
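A minimal sketch of the property-file variant, using the Pipeline Utility Steps plugin; the shared path and key name are assumptions:
// End of Job A: record a boolean state for Job B to check.
// '/shared' is a hypothetical location visible to both jobs.
node {
    // ... build steps ...
    sh 'echo "JOB_A_GREEN=true" > /shared/jobA.properties'
}

// Start of Job B: proceed only if Job A recorded success.
node {
    def props = readProperties file: '/shared/jobA.properties'   // Pipeline Utility Steps plugin
    if (props['JOB_A_GREEN'] != 'true') {
        error('Job A is not in a good state; aborting Job B.')
    }
}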
I have a pipeline script where I want to kick off parallel builds on two different build machines, and once it's all done, perform some post-run activity like unstashing and publishing test results, creating an archive from all of the binaries and libraries generated, etc.
It basically looks like this, where 'master' is a MacOS machine and we've got a separate machine for Windows builds:
// main run stuff
parallel(
    "mac" : {
        node('master') {
            for (job in macJobs) {
                job.do()
            }
        }
    },
    "windows" : {
        node('windowsMachine') {
            for (job in windowsJobs) {
                job.do()
            }
        }
    }
)

node('master') {
    // post-run stuff
}
If I kick off a single build with this script then it completes no problem.
But if a second build kicks off while the first is still working through the parallel block (i.e. it's polling SCM and someone pushed while the first build is still going), then the post-run block doesn't get executed until the second job's parallel block completes.
There's obviously a priority queue based on who gets to request the node first, but I'd like one complete script run to finish before Jenkins moves on to the next, so we don't end up with jobs piling up on the post-run block, which normally only takes a couple of seconds to complete...
How do I modify the script to do this? I've tried wrapping it all in a single stage block, but no luck there.
I might guess that part of the problem lies in your post-run stuff sharing your master node with one of your parallel tasks, especially if your master node only has one or two executors, which would definitely put it at 100% load with more than one concurrent build.
If this sounds like it might be part of your problem, you can try giving your post-run stuff a dedicated node to guarantee availability independent of triggered builds, or increase the executors available on your master node to guarantee that even with a couple of concurrent builds there are still executors available for those post-runs.
Jenkins doesn't really care about the origin of a block to execute. So if you have two jobs running at the same time, and each uses the master node in two separate blocks, there is a real chance the first block of each job will execute together before either of their second blocks is reached. If your node only has two executors available, you may even end up with a starved queue for that node; at the very least, an executor must become available before either of those second blocks can begin.
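A sketch of the dedicated-node idea; the 'post-run' label is hypothetical and would point at a lightweight agent reserved for publishing:
// Main work still fans out across the build machines.
parallel(
    "mac":     { node('master')         { /* mac build jobs */ } },
    "windows": { node('windowsMachine') { /* windows build jobs */ } }
)

// Publishing happens on an agent the parallel builds never contend for.
node('post-run') {
    junit 'results/**/*.xml'   // example publish step
}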
I have an upstream job (a MultiJob) which takes a string parameter called freshORrerun, with the value "fresh" or "rerun", which I need to pass on to the downstream (standalone build) jobs so they can check whether the value is "fresh" or "rerun". Based on this, the child jobs will trigger either a complete test run (pybot) or a rerun (rebot) of the failed tests.
I have attached screenshots of how I have configured it. When I print the passed string in the child job, it is empty.
Overall job configuration.
MultiJob phase config and child jobs.
I have many Robot tests, and running them takes a lot of time. I need a way to run only the failures of the previous run, so that I get a quick picture of how many got fixed. Could someone please help me with this?
Click the 'Add parameters' button, select 'Predefined parameters', and add freshORrerun=${freshORrerun} to the list.
You can do it using the Parameterized Trigger plugin, which gives you options to pass parent job parameters to the child job.
Note: for this, you have to create the parameters in the child job as well; those parameters will be overwritten.
plugin link
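For reference, if the parent were a Pipeline job instead of a MultiJob, forwarding the parameter is a one-liner; the child job name here is hypothetical:
// Pass the parent's freshORrerun value straight through to the child build.
build job: 'robot-child-tests', parameters: [
    string(name: 'freshORrerun', value: params.freshORrerun)
]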
The simple case where you just have one job depending on the completion of a set of other jobs is easy: either use a multijob or use the build flow plugin with parallel { ... }. The case I am trying to solve is more general, for example:
JobA depends on JobX and JobZ
JobB depends on JobY and JobZ
SuperJob depends on JobA and JobB
I want each of these jobs to trigger as soon as, and only when, their prerequisites complete.
It would appear that neither the Build Flow plugin, the Join plugin, nor the Job DSL plugin has a good mechanism for this. I can, of course, just start all my jobs and have them poll Jenkins, but that would be quite ugly.
Another dead end is the "Upstream job trigger": I want to trigger off a specific build of a job, not just any run of an upstream job.
Update
One answer mentions the multijob plugin. It can indeed be used to solve this problem, but the scheduling and total build time is almost always worst case. For example, assume this dependency graph, with the build times as indicated:
left1 (1m)      right1 (55m)
    |               |
left2 (50m)     right2 (2m)
    |_______________|
            |
           zip
With the multijob plugin, you get:
Phase 1:
left1, right1 // done in 55m
Phase 2:
left2, right2 // done in 50m
Phase 3:
zip // total time 105m
If I had a way to trigger the next job exactly when all prerequisites are done, then the total build time would be just 57m.
The answer here should explain how I can obtain that behavior, preferably without writing my own polling mechanism.
Update 1½
In the comments below, it was suggested that I group the left tasks and the right tasks into a single subtask each. Yes, this can be done in this example, but it is very hard to do in general, and automatically. For example, assume there is an additional dependency: right2 depends on left1. With the build times given, the optimal build time should not change, since left1 is long done before right2 is launched. But without this knowledge, you can no longer lump left1 and left2 into the same group without running the risk of right2 starting before left1 is available.
Update 2
It looks like there is no ready made answer here. It seems I am going to have to code up a system groovy script myself. See my own answer to the question.
Update 3
We ended up forking the multijob plugin and writing new logic within. I hope we can publish it as a new plugin after some cleanup...
Since you added the jenkins-workflow tag, I guess that using the Jenkins Workflow plugin is OK for you, so perhaps this Workflow script fits your needs:
node {
    parallel left: {
        build 'left1'
        build 'left2'
    }, right: {
        build 'right1'
        build 'right2'
    },
    failFast: true
    build 'zip'
}
This Workflow script will trigger zip as soon as both parallel branches finish. With the build times in the example above, the left branch finishes after 51m and the right branch after 57m, so zip starts at 57m instead of 105m.
As far as I can tell, there is no published solution to my problem, so I have to roll my own. The following system groovy script works, but can obviously use some enhancements. Specifically, I really miss a nice simple one page build status overview...
This gist implements my solution, including proper handling of job cancellations: https://gist.github.com/cg-soft/0ac60a9720662a417cfa
You can use "Build other projects" as a Post-build Action in the configuration of one of your parent jobs, which will trigger the second parent job on a successful build. When the second parent job completes, trigger your child job by the same method.
The MultiJob plugin can be used to make a hierarchy of jobs.
First select "MultiJob Project" when creating a new item; then, in the configuration, you can add as many jobs as you want. You also need to specify a phase for each job.
I am using Jenkins for continuous integration.
I configured a job which polls the SCM for changes, and I have one executor. When there is more than one SCM change while the executor is already working, still only one build is added to the queue, whereas I want a queued build for each change.
I already tried making my job parameterized as a workaround, but as long as polling does not set any parameters¹ (not even the default ones²), this does not help either.
Is there any way to get a new build in the job queue for each SCM change?
[1] https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build
[2] I tried to combine this scenario with https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Dynamic+Parameter+Plug-in
You can write a script with the Jenkins Adaptive Plugin to be triggered by SVN and create a new build regardless of what is currently running.
Another option would be to create two jobs, one that monitors SCM and one that runs the build. Every time there is an SCM change you have the first job add an instance of the second to the queue and complete immediately so that it can continue to poll.
The described scenario is possible in Jenkins using a workaround that requires two jobs:
[JobA_trigger] One Job which triggers another job 'externally', via curl or jenkins-cli.jar¹.
[JobA] The actual job which has to be a parametrized one.
In my setup, JobA_trigger polls SCM periodically. If there is a change, JobA is triggered via curl and the current dateTime is submitted². This 'external' triggering is necessary to submit parameters to JobA.
# JobA_trigger "execute shell"
curl "${JENKINS_URL}job/JobA/buildWithParameters?SVN_REVISION=`date +%Y-%m-%d`%20`date +%H:%M:%S`"
# SVN_REVISION, example (decoded): "2012-11-07 12:56:50" ("%20" is url-encoded space)
JobA itself is parametrized and accepts a String-Param "SVN_REVISION". Additionally I had to change the SVN-URL to
# The outer curly braces are for SVN revision dates³; omit them when working with a revision number.
https://svn.someaddress.com/trunk#{${SVN_REVISION}}
Using this workaround, for each SCM change a new run of JobA is queued, with the related SVN revision/dateTime attached as a parameter and used as the software state tested by that run.
¹ https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
² I decided to have dateTime-bases updates instead of revision-based ones, as I have svn-externals which would be updated to HEAD each, if I would be working revision-based.
³ http://svnbook.red-bean.com/en/1.7/svn.tour.revs.specifiers.html#svn.tour.revs.dates