Dynamically grouping Jenkins job results

I've had a dig around but can't find an elegant solution for what I want to do, so I hope some of you may be able to offer suggestions. I've also asked this question on a Jenkins forum, but no takers.
I want to be able to run a Jenkins parent job with parameters that feed down to triggered jobs, and then group all the job-run results in a view dynamically.
The use case I'm trying to cover: we have 10+ different Jenkins jobs that run suites of tests, and I want to manage a run of all those jobs against a specific code branch on a specific test environment, and see the results (in one view) for only that run. The complication is that the same Jenkins job may also be run against another release or test environment, and I don't want to see those results.
We already have the parent job triggering children with parameters, but I can't figure out how best to group the results.
I know I can create filters for views, but the names of Jenkins jobs are static, and I want the view created at runtime, without having to build it myself. We do use the 'Set Build Description' plugin, so I could create a view that filters for a unique build descriptor, or something similar. But there doesn't seem to be a way to create views with filters programmatically.
Another consideration is clean-up: I wouldn't want a year's worth of views piling up, so I need a way to clear out old runs too.
Any ideas to kick me off?

For grouping the reports you can use simple logic instead of hunting for a Jenkins plugin. Place all the result files (preferably XML) in a common folder or file share, and at the end of execution of all the suites (jobs) trigger a common job which processes all the XML files and generates a combined report. This way you get "consolidated + individual reports".
I have done this using the Perf Publisher plugin, which processes the XML files and gives a beautiful aggregated report.
Job1 ----> Report1 ----> Move report to report folder
Job2 ----> Report2 ----> Move report to report folder
Job3 ----> Report3 ----> Move report to report folder
.
.
.
Job n ----> Report n ----> Move report to report folder
So after job n completes, trigger the Report job, which operates on the "report" folder containing all the reports!
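As a rough sketch of that aggregation job (a scripted pipeline, assuming each suite job has already copied its result XMLs into a shared folder; the /shared/reports path is an assumption, and the built-in junit step stands in here for the Perf Publisher post-build action):

// Hypothetical aggregation job, triggered after job n completes
node {
    stage('Aggregate') {
        // collect everything the suite jobs dropped into the shared folder
        sh 'cp /shared/reports/*.xml .'
        // parse and publish one combined result set (junit used as a
        // stand-in for the Perf Publisher post-build action)
        junit '*.xml'
    }
}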
Hope it helps!

I have a partial solution:
All jobs accept a parameter called VIEW_IDENTIFIER.
The parent job is kicked off with a unique VIEW_IDENTIFIER set, and all the child jobs have it passed to them when run.
After all jobs are run, I edit a Jenkins view that has a 'Job Filter -> Parameterized Jobs Filter -> Name = VIEW_IDENTIFIER, Value = my unique ID set for the run'.
This results in all jobs run with that unique ID being grouped in one single view for review.
The shame is that I still have to edit the Job Filter manually.
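That manual edit can probably be scripted. A minimal script-console sketch, assuming every child job records VIEW_IDENTIFIER on its builds (the view name and the id value are made up, and only the last build of each job is checked):

import hudson.model.AbstractProject
import hudson.model.ListView
import hudson.model.ParametersAction
import jenkins.model.Jenkins

def id = 'run-42'   // the unique VIEW_IDENTIFIER for this run (assumption)
def view = new ListView("results-${id}")
Jenkins.instance.addView(view)

// add every job whose last build ran with our identifier
Jenkins.instance.getAllItems(AbstractProject).each { job ->
    def param = job.lastBuild?.getAction(ParametersAction)?.getParameter('VIEW_IDENTIFIER')
    if (param?.value == id) {
        view.add(job)
    }
}

Clean-up could follow the same pattern: iterate Jenkins.instance.views and call Jenkins.instance.deleteView(view) on anything older than your retention window.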

Related

How to know where a Jenkins job is created in Groovy code

When a Jenkins JobDSL seed job finishes creating other jobs, it shows a list of generated jobs.
For example:
GeneratedJob{name='my_example'}
GeneratedJob{name='my_mvn-test'}
Is there a way to print out at which line and in which file a job is created?
For instance:
10: job ("${prefix}-${prodname}-${suffix}") {
11: ...
...
20: }
Here line 10 is the location of job creation.
We have a big bunch of jobs generated in different DSL/Groovy source files, and those jobs don't use fixed names in the code, so it's hard to find where they are created without knowing the source well.
Searched for things like Job DSL API hooks, but with no luck…
You can change your Job DSL code to print that information: you should be able to add println calls with custom messages anywhere in your code and log whatever is relevant to you.
It requires changing your code, but it's a one-off cost, and it's worth evaluating against your current maintenance cost.
You can even add this information to the description of each job, so you don't depend on the seed job's logs.
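A sketch of that idea, assuming plain Job DSL seed scripts (the trackedJob helper is made up, and the stack-trace lookup may need tweaking for your setup):

// Hypothetical helper inside a seed script: wraps job() and records
// the source file and line each job was created from.
def trackedJob(String jobName, Closure body) {
    // the first frame pointing at a .groovy file is the DSL script itself
    def caller = new Throwable().stackTrace.find { it.fileName?.endsWith('.groovy') }
    println "Creating ${jobName} at ${caller?.fileName}:${caller?.lineNumber}"
    job(jobName) {
        // surface the origin on the job page too, not just in the seed log
        description("Generated from ${caller?.fileName}:${caller?.lineNumber}")
        body.delegate = delegate
        body.resolveStrategy = Closure.DELEGATE_FIRST
        body()
    }
}

trackedJob("${prefix}-${prodname}-${suffix}") {
    // ... the usual job configuration ...
}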

Jenkins Build / Pipeline job - listing jobs in tree / calling-order layout

For a given Build Pipeline job (which has downstream jobs, either in the build step or the post-build action "Trigger build on other projects"), is it possible to get a tree listing showing which pipeline build #N called which child jobs, in calling order (sequential or parallel), with each child's build # for that pipeline run?
For example, if my pipeline job has this view:
then I'm expecting a listing of the top run similar to this (in simple text format):
vac-3.0-src:52 called: vac-3.0-unit-test-main:37
vac-3.0-unit-test-main:37 called: vac-3.0-unit-testA:36
vac-3.0-unit-test-main:37 called: vac-3.0-unit-testB:36
vac-3.0-unit-test-main:37 called: vac-3.0-unit-testC:35
vac-3.0-unit-test-main:37 called: vac-3.0-unit-testD:35
vac-3.0-unit-test-main:37 called: vac-3.0-unit-testReporting:35
vac-3.0-unit-testReporting:35 called: vac-3.0-integration-test-main:28
vac-3.0-integration-test-main:28 called: vac-3.0-integration-testA:27
vac-3.0-integration-test-main:28 called: vac-3.0-integration-testB:27
vac-3.0-integration-testB:27 called: vac-3.0-acceptance-test:25
vac-3.0-acceptance-test:25 called: vac-3.0-configure-something:24
vac-3.0-configure-something:24 called: vac-3.0-perform-someaction:23
vac-3.0-perform-someaction:23 called: vac-3.0-preview-step:22
vac-3.0-preview-step:22 called: vac-3.0-deb-delivery-job:27
vac-3.0-preview-step:22 called: vac-3.0-rpm-el6:23
vac-3.0-preview-step:22 called: vac-3.0-vagrant-provision:20
vac-3.0-preview-step:22 called: vac-3.0-vagrant-run:21
vac-3.0-vagrant-run:21 called: vac-3.0-demo:10
Or this information could be presented in a more structured manner, i.e. a JSON blob where each parent job contains all the jobs it called (parallel/sequential) in the given order for that pipeline run.
I tried the main job's URL (via curl) against the Jenkins API, i.e. /api/xml or /api/json?pretty=true&depth=10 or more, but it doesn't give me the information I'm looking for (related to a given pipeline run).
This information is visually available in the pipeline view (as per the image), and some information about subprojects is available on the dashboard of a given Jenkins job (which was part of the pipeline), but the order is not there.
I'd appreciate it if you have tried to solve this and have any solution to get this data. The reason for the effort is to find metrics horizontally for a given pipeline run, rather than vertically for each individual job in the pipeline (I already have the vertical / per-job metrics for total time, build #, result, etc.); what I'm trying to get is how to relate each individual job's metrics to a given pipeline run.
If the image example above is too big, we can refer to this snapshot of a smaller run:
I see one possible solution; not sure if it's helpful, but it's an attempt.
Algorithm steps:
1) Maintain a direct parent-child file (i.e. JobA:JobB, JobA:JobC, JobC:JobD, ...), which tells you, for each JobX, its direct sub-child/downstream jobs. Via a Jenkins Groovy script this can easily be generated (see the sketch after these steps). PS: You can add more columns to this file, i.e. JobA:JobB:Build:Sequential or JobA:JobB:Test:Parallel, to get even better horizontal metrics (turnaround time per step - build, test, deploy, etc. - and whether a parent job called the child in sequence or in parallel with two or more jobs) and calculate the metrics accordingly.
2) Inside the "Build Pipeline View" Configure (settings), set the no. of jobs to be displayed to 1. PS: You can set this to 5, 10, or more if you want to capture a given pipeline build # of the main pipeline job.
For testing purposes, I'm showing only 1 pipeline build run.
3) In Linux, use curl to get the "View Source" HTML page of the build-pipeline view (PS: this is NOT the main pipeline job's page),
i.e. not for jobA or xxvt-main etc. in this case, but the view-name URL (which shows the whole pipeline). Let's assume the view (created via the Build Pipeline View plugin) was named "MyPipelineView",
ex: curl -s http://my-jenkins-server:8080/view/MyPipelineView/ > /tmp/9.txt
This will give you the HTML content, stored in a temporary file; let's assume /tmp/9.txt as above.
4) Run the following command to get the jobs' build #s. As per the second, smaller pipeline image (in my post), the output will be:
grep -o "\"extId\":\"[a-zA-Z0-9_-][a-zA-Z0-9_-]*#[0-9][0-9]*\"" /tmp/9.txt
This will give you output like the following (use sed/cut to clean it up further):
"extId":"xxvt_main#157"
"extId":"xxvt_splunk_run_collect_operation#29"
"extId":"xxvt_splunk_run_process_operation#29"
"extId":"xxvt_splunk_update_date_restart_splunk#29"
"extId":"xxvt_splunk_get_jenkins_data#38"
"extId":"xxvt_splunk_get_clearquest_dr_data#47"
5) Now that you have the above output for a given pipeline run, use the parent-child (direct relationship) file generated in step 1 to create the final build-pipeline tree file, i.e.:
xxvt_main#157 called: xxvt_splunk_get_jenkins_data#38
xxvt_main#157 called: xxvt_splunk_get_clearquest_dr_data#47
xxvt_main#157 called: xxvt_splunk_run_collect_operation#29
xxvt_splunk_run_collect_operation#29 called: xxvt_splunk_run_process_operation#29
xxvt_splunk_run_process_operation#29 called: xxvt_splunk_update_date_restart_splunk#29
6) Knowing each run-related job name and its build #, we can use Jenkins's api/json?pretty=true&depth=1 (or 2 or 3, carefully) to fetch the fields we want for metrics, and finally produce a .csv file in whatever format you like, containing the metrics for a given pipeline run - horizontally.
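The Groovy script mentioned in step 1 could look roughly like this (a script-console sketch, assuming classic freestyle jobs wired up with downstream triggers; output format JobX:JobY as above):

import hudson.model.AbstractProject
import jenkins.model.Jenkins

// print one Parent:Child line per direct downstream relationship
Jenkins.instance.getAllItems(AbstractProject).each { parent ->
    parent.downstreamProjects.each { child ->
        println "${parent.name}:${child.name}"
    }
}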
If you are working with the Jenkinsfile DSL etc.:
I achieved this by creating the stages dynamically, running them in parallel, and also getting the Jenkins UI to show separate columns. This assumes the parallel steps are independent of each other (otherwise don't use parallel), and you can nest them as deep as you want (depending on the for loop).
See "Jenkinsfile Pipeline DSL: How to Show Multi-Columns in Jobs dashboard GUI - For all Dynamically created stages - When within PIPELINE section" for more.
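A minimal scripted sketch of that idea (the suite list and the downstream job-name pattern are assumptions):

// Hypothetical Jenkinsfile: build the parallel branches dynamically
def suites = ['testA', 'testB', 'testC']   // assumed suite list
def branches = [:]
suites.each { suite ->
    branches[suite] = {
        stage(suite) {
            // each branch triggers its own downstream job (name pattern made up)
            build job: "vac-3.0-unit-${suite}"
        }
    }
}
node {
    parallel branches
}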

How to Pass Upstream Job Build Parameters to Downstream Jobs configured in a MultiJob Phase?

I have an upstream job (MultiJob) which takes a string parameter called freshORrerun, with the value "fresh" or "rerun", which I need to pass on to the downstream (standalone build) jobs so they can check whether the value is "fresh" or "rerun". Based on that, the child jobs will trigger either a complete test run (pybot) or a rerun (rebot) of failed tests.
I have attached screenshots of how I have configured it. When I print the passed string in a child job, it is empty.
Overall Job configuration.
Multi Job phase config and child Jobs
I have a large number of Robot tests, and running them all takes a lot of time. I need a way to run only the failures from the previous run, to get a quick picture of how many got fixed. Could someone please help me with this?
Click the 'Add parameters' button, select 'Predefined parameters' and add freshORrerun=${freshORrerun} to the list.
You can do it using a plugin called Parameterized Trigger, which gives you options to pass parent-job parameters to the child job.
Note: for this you have to create the parameters in the child job as well; they will be overwritten.
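For illustration, a child job written as a pipeline could branch on the parameter like this (a hypothetical sketch; the paths are made up, and pybot's --rerunfailed option is used for the rerun):

// Hypothetical child Jenkinsfile: full run vs. rerun of failed tests
pipeline {
    agent any
    parameters {
        string(name: 'freshORrerun', defaultValue: 'fresh', description: 'fresh or rerun')
    }
    stages {
        stage('Robot tests') {
            steps {
                script {
                    if (params.freshORrerun == 'fresh') {
                        // complete run (paths are assumptions)
                        sh 'pybot --outputdir results tests/'
                    } else {
                        // re-execute only the failures recorded in the previous output.xml
                        sh 'pybot --rerunfailed results/output.xml --outputdir results tests/'
                    }
                }
            }
        }
    }
}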

Based on options selected in the first job, how to trigger another job in Jenkins using the Dynamic Parameter plugin

I have 3 Jenkins jobs: 'J1', 'J2', 'J3'. Now I have to create another job, 'JSelect', which takes user input from a drop-down having the values J1, J2, J3. Based on the user's selection it should trigger J1, J2 or J3.
To achieve this, I installed the DynamicParameter plugin and created the job 'JSelect'.
In the JSelect job, I selected the 'This build is parameterized' option and then added a Dynamic Choice Parameter.
Provided the name as: 'Choose Target Job'
Choices script as: def list=['J1','J2','J3']
When I saved and built the job, a drop-down appeared as expected. But I don't understand where to capture this input, or how to call the other jobs based on it.
Am I on the right track? Can someone please help with how to achieve this?
You need to trigger J1/J2/J3 based on some condition from JSelect.
I suggest the approach below.
In a build step, create a variable or property file to decide the condition,
e.g. (when errors > ~5 = low, errors > ~15 = medium, errors > ~50 = high).
Use https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin
It helps you select the downstream project based on different conditions.
Make sure you have unchecked "trigger build when there is no parameter".
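If a scripted sketch helps, the same selection can also be expressed as a small pipeline job (hypothetical; assumes the choice parameter is named TARGET_JOB):

// Hypothetical 'JSelect' as a declarative pipeline: trigger whichever job was chosen
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET_JOB', choices: ['J1', 'J2', 'J3'], description: 'Job to trigger')
    }
    stages {
        stage('Trigger') {
            steps {
                // hand off to the selected downstream job
                build job: params.TARGET_JOB
            }
        }
    }
}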
Hope this resolves your issue.

Jenkins: How to put a new job for each scm-change into the build-queue?

I am using Jenkins for continuous integration.
I configured a job which polls the SCM for changes. I have one executor. When there is more than one SCM change while the executor is already working, still only one build is added to the queue, whereas I want it to queue more than one.
I already tried making my job "parameterized" as a workaround, but as long as polling does not set any parameters¹ (not even the default ones²), this does not help either.
Is there any way to get a new build in the job queue for each SCM change?
[1] https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build
[2] I tried to combine this scenario with https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Dynamic+Parameter+Plug-in
You can write a script with the Jenkins Adaptive Plugin to be triggered by SVN and create a new build regardless of what is currently running.
Another option would be to create two jobs, one that monitors SCM and one that runs the build. Every time there is an SCM change you have the first job add an instance of the second to the queue and complete immediately so that it can continue to poll.
The described scenario is possible in Jenkins using a workaround which requires two steps:
[JobA_trigger] One job which triggers another job 'externally', via curl or jenkins-cli.jar¹.
[JobA] The actual job, which has to be parameterized.
In my setup, JobA_trigger polls SCM periodically. If there is a change, JobA is triggered via curl and the current dateTime is submitted². This 'external' triggering is necessary to submit parameters to JobA.
# JobA_trigger "execute shell"
curl "${JENKINS_URL}job/JobA/buildWithParameters?SVN_REVISION=$(date +%Y-%m-%d)%20$(date +%H:%M:%S)"
# SVN_REVISION, example (decoded): "2012-11-07 12:56:50" ("%20" is a url-encoded space)
JobA itself is parameterized and accepts a string parameter "SVN_REVISION". Additionally I had to change the SVN URL to
# Outer brackets are for using SVN revision dates³ - omit them if working with a revision number.
https://svn.someaddress.com/trunk#{${SVN_REVISION}}
Using this workaround, for each SCM change a new run of JobA is queued, with the related SVN revision/dateTime attached as a parameter describing the software state tested by that run.
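If you'd rather stay inside Jenkins than shell out to curl, a system Groovy build step in JobA_trigger could schedule JobA directly (a sketch under the same assumptions; builds with distinct parameter values get their own queue entries):

import hudson.model.Cause
import hudson.model.ParametersAction
import hudson.model.StringParameterValue
import jenkins.model.Jenkins

// schedule JobA with the current dateTime as SVN_REVISION; a distinct
// parameter value means a distinct queue entry per SCM change
def jobA = Jenkins.instance.getItemByFullName('JobA')
def stamp = new Date().format('yyyy-MM-dd HH:mm:ss')
jobA.scheduleBuild2(0, new Cause.RemoteCause('JobA_trigger', 'SCM change'),
        new ParametersAction(new StringParameterValue('SVN_REVISION', stamp)))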
¹ https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
² I decided to have dateTime-bases updates instead of revision-based ones, as I have svn-externals which would be updated to HEAD each, if I would be working revision-based.
³ http://svnbook.red-bean.com/en/1.7/svn.tour.revs.specifiers.html#svn.tour.revs.dates
