How to know where a Jenkins job is created in Groovy code

When a Jenkins JobDSL seed job finishes creating other jobs, it shows a list of generated jobs.
For example:
GeneratedJob{name='my_example'}
GeneratedJob{name='my_mvn-test'}
Is there a way to print out at which line and in which file a job is created?
For instance:
10: job ("${prefix}-${prodname}-${suffix}") {
11: ...
...
20: }
Here line 10 is the location of job creation.
We have a large number of jobs generated from different DSL/Groovy source files, and those jobs don't use fixed job names in the code, so it's hard to find where they are created without knowing the source code well.
I searched for things like Job DSL API hooks, but with no luck…

You can change your Job DSL code to print that information: you should be able to add println calls with custom messages anywhere in your code and log whatever is relevant to you.
It requires changing your code, but it's a one-off cost, and it's worth weighing against your current maintenance cost.
You can even add this information to the description of each job, so you don't depend on the seed job's logs.
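For example, something along these lines next to each job() call (the source file name here is just a placeholder you would fill in per DSL script):

// Log the generated job name together with the DSL file it comes from, and record the
// same information in the job description so it survives outside the seed job's console log.
def jobName = "${prefix}-${prodname}-${suffix}"
println "Creating ${jobName} from jobs/example.groovy"   // 'jobs/example.groovy' is a placeholder

job(jobName) {
    description("Generated by the seed job from jobs/example.groovy")
    // ... rest of the job definition ...
}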

Related

Finding jobs generated by Job DSL plugin programmatically

We have a mix of DSL-seeded and manually created jobs on our Jenkins server.
I'd like to find all jobs NOT generated by DSL (or all generated by DSL at any time in the past).
I found no indication in job's config.xml that it was generated by DSL.
So, is it possible and how?
Well, I have the same problem as you.
I wonder how the link "Seed job" within the generated job is created. I don't see it in all of the generated jobs, either.
Unfortunately, I didn't get very far with my research.
In the script console, I listed the methods for one of my jobs (let's call it foo):
Jenkins.instance.getItems().each {
    if (it.getName() == 'foo') {
        println it
        it.class.getMethods().each { method ->
            println method
        }
    }
}
However, I didn't see any methods containing jobdsl there.
I found a file $JENKINS_HOME/javaposse.jobdsl.plugin.ExecuteDslScripts.xml that contains generated job names and their seed jobs. But I don't know whether there is an official JobDSL API for reading it.
So... if you find more information, I'd be glad to know - good luck!
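One more thing worth trying from the script console is to look for actions rather than methods: the "Seed job" link is rendered from an action the Job DSL plugin attaches to generated jobs. The package-name filter below is an assumption about the plugin's internal class names, so verify it against your installed version:

// List every job that carries an action contributed by the Job DSL plugin.
// Jobs with no such action are the manually created ones.
Jenkins.instance.getAllItems(hudson.model.Job).each { job ->
    def dslActions = job.getActions().findAll { it.class.name.startsWith('javaposse.jobdsl') }
    if (dslActions) {
        println "${job.fullName} -> generated (${dslActions*.class*.simpleName})"
    }
}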

In Jenkins pipeline, how can I generate a concise report from the job outputs

I have a pipeline that is a series of PowerShell jobs in various parallel stages. Whilst the jobs are in stages, there is no dependency between them (I only split them into stages in order to avoid conflicts).
I want to gather a report from every job, but at a pipeline level. Each job will output a single line of text, but the full report needs to be at pipeline level. The current pipeline console output just says that the job is starting and stopping; there is no additional output brought in from the jobs. I've considered the following:
I have seen the stash/unstash option, but that seems to be at a file level and I'm not sure how to use that to generate a report.
I can see the echo command in pipeline, but can't see a way of passing a string/variable from the job to the pipeline.
I tried taking the pipeline 'WORKSPACE' variable to pass to the job so the job can write directly to a single file, but the variable didn't work (and I've no idea if this is violating some unwritten 'rule' of pipelines).
How can I get a single line of text, from each job in a pipeline, out to a single text file?
"I can see the echo command in pipeline, but can't see a way of passing a string/variable from the job to the pipeline."
If you use PowerShell you can use Write-Host or the -Verbose switch.
"I tried taking the pipeline 'WORKSPACE' variable to pass to the job so the job can write directly to a single file, but the variable didn't work (and I've no idea if this is violating some unwritten 'rule' of pipelines)."
If you want to use a Jenkins variable, you have to wrap it in "%" symbols: %WORKSPACE%
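If the PowerShell suites run as steps inside the pipeline itself (rather than as separately triggered jobs), one way to collect a line per suite is to capture each step's stdout and write a single report at the end. A minimal scripted-pipeline sketch, where the suite script names are placeholders:

// Capture the single line each PowerShell suite prints and merge everything
// into one report file at the pipeline level.
def results = [:]
node {
    parallel(
        suiteA: { results.suiteA = powershell(returnStdout: true, script: '.\\run-suite-a.ps1').trim() },
        suiteB: { results.suiteB = powershell(returnStdout: true, script: '.\\run-suite-b.ps1').trim() }
    )
    writeFile file: 'pipeline-report.txt',
              text: results.collect { name, line -> "${name}: ${line}" }.join('\n')
    archiveArtifacts artifacts: 'pipeline-report.txt'
}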

Jenkins Job DSL Plugin: How to Modify Parameters on other jobs

I want to create a job in Jenkins which modifies an existing parameter on another job.
I'm using the Job DSL Plugin. The code I'm using is:
job('jobname') {
    using('jobname')
    parameters {
        choiceParam('PARAMETER1', ['newValue1', 'newValue2'], '')
    }
}
However, this only adds another parameter with the same name in the other job.
I tried the alternative of deleting all parameters and starting from scratch, but I haven't found a way to do that using Job DSL (not even with the Configure block).
Another alternative would be to define the other job completely from scratch, but that would make the job definition too complicated, especially if I want to apply this change to many jobs at a time.
Is there a way to edit or delete lines in the config.xml file using the Job DSL plugin?
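One possible approach, sketched below and not verified against every Job DSL version: do all of the parameter handling inside a single Configure block, removing the parameter definitions inherited via using() and rebuilding the choice parameter as raw config.xml nodes. The node names mirror what Jenkins writes to config.xml for a choice parameter, so diff the generated XML against a hand-configured job before trusting it:

job('jobname') {
    using('jobname')
    configure { project ->
        def properties = project / 'properties'
        // drop the parameter definitions copied in from the existing job
        properties.children().removeAll { it.name() == 'hudson.model.ParametersDefinitionProperty' }

        // re-create the property with only the new choice parameter
        def choice = properties
                .appendNode('hudson.model.ParametersDefinitionProperty')
                .appendNode('parameterDefinitions')
                .appendNode('hudson.model.ChoiceParameterDefinition')
        choice.appendNode('name', 'PARAMETER1')
        choice.appendNode('description', '')
        def choicesList = choice.appendNode('choices', [class: 'java.util.Arrays$ArrayList'])
                                .appendNode('a', [class: 'string-array'])
        choicesList.appendNode('string', 'newValue1')
        choicesList.appendNode('string', 'newValue2')
    }
}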

Dynamically grouping jenkins job results

I've had a dig around but can't find an elegant solution for what I want to do, so I hope some of you may be able to offer some suggestions. I've also asked this question on a jenkins forum, but no takers.
I want to be able to run a jenkins parent job with parameters that will feed down to triggered jobs, and then group all the job run results in a view dynamically.
The use case I'm trying to cover is: We have 10+ different Jenkins jobs that run suites of tests. I want to simply manage a run of all those jobs against a specific code branch, on a specific test environment, and see the results (in one view) for only that run. The complication is that the same Jenkins job may be run against another release or test environment, and I don't want to see those results.
We already have the parent job triggering children with parameters, but I can't figure out how best to group the results.
I know I can create filters for views, but the names of Jenkins jobs are static, and I want the view created at runtime, without having to build it myself. We do use the 'Set Build description' plugin, so I could create a view that filters for a unique build descriptor, or something similar. But there doesn't seem to be a way to create views with filters programmatically.
Another consideration is cleanup: I wouldn't want a year's worth of views clogging the view list, so I need a way to clear out old runs too.
Any ideas to kick me off?
For grouping the reports you can use simple logic instead of hunting for a Jenkins plugin. You can place all the result files (preferably XMLs) in a common folder/file server, and at the end of the execution of all the suites (jobs) trigger a common job which processes all the XML files and generates a combined report. This way you get both consolidated and individual reports.
I have done it using the Perf Publisher plugin, which processes the XMLs and gives a nicely aggregated report.
Job1 ----> Report1 ----> Move report to report folder
Job2 ----> Report2 ----> Move report to report folder
Job3 ----> Report3 ----> Move report to report folder
.
.
.
Job n ----> Report n ----> Move report to report folder
So after completion of job n, trigger the Report job, which will operate on the "report" folder containing all the reports!
Hope it helps!
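If you do not use Perf Publisher, the same idea works with a few lines of Groovy in the final "Report" job. In this sketch the shared folder path and the 'status' attribute are assumptions about your result files, not something from the setup above:

// Merge one line per suite result into a single consolidated report.
def reportDir = new File('/shared/reports')
def lines = reportDir.listFiles()
        .findAll { it.name.endsWith('.xml') }
        .collect { file ->
            def result = new XmlSlurper().parse(file)
            "${file.name}: ${result.@status}"   // '@status' is an assumed attribute in the result XML
        }
new File(reportDir, 'consolidated-report.txt').text = lines.join('\n')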
I have a partial solution:
All jobs accept a parameter called VIEW_IDENTIFIER.
Parent job is kicked off with a unique VIEW_IDENTIFIER being set, and all the child jobs have that passed into them when run.
After all jobs are run I edit a Jenkins view that has a 'Job Filter -> Parameterized Jobs Filter -> Name = VIEW_IDENTIFIER, Value = my unique ID set for the run'.
This results in all jobs run with that unique ID being grouped in one single view for review.
The shame is that I still have to edit the Job Filter manually.
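The manual edit could probably be scripted as well. A script console sketch (the run identifier, job type, and view handling are illustrative, not a tested recipe) that builds a view for one run by adding every project whose last build carried the matching VIEW_IDENTIFIER:

import hudson.model.ListView
import hudson.model.ParametersAction

def runId = 'release-2.3-env-qa'        // the unique VIEW_IDENTIFIER used for this run
def jenkins = Jenkins.instance

def view = jenkins.getView(runId)
if (view == null) {
    view = new ListView(runId, jenkins)
    jenkins.addView(view)
}

jenkins.getAllItems(hudson.model.AbstractProject).each { project ->
    def value = project.lastBuild?.getAction(ParametersAction)?.getParameter('VIEW_IDENTIFIER')?.value
    if (value == runId) {
        view.add(project)               // group this run's jobs in the new view
    }
}
// stale run views could later be removed with jenkins.deleteView(view)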

Jenkins Plugin: create a new job programmatically

How to create a new Jenkins job within a plugin?
I have a Jenkins plugin that listens to a message queue and, when a message arrives, fires a new event to create a new job (or start a run).
I'm looking for something like:
Job myJob = new Job(...);
I know I can use the REST API or the CLI, but since I'm inside the plugin I'd prefer an internal Java solution.
Use the Job DSL Plugin.
From the plugin page:
Jenkins is a wonderful system for managing builds, and people love using its UI to configure jobs. Unfortunately, as the number of jobs grows, maintaining them becomes tedious, and the paradigm of using a UI falls apart. Additionally, the common pattern in this situation is to copy jobs to create new ones, these "children" have a habit of diverging from their original "template" and consequently it becomes difficult to maintain consistency between these jobs.
The Jenkins job-dsl-plugin attempts to solve this problem by allowing jobs to be defined with the absolute minimum necessary in a programmatic form, with the help of templates that are synced with the generated jobs. The goal is for your project to be able to define all the jobs they want to be related to their project, declaring their intent for the jobs, leaving the common stuff up to a template that were defined earlier or hidden behind the DSL.
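For context, a minimal seed script of the kind the plugin's "Process Job DSLs" build step consumes looks like this (the job name, repository, and command are placeholders); each job() call becomes a generated Jenkins job:

job('generated-example') {
    description('Created by a Job DSL seed job')
    scm {
        git('https://example.org/repo.git')   // placeholder repository
    }
    steps {
        shell('./run-tests.sh')               // placeholder build step
    }
}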
You can create a new hudson/jenkins job by simply doing:
FreeStyleProject proj = Hudson.getInstance().createProject(FreeStyleProject.class, NAMEOFJOB);
If you want to be able to handle updates (and you already have the config.xml):
import hudson.model.AbstractItem
import javax.xml.transform.stream.StreamSource
import jenkins.model.Jenkins

final jenkins = Jenkins.getInstance()
final itemName = 'name-of-job-to-be-created-or-updated'
final configXml = new FileInputStream('/path/to/config.xml')

final item = jenkins.getItemByFullName(itemName, AbstractItem.class)
if (item != null) {
    item.updateByXml(new StreamSource(configXml))
} else {
    jenkins.createProjectFromXML(itemName, configXml)
}
Make sure you have the Jenkins core .jar file on your classpath before doing this, though.
