Jenkins Plugin: create a new job programmatically

How to create a new Jenkins job within a plugin?
I have a Jenkins plugin that listens to a message queue and, when a message arrives, fires a new event to create a new job (or start a run).
I'm looking for something like:
Job myJob = new Job(...);
I know I can use the REST API or the CLI, but since I'm inside a plugin I'd prefer a Java-internal solution.

Use Job DSL Plugin.
From the plugin page:
Jenkins is a wonderful system for managing builds, and people love using its UI to configure jobs. Unfortunately, as the number of jobs grows, maintaining them becomes tedious, and the paradigm of using a UI falls apart. Additionally, the common pattern in this situation is to copy jobs to create new ones, these "children" have a habit of diverging from their original "template" and consequently it becomes difficult to maintain consistency between these jobs.
The Jenkins job-dsl-plugin attempts to solve this problem by allowing jobs to be defined with the absolute minimum necessary in a programmatic form, with the help of templates that are synced with the generated jobs. The goal is for your project to be able to define all the jobs they want to be related to their project, declaring their intent for the jobs, leaving the common stuff up to a template that were defined earlier or hidden behind the DSL.
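For reference, a minimal seed-job script in the DSL looks roughly like this (the job name, repository URL, schedule and shell command below are placeholders):
// Runs inside a Job DSL seed job; generates (or regenerates) this job on every seed run
job('example-generated-job') {
    scm {
        git('https://example.com/repo.git')
    }
    triggers {
        scm('H/15 * * * *')
    }
    steps {
        shell('./build.sh')
    }
}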

You can create a new Hudson/Jenkins job simply by doing:
FreeStyleProject proj = Hudson.getInstance().createProject(FreeStyleProject.class, NAMEOFJOB);
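On newer cores the same call is usually written against jenkins.model.Jenkins rather than the deprecated Hudson class; a minimal Groovy sketch (the job name is just an example):
import hudson.model.FreeStyleProject
import jenkins.model.Jenkins

// createProject persists the new item and registers it with Jenkins
FreeStyleProject proj = Jenkins.get().createProject(FreeStyleProject.class, 'my-new-job')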
If you want to be able to handle updates (and you already have the config.xml):
import hudson.model.AbstractItem
import javax.xml.transform.stream.StreamSource
import jenkins.model.Jenkins

final jenkins = Jenkins.getInstance()
final itemName = 'name-of-job-to-be-created-or-updated'
final configXml = new FileInputStream('/path/to/config.xml')

// Update the job in place if it already exists, otherwise create it from the XML
final item = jenkins.getItemByFullName(itemName, AbstractItem.class)
if (item != null) {
    item.updateByXml(new StreamSource(configXml))
} else {
    jenkins.createProjectFromXML(itemName, configXml)
}
Make sure, though, that you have the Jenkins core .jar on your classpath before doing this.

Related

How to know where a Jenkins job is created in groovy code

When a Jenkins JobDSL seed job finishes creating other jobs, it shows a list of generated jobs.
For example:
GeneratedJob{name='my_example'}
GeneratedJob{name='my_mvn-test'}
Is there a way to print out at which line and in which file a job is created?
For instance:
10: job ("${prefix}-${prodname}-${suffix}") {
11: ...
...
20: }
Here line 10 is the location of job creation.
We have a large number of jobs generated from different DSL/Groovy source files, and those jobs don't use fixed names in the code, so it's hard to find where a job is created without knowing the source code well.
I've searched for things like Job DSL API hooks, but with no luck…
You can change your Job DSL code to print that information: you should be able to add println calls with custom messages anywhere in your code and log whatever is relevant to you.
It requires changing your code, but it's a one-off cost, and it's worth weighing against your current maintenance cost.
You can even add this information to the description of each job, so you don't depend on the seed job's logs.
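For example, a minimal sketch of that idea (the source-file label is something you maintain yourself, and the job name reuses the variables from the question):
// Label maintained by hand (or passed in) for each DSL source file
def sourceFile = 'jobs/myProduct.groovy'
def jobName = "${prefix}-${prodname}-${suffix}"

// Shows up in the seed job's console log next to the GeneratedJob line
println "Seed: creating ${jobName} from ${sourceFile}"

job(jobName) {
    // Stamp the origin into the generated job itself
    description("Generated by the seed job; declared in ${sourceFile}")
    // ... rest of the job definition ...
}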

Create Jenkins WorkflowMultibranchProject job with groovy init

I am automating the configuration of Jenkins masters to get to a one-click instantiation. We have 6 standard jobs we create for each instance, and I'd like to be able to create them via init.groovy.d scripts, but I haven't found examples for this type of job.
We use the CloudBees Bitbucket Team/Project plugin, which ends up creating jobs of type WorkflowMultibranchProject with additional configuration to connect to our on-prem Bitbucket instance.
Does anyone have samples of creating such a job via Groovy? Or am I better off trying to use Job DSL to create the job (I'm doing that already for a Mother Seed job)?
[UPDATE] : with the help of the answer below came up with a full sample creating an entire Bitbucket Team/Project Job: https://github.com/redfive/jenkins-init/blob/master/init.groovy.d/core-jobs.groovy
Having used Job DSL, I'm 50/50 on whether it is easier than plain Groovy (Job DSL lacks support for some of the config options).
An example for the similar OrganizationFolder can be found in coderanger's article at https://coderanger.net/jenkins/:
import jenkins.branch.OrganizationFolder
import jenkins.model.Jenkins
import org.jenkinsci.plugins.github_branch_source.BranchDiscoveryTrait
import org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator
import org.jenkinsci.plugins.github_branch_source.OriginPullRequestDiscoveryTrait

def jenkins = Jenkins.getInstance()
// Create the top-level item if it doesn't exist already.
def folder = jenkins.items.isEmpty() ? jenkins.createProject(OrganizationFolder, 'MyName') : jenkins.items[0]
// Set up GitHub source.
def navigator = new GitHubSCMNavigator(githubOrg)
navigator.credentialsId = cred.id // Loaded above in the GitHub section of the article.
navigator.traits = [
    // Too many repos to scan everything. This trims to a svelte 265 repos at the time of writing.
    new jenkins.scm.impl.trait.WildcardSCMSourceFilterTrait('*-cookbook', ''),
    // We have a ton of old branches so try to limit to just master and PRs for now.
    new jenkins.scm.impl.trait.RegexSCMHeadFilterTrait('^(master|PR-.*)'),
    new BranchDiscoveryTrait(1), // Exclude branches that are also filed as PRs.
    new OriginPullRequestDiscoveryTrait(1), // Merge the pull request with the current target branch revision.
]
folder.navigators.replace(navigator)
The next time I set up an instance, I'll likely give that a try.
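For the Bitbucket-backed WorkflowMultibranchProject itself, a rough init.groovy.d sketch could look like the following. The project name, repository owner/name, server URL and credentials id are placeholders, and the BitbucketSCMSource setters assume a recent version of the cloudbees-bitbucket-branch-source plugin; the GitHub sample linked in the update above is the fuller reference.
import com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource
import jenkins.branch.BranchSource
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultibranchProject

def jenkins = Jenkins.getInstance()

// Create the multibranch pipeline project if it doesn't exist yet
def project = jenkins.getItem('my-bitbucket-project') ?:
        jenkins.createProject(WorkflowMultibranchProject, 'my-bitbucket-project')

// Point it at an on-prem Bitbucket repository (owner and repo are placeholders)
def source = new BitbucketSCMSource('my-bitbucket-team', 'my-repo')
source.serverUrl = 'https://bitbucket.example.com'
source.credentialsId = 'bitbucket-credentials-id'

// Replace whatever branch sources the project had with this one
project.sourcesList.clear()
project.sourcesList.add(new BranchSource(source))
project.save()
// Branch indexing ("Scan Multibranch Pipeline Now") can then be triggered from the UI or a seed job.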

Jenkins Job DSL Plugin: How to Modify Parameters on other jobs

I want to create a job in Jenkins which modifies an existing parameter on another job.
I'm using the Job DSL Plugin. The code I'm using is:
job('jobname') {
    using('jobname')
    parameters {
        choiceParam('PARAMETER1', ['newValue1', 'newValue2'], '')
    }
}
However, this only adds another parameter with the same name in the other job.
I'm trying the alternative of deleting all parameters and starting from scratch, but I haven't found a way to do that using Job DSL (not even with the Configure block).
Another alternative would be to define the other job completely from scratch, but that would make the job too complicated, especially if I want to apply this change to many jobs at a time.
Is there a way to edit or delete lines in the config.xml file using the Job DSL plugin?

Jenkins get/set data external to job

Does Jenkins have any way to set global properties from a job? We have many needs for this. Specifically, we have a number of slaves across Unix and Windows, with various permission setups, so it's not easy to have a shared file system. We also have several maturity levels that we promote through: for instance, we want to promote some build number to UAT, and then promote whatever number is in UAT to training, and so on. So in the "release to UAT" job we want to store which build number was released, and read that from the "release to training" job. At the moment we are hacking around this by restricting those jobs to run on the same slave and writing the number to a file, which is very much not ideal.
I may not have totally understood your question, but you can do a lot with the built-in Groovy scripting support in Jenkins, including reading parameters from other jobs and rewriting or initializing the parameters in the current job. You can use parameters like this to record information that can be retrieved on demand by other jobs.
For instance you can find the build number of the last successful build of a certain project:
import hudson.model.*
def hif = Hudson.instance
// Most recent successful build of the project named MY_PROJECTNAME
def a = hif.getItems(hudson.model.Project).
        find { it.displayName.toUpperCase() == 'MY_PROJECTNAME' }.
        getBuilds().findAll { it.result == Result.SUCCESS }.first()
out.println a.number // build number
out.println a.buildVariableResolver.resolve('someVariable') // some parameter used to call a
(you could include any other criteria at this point)
If you want to save information to a parameter that can later be read by another build step or another job, then you first create the parameter in the job config, and then write to it in code like so:
import hudson.model.*

def hif = Hudson.instance
// 'build' is the current build, available as a binding in a system Groovy build step
def buildMap = build.getBuildVariables()
buildMap['MySpecialVar'] = 'SomeValue'
setBuildParameters(buildMap)

// Replace the build's ParametersAction with one containing the updated values
def setBuildParameters(map) {
    def npl = new ArrayList<StringParameterValue>()
    for (e in map) {
        npl.add(new StringParameterValue(e.key.toString(), e.value.toString()))
    }
    def newPa = null
    def oldPa = build.getAction(ParametersAction.class)
    if (oldPa != null) {
        build.actions.remove(oldPa)
        newPa = oldPa.createUpdated(npl)
    } else {
        newPa = new ParametersAction(npl)
    }
    build.actions.add(newPa)
}
Combining these techniques you could for instance:
Save a bunch of information as 'output parameters' in job one
Find the most recent successful instance of job one and read its parameters
If necessary save those parameters to job2's parameter list so they are accessible from other build steps.
OR
If you are happy to use files then you may be able to use the archive plugin, where you would write to a file and then archive it as a post build action. The file would be saved to the master, and you could use the 'copy artifacts from another project' option in the second build to retrieve the file. You can use parameter filters and the techniques above to pick the right build.
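If you go the file route, the reading side can also be scripted; a rough system Groovy sketch (the job name, file name and property key are placeholders, and it assumes the default filesystem artifact storage on the master):
import hudson.model.*

// Last successful "release to UAT" build (job name is a placeholder)
def uatJob = Hudson.instance.getItemByFullName('release-to-uat', Job)
def lastGood = uatJob.getLastSuccessfulBuild()

// Read the archived properties file straight from that build's artifact directory
def props = new Properties()
new File(lastGood.artifactsDir, 'promoted.properties').withInputStream { props.load(it) }
println "Build promoted to UAT: ${props.getProperty('PROMOTED_BUILD_NUMBER')}"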
Setting an environment variable permanently is entirely dependent on the underlying Operating System.
For example, on Windows the SetX command can be used. Note, however, that SetX only takes effect for processes started afterwards, which inherit the global configuration. So if you run SetX and then run another job, it will not notice the change. However, if you run SetX and then restart the Jenkins process (from which all child jobs inherit their environment variables), the other job will notice the change.
Not sure how to set permanent variables in Linux, but a quick Google search returns this answer: https://unix.stackexchange.com/questions/117467/how-to-permanently-set-environmental-variables

quartz grails multi-entity environment

I have a web-app (Grails 2.3.5, Quartz plugin) with multiple users. Now I want my users to be able to schedule jobs using Quartz. I wonder what the best approach is to separate one user's triggers from another user's.
e.g. provide a list of all scheduled tasks for a given user
Are there any recommendations on how to make this distinction?
Implementing some scheme for naming your triggers would likely be the best approach here. That way you can query the triggers for a job and filter them by some type of matching pattern.
It's really up to you to decide how you want to manage the visibility and management of the triggers. Using the trigger name seems to be the most logical approach in my own opinion.
Alternatively, you could build a framework (e.g. Domain model) that relates the triggers to a user.
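A minimal sketch of that domain-model alternative (class and field names here are just examples):
// Hypothetical Grails domain class relating a scheduled trigger to the user who created it
class UserTrigger {
    String username
    String triggerName
    String triggerGroup
    Date dateCreated

    static constraints = {
        triggerName unique: true
    }
}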
Update
In light of the content of your comment I'd like to offer you a glimpse into how you can dynamically add a trigger to an existing job. This is only an example to help you get further down the path of accomplishing the goal you have.
import org.quartz.CronScheduleBuilder
import org.quartz.Trigger
import org.quartz.TriggerBuilder
...
def jobManagerService

String cronExpression = ... // whatever the expression is

Trigger trigger = TriggerBuilder.newTrigger().
        withIdentity("UniqueNameOfYourTriggerHere-UserId").
        withPriority(6).
        forJob("com.example.package.JobClassNameJob", "groupName").
        withSchedule(CronScheduleBuilder.cronSchedule(cronExpression)).
        build()

// if you need job parameters
trigger.jobDataMap.putAll([param1: 'example'])

jobManagerService.getQuartzScheduler().scheduleJob(trigger)
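To then produce the per-user listing asked about in the question, a minimal sketch building on the naming scheme above (the job name, group and the userId variable are placeholders):
import org.quartz.JobKey

def quartzScheduler = jobManagerService.getQuartzScheduler()
// All triggers attached to the job, filtered to the ones whose name ends with this user's id
def userTriggers = quartzScheduler.
        getTriggersOfJob(JobKey.jobKey("com.example.package.JobClassNameJob", "groupName")).
        findAll { it.key.name.endsWith("-${userId}") }

userTriggers.each { t ->
    println "${t.key.name} next fires at ${t.nextFireTime}"
}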
