I am automating the configuration of Jenkins masters to get to one-click instantiation. We have six standard jobs we create for each instance, and I'd like to be able to create them via init.groovy.d scripts, but I haven't found examples for this type of job.
We use the CloudBees Bitbucket Team/Project plugin, which ends up creating jobs of type WorkflowMultiBranchProject with additional configuration to connect to our on-prem Bitbucket instance.
Does anyone have samples of creating such a job via Groovy? Or am I better off trying to use Job DSL to create the job (I'm doing that already for a Mother Seed job)?
[UPDATE]: with the help of the answer below, I came up with a full sample that creates an entire Bitbucket Team/Project job: https://github.com/redfive/jenkins-init/blob/master/init.groovy.d/core-jobs.groovy
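For anyone after a starting point, here is a minimal sketch of that kind of init.groovy.d script, assuming a recent version of the Bitbucket Branch Source plugin; the job name, project/repo, server URL, and credentials ID below are placeholders:

import com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource
import jenkins.branch.BranchSource
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject

def jenkins = Jenkins.instance
// Create the multibranch project if it doesn't exist yet.
def job = jenkins.getItem('my-team-job') ?: jenkins.createProject(WorkflowMultiBranchProject, 'my-team-job')
// Point a Bitbucket branch source at the on-prem server.
def source = new BitbucketSCMSource('MYPROJECT', 'my-repo') // repoOwner (project key), repository
source.serverUrl = 'https://bitbucket.example.com'
source.credentialsId = 'bitbucket-credentials'
job.sourcesList.clear()
job.sourcesList.add(new BranchSource(source))
job.save()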
Having used Job DSL, I'm 50/50 on whether it is easier than plain Groovy, as Job DSL lacks support for some of the config options.
An example for the similar OrganizationFolder type can be found in coderanger's article at https://coderanger.net/jenkins/:
import jenkins.branch.OrganizationFolder
import jenkins.model.Jenkins
import jenkins.scm.impl.trait.RegexSCMHeadFilterTrait
import jenkins.scm.impl.trait.WildcardSCMSourceFilterTrait
import org.jenkinsci.plugins.github_branch_source.BranchDiscoveryTrait
import org.jenkinsci.plugins.github_branch_source.GitHubSCMNavigator
import org.jenkinsci.plugins.github_branch_source.OriginPullRequestDiscoveryTrait

def jenkins = Jenkins.instance
// Create the top-level item if it doesn't exist already.
def folder = jenkins.items.isEmpty() ? jenkins.createProject(OrganizationFolder, 'MyName') : jenkins.items[0]
// Set up the GitHub source.
def navigator = new GitHubSCMNavigator(githubOrg) // githubOrg and cred are defined earlier in the article.
navigator.credentialsId = cred.id
navigator.traits = [
    // Too many repos to scan everything. This trims to a svelte 265 repos at the time of writing.
    new WildcardSCMSourceFilterTrait('*-cookbook', ''),
    // We have a ton of old branches, so try to limit to just master and PRs for now.
    new RegexSCMHeadFilterTrait('^(master|PR-.*)'),
    new BranchDiscoveryTrait(1), // Exclude branches that are also filed as PRs.
    new OriginPullRequestDiscoveryTrait(1), // Merge the pull request with the current target branch revision.
]
folder.navigators.replace(navigator)
The next time I set up an instance, I'll likely give that a try.
Related
On Mercurial, I've implemented a hook in my hgrc file that fires whenever a change occurs in the repository (i.e. tagging or committing) and notifies Jenkins. Here is my hook code:
curl -X POST http://tokenusername:115d59a462df750d4f12347975b3d691cf@127.0.0.1:8080/job/pipelinejob/buildWithParameters/mercurial/notifyCommit?url=http://127.0.0.1:85/hg/experimentrepoistory?token=1247
So there's no issue with my hook notifying Jenkins that a change has occurred, and the pipeline executes, but for some reason I am having trouble getting the commit id, the author's name, etc. for the commit. I went to the script console in Jenkins and wrote the following Groovy code to see if the changeset data from Mercurial transferred over to Jenkins (all the necessary libraries are imported):
def job = hudson.model.Hudson.instance.getItem("pipelinejob")
def builds = job.getBuilds()
def thisBuild = builds[0] // the most recent build
println('Lets test Mercurial fields ' + thisBuild.getEnvironment()['MERCURIAL_REVISION']) // prints: Lets test Mercurial fields null
It makes me think that MERCURIAL_REVISION for some reason wasn't defined, even though I pointed at a build that should have the changeset info. I was reading the documentation at https://javadoc.jenkins.io/plugin/mercurial/hudson/plugins/mercurial/MercurialChangeSet.html#MercurialChangeSet-- which lists a bunch of functions, like getCommitId() and getNode(), that return the information I need. The problem is I'm not entirely sure how to get hold of the MercurialChangeSet for the pipelinejob builds that, in theory, should carry the Mercurial commit id. That's why I wanted to know whether I missed something obvious regarding accessing MERCURIAL_REVISION.
So I found out that I need to enable "Pipeline script from SCM" and put the Jenkinsfile with the pipeline code inside my workspace directory in order to get the changeset information. I am not entirely sure why this works, since I would have thought the Jenkinsfile needed to be in the repository directory of the SCM.
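Once the job checks out from SCM, the changeset data is also reachable from the pipeline script itself; a sketch, assuming a "Pipeline script from SCM" job with the Mercurial plugin installed:

node {
    checkout scm
    // currentBuild.changeSets lists the changes that triggered this build.
    for (changeLog in currentBuild.changeSets) {
        for (entry in changeLog.items) {
            echo "commit ${entry.commitId} by ${entry.author}: ${entry.msg}"
        }
    }
}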
I'm using the Pipeline plugin under Jenkins.
My job basically uses a file called "Jenkinsfile" to get the various steps to run.
My goal is to let the job use a different file name, for example:
myJenkinsFile
build_JenkinsFile
deploy_JenkinsFile
buildSteps
...
Since "Jenkinsfile" seems to be the conventional name, is there any way to change it, even if that's not very clean? Suggestions?
In the Pipeline section of the job configuration page, choose "Pipeline script from SCM" and you can then set the Script Path to the custom file name that Jenkins will look for.
If you want a further level of customization, you can also use the Remote File Plugin, which allows you to keep your pipeline in a separate repository and use it across multiple repositories/branches (and you can still customize the name of the file).
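If you generate such jobs with Job DSL, the same setting is exposed as scriptPath; a sketch, with a placeholder job name and repository URL:

pipelineJob('deploy-job') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://example.com/my-repo.git')
                    }
                    branch('*/master')
                }
            }
            // Look for this file instead of the default "Jenkinsfile".
            scriptPath('deploy_JenkinsFile')
        }
    }
}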
I want to set up a pipeline that is triggered automatically by pull requests on a GitHub project and then builds all the repositories in it. I found this article and followed the instructions, as it was similar to what I required, but I'm currently stuck: I can't get the pipeline to trigger and build multiple repositories in the same GitHub project every time a PR is made against even one of the repositories.
I've attached this diagram to bring more clarity into my issue.
So the goal is: when a pull request is made on Branch 3 of Repository 1, the pipeline is triggered, builds that branch, and then builds all the other repositories in a specified order, i.e. Repository 2, Repository 3, etc. of the Working Project.
Your help would be much appreciated and I think a solution for this would benefit the CI DevOps community very much. Thanks!
Give the following a try - I can't promise it will be exact, but it should get you going in the right direction.
The first thing you want to do is have a consistent Jenkinsfile across each of the repositories. You could do this in a number of different ways, but one way to accomplish it is to use external groovy pipelines, so that the logic can be kept consistent across the repos. An example of this is located here. Copying the Jenkinsfile across each of the repositories would also work, but a single source of truth is generally a better approach.
node {
    deleteDir()
    git env.flowScm
    def flow = load 'pipeline.groovy'
    stash includes: '**', name: 'flowFiles'

    stage 'Checkout'
    checkout scm // shorthand for checking out the repository that triggered the build

    flow.runFlow()
}
The pipeline.groovy file, which contains the actual pipeline, would look like this:
def runFlow() {
    // your pipeline code
}

// Has to end with 'return this;' in order to be usable as a loaded library
return this;
Once you've got each of your triggers using the same pipeline logic, you can take advantage of the dir command to clone and work with repositories other than the one that triggered the build. An example of this is located here.
node('ATLAS && Linux') {
    dir('CalibrationResults') {
        git url: 'https://github.com/AtlasBID/CalibrationResults.git'
    }
    dir('Combination') {
        git url: 'https://github.com/AtlasBID/Combination.git'
    }
    dir('CombinationBuilder') {
        git url: 'https://github.com/AtlasBID/CombinationBuilder.git'
    }
    sh('ls')
    sh('. CombinationBuilder/build.sh')
}
Putting the two steps together should achieve what you are after in this instance.
How to create a new Jenkins job within a plugin?
I have a Jenkins plugin that listens to a message queue and, when a message arrives, fires a new event to create a new job (or start a run).
I'm looking for something like:
Job myJob = new Job(...);
I know I can use the REST API or the CLI, but since I'm inside a plugin I'd prefer an internal Java solution.
Use Job DSL Plugin.
From the plugin page:
Jenkins is a wonderful system for managing builds, and people love using its UI to configure jobs. Unfortunately, as the number of jobs grows, maintaining them becomes tedious, and the paradigm of using a UI falls apart. Additionally, the common pattern in this situation is to copy jobs to create new ones, these "children" have a habit of diverging from their original "template" and consequently it becomes difficult to maintain consistency between these jobs.
The Jenkins job-dsl-plugin attempts to solve this problem by allowing jobs to be defined with the absolute minimum necessary in a programmatic form, with the help of templates that are synced with the generated jobs. The goal is for your project to be able to define all the jobs they want to be related to their project, declaring their intent for the jobs, leaving the common stuff up to a template that were defined earlier or hidden behind the DSL.
You can create a new hudson/jenkins job by simply doing:
FreeStyleProject proj = Hudson.getInstance().createProject(FreeStyleProject.class, NAMEOFJOB);
If you want to be able to handle updates (and you already have the config.xml):
import hudson.model.AbstractItem
import javax.xml.transform.stream.StreamSource
import jenkins.model.Jenkins

final jenkins = Jenkins.getInstance()
final itemName = 'name-of-job-to-be-created-or-updated'
final configXml = new FileInputStream('/path/to/config.xml')

final item = jenkins.getItemByFullName(itemName, AbstractItem.class)
if (item != null) {
    item.updateByXml(new StreamSource(configXml))
} else {
    jenkins.createProjectFromXML(itemName, configXml)
}
Make sure, though, that you have the Jenkins core .jar file before doing this.
Is there any Jenkins plugin that helps with the following:
if a directory <XXX> is present in SVN folder <GoRoCo>, then the <GoRoCo>_<XXX> Jenkins job is called?
Example:
In job "TEST" , I specify parameters like directory name (A, B , C) and folder name (G1R2) then job "TEST" should trigger the jobs "G1R2_A" , "G1R2_B" and "G1R2_C"
Use the Parameterized Trigger Plugin. When specifying jobs to call in the plugin, you can use tokens, as in JOB_${PARAM1}_${PARAM2}.
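Applied to the example above (the parameter name is hypothetical): if job "TEST" defines FOLDER=G1R2 and the directory values A, B, C, the trigger's "Projects to build" field could list
${FOLDER}_A, ${FOLDER}_B, ${FOLDER}_C
which resolves to G1R2_A, G1R2_B, G1R2_C at build time.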
Take a look at this plugin; I think it does exactly what you are looking for:
https://wiki.jenkins-ci.org/display/JENKINS/Files+Found+Trigger
Use the Build Flow plugin.
With the help of this plugin you can run as many jobs as you need, with or without parameters.
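A minimal flow DSL sketch along those lines, using the job names from the example above (the parameter name is a placeholder):

// Run the downstream jobs in order...
build("G1R2_A")
build("G1R2_B", SOME_PARAM: "value") // parameters are passed as named arguments
// ...or fan them out in parallel:
parallel(
    { build("G1R2_B") },
    { build("G1R2_C") }
)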
Alternatively, use a script to create a property file with the required parameters for each modified project and place the files in the workspace directory. You can then use the parameterized trigger plugin to trigger the downstream projects with those parameters.
Note: you might also have to delete those property files after triggering the downstream projects.