Is it possible to Scan a Multibranch Pipeline to detect the branches with a Jenkinsfile, but without the pipeline execution?
My projects have different branches, and I don't want all the child branch pipelines with a Jenkinsfile to start executing when I launch a build scan from the parent multibranch pipeline.
In the Branch Sources section you can add a property named Suppress automatic SCM triggering.
This prevents Jenkins from automatically building every branch that has a Jenkinsfile.
You can also do it programmatically:
import jenkins.branch.*
import jenkins.model.Jenkins

for (f in Jenkins.instance.getAllItems(jenkins.branch.MultiBranchProject.class)) {
    if (f.parent instanceof jenkins.branch.OrganizationFolder) {
        continue;
    }
    for (s in f.sources) {
        def prop = new jenkins.branch.NoTriggerBranchProperty();
        def propList = [prop] as jenkins.branch.BranchProperty[];
        def strategy = new jenkins.branch.DefaultBranchPropertyStrategy(propList);
        s.setStrategy(strategy);
    }
    f.computation.run()
}
This is a Groovy snippet you can run in the Jenkins Script Console; it performs the scan but does not start new builds for the discovered branches.
If you are using Job DSL, you could simply do this, and it will scan everything without actually running the build the first time you index:
organizationFolder('Some folder name') {
    buildStrategies {
        skipInitialBuildOnFirstBranchIndexing()
    }
}
To add to @Stqs's answer, you could also set noTriggerBranchProperty using the Job DSL plugin, e.g.:
multibranchPipelineJob('example') {
    ...
    branchSources {
        branchSource {
            ...
            strategy {
                defaultBranchPropertyStrategy {
                    props {
                        // Suppresses the normal SCM commit trigger coming from branch indexing
                        noTriggerBranchProperty()
                        ...
                    }
                }
            }
        }
    }
    ...
}
organizationFolder('my-folder') {
    buildStrategies {
        buildRegularBranches()
        buildChangeRequests {
            ignoreTargetOnlyChanges true
            ignoreUntrustedChanges false
        }
    }
}
Note: the basic-branch-build-strategies plugin is required.
REFERENCES:
https://issues.jenkins.io/browse/JENKINS-63799
http://jenkins-ci.361315.n4.nabble.com/JobDSL-an-example-of-configuring-a-bitbucket-source-trait-of-bitbucketForkDiscovery-in-the-multibrand-td5014968.html#a5015085
After much struggle I've found this solution. It only avoids triggering the build during branch indexing; it does not disable automatic builds after a commit. Just add it to the first stage of your project:
when {
    not {
        expression {
            def causes = currentBuild.getBuildCauses()
            String causesClass = causes._class[0]
            return causesClass.contains('BranchIndexingCause')
        }
    }
}
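For context, here is where that block sits in a minimal declarative pipeline (a sketch; the stage name and echo step are placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            // Skip this stage when the run was caused by branch indexing
            when {
                not {
                    expression {
                        def causes = currentBuild.getBuildCauses()
                        return causes._class[0].contains('BranchIndexingCause')
                    }
                }
            }
            steps {
                echo 'Not triggered by branch indexing; building...'
            }
        }
    }
}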
Please bear with me; the description might be long, but it should give a clear picture of the intent and the issue.
I have used the Job DSL plugin to create a seeder job, which in turn creates two new jobs. I have 2 separate repositories:
One for maintaining Jenkins pipeline scripts.
One for the actual code to build.
First I created a pipeline job in Jenkins which in turn creates the view and the 2 jobs. Config shown below:
The Jenkinsfile given below uses the Job DSL plugin API; it reads the groovy script and creates the required 2 jobs.
node('master') {
    checkout scm
    jobDsl targets: ['dsl/seedJobBuilder.groovy'].join('\n'),
           removedJobAction: 'IGNORE',
           removedViewAction: 'IGNORE',
           lookupStrategy: 'SEED_JOB'
}
seedJobBuilder.groovy creates a DSL pipeline job whose task is to build the actual codebase.
listView('Build Pipelines') {
    description('All build and deploy jobs')
    jobs {
        names(
            'build',
            'deploy'
        )
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
        lastDuration()
        buildButton()
    }
}
def buildCommerce = pipelineJob('build') {
    properties {
        githubProjectUrl("${projectRepo}") // url of actual code repo, not the jenkins script repo
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url("${pipelineRepo}") // jenkins script repo url
                        credentials("somecredentials")
                    }
                    branch('${JENKINS_SCRIPT_BRANCH}')
                }
                scriptPath('pipelines/pipelineBuildEveryDay.groovy')
                lightweight(false)
            }
        }
    }
    triggers {
        githubPush()
    }
}
Config of the above job created by Job DSL:
This job reads the pipelineBuildEveryDay groovy script, checks out the actual codebase, and builds and deploys it.
The place where I am struggling is how to trigger a build of this second job through a GitHub hook or through ghprb, since I don't want to manipulate the second job manually and the git URL of the job is the script repo URL, not the codebase URL. Is it even possible to do this? If yes, what am I missing?
I have the webhook configured.
pipelineBuildEveryDay.groovy
pipeline {
    libraries {
        lib("shared-library@${params.JENKINS_SCRIPT_BRANCH}")
    }
    agent {
        node {
            label 'master'
        }
    }
    options {
        skipDefaultCheckout(true) // No more 'Declarative: Checkout' stage
    }
    stages {
        stage('Crazy Build Pipeline') {
            tools {
                jdk 'java11'
            }
            stages {
                stage('Prepare build name') {
                    steps {
                        script {
                            currentBuild.displayName = "${currentBuild.number}-build"
                        }
                    }
                }
                stage('Checkout') {
                    steps {
                        cleanWs()
                        script {
                            checkoutRepository("${projectDir}", "${params.PROJECT_TAG}", "${params.PROJECT_REPO}")
                        }
                    }
                }
                stage('Run Tests') {
                    steps {
                        echo "Running test coming soon..."
                    }
                }
            }
        }
    }
    // post build actions
    post {
        success {
            echo "success"
        }
        failure {
            echo "failure"
        }
    }
}
Well, the suffering comes to an end. Posting this answer for anyone struggling with a similar sort of issue.
Make sure you uncheck all other trigger types; the only one checked should be the pull request builder.
The part which screwed me up was the Project URL. In my case, the GitHub URL in the SCM section was the Jenkins-scripts repository URL, not the URL of the codebase I want to build. So I tried to use my codebase repository URL in the GitHub Project URL textbox.
But the real problem was using the repository URL in the format 'https://code-base-repo-url.git'; instead it should be 'https://code-base-repo-url'. Sounds stupid? Yeah, I know!
Finally, here is the complete job config pipeline script, in case it helps:
def pipelineRepo = 'https://jenkins-script-repo'
def projectRepo = 'https://code-base-repo-url'
def projectTag = '${GIT_BRANCH}'

def buildCommerce = pipelineJob('build') {
    properties {
        githubProjectUrl("${projectRepo}")
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url("${pipelineRepo}")
                        credentials("use-your-own-user-pass-cred")
                    }
                    branch('${JENKINS_SCRIPT_BRANCH}')
                }
                scriptPath('pipelines/pipelineBuildEveryDay.groovy')
                lightweight(false)
            }
        }
    }
    triggers {
        githubPullRequest {
            admin('use_your_own_admin')
            triggerPhrase('build please')
            useGitHubHooks()
            permitAll()
            displayBuildErrorsOnDownstreamBuilds()
            extensions {
                commitStatus {
                    context('Jenkins')
                    completedStatus('SUCCESS', 'All is well')
                    completedStatus('FAILURE', 'Something went wrong. Investigate!')
                    completedStatus('ERROR', 'Something went really wrong. Investigate!')
                }
            }
        }
    }
}
When using a Jenkins Multibranch Pipeline job, if you choose Suppress automatic SCM triggering, it will stop the job from building after indexing branches (great functionality).
However, for some reason this ALSO kills the ability to trigger the build from SCM events!
Is there any way to stop builds from triggering after branch discovery (branch indexing) but still build normally by SCM events?
You can always add logic to your pipeline to abort on branch indexing causes. For example:
boolean isBranchIndexingCause() {
    def isBranchIndexing = false
    if (!currentBuild.rawBuild) {
        return true
    }
    currentBuild.rawBuild.getCauses().each { cause ->
        if (cause instanceof jenkins.branch.BranchIndexingCause) {
            isBranchIndexing = true
        }
    }
    return isBranchIndexing
}
Adjust the logic to suit your use-case.
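For example, a scripted pipeline could use the helper to bail out early (a sketch; the stage body is a placeholder):
node {
    if (isBranchIndexingCause()) {
        echo 'Triggered by branch indexing; skipping the build.'
        return
    }
    stage('Build') {
        echo 'Building...'
    }
}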
EDIT: The Pipeline Syntax > Global Variables Reference embedded within the Jenkins UI (e.g.: <jenkins url>/job/<pipeline job name>/pipeline-syntax/globals) has information about the currentBuild global variable, which led to some javadocs:
The currentBuild variable, which is of type RunWrapper, may be used to refer to the currently running build. It has the following readable properties:
...
rawBuild:
a hudson.model.Run with further APIs, only for trusted libraries or administrator-approved scripts outside the sandbox; the value will not be Serializable so you may only access it inside a method marked @NonCPS
...
See also: jenkins.branch.BranchIndexingCause
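Given that note, here is a variant of the helper marked @NonCPS (a sketch; it behaves like the function above):
@NonCPS
boolean isBranchIndexingCause() {
    // rawBuild is only available to trusted/approved scripts, and only inside @NonCPS methods
    def build = currentBuild.rawBuild
    if (!build) {
        return true
    }
    return build.getCauses().any { it instanceof jenkins.branch.BranchIndexingCause }
}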
I know this post is very old, but maybe someone still has this issue. First you need to install the basic-branch-build-strategies plugin.
If you are using Jenkins DSL:
buildStrategies {
    buildAllBranches {
        strategies {
            skipInitialBuildOnFirstBranchIndexing()
        }
    }
}
That's not what this functionality is for - https://issues.jenkins-ci.org/browse/JENKINS-41980:
Suppressing SCM Triggers should suppress all builds triggered by a
detected change in the SCM irrespective of how the change was detected
There is a feature request for such functionality in the Jenkins backlog: JENKINS-63673 Allow configuring in NoTriggerBranchProperty which builds should be suppressed. I created a pull request today, so there is a chance it will become part of the Branch API plugin in the future. In the meantime you may use the custom version (see the build).
How to use the JobDSL plugin to configure it automatically:
multibranchPipelineJob('example') {
    // ...
    branchSources {
        branchSource {
            source {
                // ...
            }
            strategy {
                allBranchesSame {
                    props {
                        suppressAutomaticTriggering {
                            strategyId(2)
                        }
                    }
                }
            }
        }
    }
    // ...
}
From what I understand, this happens because the pipeline definition is not read when you set "Suppress Automatic SCM trigger".
And so, all triggers (SCM, upstream...) that you declared in the pipeline won't be known to Jenkins until you run the job a first time.
So if you don't want builds to be triggered by branch indexing, set the "Suppress Automatic SCM trigger" option.
If you want your pipeline to be known to Jenkins, so that it can react to your triggers, you should not set "Suppress Automatic SCM trigger".
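To illustrate the second point: triggers declared in the Jenkinsfile, like the pollSCM trigger below, are only registered with Jenkins after the pipeline has run at least once (a sketch; the cron spec is a placeholder):
pipeline {
    agent any
    triggers {
        // Jenkins only learns about this trigger after the first run of the job
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}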
We can modify the branch-api-plugin accordingly: https://github.com/jenkinsci/branch-api-plugin
Here, src/main/java/jenkins/branch/OverrideIndexTriggersJobProperty.java is the file that decides which tasks get a triggered build.
I have modified the function below so that feature branches are not triggered but are still added to the branch list.
By default it evaluates the checkbox Suppress Automatic SCM trigger.
@Extension
public static class Dispatcher extends Queue.QueueDecisionHandler {

    private static final Logger LOGGER = Logger.getLogger(Dispatcher.class.getName());

    @SuppressWarnings("rawtypes") // untypable
    @Override
    public boolean shouldSchedule(Queue.Task p, List<Action> actions) {
        LOGGER.log(Level.INFO, "[TARUN_DEBUG] TASK NAME : " + p.getName());
        if (p.getName().startsWith("feature") || p.getName().startsWith("bugfix")) {
            return false;
        } else if (p.getName().startsWith("release") || p.getName().equals("master")
                || p.getName().startsWith("develop") || p.getName().startsWith("part of")
                || p.getName().startsWith("PR-")) {
            // these branches fall through to the cause check below
        } else {
            LOGGER.log(Level.INFO, "[TARUN_DEBUG] NOT TRIGGERED " + p.getName());
            return false;
        }
        for (Action action : actions) {
            if (action instanceof CauseAction) {
                for (Cause c : ((CauseAction) action).getCauses()) {
                    if (c instanceof BranchIndexingCause) {
                        if (p instanceof Job) {
                            Job<?,?> j = (Job) p;
                            OverrideIndexTriggersJobProperty overrideProp = j.getProperty(OverrideIndexTriggersJobProperty.class);
                            if (overrideProp != null) {
                                return overrideProp.getEnableTriggers();
                            } else {
                                return true;
                            }
                        }
                    }
                }
            }
        }
        return true;
    }
}
To skip a build triggered by branch indexing, you can place the following code snippet in your pipeline. No extra libraries are required.
when {
    // Run pipeline/stage only if not triggered by branch indexing.
    not {
        triggeredBy 'BranchIndexingCause'
    }
}
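Placed in context, the block goes inside a stage; a minimal declarative sketch (stage name and step are placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            when {
                not {
                    triggeredBy 'BranchIndexingCause'
                }
            }
            steps {
                echo 'Building...'
            }
        }
    }
}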
I am having issues using the Job DSL plugin to automate the creation of a multibranch pipeline job.
The piece I am having issues with is how to set the path to the Jenkinsfile in the repo. I have looked online for documentation but found nothing to help. I have even tried to find example scripts, but multibranch Job DSL scripts are rare on the internet. As a matter of fact, I could not find any that set the Jenkinsfile path.
jobs.groovy
folderName = "${JENKINS_PATH}"
folder(folderName)

multibranchPipelineJob("${folderName}/jenkins_multibranch_devops") {
    branchSources {
        git {
            remote("https://gitlab.com/${REPO_PATH}")
            credentialsId('gitlab_credentials')
            includes('*')
        }
    }
    configure { project ->
        project / factory {
            scriptPath('jenkins/Jenkinsfile')
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(14)
        }
    }
}
Here is what I have, and it's failing because I am obviously missing something, which is why I'm looking for help.
What am I missing, and where can I find documentation if I plan on adding more and more to this jobs.groovy file? The current doc page doesn't help at all.
You can set it using this:
multibranchPipelineJob("${folderName}/jenkins_multibranch_devops") {
branchSources {
git {
remote("https://gitlab.com/${REPO_PATH}")
credentialsId('gitlab_credentials')
includes('*')
}
}
factory {
workflowBranchProjectFactory {
scriptPath('jenkins/Jenkinsfile')
}
}
orphanedItemStrategy {
discardOldItems {
numToKeep(14)
}
}
}
Documentation is available through the Job DSL API viewer in your Jenkins installation: https://{your-jenkins}/plugin/job-dsl/api-viewer/index.html
I have set up some folders (Using Cloudbees Folder Plugin).
It seems like it should be the simplest possible command to tell Jenkins: build every job in Folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any Job in the folder to be built. In other words, adding a job to this build is as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == project.fullName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape):
Add the following to your library:
package myorg;

public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == jobName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this, and it works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and buildAllProjects, which it then launches. In my setup, I launch initBuildAll daily. You could trigger it another way that works for you; we aren't a full-on CI shop, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml;
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false;
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile();
    def file = configXMLFile.getFile();
    InputStream is = new FileInputStream(file);
    job.updateByXml(new StreamSource(is));
    job.save();
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching, I was able to define a "run every job" job in Job DSL format.
The advantage being I can maintain my job configuration in version control, e.g.:
job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}

// Note: 'jobNames' above resolves to getJobNames() via Groovy property access
@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins shared library method (requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
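Assuming the method is exposed by your shared library (for example via a global variable), usage from a pipeline might look like this (a sketch; the library name and folder path are placeholders):
@Library('my-shared-library') _

node {
    // Schedules a build for every AbstractProject whose full name matches the pattern
    triggerItemsInFolder('path/to/folder')
}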
Reference pipeline script to run a parent job that would trigger other jobs, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (which can be found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality. The documentation can be found in the Jenkins JavaDoc.
A simple script that immediately triggers all Multibranch Pipeline projects under the given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
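To preview which projects the pattern matches before triggering anything, a read-only variant of the same snippet:
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject

// Prints the matching project names without scheduling any builds
Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    it.fullName =~ '^folder/subfolder/'
}.each {
    println it.fullName
}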
I have installed the Pipeline plugin, which used to be called the Workflow plugin earlier.
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
I want to know how I can use Job DSL to create and configure a job of type Pipeline.
You should use pipelineJob:
pipelineJob('job-name') {
    definition {
        cps {
            script('logic-here')
            sandbox()
        }
    }
}
You can define the logic by inlining it:
pipelineJob('job-name') {
    definition {
        cps {
            script('''
                pipeline {
                    agent any
                    stages {
                        stage('Stage 1') {
                            steps {
                                echo 'logic'
                            }
                        }
                        stage('Stage 2') {
                            steps {
                                echo 'logic'
                            }
                        }
                    }
                }
            '''.stripIndent())
            sandbox()
        }
    }
}
or load it from a file located in workspace:
pipelineJob('job-name') {
    definition {
        cps {
            script(readFileFromWorkspace('file-seedjob-in-workspace.jenkinsfile'))
            sandbox()
        }
    }
}
Example:
Seed-job file structure:
jobs
\- productJob.groovy
logic
\- productPipeline.jenkinsfile
then productJob.groovy content:
pipelineJob('product-job') {
definition {
cps {
script(readFileFromWorkspace('logic/productPipeline.jenkinsfile'))
sandbox()
}
}
}
I believe this question is asking how to use the Job DSL to create a pipeline job which references the Jenkinsfile for the project, without combining the job creation with the detailed step definitions as the answers to date have done. This makes sense: the Jenkins job creation and metadata configuration (description, triggers, etc.) could belong to Jenkins admins, while the dev team keeps control over what the job actually does.
@meallhour, is the below what you're after? (works as at Job DSL 1.64)
pipelineJob('DSL_Pipeline') {
    def repo = 'https://github.com/path/to/your/repo.git'

    triggers {
        scm('H/5 * * * *')
    }
    description("Pipeline for $repo")

    definition {
        cpsScm {
            scm {
                git {
                    remote { url(repo) }
                    branches('master', '**/feature*')
                    scriptPath('misc/Jenkinsfile.v2')
                    extensions { } // required as otherwise it may try to tag the repo, which you may not want
                }
                // the single line below also works, but it
                // only covers the 'master' branch and may not give you
                // enough control.
                // git(repo, 'master', { node -> node / 'extensions' << '' } )
            }
        }
    }
}
Ref the Job DSL pipelineJob: https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob, and hack away at it on http://job-dsl.herokuapp.com/ to see the generated config.
The example above worked for me. Here's another one, based on what worked for me:
pipelineJob('Your App Pipeline') {
    def repo = 'https://github.com/user/yourApp.git'
    def sshRepo = 'git@git.company.com:user/yourApp.git'

    description("Your App Pipeline")
    keepDependencies(false)

    properties {
        githubProjectUrl(repo)
        rebuild {
            autoRebuild(false)
        }
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote { url(sshRepo) }
                    branches('master')
                    scriptPath('Jenkinsfile')
                    extensions { } // required as otherwise it may try to tag the repo, which you may not want
                }
            }
        }
    }
}
If you build the pipeline first through the UI, you can use the config.xml file and the Jenkins documentation https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob to create your pipeline job.
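For example, you can dump an existing job's config.xml from the Script Console to see what your Job DSL needs to reproduce (a sketch; the job name is a placeholder):
// Print the raw config.xml of a job that was configured through the UI
println Jenkins.instance.getItemByFullName('folder/my-pipeline').getConfigFile().asString()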
In Job DSL, pipeline is still called workflow, see workflowJob.
The next Job DSL release will contain some enhancements for pipelines, e.g. JENKINS-32678.
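For reference, a minimal sketch using the older workflowJob name (the inline script is a placeholder):
workflowJob('job-name') {
    definition {
        cps {
            script('node { echo "logic" }')
        }
    }
}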
First you need to install the Job DSL plugin, then create a freestyle project in Jenkins and select Process Job DSLs from the dropdown in the Build section.
Select Use the provided DSL script and provide the following script:
pipelineJob('job-name') {
    definition {
        cps {
            script('''
                pipeline {
                    agent any
                    stages {
                        stage('Stage name 1') {
                            steps {
                                // your logic here
                            }
                        }
                        stage('Stage name 2') {
                            steps {
                                // your logic here
                            }
                        }
                    }
                }
            ''')
        }
    }
}
Or you can create your job by pointing to the Jenkinsfile located in a remote git repository:
pipelineJob("job-name") {
definition {
cpsScm {
scm {
git {
remote {
url("<REPO_URL>")
credentials("<CREDENTIAL_ID>")
}
branch('<BRANCH>')
}
}
scriptPath("<JENKINS_FILE_PATH>")
}
}
}
If you are using a git repo, add a file called Jenkinsfile at the root directory of your repo. This should contain your job DSL.
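A minimal Jenkinsfile for that setup might look like the following sketch (the jobs/*.groovy glob is an assumption about your repo layout):
// Jenkinsfile at the repo root: check out the repo and apply the Job DSL scripts in it
node {
    checkout scm
    jobDsl targets: 'jobs/*.groovy',
           removedJobAction: 'IGNORE',
           lookupStrategy: 'SEED_JOB'
}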