I am able to load a Jenkinsfile automatically through the Multibranch Pipeline plugin, but with the limitation of only one Jenkinsfile per branch.
I have multiple Jenkinsfiles per branch that I want to load. I have tried the method below, creating a master Jenkinsfile and loading the specific files, but the code below merges 1.Jenkinsfile and 2.Jenkinsfile into one pipeline.
node {
    git url: 'git@bitbucket.org:xxxxxxxxx/pipeline.git', branch: 'B1P1'
    sh "ls -latr"
    load '1.Jenkinsfile'
    load '2.Jenkinsfile'
}
Is there a way I can load multiple Jenkins pipelines separately from one branch?
I did this by writing a shared library (see https://jenkins.io/doc/book/pipeline/shared-libraries/) containing the following file (in vars/generateJobsForJenkinsfiles.groovy):
/**
 * Creates Jenkins pipeline jobs from pipeline script files
 * @param gitRepoName name of the GitHub repo, e.g. <organisation>/<repository>
 * @param filepattern Ant-style pattern for the pipeline script files for which we want to create jobs
 * @param jobPath closure of type (relativePathToPipelineScript -> jobPath) where jobPath is a string formatted as '<foldername>/../<jobname>' (i.e. the Jenkins job path)
 */
def call(String gitRepoName, String filepattern, def jobPath) {
    def pipelineJobs = []
    def base = env.WORKSPACE
    def pipelineFiles = new FileNameFinder().getFileNames(base, filepattern)
    for (pipelineFil in pipelineFiles) {
        def relativeScriptPath = (pipelineFil - base).substring(1)
        def _jobPath = jobPath(relativeScriptPath).split('/')
        def jobfolderpath = _jobPath[0..-2]
        def jobname = _jobPath[-1]
        echo "Create jenkins job ${jobfolderpath.join('/')}:${jobname} for $pipelineFil"
        def dslScript = []
        // create folders
        for (i = 0; i < jobfolderpath.size(); i++)
            dslScript << "folder('${jobfolderpath[0..i].join('/')}')"
        // create job
        dslScript << """
            pipelineJob('${jobfolderpath.join('/')}/${jobname}') {
                definition {
                    cpsScm {
                        scm {
                            git {
                                remote {
                                    github('$gitRepoName', 'https')
                                    credentials('github-credentials')
                                }
                                branch('master')
                            }
                        }
                        scriptPath("$relativeScriptPath")
                    }
                }
                configure { d ->
                    d / definition / lightweight(true)
                }
            }
        """
        pipelineJobs << dslScript.join('\n')
        //println dslScript
    }
    if (!pipelineJobs.empty)
        jobDsl sandbox: true, scriptText: pipelineJobs.join('\n'), removedJobAction: 'DELETE', removedViewAction: 'DELETE'
}
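A minimal call site could then look like the following; the library name, repository, file pattern, and job-path closure below are only illustrative assumptions:
// Jenkinsfile of a "seed" job (hypothetical names and paths)
@Library('my-shared-library') _

node {
    checkout scm
    // one generated pipeline job per *.Jenkinsfile found in the workspace
    generateJobsForJenkinsfiles('myorg/myrepo', '**/*.Jenkinsfile',
        { scriptPath -> "generated/${scriptPath.replace('/', '_')}" })
}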
Most likely you want to map old Jenkins jobs (pre-pipeline) that operate on a single branch of some project to a single multibranch pipeline. The appropriate approach would be to create stages that depend on user input (for example, asking whether to deploy to staging or live).
Alternatively, you could create a separate pipeline job that references your project's SCM and points to your other Jenkinsfile (one pipeline job per additional Jenkinsfile), as sketched below.
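A rough sketch of that second option, e.g. via Job DSL (the job name is a placeholder; the repository URL, branch, and script path are taken from the question):
pipelineJob('pipeline-2') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('git@bitbucket.org:xxxxxxxxx/pipeline.git') }
                    branch('B1P1')
                }
            }
            scriptPath('2.Jenkinsfile') // the second Jenkinsfile in the same branch
        }
    }
}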
My Jenkins pipeline is as follows:
pipeline {
    triggers {
        cron('H */5 * * *')
    }
    stages {
        stage('Foo') {
            ...
        }
    }
}
The repository is part of a GitHub Organization on Jenkins, so every branch or PR that is pushed results in a Jenkins job being created for that branch or PR.
I would like the trigger to only be run on the "main" branch because we don't need all branches and PRs to be run on a cron schedule; we only need them to be run on new commits which they already do.
Is it possible?
Yes, it's possible. To schedule a cron trigger only for a specific branch, you can do it like this in your Jenkinsfile:
String cron_string = (scm.branches[0].name == "main") ? 'H */5 * * *' : ''

pipeline {
    triggers {
        cron(cron_string)
    }
    // whatever other code, options, stages etc. is in your pipeline ...
}
What it does:
Initialize a variable based on the branch name. For the main branch it sets the requested cron configuration; otherwise there is no scheduling (an empty string is set).
Use this variable within the pipeline.
Further comments:
It's possible to use this with parameterizedCron as well (in case you want or need to).
You can also use other variables to get the branch name, e.g. env.BRANCH_NAME instead of scm.branches[0].name, whatever fits your needs.
This topic and solution are also discussed in the Jenkins community: https://issues.jenkins.io/browse/JENKINS-42643?focusedCommentId=293221#comment-293221
EDIT: a similar question that leads to the same configuration is here on Stack Overflow: "Build Periodically" with a Multi-branch Pipeline in Jenkins
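For example, the env.BRANCH_NAME variant mentioned above would just be:
String cron_string = (env.BRANCH_NAME == "main") ? 'H */5 * * *' : ''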
You can simply add a when condition to your pipeline.
when { branch 'main' }
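A rough sketch of how that could look in the pipeline from the question (note that the cron trigger itself still fires for every branch; the when clause only skips the stage on branches other than main):
pipeline {
    triggers {
        cron('H */5 * * *')
    }
    stages {
        stage('Foo') {
            when { branch 'main' }   // stage only runs on the main branch
            steps {
                echo 'Running the scheduled work'
            }
        }
    }
}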
I am currently facing a problem: I have about 90 Jenkinsfiles, and we recently updated one of the Jenkins agents, which now has a new label. That means we have to go and update every Jenkinsfile with the new label of that agent. You'll agree this is a bit of a pain, especially since we will have to do it every time we update the agent. I was thinking we could define all of the agents in a single file (variable=value) and then reference the variables in our Jenkinsfiles, so that the next time we upgrade an agent we only make the change in that one file instead of in 90 Jenkinsfiles.
Yes, you can do this. I'm assuming you have the agent details in the same SCM repo as your Pipelines. In that case, you can do something like the below.
pipeline {
    agent { label getAgentFromFile() }
    stages {
        stage('Hello6') {
            steps {
                script {
                    echo "Hello Something"
                }
            }
        }
    }
}

def getAgentFromFile() {
    def agent = "default"
    node {
        agent = new File(pwd() + '/agent.txt').text.trim()
        println agent
    }
    return agent
}
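For reference, agent.txt here is assumed to be a plain text file in the job's workspace containing nothing but the label, for example:
linux-agent-v2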
I am using the configure block below for the scan-by-webhook functionality in Jenkins Job DSL.
Environment: Jenkins and Bitbucket
traits << 'com.igalg.jenkins.plugins.mswt.trigger.ComputedFolderWebHookTrigger' {
    token("TEST_HOOK")
}
The above block is not working.
But the periodic trigger syntax below works without any issues.
it / 'triggers' << 'com.cloudbees.hudson.plugins.folder.computed.PeriodicFolderTrigger' {
    spec '* * * * *'
    interval "60000"
}
As we want to use the scan-by-webhook functionality, could you correct my webhook syntax?
Job DSL provides some declarative parts for triggering a job either on changes or on a schedule.
Another case is the multibranch pipeline, where each branch is configured by the included Jenkinsfile. In order to create jobs for a new branch, the task "Scan Multibranch Pipeline Now" has to be executed either manually or on a schedule. The scheduled variant can be configured programmatically via the configure block inside the multibranchPipelineJob:
multibranchPipelineJob("JobName") {
    ...
    configure { node ->
        def periodicFolderTrigger = node / triggers / 'com.cloudbees.hudson.plugins.folder.computed.PeriodicFolderTrigger' {
            spec('H H * * *')
            // 4 hours = 60000 (milliseconds) * 60 (minutes) * 4 (hours)
            interval(60000 * 60 * 4)
        }
    }
}
The webhook solution is more elegant, but you need the Jenkins plugin https://plugins.jenkins.io/multibranch-scan-webhook-trigger/ to be installed. Programmatically you can activate it in the following way:
multibranchPipelineJob("JobName") {
    ...
    configure { node ->
        def webhookTrigger = node / triggers / 'com.igalg.jenkins.plugins.mswt.trigger.ComputedFolderWebHookTrigger' {
            spec('')
            token("TESTTOKEN")
        }
    }
}
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences in only a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
    additionalParameters: [
        project: projectName,
        environments: envs,
        repository: repositoryURL
    ],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas on how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there has to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpolated during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace and then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than at the time you create the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
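To illustrate the escaping, a templated pipeline.groovy might look roughly like this (${repository} is filled in by the template engine at job creation, while the escaped \${...} references survive until pipeline runtime):
// pipeline.groovy used as a SimpleTemplateEngine template (sketch)
pipeline {
    agent any
    environment {
        REPO = "${repository}"          // expanded when the job is generated
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying \${env.REPO}..."   // left as ${env.REPO} for runtime
            }
        }
    }
}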
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited because environment variables can only be strings, but this should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable, without quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
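Presumably JOBNAME would be supplied from the seed pipeline's jobDsl call via additionalParameters, e.g. (a sketch reusing projectName from the question above):
jobDsl targets: 'jobs/*.groovy',
    additionalParameters: [JOBNAME: "${projectName} pipeline".toString()],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'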
This is my situation: one of my projects consists of multiple subprojects, roughly separated into frontend and backend, which are at different locations in a Subversion repository.
I extracted the checkout into a function that is already properly parameterized for the checkout:
def svn(String url, String dir = '.') {
    checkout([
        $class: 'SubversionSCM',
        locations: [[
            remote: url,
            credentialsId: '...',
            local: dir,
        ]],
        workspaceUpdater: [$class: 'UpdateUpdater']
    ])
}
That way, I was able to do the checkout by this means (simplified):
stage "1. Build"
parallel (
    "Backend": { node {
        svn('https://svn.acme.com/Backend/trunk')
        sh 'gradle build'
    }},
    "Frontend": { node {
        svn('https://svn.acme.com/Frontend/trunk')
        sh 'gradle build'
    }}
)
Checking out both at the very same time led to Jenkins having trouble with the changeset XML files, as far as I could guess from the stack traces.
Since I also want to reuse both the project name and its SVN URL, I moved on to iterating over a map, checking out consecutively, and just stashing the files in the first stage for the following parallel build-only stage:
stage "1. Checkout"
node {
    [
        'Backend': 'https://svn.acme.com/Backend/trunk',
        'Frontend': 'https://svn.acme.com/Frontend/trunk',
    ].each { name, url ->
        // Checkout in subdirectory
        svn(url, name)
        // Stash by project name
        dir(name) { stash name }
    }
}
stage "2. Build"
// ...
Somehow Jenkins' pipeline does not support this, so I used a simple for-in loop instead:
node {
    def projects = [
        'Backend': '..'
        // ...
    ]
    for (project in projects) {
        def name = project.getKey()
        def url = project.getValue()
        svn(url, name)
        dir(name) { stash name }
    }
    project = projects = name = url = null
}
That doesn't work either and exits the build with an exception: java.io.NotSerializableException: java.util.LinkedHashMap$Entry. As you can see, I set every property to null, because I read somewhere that this prevents that behaviour. Can you help me fix this issue and explain what exactly is going on here?
Thanks!
I think it is a known Jenkins bug with the for-in loop:
https://issues.jenkins-ci.org/browse/JENKINS-27421
But there is also a known bug for .each-style loops:
https://issues.jenkins-ci.org/browse/JENKINS-26481
So currently it seems you cannot iterate over maps in Jenkins pipelines. I suggest creating a list as a workaround and iterating over it with a "classic loop" style:
def myList = ["Backend|https://svn.acme.com/Backend/trunk", "Frontend|https://svn.acme.com/Frontend/trunk"]
for (i = 0; i < myList.size(); i++) {
    // get the current list item myList[i] and split at the pipe | -> escape the pipe with \\
    def (name, url) = myList[i].tokenize('\\|')
    // do operations
    svn(url, name)
    dir(name) { stash name }
}