Stop recurring Jenkins job automatically after some time - jenkins

I'd like to start a pipeline job manually. This job should then run daily and after seven days stop automatically. Is there any way to do this?

AFAIK there is no OOB solution for this, but you can implement something with Groovy to achieve what you need. For example, check the following pipeline: it adds a cron expression to run every day once the job has been manually triggered, and removes the cron expression after a predefined number of runs has elapsed. You should be able to fine-tune it to achieve what you need.
def expression = getCron()

pipeline {
    agent any
    triggers { cron(expression) }
    stages {
        stage('Example') {
            steps {
                script {
                    echo "Build"
                }
            }
        }
    }
}

def getCron() {
    def runEveryDayCron = "0 9 * * *" // Runs every day at 9 AM
    def numberOfRunsToCheck = 7 // Will run 7 times
    def currentBuildNumber = currentBuild.getNumber()
    def job = Jenkins.getInstance().getItemByFullName(env.JOB_NAME)
    for (int i = currentBuildNumber; i > currentBuildNumber - numberOfRunsToCheck; i--) {
        def build = job.getBuildByNumber(i)
        if (build.getCause(hudson.model.Cause$UserIdCause) != null) { // This is a manually triggered build
            return runEveryDayCron
        }
    }
    return ""
}

Related

Getting the same output from parallel stages in jenkins scripted pipelines

I'm trying to create parallel stages in a Jenkins pipeline, say with this example:
node {
    stage('CI') {
        script {
            doDynamicParallelSteps()
        }
    }
}

def doDynamicParallelSteps() {
    tests = [:]
    for (f in ["Branch_1", "Branch_2", "Branch_3"]) {
        tests["${f}"] = {
            node {
                stage("${f}") {
                    echo "${f}"
                }
            }
        }
    }
    parallel tests
}
I'm expecting to see "Branch_1", "Branch_2", "Branch_3", but instead I'm getting "Branch_3", "Branch_3", "Branch_3".
I don't understand why. Can you please help?
Short answer: in the classic view, the stage names display the last value of the variable ${f}, and all the echo steps print that same value. You need to change the loop.
Long answer: Jenkins does not allow multiple stages with the same name, so this could never happen successfully :)
In your example, you can see it fine on Blue Ocean:
Also, in the console output, the names are right.
On the Jenkins classic view, however, the stage names and the echo output all show the last value of the variable ${f}.
Solution: Change your loop. This worked fine for me.
node {
    stage('CI') {
        script {
            doDynamicParallelSteps()
        }
    }
}

void doDynamicParallelSteps() {
    def branches = [:]
    for (int i = 0; i < 3; i++) {
        int index = i, branch = i + 1 // fresh local variables per iteration; the closure captures these
        branches["branch_${branch}"] = {
            stage("Branch_${branch}") {
                node {
                    sh "echo branch_${branch}"
                }
            }
        }
    }
    parallel branches
}
This has to do with closures and iteration: all the closures capture the same loop variable f, so by the time they run they all see its final value. Copying it into a per-iteration local variable fixes it:
for (f in ["Branch_1", "Branch_2", "Branch_3"]) {
    def definitive_name = f // a fresh variable each iteration; the closure captures this copy
    tests[definitive_name] = {
        // ... same body as before, using definitive_name
    }
}

Get all pipeline Jobs in Jenkins Groovy Script

I want to trigger several different pipeline jobs, depending on the input parameters of a Controller Pipeline job.
Within this job I build the names of the other pipelines I want to trigger from a list returned by a Python script.
node {
    stage('Get_Clusters_to_Build') {
        copyArtifacts filter: params.file_name_var_mapping, fingerprintArtifacts: true, projectName: 'UpdateConfig', selector: lastSuccessful()
        script {
            cmd_string = 'determine_ci_builds --jobname ' + env.JOB_NAME
            clusters = bat(script: cmd_string, returnStdout: true)
            output_array = clusters.split('\n')
            cluster_array = output_array[2].split(',')
        }
        echo "${clusters}"
    }
    jobs = Hudson.instance.getAllItems(AbstractProject.class)
    echo "$jobs"
    def builders = [:]
    for (i = 0; i < cluster_array.size(); i++) {
        def cluster = cluster_array[i]
        def job_to_build = "BuildCI_${cluster}".trim()
        echo "### branch${i}"
        echo "### ${job_to_build}"
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
    }
    parallel builders
    stage("TriggerTests") {
        echo "Done"
    }
}
My problem is that it might be the case that a couple of the jobs whose names I get from the stage Get_Clusters_to_Build do not exist; therefore they cannot be triggered, and my job fails.
Now to my question, is there a way to get the names of all pipeline jobs, and how can I use them to check if I can trigger a build?
I tried jobs = Hudson.instance.getAllItems(AbstractProject.class), but this gives me only the "normal" FreeStyleProject jobs.
I want to do something like this in the loop:
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    echo "### branch${i}"
    echo "### ${job_to_build}"
    // This part I only want to be executed if job_to_build is found in the jobs list, somehow like:
    if job_to_build in jobs: // I know, this is not proper groovy syntax
        builders["${job_to_build}"] = {
            stage("${job_to_build}") {
                build "${job_to_build}"
            }
        }
}
parallel builders
All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob, so you can get the names of all Pipeline jobs using the following function:
@NonCPS
def getPipelineJobNames() {
    Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.fullName
}
Then you can use it this way:
//...
def jobs = getPipelineJobNames()
if (job_to_build in jobs) {
    //....
}
Try this syntax to get standard and pipeline jobs:
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
As @Vitalii Vitrenko wrote, this works fine:
for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
    println job.fullName
}

Jenkins pipeline script - wait for running build

I have a Jenkins Groovy pipeline which triggers other builds. It is done in the following script:
for (int i = 0; i < projectsPath.size(); i++) {
    stepsForParallel[jenkinsPath] = {
        stage("build-${jenkinsPath}") {
            def absoluteJenkinsPath = "/${jenkinsPath}/BUILD"
            build job: absoluteJenkinsPath, parameters: [[$class: 'StringParameterValue', name: 'GIT_BRANCH', value: branch],
                                                         [$class: 'StringParameterValue', name: 'ROOT_EXECUTOR', value: rootExecutor]]
        }
    }
}
parallel stepsForParallel
The problem is that my jobs depend on a common job, i.e. job X triggers job Y and job Z also triggers job Y. What I'd like to achieve is that job X triggers job Y, and job Z waits for the result of the Y triggered by X.
I suppose I need to iterate over all running builds and check if any build of the same type is running. If yes, then wait for it. The following code could wait for a build to be done:
def busyExecutors = Jenkins.instance.computers
    .collect { c -> c.executors.findAll { it.isBusy() } }
    .flatten()
busyExecutors.each { e ->
    e.getCurrentWorkUnit().context.future.get()
}
My problem is that I need to tell which running job I need to wait for. To do so I need to check:
build parameters
build environment variables
job name
How can I retrieve this kind of data?
I know that Jenkins has a quiet period feature, but after the period expires a new job will be triggered.
EDIT1
Just to clarify why I need this: I have jobs which build applications and libs. Applications depend on libs, and libs depend on other libs. When a build is triggered, it triggers its downstream jobs (the libs it depends on).
Sample dependency tree:
A -> B,C,D,E
B -> F
C -> F
D -> F
E -> F
So when I trigger A then B,C,D,E are triggered and F is also triggered (4 times). I'd like to trigger F only once.
I have a beta/PoC solution (below) which almost works. Right now I have the following problems with this code:
the echo with the text "found already running job" is not flushed to the screen until job.future.get() ends
I have this ugly "wait" (for(i = 0; i < 1000; ++i){}). It is there because the result field isn't set yet when the get method returns
import hudson.model.*

def getMatchingJob(projectName, branchName, rootExecutor) {
    result = null
    def busyExecutors = []
    for (i = 0; i < Jenkins.instance.computers.size(); ++i) {
        def computer = Jenkins.instance.computers[i]
        for (j = 0; j < computer.getExecutors().size(); ++j) {
            def executor = computer.executors[j]
            if (executor.isBusy()) {
                busyExecutors.add(executor)
            }
        }
    }
    for (i = 0; i < busyExecutors.size(); ++i) {
        def workUnit = busyExecutors[i].getCurrentWorkUnit()
        if (!projectName.equals(workUnit.work.context.executionRef.job)) {
            continue
        }
        def context = workUnit.context
        context.future.waitForStart()
        def parameters
        def env
        for (action in context.task.context.executionRef.run.getAllActions()) {
            if (action instanceof hudson.model.ParametersAction) {
                parameters = action
            } else if (action instanceof org.jenkinsci.plugins.workflow.cps.EnvActionImpl) {
                env = action
            }
        }
        def gitBranchParam = parameters.getParameter("GIT_BRANCH")
        def rootExecutorParam = parameters.getParameter("ROOT_EXECUTOR")
        gitBranchParam = gitBranchParam ? gitBranchParam.getValue() : null
        rootExecutorParam = rootExecutorParam ? rootExecutorParam.getValue() : null
        println rootExecutorParam
        println gitBranchParam
        if (branchName.equals(gitBranchParam)
                && (rootExecutor == null || rootExecutor.equals(rootExecutorParam))) {
            result = [
                "future" : context.future,
                "run"    : context.task.context.executionRef.run,
                "url"    : busyExecutors[i].getCurrentExecutable().getUrl()
            ]
        }
    }
    result
}
job = getMatchingJob('project/module/BUILD', 'branch', null)
if (job != null) {
    echo "found already running job"
    println job
    def done = job.future.get()
    for (i = 0; i < 1000; ++i) {} // ugly wait: the result field isn't set yet when get() returns
    result = done.getParent().context.executionRef.run.result
    println done.toString()
    if (!"SUCCESS".equals(result)) {
        error 'project/module/BUILD: ' + result
    }
    println job.run.result
}
I have a similar problem to solve. What I am doing, though, is iterating over the jobs (since an active job might not be executed on an executor yet).
The triggering works like this in my solution:
if a job has been triggered manually or by VCS, it triggers all its (recursive) downstream jobs
if a job has been triggered by another upstream job, it does not trigger anything
This way, the jobs are grouped by their trigger cause, which can be retrieved with
@NonCPS
def getTriggerBuild(currentBuild)
{
    def triggerBuild = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
    if (triggerBuild) {
        return [triggerBuild.getUpstreamProject(), triggerBuild.getUpstreamBuild()]
    }
    return null
}
I give each job the list of direct upstream jobs it has. The job can then check whether the upstream jobs have finished the build in the same group with
@NonCPS
def findBuildTriggeredBy(job, triggerJob, triggerBuild)
{
    def jobBuilds = job.getBuilds()
    for (buildIndex = 0; buildIndex < jobBuilds.size(); ++buildIndex)
    {
        def build = jobBuilds[buildIndex]
        def buildCause = build.getCause(hudson.model.Cause$UpstreamCause)
        if (buildCause)
        {
            def causeJob = buildCause.getUpstreamProject()
            def causeBuild = buildCause.getUpstreamBuild()
            if (causeJob == triggerJob && causeBuild == triggerBuild)
            {
                return build.getNumber()
            }
        }
    }
    return null
}
From there, once the list of upstream builds has been made, I wait on them.
def waitForUpstreamBuilds(upstreamBuilds)
{
    // Iterate list -- NOTE: we cannot use groovy style or even modern java style iteration
    for (upstreamBuildIndex = 0; upstreamBuildIndex < upstreamBuilds.size(); ++upstreamBuildIndex)
    {
        def entry = upstreamBuilds[upstreamBuildIndex]
        def upstreamJobName = entry[0]
        def upstreamBuildId = entry[1]
        while (true)
        {
            def status = isUpstreamOK(upstreamJobName, upstreamBuildId)
            if (status == 'OK')
            {
                break
            }
            else if (status == 'IN_PROGRESS')
            {
                echo "waiting for job ${upstreamJobName}#${upstreamBuildId} to finish"
                sleep 10
            }
            else if (status == 'FAILED')
            {
                echo "${upstreamJobName}#${upstreamBuildId} did not finish successfully, aborting this build"
                return false
            }
        }
    }
    return true
}
And abort the current build if one of the upstream builds failed (which nicely translates into an "aborted build" instead of a "failed build").
The full code is there: https://github.com/doudou/autoproj-jenkins/blob/use_autoproj_to_bootstrap_in_packages/lib/autoproj/jenkins/templates/library.pipeline.erb
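The isUpstreamOK helper called by waitForUpstreamBuilds above is defined in that linked library; a minimal sketch of what it might look like, assuming the standard Jenkins Job/Run API (the status strings match the ones used above, but this implementation is illustrative, not the author's exact code, and like the other helpers it needs @NonCPS and script approval or a trusted library):

```groovy
// Illustrative sketch only: maps a job name + build number to the
// 'OK' / 'IN_PROGRESS' / 'FAILED' statuses consumed by waitForUpstreamBuilds.
@NonCPS
def isUpstreamOK(jobName, buildId)
{
    def job = Jenkins.instance.getItemByFullName(jobName)
    def build = job?.getBuildByNumber(buildId)
    if (build == null || build.isBuilding()) {
        return 'IN_PROGRESS' // not scheduled yet, or still running
    }
    return build.getResult() == hudson.model.Result.SUCCESS ? 'OK' : 'FAILED'
}
```

This depends on a running Jenkins instance, so it cannot be executed standalone.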
The major downside of my solution is that the wait is expensive CPU-wise when there are a lot of builds waiting. There's the built-in waitUntil, but it led to deadlocks (I haven't tried the latest version of the pipeline plugins; it might have been solved). I'm looking for ways to fix that right now - that's how I found your question.

How to handle nightly build in Jenkins declarative pipeline

I have a multibranch pipeline with a Jenkinsfile in my repo and I am able to have my CI workflow (build & unit tests -> deploy-dev -> approval -> deploy-QA -> approval -> deploy-prod) on every commit.
What I would like to do is add SonarQube Analysis on nightly builds in the first phase build & unit tests.
Since my build is triggered by GitLab, I have defined my pipeline triggers as follows:
pipeline {
    ...
    triggers {
        gitlab(triggerOnPush: true, triggerOnMergeRequest: true, branchFilterType: 'All')
    }
    ...
}
To set up my nightly build I have added:
triggers {
    ...
    cron('H H * * *')
}
But now, how do I execute the analysis step only when the job is triggered by the cron expression at night?
My simplified build stage looks as follows:
stage('Build & Tests & Analysis') {
    // HERE THE BEGIN SONAR ANALYSIS (to be executed on nightly builds)
    bat 'msbuild.exe ...'
    bat 'mstest.exe ...'
    // HERE THE END SONAR ANALYSIS (to be executed on nightly builds)
}
There is a way to get the build trigger information; it is described here:
https://jenkins.io/doc/pipeline/examples/#get-build-cause
It is good to check this as well:
how to get $CAUSE in workflow
A very good reference for your case is https://hopstorawpointers.blogspot.com/2016/10/performing-nightly-build-steps-with.html. Here is the function from that source that exactly matches your need:
// check if the job was started by a timer
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.rawBuild.getCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.getShortDescription()
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}
This works in declarative pipeline:
when {
    triggeredBy 'TimerTrigger'
}
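For context, a minimal declarative pipeline sketch showing where that condition sits (the cron schedule and stage name here are illustrative, not from the original question):

```groovy
pipeline {
    agent any
    triggers {
        cron('H 2 * * *') // illustrative nightly schedule
    }
    stages {
        stage('Nightly analysis') {
            // runs only when the build was started by the cron trigger
            when {
                triggeredBy 'TimerTrigger'
            }
            steps {
                echo 'Running nightly-only steps'
            }
        }
    }
}
```

Jenkinsfiles only execute inside Jenkins, so this is a configuration sketch rather than a standalone script.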
For me the easiest way is to define a cron in the build triggers and check the hour in the nightly stage using a when expression:
pipeline {
    agent any
    triggers {
        pollSCM('* * * * *') // runs this pipeline on every commit
        cron('30 23 * * *')  // runs at 23:30
    }
    stages {
        stage('nightly') {
            when { // runs only when the expression evaluates to true
                expression {
                    // true when the build runs via the cron trigger (also when there
                    // is a commit at night between 23:00 and 23:59)
                    return Calendar.instance.get(Calendar.HOUR_OF_DAY) in 23
                }
            }
            steps {
                echo "Running the nightly stage only at night..."
            }
        }
    }
}
You could check the build cause like so:
stage('Build & Tests & Analysis') {
    when {
        expression {
            for (Object currentBuildCause : currentBuild.rawBuild.getCauses()) {
                return currentBuildCause.class.getName().contains('TimerTriggerCause')
            }
        }
    }
    steps {
        bat 'msbuild.exe ...'
        bat 'mstest.exe ...'
    }
}
However, this requires the following entries in script-approval.xml:
<approvedSignatures>
    <string>method hudson.model.Run getCauses</string>
    <string>method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild</string>
</approvedSignatures>
This can also be approved via https://YOURJENKINS/scriptApproval/.
Hopefully, this won't be necessary after JENKINS-41272 is fixed.
Until then, a workaround could be to check the hour of day in the when expression (keep in mind that these times refer to the timezone of the Jenkins controller):
when { expression { return Calendar.instance.get(Calendar.HOUR_OF_DAY) in 0..3 } }
I've found a way which does not use "currentBuild.rawBuild", which is restricted. Begin your pipeline with:
startedByTimer = false
def buildCauses = "${currentBuild.buildCauses}"
if (buildCauses != null) {
    if (buildCauses.contains("Started by timer")) {
        startedByTimer = true
    }
}
Test the boolean where you need it, for example:
stage('Clean') {
    when {
        anyOf {
            environment name: 'clean_build', value: 'Yes'
            expression { startedByTimer == true }
        }
    }
    steps {
        echo "Cleaning..."
        ...
Thanks to this you can now do it without needing to use the non-whitelisted currentBuild.getRawBuild().getCauses() function, which, depending on your setup, can give you Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild:
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.getBuildCauses()
        for (buildCause in buildCauses) {
            if (buildCause != null) {
                def causeDescription = buildCause.shortDescription
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch (theError) {
        echo "Error getting build cause"
    }
    return startedByTimer
}

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (Using Cloudbees Folder Plugin).
It sounds like the simplest possible thing to tell Jenkins: build every job in Folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any Job in the folder to be built. In other words, adding a job to this build is as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
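The updater job can be a short system-Groovy (or script-console) snippet that enumerates the folder at run time; a rough sketch of the idea, assuming the Folders plugin and an illustrative folder name 'X':

```groovy
import jenkins.model.Jenkins
import com.cloudbees.hudson.plugins.folder.Folder

// Collect the full names of every job currently inside folder 'X'
def folder = Jenkins.instance.getItemByFullName('X', Folder.class)
def jobNames = folder.items.collect { it.fullName }
// The comma-separated list to paste into the "run every job" job's config
println jobNames.join(',')
```

This runs against a live Jenkins instance, so it cannot be executed standalone; the folder name and the way the list is written back into the job config are left to your setup.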
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue
        if (childItem.fullName == project.fullName) continue
        targets.add(childItem.fullName)
    }
    return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape):
Add the following to your library:
package myorg

class JobUtil {
    void runAllSiblings(jobName) {
        def names = siblingProjects(jobName)
        for (def i = 0; i < names.size(); i++) {
            build job: names[i], wait: false
        }
    }

    @NonCPS
    private List siblingProjects(jobName) {
        def project = Jenkins.instance.getItemByFullName(jobName)
        def childItems = project.parent.items
        def targets = []
        for (def i = 0; i < childItems.size(); i++) {
            def childItem = childItems[i]
            if (!(childItem instanceof AbstractProject)) continue
            if (childItem.fullName == jobName) continue
            targets.add(childItem.fullName)
        }
        return targets
    }
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this. It works very nicely. There are two jobs: initBuildAll, which runs the Groovy script and then launches the buildAllProjects job. In my setup, I launch initBuildAll daily. You could trigger it another way that works for you. We aren't fully CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no change'
            return false
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile()
    def file = configXMLFile.getFile()
    InputStream is = new FileInputStream(file)
    job.updateByXml(new StreamSource(is))
    job.save()
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching I was able to define a "run every job" job in Job DSL format.
The advantage being I can maintain my job configuration in version control. e.g.
job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
        downstream('myfolder/job3')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}

@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins shared-library method (requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
Reference pipeline script to run a parent job that triggers other jobs, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality. The documentation can be found in the Jenkins JavaDoc.
A simple script triggering immediately all Multi-Branch Pipeline projects under the given folder structure (in this example folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
