Jenkins pipeline script - wait for running build
I have a Jenkins Groovy pipeline which triggers other builds. It is done with the following script:
def stepsForParallel = [:]
for (int i = 0; i < projectsPath.size(); i++) {
    def jenkinsPath = projectsPath[i]
    stepsForParallel[jenkinsPath] = {
        stage("build-${jenkinsPath}") {
            def absoluteJenkinsPath = "/${jenkinsPath}/BUILD"
            build job: absoluteJenkinsPath, parameters: [
                [$class: 'StringParameterValue', name: 'GIT_BRANCH', value: branch],
                [$class: 'StringParameterValue', name: 'ROOT_EXECUTOR', value: rootExecutor]
            ]
        }
    }
}
parallel stepsForParallel
The problem is that my jobs depend on another common job, i.e. job X triggers job Y and job Z also triggers job Y. What I'd like to achieve is that job X triggers job Y, and job Z waits for the result of the Y triggered by X.
I suppose I need to iterate over all running builds and check whether any build of the same type is running. If yes, then wait for it. The following code can wait for a build to finish:
def busyExecutors = Jenkins.instance.computers
    .collect { c -> c.executors.findAll { it.isBusy() } }
    .flatten()
busyExecutors.each { e ->
    e.getCurrentWorkUnit().context.future.get()
}
My problem is that I need to tell which running job I need to wait for. To do so I need to check:
build parameters
build environment variables
job name
How can I retrieve this kind of data?
I know that Jenkins has a quiet period feature, but after the period expires a new job will be triggered anyway.
EDIT1
Just to clarify why I need this. I have jobs that build applications and libs. Applications depend on libs, and libs depend on other libs. When a build is triggered, it triggers its downstream jobs (the libs it depends on).
Sample dependency tree:
A -> B,C,D,E
B -> F
C -> F
D -> F
E -> F
So when I trigger A, then B, C, D and E are triggered, and F is also triggered (4 times). I'd like F to be triggered only once.
I have a beta/PoC solution (below) which almost works. Right now I have the following problems with this code:
the echo with the text "found already running job" is not flushed to the screen until job.future.get() returns
I have this ugly busy-wait (for(i = 0; i < 1000; ++i){}), because the result field isn't set yet when the get method returns
import hudson.model.*

def getMatchingJob(projectName, branchName, rootExecutor) {
    result = null
    def busyExecutors = []
    for (i = 0; i < Jenkins.instance.computers.size(); ++i) {
        def computer = Jenkins.instance.computers[i]
        for (j = 0; j < computer.getExecutors().size(); ++j) {
            def executor = computer.executors[j]
            if (executor.isBusy()) {
                busyExecutors.add(executor)
            }
        }
    }
    for (i = 0; i < busyExecutors.size(); ++i) {
        def workUnit = busyExecutors[i].getCurrentWorkUnit()
        if (!projectName.equals(workUnit.work.context.executionRef.job)) {
            continue
        }
        def context = workUnit.context
        context.future.waitForStart()
        def parameters
        def env
        for (action in context.task.context.executionRef.run.getAllActions()) {
            if (action instanceof hudson.model.ParametersAction) {
                parameters = action
            } else if (action instanceof org.jenkinsci.plugins.workflow.cps.EnvActionImpl) {
                env = action
            }
        }
        def gitBranchParam = parameters.getParameter("GIT_BRANCH")
        def rootExecutorParam = parameters.getParameter("ROOT_EXECUTOR")
        gitBranchParam = gitBranchParam ? gitBranchParam.getValue() : null
        rootExecutorParam = rootExecutorParam ? rootExecutorParam.getValue() : null
        println rootExecutorParam
        println gitBranchParam
        if (branchName.equals(gitBranchParam)
            && (rootExecutor == null || rootExecutor.equals(rootExecutorParam))) {
            result = [
                "future" : context.future,
                "run"    : context.task.context.executionRef.run,
                "url"    : busyExecutors[i].getCurrentExecutable().getUrl()
            ]
        }
    }
    result
}

job = getMatchingJob('project/module/BUILD', 'branch', null)
if (job != null) {
    echo "found already running job"
    println job
    def done = job.future.get()
    for (i = 0; i < 1000; ++i) {} // ugly wait: result isn't set yet when get() returns
    result = done.getParent().context.executionRef.run.result
    println done.toString()
    if (!"SUCCESS".equals(result)) {
        error 'project/module/BUILD: ' + result
    }
    println job.run.result
}
I have a similar problem to solve. What I am doing, though, is iterating over the jobs (since an active job might not be executed on an executor yet).
The triggering works like this in my solution:
if a job has been triggered manually or by VCS, it triggers all its (recursive) downstream jobs
if a job has been triggered by another upstream job, it does not trigger anything
This way, the jobs are grouped by their trigger cause, which can be retrieved with
@NonCPS
def getTriggerBuild(currentBuild)
{
    def triggerBuild = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
    if (triggerBuild) {
        return [triggerBuild.getUpstreamProject(), triggerBuild.getUpstreamBuild()]
    }
    return null
}
I give each job the list of direct upstream jobs it has. Each job can then check whether its upstream jobs have finished their builds in the same group with
@NonCPS
def findBuildTriggeredBy(job, triggerJob, triggerBuild)
{
    def jobBuilds = job.getBuilds()
    for (buildIndex = 0; buildIndex < jobBuilds.size(); ++buildIndex)
    {
        def build = jobBuilds[buildIndex]
        def buildCause = build.getCause(hudson.model.Cause$UpstreamCause)
        if (buildCause)
        {
            def causeJob = buildCause.getUpstreamProject()
            def causeBuild = buildCause.getUpstreamBuild()
            if (causeJob == triggerJob && causeBuild == triggerBuild)
            {
                return build.getNumber()
            }
        }
    }
    return null
}
From there, once the list of upstream builds has been built, I wait on them.
def waitForUpstreamBuilds(upstreamBuilds)
{
    // Iterate list -- NOTE: we cannot use groovy-style or even modern java-style iteration
    for (upstreamBuildIndex = 0; upstreamBuildIndex < upstreamBuilds.size(); ++upstreamBuildIndex)
    {
        def entry = upstreamBuilds[upstreamBuildIndex]
        def upstreamJobName = entry[0]
        def upstreamBuildId = entry[1]
        while (true)
        {
            def status = isUpstreamOK(upstreamJobName, upstreamBuildId)
            if (status == 'OK')
            {
                break
            }
            else if (status == 'IN_PROGRESS')
            {
                echo "waiting for job ${upstreamJobName}#${upstreamBuildId} to finish"
                sleep 10
            }
            else if (status == 'FAILED')
            {
                echo "${upstreamJobName}#${upstreamBuildId} did not finish successfully, aborting this build"
                return false
            }
        }
    }
    return true
}
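The isUpstreamOK helper is not shown above; a minimal sketch of what it might look like, assuming it looks up the build by job name and number and maps its state to the three status strings used in the loop (the lookup logic here is my assumption, not the answer's actual code):

```groovy
// Hypothetical sketch of the isUpstreamOK helper used by waitForUpstreamBuilds.
@NonCPS
def isUpstreamOK(upstreamJobName, upstreamBuildId)
{
    def build = Jenkins.instance.getItemByFullName(upstreamJobName)?.getBuildByNumber(upstreamBuildId)
    if (build == null || build.isBuilding()) {
        return 'IN_PROGRESS' // not scheduled yet, or still running
    }
    def result = build.getResult()
    return (result == hudson.model.Result.SUCCESS) ? 'OK' : 'FAILED'
}
```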
And abort the current build if one of the upstream builds failed (which nicely shows up as an "aborted build" instead of a "failed build").
The full code is there: https://github.com/doudou/autoproj-jenkins/blob/use_autoproj_to_bootstrap_in_packages/lib/autoproj/jenkins/templates/library.pipeline.erb
The major downside of my solution is that the waiting is expensive CPU-wise when there are a lot of builds waiting. There's the built-in waitUntil, but it led to deadlocks for me (I haven't tried the latest version of the pipeline plugins; it might have been solved). I'm looking for ways to fix that right now - that's how I found your question.
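For reference, the waitUntil variant I alluded to would look roughly like this as a replacement for the inner while (true) polling loop, with the same caveat about possible deadlocks on older pipeline plugin versions (the initialRecurrencePeriod option, which backs off the polling interval, may not exist on older versions):

```groovy
// Hypothetical waitUntil-based version of the polling loop.
def ok = true
waitUntil(initialRecurrencePeriod: 10000) {
    def status = isUpstreamOK(upstreamJobName, upstreamBuildId)
    if (status == 'FAILED') {
        ok = false
    }
    return status != 'IN_PROGRESS'
}
```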
Related
Stop reoccurring Jenkins Job automatically after some time
I'd like to start a pipeline job manually. This job should then run daily and after seven days stop automatically. Is there any way to do this?
AFAIK there is no OOB solution for this, but you can implement something with Groovy to achieve what you need. For example, check the following pipeline. In it, I'm adding a cron expression to run every day if triggered manually, and then removing the cron expression after a predefined number of runs has elapsed. You should be able to fine-tune this to achieve what you need.

def expression = getCron()

pipeline {
    agent any
    triggers {
        cron(expression)
    }
    stages {
        stage('Example') {
            steps {
                script {
                    echo "Build"
                }
            }
        }
    }
}

def getCron() {
    def runEveryDayCron = "0 9 * * *" // Runs every day at 9
    def numberOfRunsToCheck = 7       // Will run 7 times
    def currentBuildNumber = currentBuild.getNumber()
    def job = Jenkins.getInstance().getItemByFullName(env.JOB_NAME)
    for (int i = currentBuildNumber; i > currentBuildNumber - numberOfRunsToCheck; i--) {
        def build = job.getBuildByNumber(i)
        if (build.getCause(hudson.model.Cause$UserIdCause) != null) {
            // This is a manually triggered build
            return runEveryDayCron
        }
    }
    return ""
}
Cancel previous Jenkins job builds only if it's the same branch
I have the following code to cancel a previous job build if a new one is started:

def cancelPreviousBuilds() {
    def jobName = env.JOB_NAME
    def buildNumber = env.BUILD_NUMBER.toInteger()
    /* Get job name */
    def currentJob = Jenkins.instance.getItemByFullName(jobName)
    for (def build : currentJob.builds) {
        if (build.isBuilding() && build.number.toInteger() != buildNumber) {
            build.doStop()
        }
    }
}

However, I would like this to not cancel previous builds if they are on a different branch. For instance, if a build on develop were kicked off and then another one on master, it would not cancel any builds.
You can access the build parameters of each build and compare the relevant values. You can use build.getAction(hudson.model.ParametersAction) to get the ParametersAction object, from which you can look up parameters in the build object with the getParameter function. Something like:

@NonCPS
def cancelPreviousBuilds() {
    def buildNumber = env.BUILD_NUMBER.toInteger()
    def currentJob = Jenkins.instance.getItemByFullName(env.JOB_NAME)
    def currentBranch = <BRANCH_PARAM_NAME> // Branch value of the current build
    for (def build : currentJob.builds) {
        def param = build.getAction(hudson.model.ParametersAction).getParameter('<BRANCH_PARAM_NAME>')
        if (build.isBuilding() && build.number.toInteger() < buildNumber && currentBranch == param.value) {
            build.doStop()
        }
    }
}

Another small thing: in the build number comparison it is better to use build.number.toInteger() < buildNumber than build.number.toInteger() != buildNumber, to prevent the case in which old builds cancel newly started builds; you usually want each build to affect only previous ones.
Jenkins API - Get current pipeline stage of the build
I'm trying to make my build pipeline more useful and I need a way to terminate previous builds if they are not finished yet. I have the following job definition:

pipeline {
    stages {
        stage('A') {...}
        stage('B') {...}
        stage('C') {...}
    }
}

And I need to terminate all previous builds if they are not yet in stage 'C'. I use the Jenkins API to get previous builds for a particular job:

@NonCPS
def cancelPreviousBuilds() {
    def buildNumber = env.BUILD_NUMBER.toInteger()
    def currentJob = Jenkins.getInstance().getItemByFullName(env.JOB_NAME)
    currentJob.builds
        .findAll { build -> build.isBuilding() && build.number.toInteger() < buildNumber && currentStageName(build) != 'C' }
        .each { build -> build.doStop() }
}

So my current blocker is the implementation of the currentStageName function. I'm not able to get the name of the stage. I've already found some code, but it does not work well for me:

@NonCPS
def currentStageName(currentBuild) {
    FlowGraphWalker walker = new FlowGraphWalker(currentBuild.getExecution())
    for (FlowNode flowNode : walker) {
        if (flowNode.isActive()) {
            return flowNode.getDisplayName()
        }
    }
}

The FlowNode object does not contain the stage name; it contains a narrower flow step inside the build. So the question is: how do I get the current stage of a previous build for a particular Jenkins job?
Given a FlowNode, you can check whether it is the start of a stage by checking node instanceof StepStartNode. If it is, you can use its LabelAction to get the name of the stage:

static String getLabel(FlowNode node) {
    LabelAction labelAction = node.getAction(LabelAction.class);
    if (labelAction != null) {
        return labelAction.getDisplayName();
    }
    return null;
}

I don't think it's useful for your case, but you can also get it from the node that marks the end of a stage (a StepEndNode) by looking up the corresponding start node:

FlowNode startNode = ((StepEndNode) node).getStartNode();
How to tell Jenkins "Build every project in folder X"?
I have set up some folders (Using Cloudbees Folder Plugin). It sounds like the simplest possible command to be able to tell Jenkins: Build every job in Folder X. I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them. I'm not finding a plugin that lets me do that. I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any Job in the folder to be built. In other words, adding a job to this build is as simple as creating a job in this folder. How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after. The best I've managed is: I have a "run every job" job (which contains a comma-separated list of all the jobs you want). Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them. The version below requires the sandbox to be disabled (so it can access Jenkins.instance).

def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue
        if (childItem.fullName == project.fullName) continue
        targets.add(childItem.fullName)
    }
    return targets
}

If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape). Add the following to your library:

package myorg;

public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue
        if (childItem.fullName == jobName) continue
        targets.add(childItem.fullName)
    }
    return targets
}

And then create a pipeline with the following code:

(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)

Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this. It works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and buildAllProjects, which it launches. In my setup, I launch the initBuildAll script daily. You could trigger it another way that works for you; we aren't fully CI, so daily is good enough for us.

One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking. These jobs are in a separate folder called MultiBuild. The jobs to be built are in a folder called Projects.

import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile()
    def file = configXMLFile.getFile()
    InputStream is = new FileInputStream(file)
    job.updateByXml(new StreamSource(is))
    job.save()
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching I was able to define a "run every job" job in Job DSL format. The advantage is that I can maintain my job configuration in version control. For example:

job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
        downstream('myfolder/job3')
    }
}
Pipeline Job

When running as a Pipeline job you may use something like:

echo jobNames.join('\n')
jobNames.each { build job: it, wait: false }

@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}

Script Console

The following snippet can be used from the Script Console to schedule all jobs in some folder:

import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}

With some modification you'd be able to create a Jenkins shared library method (it requires running outside the sandbox and needs @NonCPS), like:

import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
Reference pipeline script to run a parent job that would trigger other jobs, as suggested by @WayneBooth:

pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that would be using the Script Console (found under Manage Jenkins). The console allows running Groovy scripts that control Jenkins functionality; the documentation can be found in the Jenkins JavaDoc. A simple script that immediately triggers all Multibranch Pipeline projects under a given folder structure (in this example folder/subfolder/projectName):

import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}

The script was tested against Jenkins 2.324.
With Jenkins Pipeline scripts, is it safe to access a global variable from a parallel step?
When writing Jenkins Pipeline scripts, is it safe to access variables from parallel steps? The documentation isn't clear on this. For example, this pipeline code modifies a common counter and queue from parallel branches:

def donecount = 0
def work = [6, 5, 4, 3, 2, 1, 0]
def branches = [:]
for (int i = 0; i < 3; i++) {
    branches["worker-${i}"] = {
        while (true) {
            def item = null
            try {
                item = work.remove(0)
            } catch (java.lang.IndexOutOfBoundsException e) {
                break
            }
            echo "Working for ${item} seconds"
            sleep time: item
            donecount += 1
        }
    }
}
branches.failFast = true
parallel branches
echo "Completed ${donecount} tasks"
In the current implementation, this is probably safe, in that Pipeline execution uses coöperative multitasking (otherwise known as “green threads”). But I am not sure that, for example, += is an atomic operation in Groovy at the granularity that matters here. Better to play it safe and use standard Java concurrency utilities: ConcurrentLinkedQueue, AtomicInteger, etc.
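As a sketch of that suggestion, reworking the question's example with java.util.concurrent types instead of a plain List and int (whether these classes are whitelisted in your script security sandbox is a separate question):

```groovy
// Same worker setup as the question, but with a thread-safe queue and counter,
// so the operations stay atomic even if branches run truly concurrently.
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicInteger

def work = new ConcurrentLinkedQueue([6, 5, 4, 3, 2, 1, 0])
def donecount = new AtomicInteger(0)
def branches = [:]
for (int i = 0; i < 3; i++) {
    branches["worker-${i}"] = {
        while (true) {
            def item = work.poll() // returns null when the queue is empty -- no exception
            if (item == null) {
                break
            }
            echo "Working for ${item} seconds"
            sleep time: item
            donecount.incrementAndGet()
        }
    }
}
branches.failFast = true
parallel branches
echo "Completed ${donecount.get()} tasks"
```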