How to set job order for monitor view in Jenkins Job DSL groovy script?

I'm creating a Build Monitor view with a DSL script, but there is no method in the API to set the job order. I can set the order manually in the configuration after the view is created, but I need to do it within the script.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView as a reference. The only way I suspect it could be possible is the configure(Closure) method, but I would still have the same question of how to do it.
My current code:
buildMonitorView("name-of-the-view") {
    jobs {
        regex("some regex to include jobs")
        recurse()
    }
    // I would expect something like:
    view {
        orderByFullName()
    }
}

After some trial and error and println calls everywhere I came to this solution:
buildMonitorView("name-of-the-view") {
    jobs { // This part is as before
        regex("some regex to include jobs")
        recurse()
    }
    // The solution:
    view.remove(view / order)
    view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
}
The above solution sets the job order to "Full name" instead of the default "Name".
I found the remove idea in the Configure SVN section of the job-dsl-plugin documentation; the fully qualified names of the job order options can be found in the source of jenkins-build-monitor-plugin.
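Judging by the plugin source, other orderings should be selectable the same way by swapping in their class names. For instance (the ByDisplayName class name here is an assumption based on the plugin's order package; verify it in the source before using it):
// Assumed alternative: order by display name (class name unverified).
view.remove(view / order)
view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByDisplayName")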

I had the same question today and managed to get Aivaras's proposal to work in the following way:
buildMonitorView("name-of-the-view") {
    // Set properties like jobs
    jobs {
        regex("some regex to include jobs")
        recurse()
    }
    // Directly manipulate the config to set the ordering
    configure { view ->
        view.remove(view / order)
        view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
    }
}

Related

Jenkins: Parameters disappear from pipeline job after running the job

I've been trying to construct multiple jobs from a list and everything seems to be working as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory

def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]

DslFactory dslFactory = this as DslFactory

repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
After running the above script, all the required jobs are created successfully. But then once I build any job, the parameters disappear.
After that, when I run the seed job again, the job starts showing the parameters again. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me figure out a similar issue with my .groovy file:
I had called the parameters property twice (once at the node start, and then tried to set other parameters in an if block), so the latter overwrote the initial parameters.
BTW, as per the comments in the linked ticket, it is an issue with both scripted and declarative pipelines.
I fixed it by providing all job parameters in each parameters call (for the case with ifs).
Though I don't see repeated calls in the code you've provided, please check the full groovy files for your jobs and add all parameters to every parameters {} block; a minimal sketch of the anti-pattern follows below.
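A minimal sketch of that anti-pattern in a scripted pipeline (the condition and parameter names here are hypothetical, not from the question):
// Sketch of the anti-pattern: a second properties() call REPLACES the
// parameter list from the first one, so BRANCH would disappear from the job.
node {
    properties([
        parameters([string(name: 'BRANCH', defaultValue: 'master', description: '')])
    ])

    if (env.DEPLOY_ENABLED == 'true') {  // hypothetical condition
        // Wrong: properties([parameters([string(name: 'TARGET', ...)])])
        // would overwrite the parameters defined above.

        // Right: repeat ALL parameters in every properties() call.
        properties([
            parameters([
                string(name: 'BRANCH', defaultValue: 'master', description: ''),
                string(name: 'TARGET', defaultValue: 'staging', description: '')
            ])
        ])
    }
}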

How to set number of columns for build monitor view in jenkins job DSL groovy script?

I'm creating a Build Monitor view with a DSL script, but there is no documentation on how to set the number of columns.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView for some insight. I suspect the configure method may allow it, but I still have the same question of how to do it.
I assumed it might work like a list view, where you add columns, but this does not work.
My current code so far:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites')
    recurse(true)
    configure()
    columns(1)
    jobs {
        regex(".*.Tests.*")
    }
}
The following works, using a configure block to set the columns value directly in the view's XML:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites')
    recurse(true)
    configure { project ->
        (project / columns).value = 1
    }
    jobs {
        regex(".*.Tests.*")
    }
}

How do I use `when` condition on a file's last modified time in Jenkins pipeline syntax

I am creating a Jenkins pipeline, and I want a certain stage to be triggered only when a particular log file's last modified date is updated after the initiation of the pipeline job (the log file is located on the server node where all the stages run). I understand we need to use the "when" condition, but I'm not sure how to implement it.
I tried referring to some pipeline-related portals but could not find an answer.
Can someone please help me with this?
Thanks in advance!
Getting data about a file is quite tricky in a Jenkins pipeline when using the Groovy sandbox, since you're not allowed to call new File(...).lastModified. However, there is the findFiles step (from the Pipeline Utility Steps plugin), which returns a list of wrapped file objects with a getter for the last modified time in millis, so we can use findFiles(glob: "...")[0].lastModified.
The returned array may be empty, so we should check for that (see the full example below).
The current build's start time in millis is accessible via currentBuild.startTimeInMillis.
Now that we have both, we can use them in an expression:
pipeline {
    agent any
    stages {
        stage("create file") {
            steps {
                touch "testfile.log"
            }
        }
        stage("when file") {
            when {
                expression {
                    def files = findFiles(glob: "testfile.log")
                    // Run only if the file exists and was modified after the build started
                    files && files[0].lastModified > currentBuild.startTimeInMillis
                }
            }
            steps {
                echo "i ran"
            }
        }
    }
}

Jenkins DSL new view

I have a DSL plugin file which creates a couple of jobs, like pipeline and freestyle jobs. I wanted to know what the syntax would be to create different views in this file, like 5 jobs in one view and 5 other jobs in a second view. I know how to do it using the console, but I'm looking to update the file so the views are created automatically.
The following Groovy syntax worked fine:
listView('testlist') {
    description('All new jobs for testlist')
    filterBuildQueue()
    filterExecutors()
    jobs {
        name('fruit')
        name('cake')
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
        lastDuration()
        buildButton()
    }
}
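To get several views out of one seed file (as asked above), the same block can be generated in a loop. A minimal sketch, assuming hypothetical view and job names:
// Hypothetical mapping of view names to the jobs each view should contain.
def viewJobs = [
    'view-one': ['job-a', 'job-b', 'job-c', 'job-d', 'job-e'],
    'view-two': ['job-f', 'job-g', 'job-h', 'job-i', 'job-j']
]

viewJobs.each { viewName, jobNames ->
    listView(viewName) {
        description("Jobs for ${viewName}")
        jobs {
            jobNames.each { name(it) }
        }
        columns {
            status()
            name()
            lastSuccess()
            buildButton()
        }
    }
}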

Ideas to implement dynamic parallel build using jenkins pipeline plugin

I have a requirement to run a set of tasks for a build in parallel. The tasks for the build are dynamic; they may change. I need some help with the implementation; below are the details.
The task details for a build will be generated dynamically in an XML file, which will have information about which tasks have to be executed in parallel/serial.
Example: say there is a build A, which has the below tasks and order of execution: first task1 has to be executed, next task2 and task3 will be executed in parallel, and then task4:
task1
task2, task3
task4
These details will be in a dynamically generated XML. How can I parse that XML and schedule tasks accordingly using the pipeline plugin? I need some ideas to start with.
You can use Groovy to read the file from the workspace (readFile) and then generate the map containing the different closures, similar to the following:
parallel(
    task2: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    },
    task3: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    }
)
To generate such a data structure, you simply iterate over the task data, obtained by parsing the XML file contents you read previously (as sketched below).
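A minimal sketch of that parsing step, assuming a hypothetical XML layout where each <group> element runs after the previous one and the <task> elements inside a group run in parallel (XmlSlurper calls may need script approval in the sandbox):
// Assumed XML layout (hypothetical):
// <build>
//   <group><task>task1</task></group>
//   <group><task>task2</task><task>task3</task></group>
//   <group><task>task4</task></group>
// </build>
def build = new XmlSlurper().parseText(readFile('tasks.xml'))

build.group.each { group ->
    def branches = [:]
    group.task.each { task ->
        def taskName = task.text()
        branches[taskName] = {
            node {
                unstash('my-workspace')
                sh("./run-task.sh ${taskName}")  // hypothetical task runner
            }
        }
    }
    parallel(branches)  // groups run serially, tasks within a group in parallel
}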
Incidentally, I gave a talk about pipelines yesterday and included a very similar example (presentation, slide 34ff.). In contrast, I read the list of "tasks" from another command's output. The complete code can be found here (I avoid pasting all of it here and instead refer to this off-site resource).
The magic bit is the following:
def parallelConverge(ArrayList<String> instanceNames) {
    def parallelNodes = [:]
    for (int i = 0; i < instanceNames.size(); i++) {
        def instanceName = instanceNames.get(i)
        parallelNodes[instanceName] = this.getNodeForInstance(instanceName)
    }
    parallel parallelNodes
}

Closure getNodeForInstance(String instanceName) {
    return {
        // this node (one per instance) is later executed in parallel
        node {
            // restore workspace
            unstash('my-workspace')
            sh('kitchen test --destroy always ' + instanceName)
        }
    }
}
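For example, with hypothetical instance names (in the talk's version the list is parsed from command output instead of being hard-coded), the helper is invoked like this:
parallelConverge(['default-centos-7', 'default-ubuntu-1604'])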
