How to set the number of columns for a Build Monitor view in a Jenkins Job DSL Groovy script?

I'm creating a Build Monitor view with a DSL script, but there is no documentation on how to set the number of columns.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView for some insight. The configure method might allow it, but that still leaves the question of how to do it.
I assumed it might work like a list view, where you add a column to it, but that does not work.
My current code so far:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites ')
    recurse(true)
    configure()
    columns(1)
    jobs {
        regex(".*.Tests.*")
    }
}

buildMonitorView('Automation Wall') {
    description('All QA Test Suites ')
    recurse(true)
    configure { project ->
        (project / columns).value = 1
    }
    jobs {
        regex(".*.Tests.*")
    }
}
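For reference, the configure-block pattern from the job-order answer further down this page could in principle be pointed at the view's config.xml, but the element name below (numberOfColumns) is only a guess and would have to be checked against the config.xml of a view where the column count was set manually:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites ')
    recurse(true)
    jobs {
        regex(".*.Tests.*")
    }
    configure { view ->
        // 'numberOfColumns' is a hypothetical element name; inspect the
        // config.xml of a manually configured Build Monitor view to find
        // the element (if any) that actually stores the column count
        (view / 'numberOfColumns').setValue(1)
    }
}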

Related

Create variable in shared library for Jenkinsfile

I'm new to shared libraries in Jenkins, and fairly new to Groovy as well.
I have several multibranch pipelines for different projects. I have set up email notifications for each job using an environment variable containing a list of email addresses, which works just fine. However, several jobs share the same email addresses (depending on the project they're for), and I'd like to create a shared library with a master email list, so I don't have to update the list in each job individually if, say, I want to add or remove someone. I'm having trouble defining a variable in a library that can be used later in the Jenkinsfile. This is a simplified version of what I've been trying:
Shared library (basically a copy-paste of the environment variables I was originally using in the individual Jenkinsfiles/jobs, which works):
Jenkinsfile-shared-libraries\vars\masterEmailList
def call() {
    environment {
        project1EmailList = "user1@xyz.com, user2@xyz.com, user3@xyz.com"
        project2EmailList = "user2@xyz.com, user4@xyz.com, user5@xyz.com"
    }
}
Jenkinsfile
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                masterEmailList()
                echo env.project1EmailList
            }
        }
    }
}
The echo returns "null" rather than the email list of the project like I would expect.
Any guidance would be much appreciated!
Cheers.
The "Defining global variables" section of https://www.jenkins.io/doc/book/pipeline/shared-libraries/#defining-global-variables helped solve this one.
shared library:
Jenkinsfile-shared-libraries\vars\masterEmailList
def project1EmailList() {
    "user1@xyz.com, user2@xyz.com, user3@xyz.com"
}
def project2EmailList() {
    "user2@xyz.com, user4@xyz.com, user5@xyz.com"
}
Jenkinsfile:
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                script {
                    echo masterEmailList.project1EmailList()
                }
            }
        }
    }
}
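If you prefer property-style access (masterEmailList.project1EmailList with no parentheses), a getter method in the vars script should also work, since Groovy maps property reads onto get-prefixed methods; this variant is a sketch and untested here:
// Jenkinsfile-shared-libraries\vars\masterEmailList.groovy (alternative sketch)
def getProject1EmailList() {
    return "user1@xyz.com, user2@xyz.com, user3@xyz.com"
}
def getProject2EmailList() {
    return "user2@xyz.com, user4@xyz.com, user5@xyz.com"
}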

Jenkins - how to run a single stage using 2 agents

I have a script that acts as a "test driver" (TD). That is, it drives test operations on a "system under test" (SUT). When I run my test framework script (tfs.sh) on my TD, it takes a SUT as an argument. The manual workflow looks like this:
TD ~ $ ./tfs.sh --sut=<IP of SUT>
I want to have a cluster of SUTs (they will have different OSes, and each will repeat a few times), and a few TDs (like, 4 or 5, so driving tests won't be a bottleneck, actually executing them will be).
I don't know which Jenkins primitive to use to accomplish this. I would like it if a Jenkins stage could simply be invoked with two agents: one would be the TD, since that is what actually runs the script, and the other would be the SUT. Jenkins would then manage locking and resource contention.
As a workaround, I could simply have all my SUTs entirely unmanaged by Jenkins, and manually implement locking of the SUTs so 2 different TDs don't try to grab the same one. But why re-invent the wheel? And besides, I'd rather work on a Jenkins plugin to accomplish this than on a manual solution.
How can I run a single Jenkins stage on 2 (or more) agents?
If I understand your requirement correctly, you have a static list of SUTs and you want Jenkins to start the TDs by allocating SUTs for each TD. I'm assuming TDs and SUTs have a one-to-one relationship. Following is a very simple example of how you can achieve what you need.
pipeline {
    agent any
    stages {
        stage('parallel-run') {
            steps {
                script {
                    try {
                        def tests = getTestExecutionMap()
                        parallel tests
                    } catch (e) {
                        currentBuild.result = "FAILURE"
                    }
                }
            }
        }
    }
}
def getTestExecutionMap() {
    def tests = [:]
    def sutList = ["IP1", "IP2", "IP3"]
    int count = 0
    for (String ip : sutList) {
        // Copy the loop variable into a local; closures created inside a loop
        // can otherwise share the loop variable (a common pipeline gotcha)
        // and all end up using its last value.
        def sutIp = ip
        tests["TEST${count}"] = {
            node {
                stage("TD with SUT ${sutIp}") {
                    script {
                        sh "./tfs.sh --sut=${sutIp}"
                    }
                }
            }
        }
        count++
    }
    return tests
}
The above pipeline starts one parallel branch per SUT, each branch running the test driver script against the SUT allocated to it.
Further, if you want to select the agent that runs the TD, you can specify the name of the agent in the node block: node(NAME) {...}. You can refine the agent selection criteria accordingly; for example, you can check how many executors are idle on a given agent and then decide how many TDs to start there.
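As a rough illustration of that last idea, the sketch below picks the TD agent with the most idle executors before scheduling work on it. The agent names ("td-1", "td-2") are placeholders, and direct Jenkins API access like this normally needs script approval or a trusted shared library (and may have to live in an @NonCPS method):
import jenkins.model.Jenkins

// Return the agent with the most idle executors; offline or unknown agents rank last
def pickLeastBusyTd(List<String> agentNames) {
    return agentNames.max { name ->
        def computer = Jenkins.instance.getNode(name)?.toComputer()
        (computer != null && computer.isOnline()) ? computer.countIdle() : -1
    }
}

def tdAgent = pickLeastBusyTd(['td-1', 'td-2'])
node(tdAgent) {
    sh "./tfs.sh --sut=IP1"
}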

How to set the job order for a Build Monitor view in a Jenkins Job DSL Groovy script?

I'm creating a Build Monitor view with a DSL script, but there is no method in the API to set the job order. I can set the order manually in the configuration after the view is created, but I need to do it within the script.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView as a reference. The only way I suspect it could be possible is the configure(Closure) method, but that still leaves the same question of how to do it.
My current code:
biuldMonitorView("name-of-the-view") {
jobs {
regex("some regex to include jobs")
recurse()
}
// I would expect something like:
view {
orderByFullName()
}
}
After some trial and error and println calls everywhere I came to this solution:
biuldMonitorView("name-of-the-view") {
jobs { // This part is as before
regex("some regex to include jobs")
recurse()
}
// The solution:
view.remove(view / order)
view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
}
The above solution sets the job order to "Full name" instead of the default "Name".
I found the remove idea in the Configure SVN section of the job-dsl-plugin docs; the fully qualified names of the job order options can be found in the source of jenkins-build-monitor-plugin.
I had the same question today and managed to get Aivaras's proposal to work in the following way:
buildMonitorView("name-of-the-view") {
// Set properties like jobs
jobs {
regex("some regex to include jobs")
recurse()
}
// Directly manipulate the config to set the ordering
configure { view ->
view.remove(view / order)
view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
}

Job DSL Restrict jobs to selected nodes

I am struggling to restrict a Jenkins job to specific nodes using the Job DSL plugin.
I tried something like:
job("campaign") {
parameters {
stringParam("ARTIFACT_NUMBER", "","")
nodeParam('TEST_HOST') {
defaultNodes(["Slave"])
}
}
steps {
shell('''#!/bin/bash
ARTIFACT_DIR=daily_${ARTIFACT_NUMBER}
echo ${ARTIFACT_DIR}
''')
}
}
but with no success. Basically, I want to set the "Restrict where this project can run" property through the Job DSL plugin.
The label method sets the "Restrict where this project can run" option at the job level:
job('example') {
    label('agentA agentB')
}
See the API viewer for details: https://jenkinsci.github.io/job-dsl-plugin/#path/job-label
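Applied to the job from the question, it would look roughly like the sketch below; 'agentA' stands in for whatever label your target nodes actually carry:
job("campaign") {
    // Sets "Restrict where this project can run"; 'agentA' is a placeholder label
    label('agentA')
    parameters {
        stringParam("ARTIFACT_NUMBER", "", "")
    }
    steps {
        shell('''#!/bin/bash
ARTIFACT_DIR=daily_${ARTIFACT_NUMBER}
echo ${ARTIFACT_DIR}
''')
    }
}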

Ideas to implement dynamic parallel builds using the Jenkins Pipeline plugin

I have a requirement to run a set of tasks for a build in parallel. The tasks for a build are dynamic and may change, and I need some help with the implementation; the details are below.
The task details for a build will be generated dynamically in an XML file, which will describe which tasks have to be executed in parallel and which serially.
example:
Say there is a build A with the following tasks and execution order: first task1 has to be executed, then task2 and task3 run in parallel, and finally task4:
task1
task2,task3
task4
These details will be in a dynamically generated XML file. How can I parse that XML and schedule the tasks accordingly using the Pipeline plugin? I need some ideas to start with.
You can use Groovy to read the file from the workspace (readFile) and then generate the map containing the different closures, similar to the following:
parallel(
    task2: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    },
    task3: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    }
)
To generate such a data structure, you simply iterate over the task data obtained by parsing the XML in Groovy from the file contents you read previously.
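A rough sketch of that idea follows. The tasks.xml layout and the ./run_task.sh runner are assumptions, since the question does not specify them, and the parsing is kept in a @NonCPS method because XML parser objects are not serializable in a pipeline:
// Hypothetical workspace file tasks.xml:
// <build>
//   <group><task>task1</task></group>
//   <group><task>task2</task><task>task3</task></group>
//   <group><task>task4</task></group>
// </build>

@NonCPS
def parseTaskGroups(String xmlText) {
    // Parse and convert to plain lists of strings so the result is serializable
    def build = new XmlSlurper().parseText(xmlText)
    return build.group.collect { group -> group.task.collect { it.text() } }
}

def xmlText
node {
    xmlText = readFile('tasks.xml')
}
def groups = parseTaskGroups(xmlText)

// Groups run one after another; the tasks inside a group run in parallel
for (List<String> groupTasks in groups) {
    def branches = [:]
    for (String task in groupTasks) {
        def taskName = task  // copy for the closure
        branches[taskName] = {
            node {
                unstash('my-workspace')
                sh("./run_task.sh ${taskName}")  // hypothetical task runner
            }
        }
    }
    parallel branches
}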
As it happens, I gave a talk about pipelines yesterday and included a very similar example (presentation, slide 34ff.). The difference is that I read the list of "tasks" from another command's output. The complete code can be found here (I avoid pasting all of it and instead refer to this off-site resource).
The key bit is the following:
def parallelConverge(ArrayList<String> instanceNames) {
    def parallelNodes = [:]
    for (int i = 0; i < instanceNames.size(); i++) {
        def instanceName = instanceNames.get(i)
        parallelNodes[instanceName] = this.getNodeForInstance(instanceName)
    }
    parallel parallelNodes
}

Closure getNodeForInstance(String instanceName) {
    return {
        // this node (one per instance) is later executed in parallel
        node {
            // restore workspace
            unstash('my-workspace')
            sh('kitchen test --destroy always ' + instanceName)
        }
    }
}
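For context, parallelConverge only needs the list of instance names, so an invocation looks like this (the names are placeholders; in the talk they are parsed from another command's output):
parallelConverge(['instance-a', 'instance-b', 'instance-c'])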
