Get Build Number from another job in Jenkins as a variable

I am running a Build Flow job that executes multiple builds in parallel and then uses the Post-build Action to Publish HTML reports.
How can I get the build number of each of the individual jobs as a variable so I can use it when fetching the HTML report?
EDIT
This is what my parallel code looks like:
parallel (
{ uarr = build("Baseline - Secure - UARR", param1: build.properties.get("number")) },
{ login = build("Baseline - Secure - Login", param2: build.properties.get("number")) }
)
And this is what I tried in the Publish HTML reports "Index page[s]" field, but it doesn't treat ${param1} as a variable and looks for the text literally:
*Secure Baseline*Secure_UARR-${param1}.html
This is what I'm using in the Maven build job and it is working great at finding the report with the correct filename that contains the build number:
*Secure Baseline*Secure_UARR-${BUILD_NUMBER}.html
The problem is, if I use that same logic in the Build Flow parallel job, it uses the build number of that job, not the Maven job that creates the report. (I hope that makes sense)

You can store the job references in a variable
parallel(
job1: { def n = build("JOB_NAME", PARAM_1: "value-1", PARAM_2: true, ...) }
...
)
or even collect them in a map if you like
def jobs = [:]
parallel(
job1: {
def n1 = build("job1", param1: "value1", ...)
jobs["job1"] = n1.number
},
job2: {
def n2 = build("job2", param1: "value1", ...)
jobs["job2"] = n2.number
},
job3: {
def n3 = build("job3", param1: "value1", ...)
jobs["job3"] = n3.number
},
jobm: {
def nm = build("jobm", param1: "value1", ...)
jobs["jobm"] = nm.number
}
)
And then you can read the map whenever you need it.
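For example, a later (non-parallel) part of the flow could read a stored number back out (a quick sketch; the key names follow the map above):
// read a downstream build number back out of the map
def job1Number = jobs["job1"]
println("job1 ran as build #${job1Number}")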
The following shows how to export these values as environment variables.
import org.jenkinsci.plugins.envinject.EnvInjectPluginAction
// merge the collected build numbers into the current build environment
def buildEnv = build.getEnvVars()
buildEnv.putAll(jobs)
// write the merged environment back so post-build steps can see the new variables
def envInjectAction = build.getAction(EnvInjectPluginAction.class)
envInjectAction.overrideAll(buildEnv)
You can then use them in your post-build step(s) as $job1, $job2, ..., $jobm.
This resolved bug suggests that you can then use them in the HTML Publisher plugin (I am not very familiar with that plugin).
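For the report path from the question, the Index page[s] field could then look something like the following line (an illustration only, assuming the UARR build number was stored in the map under the key uarr and exported as above):
*Secure Baseline*Secure_UARR-${uarr}.html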

Related

How do I reassign parameters in a Jenkins pipeline script?

I'm new to Jenkins and groovy scripting. I'm trying to reassign the parameters in the Jenkins script.
I tried the following
def reasignParams() {
if(params.B == '') {
params.B = params.A
}
}
pipeline{
parameters {
string(name: 'A', defaultValue: '1.1', description: "Master Value")
string(name: 'B', defaultValue: '', description: "Slave value")
}
}
After running the above Jenkins pipeline script (groovy), I ran into the following error
java.lang.UnsupportedOperationException
The alternative that I thought to this is as below
def reasignParams() {
if(params.B == '') {
def temp = params.A
// use a temp variable instead of params.B, but this is inconvenient
}
}
Is there a way to reassign parameters in a Jenkins pipeline script? Any help would be greatly appreciated. Thanks in advance!
The params object in Jenkins Pipeline does not support write operations on its member variables. You can only initially assign them in the parameters directive (think of it like a constructor in that sense). If you want to reassign parameter values, then you do indeed need to make a deep copy like the following:
newParams = [:]
newParams.A = params.A
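A fuller sketch of that approach, with the original A/B fallback applied to the copy (the stage name and echo are just for illustration):
pipeline {
    agent any
    parameters {
        string(name: 'A', defaultValue: '1.1', description: 'Master Value')
        string(name: 'B', defaultValue: '', description: 'Slave value')
    }
    stages {
        stage('Resolve parameters') {
            steps {
                script {
                    // params is read-only, so copy the values into a plain, writable map
                    def newParams = [:]
                    newParams.A = params.A
                    newParams.B = params.B
                    // fall back to A when B was left empty
                    if (newParams.B == '') {
                        newParams.B = newParams.A
                    }
                    echo "Effective B: ${newParams.B}"
                }
            }
        }
    }
}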

Jenkins - Populate choice parameter with options from a list variable in a different groovy file

How can I declare a choice parameter for a declarative pipeline, the choices for which are read from a list in another groovy file?
l.groovy
opts = ['a','b','c','d']
main.groovy
pipeline {
parameters {
choice (
name: 'CHOICE_LIST',
choices: config.opts.keySet() as String[],
description: 'Make a choice'
)
...
}
...
}
Hi,
just joining your list with .join('\n') should do the trick.
choice (
name: 'CHOICE_LIST',
choices: config.opts.keySet().join('\n'),
description: 'Make a choice'
)
Why?
ChoiceParameterDefinition requires a delimited string.
https://issues.jenkins-ci.org/browse/JENKINS-26143
UPDATE
The problem is that importing the config isn't working. How should I import from another Groovy file? That's the bigger issue. – cyberbeast
Add the other Groovy file as a shared library to the pipeline under the job configuration.
Then reference the library in your pipeline. In my example the Groovy file is called PreBuild and contains a function getBranchNames() in which I get all branches from an SVN repository.
pipeline {
agent any
libraries {
lib('PreBuild')
}
stages {
stage('Set Parameters') {
steps {
timeout(time: 30, unit: 'SECONDS') {
script {
def INPUT_PARAMS = input message: 'Please Provide Parameters', ok: 'Next', parameters: [choice(name: 'Branch_Choice', choices: PreBuild.getBranchNames(), description: 'Which Branch?')]
}
}
...
The corresponding PreBuild.groovy file looks like this:
import groovy.util.XmlSlurper
def getBranchNames(){
// list the repository as XML; bat() echoes the command itself, so drop the first output line
def svn = bat(returnStdout: true, script: 'svn ls https://svn-repro --xml --username John --password Doe --non-interactive --trust-server-cert').trim()
def result = svn.readLines().drop(1).join(" ")
// parse the XML and collect every <name> entry (the branch names)
def slurper = new XmlSlurper()
def xml = slurper.parseText(result)
def name = new ArrayList()
name.addAll(xml.'*'.'*'.'name')
// return a newline-delimited string, which is what the choice parameter expects
return name.join('\n')
}
I parse the svn command output into an ArrayList and return it to my pipeline job as a joined string.
Be aware that your other Groovy file has to be in an SCM too. The library repository needs a special folder structure; find more information here: https://devopscube.com/jenkins-shared-library-tutorial/
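As a rough sketch (assuming getBranchNames() lives in a global variable file), the library repository layout could look like this:
vars/PreBuild.groovy    // global variable; the pipeline calls it as PreBuild.getBranchNames()
src/                    // optional helper classes in package form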

Parse Data Using Jenkins Groovy Pipeline Script

I am retrieving JSON object from a URL using httpRequest in a groovy script.
pipeline {
agent any
stages {
stage ('Extract Data') {
steps {
script {
def response = httpRequest \
authentication: 'user', \
httpMode: 'GET', \
url: "https://example.com/data"
writeFile file: 'output.json', text: response.content
def data = readFile(file: 'output.json')
def details = new groovy.json.JsonSlurperClassic().parseText(data)
echo "Data: ${details.fields.customfield}"
}
}
}
}
}
I am interested in the customfield string. The format of the string is:
Application!01.01.01 TestSuite1,TestSuite2,TestSuite3,TestSuite4 Product!01.01.01,Product2!01.01.02
I would like to parse the string into 3 data sets:
Map of Applications [Application: version] (there will always be one Application)
List of TestSuites [TestSuite1, ..., TestSuiteN]
Map of Products [Product1: version, ..., ProductN: version].
However, I am not sure how to do this.
Are there any Jenkins Groovy libraries that I can use to do this in a declarative pipeline?
EDIT
Based on the answer below I can see that I can make a map in the following way:
def applications = groups[0].split(',').collect { it.split('!') }.collectEntries { [(it):it] }
In the example I have:
application = [Application: Application]
How do I get:
application = [Application: 01.01.01]
EDIT2
Note the following output:
def applications = groups[0].split(',').collect { it.split('!') }
[[Application, 01.01.01]]
There are no libraries I'm aware of with functionality to parse this data, but since you know the format it's easy to parse manually.
There are 3 groups in the input (applications, suites, products) separated by a space character. To get the groups you need:
def input = "Application!01.01.01 TestSuite1,TestSuite2,TestSuite3,TestSuite4 Product!01.01.01,Product2!01.01.02"
def groups = input.split(' ')
To process the applications you need to split group 0 on the , character (just in case there are multiple applications). You get a list of pairs in the format name!version. Every pair must then be split on !, so you get a list of lists in the format [[name, version]]. From that structure it's easy to create a map. All steps together:
def applications = groups[0].split(',').collect { it.split('!') }.collectEntries { [(it[0]):it[1]] }
Getting the list of suites is easy, just split group 1 on the , character:
def suites = groups[1].split(',')
Finally, products are analogous to the applications, but this time group 2 should be used:
def products = groups[2].split(',').collect { it.split('!') }.collectEntries { [(it[0]):it[1]] }
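For the sample input from the question, these steps yield roughly:
// applications -> [Application: 01.01.01]
// suites       -> [TestSuite1, TestSuite2, TestSuite3, TestSuite4]
// products     -> [Product: 01.01.01, Product2: 01.01.02]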
You can simplify this by using the pipeline utility step readJSON:
def data = readJSON(file: 'output.json')
echo data.fields.customfield
I found a method: Groovy can convert the values of an Object array into a map with toSpreadMap(). However, the array must have an even number of elements.
def appList = ['DevOpsApplication', '01.01.01']
def appMap = appList.toSpreadMap()
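Here appMap ends up as [DevOpsApplication: 01.01.01]: the even-indexed elements become the keys and the odd-indexed elements become their values.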
For some better answers please refer to this

How to get the trigger information in Jenkins programmatically

I need to add the next build time scheduled in a build email notification after a build in Jenkins.
The trigger can be "Build periodically" or "Poll SCM", or anything with schedule time.
I know the trigger info is in the config.xml file e.g.
<triggers>
<hudson.triggers.SCMTrigger>
<spec>8 */2 * * 1-5</spec>
<ignorePostCommitHooks>false</ignorePostCommitHooks>
</hudson.triggers.SCMTrigger>
</triggers>
and I also know how to get the trigger type and spec with custom scripting from the config.xml file, and calculate the next build time.
I wonder if Jenkins has an API to expose this information out of the box. I have searched, but not found anything.
I realise you probably no longer need help with this, but I just had to solve the same problem, so here is a script you can use in the Jenkins console to output all trigger configurations:
#!groovy
Jenkins.instance.getAllItems().each { it ->
if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
return
}
def itTrigger = (jenkins.triggers.SCMTriggerItem)it
def triggers = itTrigger.getSCMTrigger()
println("Job ${it.name}:")
triggers.each { t->
println("\t${t.getSpec()}")
println("\t${t.isIgnorePostCommitHooks()}")
}
}
This will output all your jobs that use SCM configuration, along with their specification (cron-like expression regarding when to run) and whether post-commit hooks are set to be ignored.
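Run from the script console, the output looks roughly like this (the job name and values are just examples):
Job Build Project 1:
    H/30 * * * *
    false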
You can modify this script to get the data as JSON like this:
#!groovy
import groovy.json.*
def result = [:]
Jenkins.instance.getAllItems().each { it ->
if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
return
}
def itTrigger = (jenkins.triggers.SCMTriggerItem)it
def triggers = itTrigger.getSCMTrigger()
triggers.each { t->
def builder = new JsonBuilder()
result[it.name] = builder {
spec "${t.getSpec()}"
ignorePostCommitHooks "${t.isIgnorePostCommitHooks()}"
}
}
}
return new JsonBuilder(result).toPrettyString()
And then you can use the Jenkins Script Console web API to get this from an HTTP client.
For example, in curl, you can do this by saving your script as a text file and then running:
curl --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText
If Jenkins is using basic authentication, you can supply that with the -u <USERNAME>:<PASSWORD> argument.
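For example (an API token in place of the password also works and is generally preferable):
curl -u <USERNAME>:<PASSWORD> --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText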
Ultimately, the request will result in something like this:
{
"Build Project 1": {
"spec": "H/30 * * * *",
"ignorePostCommitHooks": "false"
},
"Test Something": {
"spec": "#hourly",
"ignorePostCommitHooks": "false"
},
"Deploy ABC": {
"spec": "H/20 * * * *",
"ignorePostCommitHooks": "false"
}
}
You should be able to tailor these examples to fit your specific use case. It seems you won't need to access this remotely but just from a job, but I also included the remoting part as it might come in handy for someone else.

Why does this Jenkins Pipeline code not succeed?

This is my situation: one of my projects consists of multiple subprojects, roughly separated into frontend and backend, which live at different locations in a Subversion repository.
I extracted the checkout step into a function that is already properly parameterized for the checkout:
def svn(String url, String dir = '.') {
checkout([
$class: 'SubversionSCM',
locations: [[
remote: url,
credentialsId: '...',
local: dir,
]],
workspaceUpdater: [$class: 'UpdateUpdater']
])
}
That way, I was able to do the checkout by this means (simplified):
stage "1. Build"
parallel (
"Backend": { node {
svn('https://svn.acme.com/Backend/trunk')
sh 'gradle build'
}},
"Frontend": { node {
svn('https://svn.acme.com/Frontend/trunk')
sh 'gradle build'
}}
)
Checking out at the very same time led to Jenkins having trouble with changeset XML files, as far as I could tell from the stack traces.
Since I also want to reuse both the project's name and its SVN URL, I moved on to iterating over a map, checking out consecutively, and just stashing the files in the first stage for the following parallel build-only stage:
stage "1. Checkout"
node {
[
'Backend': 'https://svn.acme.com/Backend/trunk',
'Frontend': 'https://svn.acme.com/Frontend/trunk',
].each { name, url ->
// Checkout in subdirectory
svn(url, name)
// Stash by project name
dir(name) { stash name }
}
}
stage "2. Build"
// ...
Somehow Jenkins' pipeline does not support this, so I used a simple for-in loop instead:
node {
def projects = [
'Backend': '..'
// ...
]
for ( project in projects ) {
def name = project.getKey()
def url = project.getValue()
svn(url, name)
dir(name) { stash name }
}
project = projects = name = url = null
}
That doesn't work either and exits the build with an exception: java.io.NotSerializableException: java.util.LinkedHashMap$Entry. As you can see, I set every variable to null, because I read somewhere that this prevents that behaviour. Can you help me fix this issue and explain what exactly is going on here?
Thanks!
I think it is a known Jenkins bug with the for-in loop:
https://issues.jenkins-ci.org/browse/JENKINS-27421
But there is also a known bug for .each-style loops:
https://issues.jenkins-ci.org/browse/JENKINS-26481
So currently it seems like you cannot iterate over Maps in Jenkins Pipelines. I suggest creating a list as a workaround and iterating over it with a "classic" loop:
def myList = ["Backend|https://svn.acme.com/Backend/trunk", "Frontend|https://svn.acme.com/Frontend/trunk"]
for (i = 0; i < myList.size(); i++) {
//take the current list item myList[i] and split it at the pipe character
def (name, url) = myList[i].tokenize( '\\|' )
//do operations
svn(url, name)
dir(name) { stash name }
}
