Invoking a Jenkins job from a pipeline script

In my master pipeline job I have a set of values, and for each value I need to trigger another job:
def map = [FRA1: "192.168.1.1", DEL: "192.168.1.2", NYC: "192.168.1.3"]
for (element in map) {
    echo "${element.key} ${element.value}"
    stage("Triggering another job- ${element.key}")
    build job: 'testjobcheck', parameters: [string(name: 'DC-NAME', value: "${element.value}")]
}
but I am getting the exception below:
an exception which occurred:
in field com.cloudbees.groovy.cps.impl.BlockScopeEnv.locals
in object com.cloudbees.groovy.cps.impl.LoopBlockScopeEnv@71f4bf38
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@2a24a0f
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@5863a030
in field com.cloudbees.groovy.cps.impl.CpsClosureDef.capture
in object com.cloudbees.groovy.cps.impl.CpsClosureDef@df1925c
in field com.cloudbees.groovy.cps.impl.CpsClosure.def
in object org.jenkinsci.plugins.workflow.cps.CpsClosure2@4ab0695
in field org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.closures
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@107c4dba
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@107c4dba
Caused: java.io.NotSerializableException: java.util.LinkedHashMap$Entry
Can anyone help here?

See JENKINS-49732.
There is a serialization failure for java.util.LinkedHashMap$Entry when the pipeline is trying to save (serialize) the current state.
As a workaround you can use:
def map = [FRA1: "192.168.1.1", DEL: "192.168.1.2", NYC: "192.168.1.3"]
map.each { key, value ->
    echo "${key} ${value}"
    stage("Triggering another job- ${key}") {
        build job: 'testjobcheck', parameters: [string(name: 'DC-NAME', value: "${value}")]
    }
}
One small thing: use the stage step with a {} block, since using 'stage' without a block argument is deprecated.
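If you would rather keep a plain loop, a sketch along these lines (same job name and parameter as above) also avoids holding a non-serializable Map.Entry iterator across the build step:
def map = [FRA1: "192.168.1.1", DEL: "192.168.1.2", NYC: "192.168.1.3"]
// copy the keys into a plain ArrayList (which is serializable) and use an
// index-based loop, so no live iterator has to be serialized while 'build' runs
def keys = new ArrayList(map.keySet())
for (int i = 0; i < keys.size(); i++) {
    def key = keys[i]
    stage("Triggering another job- ${key}") {
        build job: 'testjobcheck', parameters: [string(name: 'DC-NAME', value: map[key])]
    }
}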

Related

Active Choices Reactive Parameter does not get selected when the Reference Parameter is set from a parent job

I have 2 Jenkins jobs, say ParentJob and ChildJob.
The ParentJob has an Active Choices Parameter, say ENV, with the Groovy script below:
return ['A', 'B', 'C']
The ChildJob also has a similar Active Choices Parameter, say ENV, with the same Groovy script. Additionally, there is an Active Choices Reactive Parameter, say ENV_URL, with ENV as the Reference Parameter and with the following Groovy script:
if (ENV.equals("A")) {
    return ["https://a.com"]
} else if (ENV.equals("B")) {
    return ["https://b.com"]
} else {
    return ["https://c.com"]
}
Now, I'm calling ChildJob from my ParentJob using a pipeline script, setting ENV to "A" in the ParentJob, which internally calls ChildJob.
ParentJob pipeline code:
pipeline {
    agent {
        node {
        }
    }
    stages {
        stage('ChildJob') {
            steps {
                script {
                    JOB_NAME = "ChildJob"
                    def myJob = build job: "${JOB_NAME}", parameters: [
                        string(name: 'ENV', value: "${ENV}")
                    ]
                }
            }
        }
    }
}
The Active Choices Parameter ENV in the ChildJob is set to A.
However, the Active Choices Reactive Parameter ENV_URL is empty and is not set to the value "https://a.com".
Basically, I would like the Active Choices Reactive Parameter to pick its value based on the Reference Parameter that is set from the parent job.
Any suggestions on how this can be achieved?
I don't think this is possible. The best option for you is to pass the parameter from the parent job:
JOB_NAME = "ChildJob"
URL = getURLByEnv("$ENV") // retrieve the URL with the same logic as in your child job
build job: "$JOB_NAME", parameters: [
    string(name: 'ENV', value: "$ENV"),
    string(name: 'url', value: "$URL")
]
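Here getURLByEnv is a helper you would define yourself in the parent pipeline; a minimal sketch mirroring the ChildJob's ENV_URL Groovy script could look like this:
// hypothetical helper mirroring the ChildJob's ENV_URL logic
def getURLByEnv(env) {
    if (env == "A") {
        return "https://a.com"
    } else if (env == "B") {
        return "https://b.com"
    } else {
        return "https://c.com"
    }
}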

How to redirect shell output to variable with iteration in Jenkins pipeline

I want to iterate over each element in an array, envContentKeys = ["Address", "Name", "Date"], and create a new environment variable for each element by reading its value from the JSON file config.json.
Here's the config.json:
{
    "dev": {
        "Name": "Apple",
        "Address": "somewhere",
        "Date": "10Sep2021"
    },
    "prod": {
        "Name": "Orange",
        "Address": "somewhere2",
        "Date": "15Sep2021"
    }
}
What I did in the pipeline is call .each and assign each element of envContentKeys as an environment variable:
envContentKeys.each {
    env."${it}" = sh(
        script: "jq -r '.${params.environment}.${it}' config.json",
        returnStdout: true
    ).trim()
}
${params.environment} will be either dev or prod, depending on the user's selection.
The final result should look like this for dev:
env.Address = somewhere
env.Name = Apple
env.Date = 10Sep2021
However, the pipeline gives this error:
an exception which occurred:
in field com.cloudbees.groovy.cps.impl.FunctionCallEnv.locals
in object com.cloudbees.groovy.cps.impl.FunctionCallEnv@63b12f64
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@6c55c234
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.LoopBlockScopeEnv@498d2924
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@748e1123
in field com.cloudbees.groovy.cps.impl.CallEnv.caller
in object com.cloudbees.groovy.cps.impl.ClosureCallEnv@16dc3a96
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@29d584b3
in field com.cloudbees.groovy.cps.impl.ProxyEnv.parent
in object com.cloudbees.groovy.cps.impl.BlockScopeEnv@691424c3
in field com.cloudbees.groovy.cps.impl.CallEnv.caller
in object com.cloudbees.groovy.cps.impl.FunctionCallEnv@1769e279
in field com.cloudbees.groovy.cps.Continuable.e
in object org.jenkinsci.plugins.workflow.cps.SandboxContinuable@2ec694c5
in field org.jenkinsci.plugins.workflow.cps.CpsThread.program
in object org.jenkinsci.plugins.workflow.cps.CpsThread@767fe543
in field org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.threads
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@2bd4e7db
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup@2bd4e7db
Caused: java.io.NotSerializableException: java.util.ArrayList$Itr
Something like this should work without using the shell (readJSON is provided by the Pipeline Utility Steps plugin):
params.environment = 'prod'
...
def cfg = readJSON file:"config.json"
env.putAll( cfg[ params.environment ] )
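If you prefer to keep the key list from the question and set each variable explicitly, a sketch along these lines (assuming the same envContentKeys list) also avoids the iterator problem:
def cfg = readJSON file: 'config.json'
def section = cfg[params.environment]              // the 'dev' or 'prod' block as a map
// index-based loop: no live iterator has to be serialized between steps
for (int i = 0; i < envContentKeys.size(); i++) {
    def key = envContentKeys[i]
    env."${key}" = section[key]
}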
If you still want to use the shell to evaluate the JSON path:
A Jenkins pipeline tries to serialize and store all variables between steps, because each subsequent step could run on a different node.
An error like java.io.NotSerializableException means you have a variable that is not serializable.
java.io.NotSerializableException: java.util.ArrayList$Itr gives you the hint that it happens somewhere around an array, and Itr refers to an iterator.
array.each {} uses an iterator under the hood.
There are two main options to fix it:
1. Change non-serializable objects to serializable ones.
java.util.ArrayList$Itr is not serializable, but ArrayList itself is - look for "All Implemented Interfaces: Serializable" in the corresponding Javadoc.
So the array itself is fine; you could change the way you iterate over it to a standard for loop:
for (int i = 0; i < envContentKeys.size(); i++) {
    def it = envContentKeys[i]
    env[it] = sh(...)
}
I can't check this right now, but I think it should work, because all variables used in the loop are serializable.
2. Use the @NonCPS annotation around the code that is causing the issue.
Move your code into a function and add the @NonCPS annotation to it. This forces the pipeline to execute the function as one non-breakable step.
@NonCPS
def myFunction() {
    envContentKeys.each {
        env[it] = sh(...)
    }
}

How do I pass a PersistentStringParameter to a parametrized Jenkins job in a build step?

I have a parametrized Jenkins pipeline job that takes a PersistentStringParameter (from the Persistent Parameter Plugin). I am starting this job from another job, using a build step that looks like this:
build(wait: true, propagate: false, job: 'my_job', parameters: job_params)
Here job_params is a List of the form
job_params = [string(name: 'PARAM_NAME', value: '42')]
Jenkins complains a bit about the fact that a PersistentStringParameter is expected, but I am passing a string:
The parameter 'PARAM_NAME' did not have the type expected by my_job. Converting to Persistent String Parameter.
Is there any way I can avoid this warning? How do I put a PersistentStringParameter in job_params?

Groovy throws MissingPropertyException when propertyMissing is defined to return null

A particular Jenkinsfile is being used by 2 different Jenkins jobs. Job1 needs to pass PARAM1 and PARAM2 to the job; this is done through string parameters in the job configuration. Job2 only needs to pass PARAM1, also through a string parameter.
So job1 has 2 string parameters, PARAM1 and PARAM2.
job2 has 1 string parameter, PARAM1.
To avoid the use of try { ... } catch (MissingPropertyException) { ... }, I have tried to use the workaround described here. My code is as follows:
def propertyMissing(name) {
println "*** something could not be resolved to a parameter - will be set to NULL ***"
}
PARAM1 = PARAM1 ?: 'default value'
PARAM2 = PARAM2 ?: 'default value'
On job2, this fails at PARAM2 = PARAM2 ?: 'default value' and throws an exception: groovy.lang.MissingPropertyException: No such property: PARAM2 for class: groovy.lang.Binding.
On job1, this fails at another stage when trying to set an environment variable (env.SOMETHING = 'value') and I get the following stack trace:
*** env could not be resolved to a parameter - will be set to NULL ***
*** currentBuild could not be resolved to a parameter - will be set to NULL ***
[Pipeline] End of Pipeline
java.lang.NullPointerException: Cannot set property 'result' on null object
Is there a way to avoid the use of try / catch statement in this scenario?
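If the goal is just a default for a possibly missing parameter, one alternative is to read build parameters through the params map rather than the script binding; a key the job does not define simply yields null, so no MissingPropertyException is thrown. A minimal sketch:
// params is a read-only map of the job's build parameters;
// params.PARAM2 is just null in job2, which does not define PARAM2
PARAM1 = params.PARAM1 ?: 'default value'
PARAM2 = params.PARAM2 ?: 'default value'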

How to use a FileParameterValue in a jenkins 2 pipeline

How can a file from the current project's workspace be passed as a parameter to another project?
E.g. something like:
build job: 'otherproject', parameters: [[$class: 'FileParameterValue', name: 'output.tar.gz', value: ??? ]], wait: false
The java.io.File object can only access files on the master node.
So to load the files as java.io.File objects we use the master node to unstash the required files, then wrap them as File objects, and finally send them as FileParameterValue objects.
node("myNode") {
sh " my-commands -f myFile.any " // This command create a new file.
stash includes: "*.any", name: "my-custom-name", useDefaultExcludes: true
}
node("master") {
unstash "my-custom-name"
def myFile = new File("${WORKSPACE}/myFile.any")
def myJob = build(job: "my-job", parameters:
[ string(name: 'required-param-1', value: "myValue1"),
new FileParameterValue("myFile.any", myFile, "myFile.any")
], propagate: false)
print "The Job execution status is: ${myJob.result}."
if(myJob.result == "FAILURE") {
error("The Job execution has failed.")
}
else {
print "The Job was executed successfully."
}
}
You could skip the master node if the file that you need to send contains only text.
def myFileContent = readFile("myFile.txt")
FilePath fp = new FilePath(new File("${WORKSPACE}", "myFile.txt"))
if (fp != null) {
    fp.write(myFileContent, null)
}
def file = new File("${WORKSPACE}/myFile.txt")
Then use the file in the FileParameterValue object as usual.
Don't forget to import the FilePath class: import hudson.FilePath
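For completeness, a sketch of that last step, reusing the build call from the node block above:
def myJob = build(job: "my-job", parameters: [
    string(name: 'required-param-1', value: "myValue1"),
    new FileParameterValue("myFile.txt", file, "myFile.txt")
], propagate: false)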
I've tried this myself recently with little success. There seems to be a problem with this. According to the documentation for the class FileParameterValue, there is a constructor which accepts an org.apache.commons.fileupload.FileItem, like so:
@DataBoundConstructor
FileParameterValue(String name,
                   org.apache.commons.fileupload.FileItem file)
There is another which expects a java.io.File, like so:
FileParameterValue(String name,
File file,
String originalFileName)
But since only the former is annotated with @DataBoundConstructor, even when I try to use the latter in a script:
file = new File(pwd(), 'test.txt');
build(
    job: 'jobB',
    parameters: [
        [$class: "FileParameterValue", name: "TEST_FILE", file: file, originalFileName: 'test.txt']
    ]
)
Note that this requires script approval for instantiating java.io.File
... I get the following error:
java.lang.ClassCastException: hudson.model.FileParameterValue.file expects interface org.apache.commons.fileupload.FileItem but received class java.io.File
I understand that only a file uploaded by the user as interactive runtime input provides an object of type org.apache.commons.fileupload.FileItem, so in the end I resorted to archiving the file in the first job and unarchiving it in the downstream job, which got around the problem. It's not ideal, of course, but if you're in a jam it's the quickest way to sort it out.
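For reference, a minimal sketch of that archive/unarchive workaround (assuming the Copy Artifact plugin is available for the downstream side; job and file names here are placeholders):
// upstream job: publish the file as a build artifact instead of passing it as a parameter
archiveArtifacts artifacts: 'test.txt'
build job: 'jobB', wait: false

// downstream job (jobB): fetch the artifact from the upstream job
copyArtifacts projectName: 'upstream-job', selector: lastSuccessful(), filter: 'test.txt'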
You can't. Here is the Jenkins bug; I'll update this thread once it is fixed. In the meantime, log in and vote for the issue, and ask for documentation of pipeline build job parameters.
https://issues.jenkins-ci.org/browse/JENKINS-27413
Linked to from here: http://jenkins-ci.361315.n4.nabble.com/pipeline-build-job-with-FileParameterValue-td4861199.html
Here is the documentation for different parameter types (Link to FileParameterValue)
http://javadoc.jenkins.io/hudson/model/FileParameterValue.html
Try to pass an instance of FileParameterValue to parameters (it worked for me):
import hudson.model.*
def param_file = new File("path/to/file")
build job: 'otherproject', parameters: [new FileParameterValue('file_param_name', param_file, 'original_file_name')], wait: false
The Jenkins File Parameter plugin supports (i) Base64 file and (ii) stashed file parameters.
The following is an example of caller and callee pipeline Jenkins scripts on a Windows agent.
Caller
pipeline {
    agent any  // an agent is required; the original example targets a Windows agent
    stages {
        stage('Call Callee Job') {
            steps {
                script {
                    def callee_job = build(job: 'test-callee', parameters: [
                        base64File(name: 'smallfile', base64: Base64.encoder.encodeToString('small file 123'.bytes)),
                        stashedFile(name: 'largefile', file: getFileItem())
                    ], propagate: true)
                }
            }
        }
    }
}
// Read file and convert from java file io object to apache commons disk file item object
@NonCPS
def getFileItem() {
    def largeFileObject = new File(pwd(), "filename.apk")
    def diskFileItem = new org.apache.commons.fileupload.disk.DiskFileItem("fieldNameFile", "application/vnd.android.package-archive", false, largeFileObject.getName(), (int) largeFileObject.length(), largeFileObject.getParentFile())
    def inputStream = new FileInputStream(largeFileObject)
    def outputStream = diskFileItem.getOutputStream()
    org.apache.commons.io.IOUtils.copy(inputStream, outputStream)
    inputStream.close()
    outputStream.close()
    return diskFileItem
}
Callee
pipeline {
    agent any  // an agent is required for the echo/bat steps
    parameters {
        base64File(name: 'smallfile')
        stashedFile(name: 'largefile')
    }
    stages {
        stage('Print params') {
            steps {
                echo "params.smallfile: ${params.smallfile}" // gives base64 encoded value
                echo "params.largefile: ${params.largefile}" // gives null
                withFileParameter('smallfile') {
                    echo "$smallfile" // gives tmp file path in callee job workspace
                    bat "more $smallfile" // reads tmp file to give content value
                }
                unstash 'largefile'
                bat 'dir largefile' // shows largefile in callee job workspace directory
            }
        }
    }
}
