How can a file from the current project workspace be passed as a parameter to another project?
e.g. something like:
build job: 'otherproject', parameters: [[$class: 'FileParameterValue', name: 'output.tar.gz', value: ??? ]], wait: false
The java.io.File class can only access files on the master node.
So to load the files as java.io.File objects, we use the master node to unstash the required files, wrap them as File objects, and finally pass them along as FileParameterValue objects.
node("myNode") {
sh " my-commands -f myFile.any " // This command create a new file.
stash includes: "*.any", name: "my-custom-name", useDefaultExcludes: true
}
node("master") {
unstash "my-custom-name"
def myFile = new File("${WORKSPACE}/myFile.any")
def myJob = build(job: "my-job", parameters:
[ string(name: 'required-param-1', value: "myValue1"),
new FileParameterValue("myFile.any", myFile, "myFile.any")
], propagate: false)
print "The Job execution status is: ${myJob.result}."
if(myJob.result == "FAILURE") {
error("The Job execution has failed.")
}
else {
print "The Job was executed successfully."
}
}
You could skip the master node if the file you need to send contains only text.
def myFileContent = readFile("myFile.txt")
// Recreate the file on the master so it can be wrapped in a java.io.File
FilePath fp = new FilePath(new File("${WORKSPACE}", "myFile.txt"))
fp.write(myFileContent, null)
def file = new File("${WORKSPACE}/myFile.txt")
Then use the file on the FileParameterValue object as usual.
Don't forget to import the FilePath class: import hudson.FilePath
I've tried this myself recently with little success. There seems to be a problem with this. According to the documentation for class FileParameterValue, there is a constructor annotated with @DataBoundConstructor which accepts an org.apache.commons.fileupload.FileItem:
@DataBoundConstructor
FileParameterValue(String name,
org.apache.commons.fileupload.FileItem file)
There is another which expects a java.io.File:
FileParameterValue(String name,
File file,
String originalFileName)
But since only the former is annotated with @DataBoundConstructor, when I try to use the latter in a script:
file = new File(pwd(), 'test.txt');
build(
job: 'jobB',
parameters: [
[$class: "FileParameterValue", name: "TEST_FILE", file: file, originalFileName: 'test.txt']
]
)
Note that this requires script approval for instantiating java.io.File
... I get the following error:
java.lang.ClassCastException: hudson.model.FileParameterValue.file expects interface org.apache.commons.fileupload.FileItem but received class java.io.File
I understand that only a file uploaded by the user as interactive runtime input provides an object of type org.apache.commons.fileupload.FileItem, so in the end I resorted to archiving the file in the first job and unarchiving it in the downstream job to get around the problem. It's not ideal, of course, but if you're in a jam it's the quickest way to sort it out.
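For reference, a minimal sketch of that archive/unarchive workaround (the job names jobA/jobB are made up here, and it assumes the Copy Artifact plugin is installed for the downstream side):
// Upstream job (jobA): publish the file as a build artifact instead of a parameter.
archiveArtifacts artifacts: 'test.txt', fingerprint: true
build(job: 'jobB', wait: false)

// Downstream job (jobB): fetch the artifact from the upstream build.
copyArtifacts(projectName: 'jobA', selector: lastSuccessful(), filter: 'test.txt')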
You can't. Here is the Jenkins bug; update this thread once the bug is fixed. In the meantime, log in and vote for the issue and ask for documentation to be added for pipeline build job parameters.
https://issues.jenkins-ci.org/browse/JENKINS-27413
Linked to from here: http://jenkins-ci.361315.n4.nabble.com/pipeline-build-job-with-FileParameterValue-td4861199.html
Here is the documentation for different parameter types (Link to FileParameterValue)
http://javadoc.jenkins.io/hudson/model/FileParameterValue.html
Try to pass an instance of FileParameterValue to parameters (it worked for me):
import hudson.model.*
def param_file = new File("path/to/file")
build job: 'otherproject', parameters: [new FileParameterValue('file_param_name', param_file, 'original_file_name')], wait: false
Using the Jenkins File Parameter plugin: it supports (i) Base64 file parameters and (ii) stashed file parameters.
The following is an example of caller and callee pipeline scripts on a Windows agent.
Caller
pipeline {
stages {
stage ('Call Callee Job') {
steps {
script {
def callee_job = build(job: 'test-callee', parameters: [
base64File(name: 'smallfile', base64: Base64.encoder.encodeToString('small file 123'.bytes)),
stashedFile(name: 'largefile', file: getFileItem())
], propagate: true)
}
}
}
}
}
// Read file and convert from java file io object to apache commons disk file item object
@NonCPS
def getFileItem() {
def largeFileObject = new File(pwd(), "filename.apk")
def diskFileItem = new org.apache.commons.fileupload.disk.DiskFileItem("fieldNameFile", "application/vnd.android.package-archive", false, largeFileObject.getName(), (int) largeFileObject.length() , largeFileObject.getParentFile())
def inputStream = new FileInputStream(largeFileObject)
def outputStream = diskFileItem.getOutputStream()
org.apache.commons.io.IOUtils.copy(inputStream, outputStream)
inputStream.close()
outputStream.close()
return diskFileItem
}
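As a variation, if the small file already exists in the caller's workspace, readFile can produce the Base64 string directly, which avoids encoding an inline string (an assumption: your workflow-basic-steps version supports the Base64 encoding option of readFile):
// Sketch: read an existing workspace file as Base64 and pass it as the base64File parameter.
def encoded = readFile(file: 'small.txt', encoding: 'Base64')
build(job: 'test-callee', parameters: [
    base64File(name: 'smallfile', base64: encoded)
])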
Callee
pipeline {
parameters {
base64File(name: 'smallfile')
stashedFile(name: 'largefile')
}
stages {
stage ('Print params') {
steps {
echo "params.smallfile: ${params.smallfile}" // gives base64 encoded value
echo "params.largefile: ${params.largefile}" // gives null
withFileParameter('smallfile') {
echo "$smallfile" // gives tmp file path in callee job workspace
bat "more $smallfile" // reads tmp file to give content value
}
unstash 'largefile'
bat 'dir largefile' // shows largefile in callee job workspace directory
}
}
}
}
I'm trying to read properties from my pom.xml file. I tried the following and it worked:
steps {
script {
def xmlfile = readFile "pom.xml"
def xml = new XmlSlurper().parseText(xmlfile)
def version = "${xml.version}"
echo version
}
}
When I tried to do something like this:
steps {
script {
def xmlfile = readFile "pom.xml"
def xml = new XmlSlurper().parseText(xmlfile)
def version = "${xml.version}"
def mystring = "blabhalbhab-${version}"
echo mystring
}
}
the pipeline suddenly fails with the error:
Caused: java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild
What might be the problem here?
EDIT: just adding this for others finding this with the same use case. My specific question was about how to avoid the CPS-related error with XmlSlurper(). BUT for anyone else trying to parse POMs who is afraid of the PR that will supposedly deprecate readMavenPom, the safest, most Maven-y way of doing this is probably something like:
def version = sh(script: "mvn help:evaluate -f 'pom.xml' -Dexpression=project.version -q -DforceStdout", returnStdout: true).trim()
This way you're using Maven itself to tell you what the version is, instead of grepping or sedding all over the damn place. See also: How to get Maven project version to the bash command line
In general, using groovy.util.slurpersupport.NodeChild (the type of your xml variable) or groovy.util.slurpersupport.NodeChildren (the type of xml.version) inside a CPS pipeline is a bad idea. Neither class is serializable, so you can't predict their behavior under Groovy CPS. For instance, I ran your second example successfully in my Jenkins Pipeline, most probably because the example you gave is not complete or something like that.
groovy:000> xml = new XmlSlurper().parseText("<tag></tag>")
===>
groovy:000> xml instanceof Serializable
===> false
groovy:000> xml.tag instanceof Serializable
===> false
groovy:000> xml.dump()
===> <groovy.util.slurpersupport.NodeChild#0 node=groovy.util.slurpersupport.Node#5b1f29fa parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000> xml.tag.dump()
===> <groovy.util.slurpersupport.NodeChildren#0 size=-1 parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000>
If you want to read pom.xml file, use the readMavenPom pipeline step. It is dedicated to read pom files and what is most important - it is safe to do it without applying any workarounds. This step comes with the pipeline-utility-steps plugin.
However, if you want to use XmlSlurper for some reason, you need to use it inside a method annotated with @NonCPS. That way you can access "pure" Groovy and avoid the problems you have faced. (Still, using readMavenPom is the safest way to achieve what you are trying to do.) The point here is to keep any non-serializable objects inside a @NonCPS scope so the pipeline does not try to serialize them.
Below you can find a simple example of the pipeline that shows both approaches.
pipeline {
agent any
stages {
stage("Using readMavenPom") {
steps {
script {
def xmlfile = readMavenPom file: "pom.xml"
def version = xmlfile.version
echo "version = ${version}"
}
}
}
stage("Using XmlSlurper") {
steps {
script {
def xmlfile = readFile "pom.xml"
def version = extractFromXml(xmlfile) { xml -> xml.version }
echo "version = ${version}"
}
}
}
}
}
@NonCPS
String extractFromXml(String xml, Closure closure) {
def node = new XmlSlurper().parseText(xml)
return closure.call(node)?.text()
}
PS: not to mention that using XmlSlurper requires at least 3 script approvals before you can start using it.
How can I declare a choice parameter for a declarative pipeline, the choices for which are read from a list in another groovy file?
l.groovy
opts = ['a','b','c','d']
main.groovy
pipeline {
parameters {
choice (
name: 'CHOICE_LIST',
choices: config.opts.keySet() as String[],
description: 'Make a choice'
)
...
}
...
}
Hoi,
just joining your list with .join('\n') should do the trick.
choice (
name: 'CHOICE_LIST',
choices: config.opts.keySet().join('\n'),
description: 'Make a choice'
)
Why?
ChoiceParameterDefinition requires a delimited string.
https://issues.jenkins-ci.org/browse/JENKINS-26143
UPDATE
The problem is that importing the config isn't working. How should I import from another groovy file? That's the bigger issue. — cyberbeast
Add the other Groovy file as a shared library to the pipeline under the job configuration.
Then reference the library in your pipeline job. In my example the Groovy file is called PreBuild and contains a function getBranchNames() in which I get all branches from an SVN repository.
pipeline {
agent any
libraries {
lib('PreBuild')
}
stages {
stage('Set Parameters') {
steps {
timeout(time: 30, unit: 'SECONDS') {
script {
def INPUT_PARAMS = input message: 'Please Provide Parameters', ok: 'Next', parameters: [choice(name: 'Branch_Choice', choices: PreBuild.getBranchNames(), description: 'Which Branch?')]
}
}
...
The corresponding PreBuild.groovy file looks like this:
import groovy.util.XmlSlurper
def getBranchNames(){
def svn = bat(returnStdout: true, script: 'svn ls https://svn-repro --xml --username John --password Doe --non-interactive --trust-server-cert').trim()
def result = svn.readLines().drop(1).join(" ")
def slurper = new XmlSlurper()
def xml = slurper.parseText(result)
def name = new ArrayList()
name.addAll(xml.'*'.'*'.'name')
return name.join('\n')
}
I parse the svn command output into an ArrayList and return it to my pipeline job as a joined string.
Be aware that your other Groovy file has to be in an SCM too. The library repository needs a special folder structure; you can find more information here: https://devopscube.com/jenkins-shared-library-tutorial/
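For orientation, the library repository would look roughly like this if PreBuild is implemented as a global variable under the standard vars/ layout (an assumption based on the usual shared-library structure, not the author's actual repository):
PreBuild-library/        (the repository configured as the 'PreBuild' library)
    vars/
        PreBuild.groovy  // defines getBranchNames(), called from pipelines as PreBuild.getBranchNames()
    src/                 // optional: Groovy classes in package folders
    resources/           // optional: static files loadable via libraryResource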
How can I archive (as a file) the console output of a jenkins pipeline step such as docker.build(someTag)?
Background:
I use jenkins pipelines to build a whole bunch of microservices.
I want to extract all the relevant information from the Jenkins console into archived files, so that devs don't have to look at the console, which confuses them. This works fine for sh steps, where I can redirect stdout and stderr, but how can I do something similar for a Jenkins pipeline step?
As a workaround we use following LogRecorder class:
class LogRecorder implements Serializable {
def logStart
def logEnd
def currentBuild
def steps
LogRecorder(currentBuild) {
this.currentBuild = currentBuild
}
void start() {
logStart = currentBuild.getRawBuild().getLog(2000)
}
String stop() {
logEnd = currentBuild.getRawBuild().getLog(2000)
getLog()
}
String getLog() {
def logDiff = logEnd - logStart
return logDiff.join('\n')
}
}
Depending on your needs you may want to adjust the number of log lines in the calls to getLog().
Possible Usage:
LogRecorder logRecorder = new LogRecorder(currentBuild)
logRecorder.start()
docker.build(someTag)
testResult.stdOut = logRecorder.stop()
Please be aware that, most probably due to caching issues, the very last line(s) of the log are sometimes missing. Maybe a sleep before stop() would help here, but so far this has not been required.
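If that truncation does show up, something along these lines might help (an untested sketch; the two-second delay is arbitrary):
docker.build(someTag)
sleep time: 2, unit: 'SECONDS' // give the log a moment to flush before taking the second snapshot
testResult.stdOut = logRecorder.stop()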
Here's what I'm using to capture the sha256 of the built docker image
docker.build(someTag)
def dockerSha256 = sh(returnStdout: true, script: "docker image inspect $someTag | jq .[0].Id").trim()
I'm using jq to parse the JSON response.
Or the Groovy way:
import groovy.json.JsonSlurper

def json = sh(returnStdout: true, script: "docker image inspect $someTag").trim()
def obj = new JsonSlurper().parseText(json)
println "raw json: " + obj
println "groovy docker sha256: " + obj[0].Id
Following on from my question How to trigger parameterized build on successful build in Jenkins?
I would like to invoke a downstream project, but only if a boolean parameter is set to true. Is this possible? My pipeline looks like this:
node {
try {
echo "ConfigFilePath: ${ConfigFilePath}"
echo "Delete VM on Successful Build: ${DeleteOnSuccess}"
stage('checkout') {
deleteDir()
git 'http://my.git.lab/repo.git'
}
stage('deploy') {
bat 'powershell -nologo -file BuildMyVM.ps1 -ConfigFilePath "%ConfigFilePath%" -Verbose'
}
stage('test') {
// functional tests go here
}
}
catch (e) {
// exception code
} finally {
// finally code
}
} //node
stage('delete') {
if(DeleteOnSuccess)
{
bat 'SET /p VM_NAME=<DeleteVM.txt'
echo "Deleting VM_NAME: %VM_NAME%"
def job = build job: 'remove-vm', parameters: [[$class: 'StringParameterValue', name: 'VM_NAME', value: '${VM_NAME}']]
}
}
I get this error on the delete stage
Required context class hudson.FilePath is missing.
Perhaps you forgot to surround the code with a step that provides this, such as: node
If I wrap the above in a node, then the parameter values are lost. If I put the delete stage in the main node, then I take up two executors, which I'm trying to avoid because it will result in some deadlock conditions.
The problem is that actually running commands needs a node to run on, so in your case the cause of the error is that you try to run a bat command outside of a node context:
node {
...
}
stage('delete') {
if(DeleteOnSuccess)
{
bat 'SET /p VM_NAME=<DeleteVM.txt' // <- this is actually causing the error
echo "Deleting VM_NAME: %VM_NAME%"
def job = build job: 'remove-vm', parameters: [[$class: 'StringParameterValue', name: 'VM_NAME', value: '${VM_NAME}']]
}
}
You can fix this by wrapping that part in a node as well, either by putting it inside the first node or by adding a new one, depending on what you want; see the sketch below.
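A minimal sketch of the second option, a separate node just for the cleanup (the stage, file, and job names are taken from the question; it assumes DeleteVM.txt is available in that node's workspace, e.g. via stash/unstash or by reusing the same workspace):
stage('delete') {
    node {
        if (DeleteOnSuccess == 'true') {
            // readFile runs on the node, so no bat/SET indirection is needed
            def vmName = readFile('DeleteVM.txt').trim()
            echo "Deleting VM_NAME: ${vmName}"
            build job: 'remove-vm', parameters: [string(name: 'VM_NAME', value: vmName)]
        }
    }
}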
Besides that, if the DeleteOnSuccess variable is a build parameter, it will be a string. I am not sure, but I think this is because it is injected as an environment variable, and those are always strings, even if the parameter is of type BooleanParameter (I guess that is only a UI thing so it shows up as a checkbox).
You can check that by echoing DeleteOnSuccess.class, which will tell you its class.
if(DeleteOnSuccess) { ... }
will always run the conditional block, because any non-empty string is truthy. You can fix this by either converting it to a boolean with the toBoolean() extension method, or by checking it against the string "true": DeleteOnSuccess == "true", like you did.
The extension method has the advantage that it also accepts the values "1" and "True" as true.
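For illustration, both checks side by side (a sketch; DeleteOnSuccess is assumed to be the build parameter from the question):
if (DeleteOnSuccess.toBoolean()) { // also treats "1" and "True" as true
    echo 'Would trigger remove-vm'
}
if (DeleteOnSuccess == 'true') { // strict comparison against the string "true"
    echo 'Would trigger remove-vm'
}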
This is my situation: one of my projects consists of multiple subprojects, roughly separated into frontend and backend, which live at different locations in a Subversion repository.
I extracted the checkout step into a function that is already properly parameterized for the checkout:
def svn(String url, String dir = '.') {
checkout([
$class: 'SubversionSCM',
locations: [[
remote: url,
credentialsId: '...',
local: dir,
]],
workspaceUpdater: [$class: 'UpdateUpdater']
])
}
That way, I was able to do the checkout like this (simplified):
stage "1. Build"
parallel (
"Backend": { node {
svn('https://svn.acme.com/Backend/trunk')
sh 'gradle build'
}},
"Frontend": { node {
svn('https://svn.acme.com/Frontend/trunk')
sh 'gradle build'
}}
)
Checking out both at the very same time led to Jenkins having trouble with changeset XML files, as far as I could guess from the stack traces.
Since I also want to reuse both the project's name and its SVN URL, I moved on to iterating over a map, checking out consecutively, and just stashing the files in the first stage for the following parallel, build-only stage:
stage "1. Checkout"
node {
[
'Backend': 'https://svn.acme.com/Backend/trunk',
'Frontend': 'https://svn.acme.com/Frontend/trunk',
].each { name, url ->
// Checkout in subdirectory
svn(url, name)
// Stash by project name
dir(name) { stash name }
}
}
stage "2. Build"
// ...
Somehow Jenkins' pipeline does not support this, so I used a simple for-in loop instead:
node {
def projects = [
'Backend': '..'
// ...
]
for ( project in projects ) {
def name = project.getKey()
def url = project.getValue()
svn(url, name)
dir(name) { stash name }
}
project = projects = name = url = null
}
That doesn't work either and exits the build with the exception java.io.NotSerializableException: java.util.LinkedHashMap$Entry. As you can see, I set every property to null, because I read somewhere that this prevents that behaviour. Can you help me fix this issue and explain what exactly is going on here?
Thanks!
I think it is a known Jenkins bug with the for-in loop:
https://issues.jenkins-ci.org/browse/JENKINS-27421
But there is also a known bug for .each-style loops:
https://issues.jenkins-ci.org/browse/JENKINS-26481
So currently it seems like you cannot iterate over maps in Jenkins Pipelines. As a workaround, I suggest creating a list and iterating over it with a classic loop:
def myList = ["Backend|https://svn.acme.com/Backend/trunk", "Frontend|https://svn.acme.com/Frontend/trunk"]
for (i = 0; i < myList.size(); i++) {
// get the current list item myList[i] and split it at the pipe | (escaped as \\|)
def (name, url) = myList[i].tokenize( '\\|' )
//do operations
svn(url, name)
dir(name) { stash name }
}
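Another workaround I have seen (a sketch written against the asker's map, not tested in their setup) is to flatten the map into a list of key/value pairs inside a @NonCPS method, so that no LinkedHashMap$Entry ever reaches the CPS-transformed loop:
@NonCPS
def mapToPairs(Map m) {
    // collect returns a plain ArrayList of two-element lists, which is serializable
    m.collect { k, v -> [k, v] }
}

node {
    def projects = [
        'Backend' : 'https://svn.acme.com/Backend/trunk',
        'Frontend': 'https://svn.acme.com/Frontend/trunk',
    ]
    for (pair in mapToPairs(projects)) {
        def name = pair[0]
        def url = pair[1]
        svn(url, name)
        dir(name) { stash name }
    }
}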