Jenkins pipeline: capture output of Jenkins steps

How can I archive (as a file) the console output of a Jenkins pipeline step such as docker.build(someTag)?
Background:
I use Jenkins pipelines to build a whole bunch of microservices.
I want to extract all the relevant information from the Jenkins console into archived files, so that devs don't have to look at the console, which confuses them. This works fine for sh steps, where I can redirect stdout and stderr, but how can I do something similar for a Jenkins pipeline step?
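For reference, here is roughly what already works for sh steps (the build.log file name and the archiveArtifacts call are illustrative):
// Redirect the shell step's stdout and stderr into a file, then archive it.
// someTag comes from the surrounding pipeline; build.log is an arbitrary name.
sh "docker build -t ${someTag} . > build.log 2>&1"
archiveArtifacts artifacts: 'build.log'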

As a workaround, we use the following LogRecorder class:
class LogRecorder implements Serializable {

    def logStart
    def logEnd
    def currentBuild
    def steps

    LogRecorder(currentBuild) {
        this.currentBuild = currentBuild
    }

    // Snapshot the last 2000 console lines when the section of interest starts.
    void start() {
        logStart = currentBuild.getRawBuild().getLog(2000)
    }

    // Snapshot again and return only the lines added since start().
    String stop() {
        logEnd = currentBuild.getRawBuild().getLog(2000)
        getLog()
    }

    String getLog() {
        def logDiff = logEnd - logStart
        return logDiff.join('\n')
    }
}
Depending on your needs, you may want to adjust the number of log lines fetched in the getLog(2000) calls.
Possible Usage:
LogRecorder logRecorder = new LogRecorder(currentBuild)
logRecorder.start()
docker.build(someTag)
testResult.stdOut = logRecorder.stop()
Please be aware that, most probably due to caching issues, the very last line(s) of the log are sometimes missing. A short sleep before stop() might help here, but so far this has not been required.
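To archive the captured output as a file, as the original question asks, the captured string can simply be written out and archived; a minimal sketch (the step-output.log name is illustrative):
// Persist the captured slice of the console log and archive it with the build.
writeFile file: 'step-output.log', text: logRecorder.getLog()
archiveArtifacts artifacts: 'step-output.log'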

Here's what I'm using to capture the sha256 of the built docker image:
docker.build(someTag)
def dockerSha256 = sh(returnStdout: true, script: "docker image inspect $someTag | jq .[0].Id").trim()
I'm using jq to parse the JSON response.
Or the Groovy way:
import groovy.json.JsonSlurper

def json = sh(returnStdout: true, script: "docker image inspect $someTag").trim()
def obj = new JsonSlurper().parseText(json)
println "raw json: " + obj
println "groovy docker sha256: " + obj[0].Id

Related

XmlSlurper() in Jenkins pipeline: how to avoid java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild

I'm trying to read properties from my pom.xml file. I tried the following and it worked:
steps {
    script {
        def xmlfile = readFile "pom.xml"
        def xml = new XmlSlurper().parseText(xmlfile)
        def version = "${xml.version}"
        echo version
    }
}
When I tried to do something like this:
steps {
    script {
        def xmlfile = readFile "pom.xml"
        def xml = new XmlSlurper().parseText(xmlfile)
        def version = "${xml.version}"
        def mystring = "blabhalbhab-${version}"
        echo mystring
    }
}
the pipeline suddenly fails with the error:
Caused: java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild
What might be the problem here?
EDIT: just adding this for others finding it with the same use case. My specific question was about how to avoid the CPS-related error with XmlSlurper(). BUT for anyone else trying to parse POMs, afraid of the PR that will supposedly deprecate readMavenPom, the safest and most Maven-y way of doing this is probably something like:
def version = sh(script: "mvn help:evaluate -f 'pom.xml' -Dexpression=project.version -q -DforceStdout", returnStdout: true).trim()
This way you're using Maven itself to tell you the version instead of grepping and sedding all over the damn place. See: How to get Maven project version to the bash command line
In general, using groovy.util.slurpersupport.NodeChild (the type of your xml variable) or groovy.util.slurpersupport.NodeChildren (the type of xml.version) inside a CPS pipeline is a bad idea. Neither class is serializable, so you can't predict how they will behave under Groovy CPS. For instance, I ran your second example successfully in my Jenkins Pipeline, most probably because the example you gave is not complete or something like that.
groovy:000> xml = new XmlSlurper().parseText("<tag></tag>")
===>
groovy:000> xml instanceof Serializable
===> false
groovy:000> xml.tag instanceof Serializable
===> false
groovy:000> xml.dump()
===> <groovy.util.slurpersupport.NodeChild#0 node=groovy.util.slurpersupport.Node#5b1f29fa parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000> xml.tag.dump()
===> <groovy.util.slurpersupport.NodeChildren#0 size=-1 parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000>
If you want to read a pom.xml file, use the readMavenPom pipeline step. It is dedicated to reading pom files and, most importantly, it is safe to use without applying any workarounds. This step comes with the pipeline-utility-steps plugin.
However, if you want to use XmlSlurper for some reason, you need to use it inside a method annotated with @NonCPS. That way you get access to "pure" Groovy and avoid the problems you have faced. (Still, readMavenPom is the safest way to achieve what you are trying to do.) The point is to keep any non-serializable objects inside a @NonCPS scope so that the pipeline does not try to serialize them.
Below you can find a simple example of the pipeline that shows both approaches.
pipeline {
    agent any

    stages {
        stage("Using readMavenPom") {
            steps {
                script {
                    def xmlfile = readMavenPom file: "pom.xml"
                    def version = xmlfile.version
                    echo "version = ${version}"
                }
            }
        }

        stage("Using XmlSlurper") {
            steps {
                script {
                    def xmlfile = readFile "pom.xml"
                    def version = extractFromXml(xmlfile) { xml -> xml.version }
                    echo "version = ${version}"
                }
            }
        }
    }
}

// The non-serializable XmlSlurper objects never leave this @NonCPS scope.
@NonCPS
String extractFromXml(String xml, Closure closure) {
    def node = new XmlSlurper().parseText(xml)
    return closure.call(node)?.text()
}
PS: not to mention that using XmlSlurper requires at least 3 script approvals before you can start using it.

How to get the current Jenkins pipeline StepContext

I have a step in a pipeline that pulls objects from the context and uses them. However, I need to access those objects outside of the steps to feed into different steps, and the second step doesn't expose it.
stage('Quality Gate') {
    steps {
        script {
            def status = waitForQualityGate()
            // Use the taskId
        }
    }
}
The waitForQualityGate() call only returns a boolean, so I can't access it there.
I could instead manually initialize the step, like so:
script {
    def qualityGate = new WaitForQualityGateStep()
    def taskId = qualityGate.getTaskId()
}
but the taskId is null. If I try to run the start methods manually on the step:
script {
    def qualityGate = new WaitForQualityGateStep()
    qualityGate.start().start()
    def taskId = qualityGate.getTaskId()
}
It fails with the message:
java.lang.IllegalStateException: you must either pass in a StepContext to the StepExecution constructor, or have the StepExecution be created automatically
The WaitForQualityGateStep has the info I need, but I can't initialize it without a StepContext (which is an abstract class). How can I get one from the pipeline?
You can define the variable before the pipeline and just set its value in the step. This way the variable is visible across the pipeline.
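A minimal sketch of that approach (the qgStatus name and stage layout are illustrative):
// Declared outside the pipeline block, so every stage can read and write it.
def qgStatus

pipeline {
    agent any
    stages {
        stage('Quality Gate') {
            steps {
                script {
                    qgStatus = waitForQualityGate()
                }
            }
        }
        stage('Use result') {
            steps {
                echo "Quality gate returned: ${qgStatus}"
            }
        }
    }
}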
I still have no idea how to manually obtain a StepContext in order to execute a step by hand, but in case anyone else finds this while trying to get information out of the Sonar plugin, this is how I got the task ID that I needed.
def output = sh(script: "mvn sonar:sonar", returnStdout: true)
echo output // The capture prevents printing to console
def taskUri = output.find(~'/api/ce/task\\?id=[\\w-]*')
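With the task URI in hand you can poll the compute engine API for the task status yourself; a rough sketch, assuming the server URL is available as SONAR_HOST_URL, anonymous read access, and the Pipeline Utility Steps plugin for readJSON:
// Query the SonarQube compute-engine API with the extracted task id.
def taskJson = sh(script: "curl -s ${env.SONAR_HOST_URL}${taskUri}", returnStdout: true)
def task = readJSON text: taskJson
echo "Sonar task status: ${task.task.status}"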

How to get the trigger information in Jenkins programmatically

I need to include the next scheduled build time in a build email notification sent after a build in Jenkins.
The trigger can be "Build periodically" or "Poll SCM", or anything with schedule time.
I know the trigger info is in the config.xml file e.g.
<triggers>
    <hudson.triggers.SCMTrigger>
        <spec>8 */2 * * 1-5</spec>
        <ignorePostCommitHooks>false</ignorePostCommitHooks>
    </hudson.triggers.SCMTrigger>
</triggers>
and I also know how to get the trigger type and spec with custom scripting from the config.xml file, and calculate the next build time.
I wonder if Jenkins has an API to expose this information out of the box. I have searched but found nothing.
I realise you probably no longer need help with this, but I just had to solve the same problem, so here is a script you can use in the Jenkins console to output all trigger configurations:
#!groovy
Jenkins.instance.getAllItems().each { it ->
    if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
        return
    }
    def itTrigger = (jenkins.triggers.SCMTriggerItem) it
    def triggers = itTrigger.getSCMTrigger()
    println("Job ${it.name}:")
    triggers.each { t ->
        println("\t${t.getSpec()}")
        println("\t${t.isIgnorePostCommitHooks()}")
    }
}
This will output all your jobs that use SCM configuration, along with their specification (cron-like expression regarding when to run) and whether post-commit hooks are set to be ignored.
You can modify this script to get the data as JSON like this:
#!groovy
import groovy.json.*

def result = [:]
Jenkins.instance.getAllItems().each { it ->
    if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
        return
    }
    def itTrigger = (jenkins.triggers.SCMTriggerItem) it
    def triggers = itTrigger.getSCMTrigger()
    triggers.each { t ->
        def builder = new JsonBuilder()
        result[it.name] = builder {
            spec "${t.getSpec()}"
            ignorePostCommitHooks "${t.isIgnorePostCommitHooks()}"
        }
    }
}
return new JsonBuilder(result).toPrettyString()
And then you can use the Jenkins Script Console web API to get this from an HTTP client.
For example, in curl, you can do this by saving your script as a text file and then running:
curl --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText
If Jenkins is using basic authentication, you can supply that with the -u <USERNAME>:<PASSWORD> argument.
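For example:
curl -u <USERNAME>:<PASSWORD> --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText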
Ultimately, the request will result in something like this:
{
  "Build Project 1": {
    "spec": "H/30 * * * *",
    "ignorePostCommitHooks": "false"
  },
  "Test Something": {
    "spec": "@hourly",
    "ignorePostCommitHooks": "false"
  },
  "Deploy ABC": {
    "spec": "H/20 * * * *",
    "ignorePostCommitHooks": "false"
  }
}
You should be able to tailor these examples to fit your specific use case. It seems you don't need remote access, just access from a job, but I included the remoting part as it might come in handy for someone else.
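To compute the next scheduled time from a spec (the original goal), Jenkins' own hudson.scheduler.CronTab can do the cron arithmetic; a hedged sketch using the spec from the question (H-based specs additionally need the job's hash to resolve exactly):
// Parse a cron spec and compute the first firing time at or after "now".
def cronTab = new hudson.scheduler.CronTab("8 */2 * * 1-5")
def next = cronTab.ceil(System.currentTimeMillis()) // returns a java.util.Calendar
println "Next scheduled build: ${next.time}"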

Custom changelog in Jenkins Pipelines

I was wondering if it is possible to have a custom changelog appear for Jenkins Pipelines. Ideally, I'd like to propagate the downstream changelogs, but failing that I've tried to create a custom changelog derived from the downstream builds. However, it doesn't appear to work (with no option for viewing the pipeline's workspace either).
I was wondering if this is something that I'm just getting wrong or whether it's actually supported or not.
This is the sample code I'm testing with
node('master') {
    stage('Source') {
        build 'SourceBuild'

        def rootDir = currentBuild.rawBuild.getRootDir().toString()
        echo rootDir

        // Write a hand-rolled changelog.xml into the build's root directory.
        def changelog = new File(rootDir, "changelog.xml")
        PrintWriter writer = new PrintWriter(new FileWriter(changelog))
        writer.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>")
        writer.println("<changelog>")
        writer.println("\t<changeset>")
        writer.println(String.format("\t\t<user>%s</user>", 'User'))
        writer.println(String.format("\t\t<comment>%s</comment>", 'Comment'))
        writer.println("\t</changeset>")
        writer.println("</changelog>")
        writer.close()
    }
}
Many thanks
In Jenkins Pipeline I noticed that there is a global variable named currentBuild. It has a readable property called changeSets. I would rather take this approach with Pipeline than play around with changelog.xml:
import hudson.plugins.git.GitChangeSetList

stage('some name') {
    def gitChangeSetList = currentBuild.changeSets
    echo formatGitChangeLog(gitChangeSetList)
}

def formatGitChangeLog(GitChangeSetList changeSetList) {
    def formatStr = ""
    for (setItem in changeSetList) {
        for (change in setItem.getLogs()) {
            formatStr += "${change.getAuthor().getDisplayName()}: ${change.getMsg()}\n"
        }
    }
    return formatStr
}
currentBuild.changeSets is of type GitChangeSetList. From the Javadoc we can see the various methods available on GitChangeSet.
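If you would rather not depend on the Git plugin types, the generic ChangeLogSet API that currentBuild.changeSets exposes is enough; a minimal sketch:
// Plugin-agnostic variant: iterate the generic ChangeLogSet entries.
def formatStr = ""
for (changeLogSet in currentBuild.changeSets) {
    for (entry in changeLogSet.items) {
        formatStr += "${entry.author}: ${entry.msg}\n"
    }
}
echo formatStr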

How to use a FileParameterValue in a Jenkins 2 pipeline

How can a file from the current project workspace be passed as a parameter to another project?
e.g. something like:
build job: 'otherproject', parameters: [[$class: 'FileParameterValue', name: 'output.tar.gz', value: ??? ]], wait: false
A java.io.File object can only access files on the master node.
So to load the files as java.io.File objects, we use the master node to unstash the required files, then wrap them as File objects, and finally send them as FileParameterValue objects.
node("myNode") {
sh " my-commands -f myFile.any " // This command create a new file.
stash includes: "*.any", name: "my-custom-name", useDefaultExcludes: true
}
node("master") {
unstash "my-custom-name"
def myFile = new File("${WORKSPACE}/myFile.any")
def myJob = build(job: "my-job", parameters:
[ string(name: 'required-param-1', value: "myValue1"),
new FileParameterValue("myFile.any", myFile, "myFile.any")
], propagate: false)
print "The Job execution status is: ${myJob.result}."
if(myJob.result == "FAILURE") {
error("The Job execution has failed.")
}
else {
print "The Job was executed successfully."
}
}
You can skip the master node if the file you need to send contains only text:
def myFileContent = readFile("myFile.txt")
FilePath fp = new FilePath(new File("${WORKSPACE}", "myFile.txt"))
if (fp != null) {
    fp.write(myFileContent, null)
}
def file = new File("${WORKSPACE}/myFile.txt")
Then use the file in the FileParameterValue object as usual.
Don't forget to import the FilePath class: import hudson.FilePath
I've tried this myself recently with little success. There seems to be a problem with this. According to the documentation for class FileParameterValue, there is a constructor which accepts an org.apache.commons.fileupload.FileItem, like so:
@DataBoundConstructor
FileParameterValue(String name,
                   org.apache.commons.fileupload.FileItem file)
There is another which expects a java.io.File, like so:
FileParameterValue(String name,
                   File file,
                   String originalFileName)
But since only the former is annotated with @DataBoundConstructor, even when I try to use the latter in a script:
file = new File(pwd(), 'test.txt')
build(
    job: 'jobB',
    parameters: [
        [$class: "FileParameterValue", name: "TEST_FILE", file: file, originalFileName: 'test.txt']
    ]
)
Note that this requires script approval for instantiating java.io.File
... I get the following error:
java.lang.ClassCastException: hudson.model.FileParameterValue.file expects interface org.apache.commons.fileupload.FileItem but received class java.io.File
I understand that only a file uploaded by the user as interactive runtime input provides an object of type org.apache.commons.fileupload.FileItem, so in the end I resorted to archiving the file in the first job and unarchiving it in the downstream job, which got around the problem. It's not ideal, of course, but if you're in a jam it's the quickest way to sort it out.
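A rough sketch of that workaround, assuming the Copy Artifact plugin is installed (job and file names are illustrative):
// Upstream job: archive the file alongside the build.
archiveArtifacts artifacts: 'output.tar.gz'

// Downstream job: fetch the artifact from the upstream build.
copyArtifacts projectName: 'upstream-job', selector: lastSuccessful(), filter: 'output.tar.gz'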
You can't; here is the Jenkins bug. Update this thread once the bug is fixed. In the meantime, log in and vote for this issue, and ask for documentation of pipeline build job parameters:
https://issues.jenkins-ci.org/browse/JENKINS-27413
Linked to from here: http://jenkins-ci.361315.n4.nabble.com/pipeline-build-job-with-FileParameterValue-td4861199.html
Here is the documentation for the different parameter types (link to FileParameterValue):
http://javadoc.jenkins.io/hudson/model/FileParameterValue.html
Try to pass an instance of FileParameterValue to parameters (it worked for me):
import hudson.model.*
def param_file = new File("path/to/file")
build job: 'otherproject', parameters: [new FileParameterValue('file_param_name', param_file, 'original_file_name')], wait: false
Using the Jenkins File Parameter plugin, which supports (i) base64 files and (ii) stashed files.
The following is an example of caller and callee pipeline Jenkins scripts on a Windows agent.
Caller
pipeline {
    stages {
        stage('Call Callee Job') {
            steps {
                script {
                    def callee_job = build(job: 'test-callee', parameters: [
                        base64File(name: 'smallfile', base64: Base64.encoder.encodeToString('small file 123'.bytes)),
                        stashedFile(name: 'largefile', file: getFileItem())
                    ], propagate: true)
                }
            }
        }
    }
}

// Read the file and convert it from a java.io.File to an Apache Commons DiskFileItem.
@NonCPS
def getFileItem() {
    def largeFileObject = new File(pwd(), "filename.apk")
    def diskFileItem = new org.apache.commons.fileupload.disk.DiskFileItem("fieldNameFile", "application/vnd.android.package-archive", false, largeFileObject.getName(), (int) largeFileObject.length(), largeFileObject.getParentFile())
    def inputStream = new FileInputStream(largeFileObject)
    def outputStream = diskFileItem.getOutputStream()
    org.apache.commons.io.IOUtils.copy(inputStream, outputStream)
    inputStream.close()
    outputStream.close()
    return diskFileItem
}
Callee
pipeline {
    parameters {
        base64File(name: 'smallfile')
        stashedFile(name: 'largefile')
    }
    stages {
        stage('Print params') {
            steps {
                echo "params.smallfile: ${params.smallfile}" // gives base64-encoded value
                echo "params.largefile: ${params.largefile}" // gives null
                withFileParameter('smallfile') {
                    echo "$smallfile" // gives tmp file path in callee job workspace
                    bat "more $smallfile" // reads tmp file to print its content
                }
                unstash 'largefile'
                bat 'dir largefile' // shows largefile in the callee job workspace directory
            }
        }
    }
}
