Jenkins pipeline and Robot Framework results

I have to implement a Pipeline and am trying to find a way to publish Robot Framework results in a Jenkins Pipeline.
I found multiple questions about integrating the Robot Framework plugin into a Pipeline, including this question, which seems to be the solution. However, I have tried this approach and the results are still missing.
Is there any workaround or functional example?

[Edited to reflect successful workaround]
This comment on the issue tracker shows a workaround that seems to work:
step([
    $class               : 'RobotPublisher',
    outputPath           : outputDirectory,
    outputFileName       : "*.xml",
    disableArchiveOutput : false,
    passThreshold        : 100,
    unstableThreshold    : 95.0,
    otherFiles           : "*.png"
])
However, the Robot Framework Plugin does not currently seem to be fully compatible with Pipeline: https://issues.jenkins-ci.org/browse/JENKINS-34469
This is common with many plugins in the Jenkins ecosystem right now that have not yet been updated to be compatible with the new Jenkins Pipeline. You could potentially add the full compatibility yourself, though, if you're motivated enough.

I used the workaround mentioned in the other answer, but it wouldn't display the results with the job like in non-pipeline jobs, so I made a freestyle project that is triggered by the pipeline job, copies the result files across, and then runs the analysis. This is crufty and won't be portable across nodes, and the job numbers might get confusing over time, so the correlations might be tricky. At that point I will investigate using generic artifact storage or just getting rid of Robot altogether.
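A minimal sketch of that hand-off, assuming a scripted pipeline and a freestyle job named robot-results-publisher (the job name, paths, and parameter are hypothetical, not from the original setup):
// Hypothetical: archive the Robot output from the pipeline build, then
// trigger a freestyle job that copies it across (e.g. via the Copy
// Artifact plugin) and runs the Robot Framework publisher on it.
archiveArtifacts artifacts: 'output/*.xml, output/*.png'
build job: 'robot-results-publisher',
      parameters: [string(name: 'UPSTREAM_BUILD', value: env.BUILD_NUMBER)],
      wait: false
The freestyle job can then use UPSTREAM_BUILD to correlate results, which is exactly where the job-number bookkeeping mentioned above gets tricky.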

I had trouble using the answer given above, resulting in errors, but I was able to figure it out and add it to the Pipeline. Here is how I fixed it, in case anyone else comes across the same issues:
stage('Tests') {
    steps {
        echo 'Testing...'
        script {
            step([
                $class               : 'RobotPublisher',
                outputPath           : '<insert/the/output/path>',
                outputFileName       : "*.xml",
                reportFileName       : "report.html",
                logFileName          : "log.html",
                disableArchiveOutput : false,
                passThreshold        : 100,
                unstableThreshold    : 95.0,
                otherFiles           : "*.png"
            ])
        }
    }
}


Jenkins Log Parser

I was using the https://plugins.jenkins.io/log-parser/ plugin with freestyle Jenkins jobs. But since moving to Jenkins Pipeline, I have not been able to integrate the log parser into the Declarative Pipeline syntax.
How can this be done? I also didn't find info in their docs. Also, what would be a good log parsing rule and where to specify it? In the Jenkinsfile also? Could you give an example? Thanks.
I don't use log-parser, but a quick glance at the issues suggests it is not presently compatible:
JENKINS-27208: Make Log Parser Plugin compatible with Workflow
JENKINS-32866: Log Parser Plugin does not parse Pipeline console outputs
Update:
This old response by Jesse Glick (CloudBees; Jenkins sponsor) to a similar question suggests it does in fact work now and shows how to generate the syntax, but the OP complains that the DSL and documentation are weak.
gdemengin wrote pipeline-logparser to work around another issue, JENKINS-54304.
Build Failure Analyzer may also be of use to you.
YMMV
You can try something like the following:
stage('check') {
    steps {
        echo 'checking logs from previous stages...'
        logParser failBuildOnError: true, parsingRulesPath: '/path/to/rules', useProjectRule: false, projectRulePath: ''
    }
}
The Pipeline Syntax section (Snippet Generator) in Jenkins lets you generate such snippets for your Jenkinsfile.
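As for the parsing rules, the plugin reads a plain-text file with one <category> /regex/ entry per line; here is a minimal sketch of such a file (the categories follow the plugin's rule format, but the patterns themselves are only illustrative):
# hypothetical /path/to/rules file; tune the patterns to your own logs
error /(?i)^error/
warning /(?i)warning/
info /INFO/
start /^Started by/
# 'ok' marks lines that would otherwise match an error/warning rule
ok /error-handling test passed/
Whatever path you pass as parsingRulesPath must be readable on the node where the logParser step runs.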

How can I use Jenkins' ExportParametersBuilder in a Pipeline?

I'm migrating a Freestyle job to a Pipeline on Jenkins. The Freestyle job uses the ExportParametersBuilder (Export Parameters to File) plug-in. This is important for our workflow because the application expects the parameters as a JSON file.
I have tried with a Basic Step, as documented in Pipeline: Basic Steps - Jenkins documentation (search for ExportParametersBuilder):
step([
    $class    : 'ExportParametersBuilder',
    filePath  : 'config/parameters',
    fileFormat: 'json',
    keyPattern: '',
    useRegexp : 'false'
])
But when I try to run the Pipeline I get the following error:
No known implementation of interface jenkins.tasks.SimpleBuildStep is named ExportParametersBuilder
The Pipeline Job is running on the same Jenkins instance as the Freestyle Job (which is currently working). So, the Plug-in is installed and working. I'm not sure why this is happening.
Does anyone know if this plug-in can be used in Pipeline jobs? And if so, how? What am I missing?
If it cannot be used, my apologies, Jenkins' documentation is often misleading.
I couldn't find a way to use the plug-in, but I found an alternative. I'm leaving it here in case it proves useful for someone else.
// Import the JsonOutput class at the top of your Jenkinsfile
import groovy.json.JsonOutput
...
stage('Environment Setup') {
    steps {
        writeFile(file: 'config/parameters.json', text: JsonOutput.toJson(params))
    }
}
This is probably not the cleanest or most elegant way to do it, but it works. The params are all written to the JSON file, and the JsonOutput class takes care of all the escaping magic and so on.
Do keep in mind that the format of the JSON file is a little different from the one ExportParametersBuilder created, so you'll need to adapt to it:
ExportParametersBuilder format:
[
    ...
    {
        "key": "target_node",
        "value": "c3po"
    }
    ...
]
JsonOutput format:
{
    ...
    "target_node": "c3po"
    ...
}

Jenkins declarative pipeline: How to configure the Klocwork result display on the job page

I am creating a pipeline using the declarative pipeline flavour, with Klocwork steps enclosed within a klocworkWrapper block where I can define the Klocwork setup:
klocworkWrapper(installConfig: 'My Klocwork', ltoken: "${HOME}/.klocwork/ltoken", serverConfig: 'Klocwork#XYZ', serverProject: 'S3cr3TPr0j3ct') {
    klocworkBuildSpecGeneration([additionalOpts: '', buildCommand: 'make', ignoreErrors: true, output: 'kwinject.out', tool: 'kwinject'])
    klocworkIntegrationStep1([additionalOpts: '', buildSpec: 'kwinject.out', disableKwdeploy: false, ignoreCompileErrors: true, importConfig: '', incrementalAnalysis: false, tablesDir: 'kwtables'])
    klocworkIntegrationStep2([additionalOpts: '', buildName: "${JOB_BASE_NAME}_${BUILD_NUMBER}", tablesDir: 'kwtables'])
}
OK, the analysis is launched, and I can see the results on the Klocwork server web interface.
But I cannot find a way to retrieve the resulting diagrams on the Jenkins web interface, even when using the pipeline script generator.
Unless I am totally wrong, I think that I should use klocworkQualityGateway, but the generated script snippet is not correct.
Once copied within the wrapper, it fails, complaining about a missing enableXYGateway or gatewayXYConfig property.
For example, this line:
klocworkQualityGateway([enableCiGateway: false, enableServerGateway: true, gatewayServerConfigs: [[conditionName: 'Issues', jobResult: 'failure', query: 'state:+Status,Fix', threshold: '1']]])
fails with an error message:
WorkflowScript: 92: Missing required parameter: "gatewayCiConfig" @ line 92, column 1.
klocworkQualityGateway([enableCiGateway: false, enableServerGateway: true, gatewayServerConfigs: [[conditionName: 'Issues', jobResult: 'failure', query: 'state:+Status,Fix', threshold: '1']]])
I really cannot find a way to make it work, and I guess I may have taken a wrong turn... so any help would be appreciated.
Thanks for your help and best regards,
J-L
Well, after a fruitful discussion with the plugin maintainer (M. Baron), it appears that there is currently no simple and direct solution to display Klocwork results on a pipeline job page.
He said :
This step doesn't have a native pipeline interface and a few people
have tried, but haven't had much success with workarounds to use this
in a pipeline.
The simplest thing to do seems to be to trigger a freestyle job that does only that.
As far as I have understood, a new plugin version with full pipeline support will replace the current one.
So, I think this discussion can be closed.

Matrix configuration with Jenkins pipelines

The Jenkins Pipeline plugin (aka Workflow) can be extended with other Multibranch plugins to build branches and pull requests automatically.
What would be the preferred way to run multiple configurations? For example, building with Java 7 and Java 8. This is often called matrix configuration (because of the multiple combinations such as language version, framework version, ...) or build variants.
I tried:
executing them serially as separate stage steps. Good, but takes more time than necessary.
executing them inside a parallel step, with or without nodes allocated inside them. Works, but I cannot use the stage step inside parallel because of known limitations in how it would be visualized.
Is there a recommended way to do this?
TLDR: Jenkins.io wants you to use nodes for each build.
Jenkins.io: In pipeline coding contexts, a "node" is a step that does two things, typically by enlisting help from available executors on agents:
Schedules the steps contained within it to run by adding them to the Jenkins build queue (so that as soon as an executor slot is free on a node, the appropriate steps run)
Creates a workspace, i.e. a file directory specific to a particular job, where resource-intensive processing can occur without negatively impacting your pipeline performance
It is a best practice to do all material work, such as building or running shell scripts, within nodes, because node blocks in a stage tell Jenkins that the steps within them are resource-intensive enough to be scheduled, request help from the agent pool, and lock a workspace only as long as they need it.
Vanilla Jenkins node blocks within a stage would look like:
stage('build') {
    node('java7-build') { ... }
    node('java8-build') { ... }
}
Further extending this notion, CloudBees writes about parallelism and distributed builds with Jenkins. A CloudBees workflow for you might look like:
stage('build') {
    parallel 'java7-build': {
        node('mvn-java7') { ... }
    }, 'java8-build': {
        node('mvn-java8') { ... }
    }
}
Your requirement of visualizing the different builds in the pipeline could be satisfied with either workflow, but I trust the Jenkins documentation for best practice.
EDIT
To address the visualization @Stephen would like to see: he's right, it doesn't work! The issue has been raised with Jenkins and is documented here; the resolution, involving the use of 'labelled blocks', is still in progress :-(
Q: Is there documentation telling pipeline users not to put stages inside of parallel steps?
A: No, and this is considered to be an incorrect usage if it is done; stages are only valid as top-level constructs in the pipeline, which is why the notion of labelled blocks as a separate construct has come to be ... And by that, I mean remove stages from parallel steps within my pipeline.
If you try to use a stage in a parallel job, you're going to have a bad time.
ERROR: The ‘stage’ step must not be used inside a ‘parallel’ block.
I would suggest Declarative Matrix as a preferred way to run multiple configurations in Jenkins. It allows you to execute the defined stages for every configuration without code duplication.
Example:
pipeline {
    agent none
    stages {
        stage('Test') {
            matrix {
                agent {
                    label "${NODENAME}"
                }
                axes {
                    axis {
                        name 'NODENAME'
                        values 'java7node', 'java8node'
                    }
                }
                stages {
                    stage('Run Test') {
                        steps {
                            echo "Do Test for ${NODENAME}"
                        }
                    }
                }
            }
        }
    }
}
Note that declarative Matrix is a native declarative Pipeline feature, so no additional plugin installation is needed. See the Jenkins blog post about the matrix directive.
As noted by @StephenKing, Blue Ocean will show parallel branches better than the current stage view. A planned upcoming version of the stage view will be able to show all the branches, though it will not visually indicate any nesting structure (it would look the same as if you ran the configurations serially).
In any event, the deeper issue is that you will essentially only get a pass/fail status for the build overall, pending a resolution to JENKINS-27395 and related requests.
In order to test each commit on several platforms, I've used this base Jenkinsfile skeleton:
def test_platform(label, with_stages = false)
{
    node(label)
    {
        // Checkout
        if (with_stages) stage label + ' Checkout'
        ...
        // Build
        if (with_stages) stage label + ' Build'
        ...
        // Tests
        if (with_stages) stage label + ' Tests'
        ...
    }
}
/*
parallel ( failFast: false,
    Windows: { test_platform("Windows") },
    Linux: { test_platform("Linux") },
    Mac: { test_platform("Mac") }
)
*/
test_platform("Windows", true)
test_platform("Mac", true)
test_platform("Linux", true)
With this it's relatively easy to switch between a sequential and a parallel execution, each of them having its pros and cons:
Parallel execution runs much faster, but it doesn't include the stage labelling.
Sequential execution is much slower, but you get a detailed report thanks to the stages, labelled "Windows Checkout", "Windows Build", "Windows Tests", "Mac Checkout", etc.
I'm using the sequential execution for the time being, until I find a better solution.
It seems like there is relief coming, at least with the Blue Ocean UI. Here is what I got (the tk-* nodes are the parallel steps): [screenshot of the Blue Ocean pipeline view]

Jenkins, how to check regressions against another job

When you set up a Jenkins job various test result plugins will show regressions if the latest build is worse than the previous one.
We have many jobs for many projects on our Jenkins, and we wanted to avoid a 'job per branch' setup. So currently we are using a parameterized build to build, e.g., different development branches using a single job.
But that means when I build a new branch any regressions are measured against the previous build, which may be for a different branch. What I really want is to measure regressions in a feature branch against the latest build of the master branch.
I thought we should probably set up a separate 'master' build alongside the parameterized 'branches' build. But I still can't see how I would compare results between jobs. Is there any plugin that can help?
UPDATE
I have started experimenting in the Script Console to see if I could write a post-build script... I have managed to get the latest build of the master branch in my parameterized job... I can't work out how to get at the test results from the build object, though.
The data I need is available in JSON at
http://<jenkins server>/job/<job name>/<build number>/testReport/api/json?pretty=true
...if I could just get at this data structure it would be great!
I tried using JsonSlurper to load the json via HTTP but I get 403, I guess because my script has no auth session.
I guess I could load the xml test results from disk and parse them in my script, it just seems a bit stupid when Jenkins has already done this.
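For reference, the 403 can usually be avoided by authenticating the request with a user API token; a minimal sketch (host, job, build number, and credentials are all placeholders):
// Hypothetical: fetch the testReport JSON with HTTP basic auth using
// a Jenkins API token, then parse it with JsonSlurper.
import groovy.json.JsonSlurper

def url = new URL('http://jenkins.example.com/job/myjob/42/testReport/api/json')
def conn = url.openConnection()
def token = 'myuser:my-api-token'.bytes.encodeBase64().toString()
conn.setRequestProperty('Authorization', "Basic ${token}")
def report = new JsonSlurper().parse(conn.inputStream)
println report.failCount
The answer below avoids the HTTP round-trip entirely by using the Jenkins Java API instead.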
I eventually managed to achieve everything I wanted, using a Groovy script in the Groovy Postbuild Plugin
I did a lot of exploring using the script console (http://<jenkins>/script), and the Jenkins API class docs are also handy.
Everyone's use is going to be a bit different as you have to dig down into the build plugins to get the info you need, but here's some bits of my code which may help.
First get the build you want:
def getProject(projectName) {
    // in a postbuild action use `manager.hudson`
    // in the script web console use `Jenkins.instance`
    def project = manager.hudson.getItemByFullName(projectName)
    if (!project) {
        throw new RuntimeException("Project not found: $projectName")
    }
    project
}
// CloudBees folder plugin is supported, you can use natural paths:
project = getProject('MyFolder/TestJob')
build = project.getLastCompletedBuild()
The main test results (JUnit etc.) seem to be available directly on the build as:
result = build.getTestResultAction()
// eg
failedTestNames = result.getFailedTests().collect { test ->
    test.getFullName()
}
To get the more specialised results from eg Violations plugin or Cobertura code coverage you have to look for a specific build action.
// have a look what's available:
build.getActions()
You'll see a list of stuff like:
[hudson.plugins.git.GitTagAction@2b4b8a1c,
hudson.scm.SCMRevisionState$None@40d6dce2,
hudson.tasks.junit.TestResultAction@39c99826,
jenkins.plugins.show_build_parameters.ShowParametersBuildAction@4291d1a5]
These are instances; the part in front of the @ sign is the class name, so I used that to make this method for getting a specific action:
def final VIOLATIONS_ACTION = hudson.plugins.violations.ViolationsBuildAction
def final COVERAGE_ACTION = hudson.plugins.cobertura.CoberturaBuildAction
def getAction(build, actionCls) {
    def action = build.getActions().findResult { act ->
        actionCls.isInstance(act) ? act : null
    }
    if (!action) {
        throw new RuntimeException("Action not found in ${build.getFullDisplayName()}: ${actionCls.getSimpleName()}")
    }
    action
}
violations = getAction(build, VIOLATIONS_ACTION)
// you have to explore a bit more to find what you're interested in:
pylint_count = violations?.getReport()?.getViolations()?."pylint"
coverage = getAction(build, COVERAGE_ACTION)?.getResults()
// if you println it looks like a map but it's really an Enum of Ratio objects
// convert to something nicer to work with:
coverage_map = coverage.collectEntries { key, val -> [key.name(), val.getPercentageFloat()] }
With these building blocks I was able to put together a post-build script which compared the results for two 'unrelated' build jobs, then used the Groovy Postbuild plugin's helper methods to set the build status.
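For example, the comparison can look something like this sketch, which reuses getProject from above (the job name is made up, and this assumes the JUnit action is present on both builds):
// Hypothetical: compare this build's test failures against the latest
// completed master build and mark the build unstable on regression.
def masterBuild = getProject('MyFolder/master-job').getLastCompletedBuild()
def masterFailures = masterBuild.getTestResultAction()?.getFailCount() ?: 0
def thisFailures = manager.build.getTestResultAction()?.getFailCount() ?: 0
if (thisFailures > masterFailures) {
    manager.addWarningBadge("Regression: ${thisFailures} failures vs ${masterFailures} on master")
    manager.buildUnstable()
}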
Hope this helps someone else.
