Set Gatling report name in Jenkins pipeline through Taurus

I'm writing a declarative Jenkins pipeline and I'm facing problems with Gatling reports:
the mean response time trend is not correct. Is there a way to replace the following cloud of dots with a curve?
Extracts from my Jenkinsfile:
stage('perf') {
    steps {
        bzt params: './taurus/scenario.yml', generatePerformanceTrend: false, printDebugOutput: true
        perfReport configType: 'PRT', graphType: 'PRT', ignoreFailedBuilds: true, modePerformancePerTestCase: true, modeThroughput: true, sourceDataFiles: 'results.xml'
        dir('taurus/results') {
            gatlingArchive()
        }
    }
}
Extract from my scenario.yml:
modules:
  gatling:
    path: ./bin/gatling.sh
    java-opts: -Dgatling.core.directory.data=./data
In scenario.yml, I tried to set gatling.core.outputDirectoryBaseName:
java-opts: -Dgatling.core.directory.data=./data -Dgatling.core.outputDirectoryBaseName=./my_scenario
In this case it only replaces the gatling prefix with my_scenario; the huge number (the run timestamp) is still appended.

I finally found a solution to this problem, but it's not simple, since it involves extending the Taurus code.
The problem is at line 309 of the file gatling.py in the Taurus repo: it explicitly adds the prefix 'gatling-' when looking for a Gatling report.
However, the parameter -Dgatling.core.outputDirectoryBaseName=./my_scenario in scenario.yml changes this prefix to my_scenario. What I describe below is a way to extend Taurus quickly.
Create a file ./extensions/gatling.py with this code to extend the GatlingExecutor class:
from bzt.modules.gatling import GatlingExecutor, DataLogReader

class GatlingExecutorExtension(GatlingExecutor):
    def __init__(self):
        GatlingExecutor.__init__(self)

    def prepare(self):
        # From bzt.modules.gatling.GatlingExecutor.prepare, copy the code that comes before the famous line 309
        # Replace line 309 with:
        self.dir_prefix = self.settings.get('dir_prefix', 'gatling-%s' % id(self))
        # From bzt.modules.gatling.GatlingExecutor.prepare, copy the code that comes after the famous line 309
Create a file ./bztx.py to wrap the bzt command:
import signal

from bzt.cli import main, signal_handler

if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)
    main()
Update scenario.yml to use the new dir_prefix setting and to point at the new executor class:
modules:
  gatling:
    path: ./bin/gatling.sh
    class: extensions.GatlingExecutorExtension
    dir_prefix: my_scenario
    java-opts: -Dgatling.core.directory.data=./data -Dgatling.core.outputDirectoryBaseName=./my_scenario
Finally, update your Jenkinsfile by replacing the bzt call with a call to your new bztx.py file:
stage('perf') {
    steps {
        sh 'python bztx.py ./taurus/scenario.yml'
        perfReport configType: 'PRT', graphType: 'PRT', ignoreFailedBuilds: true, modePerformancePerTestCase: true, modeThroughput: true, sourceDataFiles: 'results.xml'
        dir('taurus/results') {
            gatlingArchive()
        }
    }
}
That's all, and it works for me. Bonus: this solution gives you a way to easily extend Taurus with your own plugins ;-)
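For example (a hypothetical sketch; the module alias, file, and class names below are made up), any custom executor can be wired up the same way in scenario.yml:
modules:
  mymodule:
    # assumes a file ./extensions/my_executor.py defining class MyCustomExecutor
    class: extensions.my_executor.MyCustomExecutor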

Related

Reduce the pipeline script for running multiple jobs in parallel

I have the snippet below for getting the matching job names and then triggering all of them to run in parallel.
Shared library file CommonPipelineMethods.groovy
import jenkins.instance.*
import jenkins.model.*
import hudson.model.Result
import hudson.model.*
import org.jenkinsci.plugins.workflow.support.steps.*

class PipelineMethods {
    def buildSingleJob(downstreamJob) {
        return {
            result = build job: downstreamJob.fullName, propagate: false
            echo "${downstreamJob.fullName} finished: ${result.rawBuild.result}"
        }
    }
}

return new PipelineMethods()
The main Jenkinsfile script:
def commonPipelineMethods

pipeline {
    agent any
    stages {
        stage('Load Common Methods into Pipeline') {
            steps {
                script {
                    def JenkinsFilePath = '/config/jenkins/jobs'
                    commonPipelineMethods = load "${WORKSPACE}${JenkinsFilePath}/CommonPipelineMethods.groovy"
                }
            }
        }
        stage('Integration Test Run') {
            steps {
                script {
                    matchingJobs = commonPipelineMethods.getIntegrationTestJobs(venture_to_test, testAgainst)
                    parallel matchingJobs.collectEntries { downstreamJob -> [downstreamJob.name, commonPipelineMethods.buildSingleJob(downstreamJob)] }
                }
            }
        }
    }
}
The script works fine, but the collectEntries ... parallel step is a bit busy and not easy to follow. My main purpose is to make the pipeline script cleaner and easier for others to maintain: something simple like calling an external method, e.g. matchingJobs = commonMethods.getIntegrationTestJobs(venture, environment), so others can understand it right away and know what the code does in this context.
I tried several ways to improve it, for example moving the single-job build logic out of the pipeline itself and into the external library:
def buildSingleJobParallel(jobFullName) {
    String tempPipelineResult = 'SUCCESS'
    result = build job: jobFullName, propagate: false
    echo "${jobFullName} finished: ${result.rawBuild.result.toString()}"
    if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
        tempPipelineResult = 'FAILURE'
    }
}
then Jenkins prompted me with:
groovy.lang.MissingMethodException: No signature of method: PipelineMethods.build() is applicable for argument types: (java.util.LinkedHashMap) values: [[job:test_1, propagate:false]]
I understand that the build() method comes from the Jenkins Pipeline Build Step plugin, but I failed to import it and use it inside that external library (which I load with the load() step in the very first phase of my pipeline script).
So my questions are:
1. Can I use the Jenkins Pipeline Build Step plugin inside the external library mentioned above?
2. If that's not possible, is there any cleaner way to make my script simpler and easier to maintain?
Thanks, everybody!
I'm not sure if it's runnable or looks clearer, but I just tried to put it all together from the question and the comments:
// function that returns a closure to be used as one of the parallel jobs
def buildSingleJobParallel(steps, mjob) {
    return {
        def result = steps.build job: mjob.fullName, propagate: false
        steps.echo "${mjob.fullName} finished: ${result.rawBuild.result}"
        if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
            steps.currentBuild.result = 'FAILURE'
        }
    }
}
stage('Integration Test Run') {
    steps {
        script {
            // build a Map<jobName, Closure> and run the jobs in parallel
            parallel matchingJobs.collectEntries { mjob -> [mjob.name, buildSingleJobParallel(this, mjob)] }
        }
    }
}
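To make the pipeline itself read as a single call, the map construction could also move into the shared library. A hedged sketch (the method name runMatchingJobsInParallel is made up; it assumes the buildSingleJobParallel helper above sits in the same library):
def runMatchingJobsInParallel(steps, matchingJobs) {
    // hides the collectEntries plumbing, so the pipeline just calls
    // commonPipelineMethods.runMatchingJobsInParallel(this, matchingJobs)
    steps.parallel matchingJobs.collectEntries { mjob ->
        [mjob.name, buildSingleJobParallel(steps, mjob)]
    }
}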

XmlSlurper() in Jenkins pipeline: how to avoid java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild

I'm trying to read properties from my pom.xml file. I tried the following and it worked:
steps {
    script {
        def xmlfile = readFile "pom.xml"
        def xml = new XmlSlurper().parseText(xmlfile)
        def version = "${xml.version}"
        echo version
    }
}
When I tried to do something like this:
steps {
    script {
        def xmlfile = readFile "pom.xml"
        def xml = new XmlSlurper().parseText(xmlfile)
        def version = "${xml.version}"
        def mystring = "blabhalbhab-${version}"
        echo mystring
    }
}
the pipeline suddenly fails with the error:
Caused: java.io.NotSerializableException: groovy.util.slurpersupport.NodeChild
What might be the problem here?
EDIT: just adding this for others finding it with the same use case. My specific question was about how to avoid the CPS-related error with XmlSlurper(). BUT for anyone else trying to parse POMs, afraid of the PR that will supposedly deprecate readMavenPom, the safest, most Maven-y way of doing this is probably something like:
def version = sh(script: "mvn help:evaluate -f 'pom.xml' -Dexpression=project.version -q -DforceStdout", returnStdout: true).trim()
This way you're using Maven itself to tell you what the version is, rather than grepping or sedding all over the damn place. See: How to get Maven project version to the bash command line
In general, using groovy.util.slurpersupport.NodeChild (the type of your xml variable) or groovy.util.slurpersupport.NodeChildren (the type of xml.version) inside a CPS pipeline is a bad idea. Both classes are not serializable, so you can't predict their behavior in Groovy CPS. For instance, I ran your second example successfully in my Jenkins pipeline, most probably because the example you gave is not complete or something like that.
groovy:000> xml = new XmlSlurper().parseText("<tag></tag>")
===>
groovy:000> xml instanceof Serializable
===> false
groovy:000> xml.tag instanceof Serializable
===> false
groovy:000> xml.dump()
===> <groovy.util.slurpersupport.NodeChild#0 node=groovy.util.slurpersupport.Node#5b1f29fa parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000> xml.tag.dump()
===> <groovy.util.slurpersupport.NodeChildren#0 size=-1 parent= name=tag namespacePrefix=* namespaceMap=[xml:http://www.w3.org/XML/1998/namespace] namespaceTagHints=[xml:http://www.w3.org/XML/1998/namespace]>
groovy:000>
If you want to read the pom.xml file, use the readMavenPom pipeline step. It is dedicated to reading pom files and, most importantly, it is safe to use without applying any workarounds. This step comes with the pipeline-utility-steps plugin.
However, if you want to use XmlSlurper for some reason, you need to use it inside a method annotated with @NonCPS. That way you can access "pure" Groovy and avoid the problems you have faced. (Still, using readMavenPom is the safest way to achieve what you are trying to do.) The point is to keep any non-serializable objects inside a @NonCPS scope, so the pipeline does not try to serialize them.
Below you can find a simple example of the pipeline that shows both approaches.
pipeline {
    agent any
    stages {
        stage("Using readMavenPom") {
            steps {
                script {
                    def xmlfile = readMavenPom file: "pom.xml"
                    def version = xmlfile.version
                    echo "version = ${version}"
                }
            }
        }
        stage("Using XmlSlurper") {
            steps {
                script {
                    def xmlfile = readFile "pom.xml"
                    def version = extractFromXml(xmlfile) { xml -> xml.version }
                    echo "version = ${version}"
                }
            }
        }
    }
}

@NonCPS
String extractFromXml(String xml, Closure closure) {
    def node = new XmlSlurper().parseText(xml)
    return closure.call(node)?.text()
}
PS: not to mention that using XmlSlurper requires at least 3 script approvals before you can start using it.

Groovy DSL for Extended choice parameter plugin

I'm converting my Jenkins job configurations into code using the Groovy Job DSL. I was able to convert everything except the Extended Choice Parameter plugin configuration.
I have a Groovy script which makes some API calls, gets the values, and returns them as choices for the defined parameter in the job. I've tested it and it works fine. But when I tried to automate/convert the same thing into the Job DSL, I didn't get enough support from the plugin, nor could I find any documentation covering this situation.
Kindly help.
I went through the same process a couple of months ago. I found this article tremendously useful - http://www.devexp.eu/2014/10/26/use-unsupported-jenkins-plugins-with-jenkins-dsl.
Here's a sample code snippet:
configure { project ->
    project / 'properties' << 'hudson.model.ParametersDefinitionProperty' {
        parameterDefinitions {
            'com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition' {
                name 'TARGET_ENVS'
                quoteValue 'false'
                saveJSONParameterToFile 'false'
                visibleItemCount '15'
                type 'PT_CHECKBOX'
                value "${deployTargets}"
                multiSelectDelimiter ','
                projectName "${jobName}"
            }
        }
    }
}
The article suggests appending the configure code block at the end of your DSL job definition; however, that didn't work for me. I ended up putting the code block at the start of the definition.
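A hedged sketch of the placement that worked for me (the job name and the shell step are invented; only the position of the configure block matters):
job('deploy-to-environments') {
    // configure block first, before the rest of the job definition
    configure { project ->
        project / 'properties' << 'hudson.model.ParametersDefinitionProperty' {
            parameterDefinitions {
                // ... extended choice parameter definition as in the snippet above ...
            }
        }
    }
    steps {
        shell('echo "Deploying to $TARGET_ENVS"')
    }
}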
Good luck
The Job DSL plugin allows you to add XML configuration to a job's config.xml file. You have to use the configure closure and then specify whatever you want. For example, I have the following configuration:
<hudson.model.ParametersDefinitionProperty>
  <parameterDefinitions>
    <com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition plugin="extended-choice-parameter@0.76">
      <name>PRODUCT_REPO_URL</name>
      <description>ssh URL of the product repository</description>
      <quoteValue>false</quoteValue>
      <saveJSONParameterToFile>false</saveJSONParameterToFile>
      <visibleItemCount>10</visibleItemCount>
      <type>PT_SINGLE_SELECT</type>
      <groovyScript>import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get(&apos;PRODUCT_REPOSITORIES&apos;)</groovyScript>
      <bindings></bindings>
      <groovyClasspath></groovyClasspath>
      <defaultGroovyScript>import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get(&apos;PRODUCT_REPOSITORY_DEFAULT&apos;)</defaultGroovyScript>
      <defaultBindings></defaultBindings>
      <defaultGroovyClasspath></defaultGroovyClasspath>
      <multiSelectDelimiter>,</multiSelectDelimiter>
      <projectName>try-to-upgrade-dependencies</projectName>
    </com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition>
  </parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>
Now I can generate it by adding the following code:
configure { project ->
    project / 'properties' << 'hudson.model.ParametersDefinitionProperty' {
        parameterDefinitions {
            'com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition'(plugin: 'extended-choice-parameter@0.76') {
                delegate.name('PRODUCT_REPO_URL')
                delegate.description('ssh URL of the product repository')
                delegate.quoteValue(false)
                delegate.saveJSONParameterToFile(false)
                delegate.visibleItemCount(10)
                delegate.type('PT_SINGLE_SELECT')
                delegate.groovyScript("""import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get('PRODUCT_REPOSITORIES')""")
                delegate.defaultGroovyScript("""import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get('PRODUCT_REPOSITORY_DEFAULT')""")
                delegate.multiSelectDelimiter(',')
                delegate.projectName('try-to-upgrade-dependencies')
            }
        }
    }
}
The final result:
<hudson.model.ParametersDefinitionProperty>
  <parameterDefinitions>
    <com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition plugin="extended-choice-parameter@0.76">
      <name>PRODUCT_REPO_URL</name>
      <description>ssh URL of the product repository</description>
      <quoteValue>false</quoteValue>
      <saveJSONParameterToFile>false</saveJSONParameterToFile>
      <visibleItemCount>10</visibleItemCount>
      <type>PT_SINGLE_SELECT</type>
      <groovyScript>import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get('PRODUCT_REPOSITORIES')</groovyScript>
      <defaultGroovyScript>import hudson.slaves.EnvironmentVariablesNodeProperty
import jenkins.model.Jenkins
Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars.get('PRODUCT_REPOSITORY_DEFAULT')</defaultGroovyScript>
      <multiSelectDelimiter>,</multiSelectDelimiter>
      <projectName>try-to-upgrade-dependencies</projectName>
    </com.cwctravel.hudson.plugins.extended__choice__parameter.ExtendedChoiceParameterDefinition>
  </parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>

Inject variable in Jenkins pipeline with Groovy script

I am building a Jenkins pipeline and the job can be triggered remotely. I need to know which IP address triggered the job, so I have a little Groovy script which returns the remote IP. With the EnvInject plugin I can easily use this variable in a normal freestyle job, but how can I use it in the pipeline script? I can't use the EnvInject plugin with the Pipeline plugin :(
Here is the little script for getting the IP:
import hudson.model.*
import static hudson.model.Cause.RemoteCause

def ipaddress = ""
for (CauseAction action : currentBuild.getActions(CauseAction.class)) {
    for (Cause cause : action.getCauses()) {
        if (cause instanceof RemoteCause) {
            ipaddress = cause.addr
            break
        }
    }
}
return ["ip": ipaddress]
You can create a shared library function (see here for examples and the directory structure). This is one of the undocumented (or really hard to find documentation for) features of Jenkins.
Put a file ipTrigger.groovy in the vars directory, which lives in the workflow-libs directory at the root level of JENKINS_HOME, and put your code in that file.
The full filename will then be $JENKINS_HOME/workflow-libs/vars/ipTrigger.groovy
(You can even make a git repo for your shared libraries and clone it into that directory.)
// workflow-libs/vars/ipTrigger.groovy
import hudson.model.*
import static hudson.model.Cause.RemoteCause

@com.cloudbees.groovy.cps.NonCPS
def call(currentBuild) {
    def ipaddress = ""
    for (CauseAction action : currentBuild.getActions(CauseAction.class)) {
        for (Cause cause : action.getCauses()) {
            if (cause instanceof RemoteCause) {
                ipaddress = cause.addr
                break
            }
        }
    }
    return ["ip": ipaddress]
}
After a restart of Jenkins, you can call the method from your pipeline script by the filename you gave it.
So from your pipeline just call def trigger = ipTrigger(currentBuild)
The IP address will then be trigger.ip (sorry for the bad naming, I couldn't come up with anything original).
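For completeness, a minimal usage sketch in a scripted pipeline, assuming the ipTrigger.groovy file above is in place and Jenkins has been restarted:
node {
    // ipTrigger resolves to the call() method in workflow-libs/vars/ipTrigger.groovy
    def trigger = ipTrigger(currentBuild)
    echo "Job was triggered from IP: ${trigger.ip}"
}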

Jenkinsfile: using traits and other Groovy syntax

I would like to build a slightly more complex pipeline via Jenkinsfiles, with some reusable steps, as I have a lot of similar projects. I'm using Jenkins 2.0 with the pipeline plugins. I know that you can load Groovy scripts which can contain some generic pieces of code, but I was wondering if these scripts can use some of the object-oriented features of Groovy, like traits. For example, say I had a trait called Step:
package com.foo.something.ci

trait Step {
    void execute() { echo 'Null execution' }
}
And a class that then implemented the trait in another file:
class Lint implements Step {
    def execute() {
        stage('lint')
        node {
            echo 'Do Stuff'
        }
    }
}
And then another class that contained the 'main' function:
class foo {
    def f = new Lint()
    f.execute()
}
How would I load and use all these classes in a Jenkinsfile, especially since I may have multiple classes each defining a step? Is this even possible?
Have a look at Shared Libraries. These enable the use of native Groovy code in Jenkins.
Your Jenkinsfile would include your shared library and then use the classes you defined. Be aware that you have to pass the steps variable of Jenkins if you want to use stage or the other variables defined in the Jenkins Pipeline plugin.
Excerpt from the documentation:
This is the class which would define your stages:
package org.foo

class Utilities implements Serializable {
    def steps

    Utilities(steps) { this.steps = steps }

    def mvn(args) {
        steps.sh "${steps.tool 'Maven'}/bin/mvn -o ${args}"
    }
}
You would use it like this:
@Library('utils') import org.foo.Utilities

def utils = new Utilities(steps)

node {
    utils.mvn 'clean package'
}
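Applied to the trait example from the question, a hedged sketch of how the Step trait could carry the steps object (whether traits survive the CPS transformation varies by Jenkins version, so treat this as a sketch rather than a guaranteed recipe; each type would live in its own file under the library's src directory):
package com.foo.something.ci

// the trait now carries the pipeline context, so stage/node/echo resolve correctly
trait Step implements Serializable {
    def steps
    void execute() { steps.echo 'Null execution' }
}

class Lint implements Step {
    void execute() {
        steps.stage('lint') {
            steps.node {
                steps.echo 'Do Stuff'
            }
        }
    }
}
From the Jenkinsfile it would then be invoked with something like def lint = new Lint(steps: this); lint.execute().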
