Jenkins build passes even if JMeter Duration Assertion fails

I am running JMeter via Jenkins and have a Duration Assertion of 300 ms in my script. The assertion works fine and my JMeter results show errors, but the Jenkins build still passes.
Is there any way to fail my build when the JMeter results contain errors?

You need to define an Error Threshold.
The Error Threshold marks the build as unstable or failed if the number of errors exceeds the specified value.
Use error thresholds on single build – define error thresholds in % for the current build.

The options are configured via the error threshold settings in the Performance Plugin GUI.
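If the job is a pipeline rather than a freestyle project, the same thresholds can be set through the Performance Plugin's perfReport step. A minimal sketch (the stage name, file names and threshold values are placeholders, adjust them to your setup):

pipeline {
    agent any
    stages {
        stage('Load test') {
            steps {
                // run JMeter in non-GUI mode and write results to result.jtl
                sh 'jmeter -n -t test.jmx -l result.jtl'
            }
        }
    }
    post {
        always {
            // UNSTABLE as soon as any sampler fails, FAILED above 5% errors
            perfReport sourceDataFiles: 'result.jtl',
                       errorUnstableThreshold: 0,
                       errorFailedThreshold: 5
        }
    }
}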
Another option: Jenkins considers a build passed when the process exits with status code 0, so you can "tell" JMeter to exit with a non-zero exit code by adding a JSR223 Listener with the following code:
// prev holds the SampleResult of the sampler that has just finished
if (!prev.isSuccessful()) {
    System.out.println("Test failure, exiting...")
    System.exit(1)
}
Here prev stands for the parent SampleResult class instance, which gives you access to the parent sampler's response code, message, data, etc. Check out the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on the JMeter API shortcuts available to JSR223 test elements.
JMeter will then terminate with exit status 1, and this non-zero status code will cause the Jenkins build to fail.
An alternative to the JSR223 approach would be using the Taurus tool as a wrapper for your JMeter test: it provides a handy Pass/Fail Criteria subsystem which gives you a flexible way of defining custom thresholds for considering your test failed.

Related

Display JMeter failed assertion logs in Jenkins console output

I have integrated JMeter with Jenkins. Sometimes, due to a JMeter assertion failure, the whole Jenkins build fails, which is expected for my requirement.
Now, is there any way to display the JMeter failed request (the one with the assertion failure mentioned above) in the Jenkins console output?
Add a JSR223 Listener at the same level as the requests you want to capture (or one level higher; see the JMeter Scoping Rules - The Ultimate Guide article to learn why the placement matters).
Put the following code into the "Script" area:
if (!prev.isSuccessful()) {
    // iterate over all assertion results attached to the previous sampler
    prev.getAssertionResults().each { assertionResult ->
        if (assertionResult.failure) {
            println('Request ' + prev.getSampleLabel() + ' failed with ' + assertionResult.failureMessage)
        }
    }
}
That's it: whenever a request fails due to a failed assertion, the relevant line will be printed to STDOUT (and therefore to the Jenkins console log).

Squish Jenkins plugin returning 0 while tests fail

Failing tests resulted in green balls in our "Open Blue Ocean" pipeline overview. When I read the manual (https://doc.froglogic.com/squish/latest/rg-cmdline.html) this is according to specification, but using the --exitCodeOnFail option should give us the desired behavior. In our Jenkinsfile we scripted the following:
squish([extraOptions: """--tags
${tag}
--retry
2
--config
addAppPath
${squishsrcdir}
--config
addAUT
startSimProApp.bat
${squishsrcdir}
--exitCodeOnFail
-666
--config
setResponseTimeout
30""", squishPackageName: 'squish for qt 6.5.2', testSuite: "${squishsrcdir}", unstableBuildOnError: true])
Unfortunately this results in the following error:
com.froglogic.squish.SquishException: unknown option --exitCodeOnFail
The squish plug-in version is: 8.1.1
What are my options to get red balls when a test fails under Squish?
The --exitCodeOnFail option is not supported by the Squish plugin.
Take a look at https://doc.froglogic.com/squish/latest/ao-hudson.html#ao-jenkins-example-pipeline-jobs
The squish step sets neither the build nor the stage result. It returns the execution result as a string instead, and your pipeline can act on the returned value. You can find an example in the last screenshot in the chapter linked above.
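For example, a scripted-pipeline fragment along these lines could turn a non-passing return value into a red ball (the exact string returned by the step and the 'SUCCESS' literal below are assumptions, check what your plugin version actually returns):

// hypothetical sketch: capture the string returned by the squish step
def squishResult = squish([squishPackageName: 'squish for qt 6.5.2',
                           testSuite: "${squishsrcdir}"])
echo "Squish returned: ${squishResult}"

// anything other than a clean pass fails the build
if (squishResult != 'SUCCESS') {
    currentBuild.result = 'FAILURE'
    error("Squish test suite finished with status: ${squishResult}")
}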
Squish has a known issue (reported and expected to be fixed) in matching the returned test suite execution status with the final job result when retries are used. For example, if your test fails on the first attempt and passes on a later retry, the final status of the job will still be unstable/failed.

How can we get the summary report parameters (throughput, received & sent bytes) for a JMeter script in non-GUI mode?

How can we get the summary report parameters (throughput, received & sent bytes) for a JMeter script in non-GUI mode? I have to implement benchmarking on the whole script rather than each thread, marking the script pass/fail by comparing the result to a static .csv file that contains the parameter values. Kindly let me know which approach to opt for.
The easiest way is to go for the JMeterPluginsCMD Command Line Tool; it can generate various tables and charts out of JMeter's .jtl results file.
So you will need to add the following command as a post-build step:
JMeterPluginsCMD --generate-csv SummaryReport.csv --input-jtl result.jtl --plugin-type AggregateReport
You can install the JMeterPluginsCMD Command Line Tool using the JMeter Plugins Manager.
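For the pass/fail comparison against the static benchmark .csv, you can then post-process the generated file with a small Groovy script (for example via a JSR223 element or a standalone groovy call in a build step). The sketch below is only an illustration: the column names such as sampler_label and aggregate_report_rate are assumptions, so check the header row of the SummaryReport.csv your JMeterPluginsCMD version actually produces:

// hypothetical Groovy sketch: compare the generated summary against a static baseline CSV
def parseCsv = { File file ->
    def lines = file.readLines()
    def header = lines.head().split(',').collect { it.trim() }
    lines.tail().collect { line ->
        def cells = line.split(',').collect { it.trim() }
        [header, cells].transpose().collectEntries { k, v -> [(k): v] }
    }
}

def actual   = parseCsv(new File('SummaryReport.csv'))
def expected = parseCsv(new File('baseline.csv'))        // the static benchmark file

// compare the overall ("TOTAL") rows only, i.e. whole-script benchmarking
def actualTotal   = actual.find   { it['sampler_label'] == 'TOTAL' }
def expectedTotal = expected.find { it['sampler_label'] == 'TOTAL' }

def throughputOk = (actualTotal['aggregate_report_rate'] as BigDecimal) >=
                   (expectedTotal['aggregate_report_rate'] as BigDecimal)

if (!throughputOk) {
    println 'Benchmark not met, failing the build'
    System.exit(1)   // non-zero exit code fails the Jenkins build step
}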

How can I set the job timeout for all jobs using the Jenkins DSL

I read How can I set the job timeout using the Jenkins DSL. That sets the timeout for one job. I want to set it for all jobs, and with slightly different settings: 150%, averaged over 10 jobs, with a max of 30 minutes.
According to the relevant job-dsl-plugin documentation I should use this syntax:
job('example-3') {
    wrappers {
        timeout {
            elastic(150, 10, 30)
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
}
I tested in http://job-dsl.herokuapp.com/ and this is the relevant XML part:
<buildWrappers>
  <hudson.plugins.build__timeout.BuildTimeoutWrapper>
    <strategy class='hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy'>
      <timeoutPercentage>150</timeoutPercentage>
      <numberOfBuilds>10</numberOfBuilds>
      <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
    </strategy>
    <operationList>
      <hudson.plugins.build__timeout.operations.FailOperation></hudson.plugins.build__timeout.operations.FailOperation>
      <hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
        <description>Build failed due to timeout after {0} minutes</description>
      </hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
    </operationList>
  </hudson.plugins.build__timeout.BuildTimeoutWrapper>
</buildWrappers>
I verified with a job I edited manually before, and the XML is correct. So I know that the Jenkins DSL syntax up to here is correct.
Now I want to apply this to all jobs. First I tried to list all the job names:
import jenkins.model.*

jenkins.model.Jenkins.instance.items.findAll().each {
    println("Job: " + it.name)
}
This works too, all job names are printed to console.
Now I want to plug it all together. This is the full code I use:
import jenkins.model.*

jenkins.model.Jenkins.instance.items.findAll().each {
    job(it.name) {
        wrappers {
            timeout {
                elastic(150, 10, 30)
                failBuild()
                writeDescription('Build failed due to timeout after {0} minutes')
            }
        }
    }
}
When I push this code and Jenkins runs the DSL seed job, I get this error:
ERROR: Type of item "jobname" does not match existing type, item type can not be changed
What am I doing wrong here?
The Job-DSL plugin can only be used to maintain jobs that have been created by that plugin before. You're trying to modify the configuration of jobs that have been created in some other way -- this will not work.
For mass-modification of existing jobs (like, in your case, adding the timeout) the most straightforward way is to change the job's XML specification directly,
either by changing the config.xml file on disk, or
using the REST or CLI API
xmlstarlet is a powerful tool for performing such tasks directly at the shell level.
Alternatively, it is possible to perform the change via a Groovy script from the "Script Console" -- but for that you need some understanding of Jenkins' internal workings and data structures.
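As a rough illustration of the Script Console route (not a drop-in solution: the XML fragment, the assumption that every job already contains a non-empty <buildWrappers> element, and the restriction to freestyle projects are simplifications you would need to verify), something along these lines could inject the wrapper into every job:

import jenkins.model.Jenkins
import hudson.model.FreeStyleProject
import javax.xml.transform.stream.StreamSource

// the wrapper XML generated by the DSL snippet above
def timeoutXml = '''
  <hudson.plugins.build__timeout.BuildTimeoutWrapper>
    <strategy class="hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy">
      <timeoutPercentage>150</timeoutPercentage>
      <numberOfBuilds>10</numberOfBuilds>
      <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
    </strategy>
    <operationList>
      <hudson.plugins.build__timeout.operations.FailOperation/>
      <hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
        <description>Build failed due to timeout after {0} minutes</description>
      </hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
    </operationList>
  </hudson.plugins.build__timeout.BuildTimeoutWrapper>
'''

Jenkins.instance.getAllItems(FreeStyleProject).each { job ->
    def xml = job.configFile.asString()
    // skip jobs that already have a timeout wrapper configured
    if (!xml.contains('BuildTimeoutWrapper')) {
        // naive string splice; will NOT match a self-closing <buildWrappers/> element
        def patched = xml.replace('<buildWrappers>', '<buildWrappers>' + timeoutXml)
        job.updateByXml(new StreamSource(new StringReader(patched)))
        println("Updated ${job.fullName}")
    }
}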

Cloud Dataflow: java.lang.IllegalStateException: no evaluator registered for GroupedValues

I'm getting the following exception when running the pipeline locally. There is no exception when submitting for cloud execution.
INFO: Executing pipeline using the DirectPipelineRunner.
Exception in thread "main" java.lang.IllegalStateException: no evaluator registered for GroupedValues [GroupedValues]
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:606)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:200)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:196)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:109)
at com.google.cloud.dataflow.sdk.Pipeline.traverseTopologically(Pipeline.java:204)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.run(DirectPipelineRunner.java:583)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:327)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:70)
at app.Main.main(Main.java:124)
The code outline is basically this:
PCollection<KV<MyKey, Iterable<MyValue>>> groupedByMyKey = ...
PCollection<KV<MyKey, MyAggregated>> aggregated = groupedByMyKey.apply(
    Combine.<MyKey, MyValue, MyAggregated>groupedValues(new Aggregator()));
The Aggregator class extends CombineFn<MyValue, List<MyValue>, MyAggregated>.
Can you share a code snippet that triggers this? GroupedValues is a PTransform that is often used within various combining transforms, so it might be from using something like Min, Max, etc.
The error means that the DirectPipelineRunner doesn't know how to evaluate a GroupedValues. However, that's unexpected, since that should have been expanded into a ParDo before execution.
I found the reason for this behaviour.
I was using a command-line argument to run it in remote mode (--runner=BlockingDataflowPipelineRunner) and then forced it to run locally with
PipelineRunner<?> runner = DirectPipelineRunner.fromOptions(options);
runner.run(p);
After removing these lines and just using the --runner=DirectPipelineRunner argument, it worked as expected.
