How to set up SBT build to return zero exit code on test failure for Jenkins?

When I run my Specs2 tests in Jenkins via SBT, the build is marked as a failure as soon as one test fails. Since Jenkins usually distinguishes between a failure to build and test failures, I want to change this.
I know that the build failure in Jenkins is detected by the exit code of the call to SBT, which appears to return 1 as soon as at least one test fails.
What are the options I have assuming I want to avoid changing my build.sbt (or the project in general) just to fix this inconvenience?
Somehow I think it should be possible to put a standard sbt project into a standard Jenkins install and have it work as intended.

tl;dr Use the testResultLogger setting with a custom test result logger that doesn't throw TestsFailedException, the exception that in turn sets the non-zero exit code.
Just noticed that I missed the requirement "to avoid changing build.sbt". You can use any other *.sbt file, say exitcodezero.sbt, or ~/.sbt/0.13/default.sbt, with the custom testResultLogger.
It turns out that since sbt 0.13.5 there's a way to have such behaviour - see the change "Added setting 'testResultLogger' which allows customisation of test reporting", where testResultLogger was introduced.
> help testResultLogger
Logs results after a test task completes.
As can be seen in the implementation of TestResultLogger.SilentWhenNoTests, which is the default value of testResultLogger:
results.overall match {
  case TestResult.Error | TestResult.Failed => throw new TestsFailedException
  case TestResult.Passed =>
}
This means that when there's an issue executing tests, a TestsFailedException is thrown, which is in turn caught and reported as follows:
[error] Failed: Total 3, Failed 1, Errors 0, Passed 2
[error] Failed tests:
[error] HelloWorldSpec
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
My idea is to not throw the exception at all, regardless of the outcome of the tests. Add the following to build.sbt and the exit code will always be 0:
testResultLogger in (Test, test) := new TestResultLogger {
  import sbt.Tests._
  def run(log: Logger, results: Output, taskName: String): Unit = {
    println("Exit code always 0...as you wish")
    // uncomment to have the default behaviour back
    // TestResultLogger.SilentWhenNoTests.run(log, results, taskName)
  }
}
Uncomment TestResultLogger.SilentWhenNoTests.run to have the default behaviour back.
➜ failing-tests-dont-break-build xsbt test; echo $?
JAVA_HOME=/Library/Java/JavaVirtualMachines/java8/Contents/Home
SBT_OPTS= -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -Dfile.encoding=UTF-8
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Set current project to failing-tests-dont-break-build (in build file:/Users/jacek/sandbox/failing-tests-dont-break-build/)
[info] HelloWorldSpec
[info]
[info] The 'Hello world' string should
[info] x contain 11 characters
[error] 'Hello world' doesn't have size 12 but size 11 (HelloWorldSpec.scala:7)
[info]
[info] + start with 'Hello'
[info] + end with 'world'
[info]
[info] Total for specification HelloWorldSpec
[info] Finished in 15 ms
[info] 3 examples, 1 failure, 0 error
Exit code always 0...as you wish
[success] Total time: 1 s, completed Sep 19, 2014 9:58:09 PM
0

You could run the part of the build that runs the tests in a wrapper script that always returns 0. (If you run both the compile and the tests in one step, you'd have to split that up so you don't ignore genuine build errors.)
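A minimal sketch of such a wrapper, assuming a plain sbt invocation (the script name and the compile/test split are illustrative):
#!/usr/bin/env bash
# run-tests.sh: compile in its own step so genuine build errors still fail the Jenkins job
sbt compile || exit 1
# run the tests, but discard their exit code so test failures don't fail the build
sbt test
exit 0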

Based on Jacek Laskowski's solution, you can do the following (at least in sbt >= 1.2.8):
testResultLogger in (Test, test) := TestResultLogger { (log, results, taskName) =>
  try {
    (testResultLogger in (Test, test)).value.run(log, results, taskName)
  } catch {
    case _: TestsFailedException =>
      println("Ignore TestsFailedException to get exit code 0")
  }
}
If you have a multi-module project, you can implement it as a plugin (e.g. in project/TestExitCodePlugin.scala):
import sbt._
import sbt.Keys._
import sbt.plugins.JvmPlugin

object TestExitCodePlugin extends AutoPlugin {
  override def requires = JvmPlugin
  override def trigger = allRequirements

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    testResultLogger in (Test, test) := TestResultLogger { (log, results, taskName) =>
      try {
        (testResultLogger in (Test, test)).value.run(log, results, taskName)
      } catch {
        case _: TestsFailedException =>
          println("Ignore TestsFailedException to get exit code 0")
      }
    }
  )
}

Related

How to get SoapUi assertion result back in jenkins script

In my Jenkinsfile, I am executing a Maven command and it is executing very well.
mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml
I can see reports generated, and in the Jenkins log I can see the status of the test execution.
SoapUI 5.3.0 TestCaseRunner Summary
Time Taken: 3922ms
Total TestSuites: 1
Total TestCases: 1 (0 failed)
Total TestSteps: 1
Total Request Assertions: 3
Total Failed Assertions: 0
Total Exported Results: 1
What I want is to get the status of the test execution, like success or failure. How can I get the test execution result back in the Jenkinsfile so I can mark the stage as success or failure?
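One possible approach (a sketch, not from the original thread; the stage name is illustrative) is to run the Maven command via sh(returnStatus: true), which hands back the exit code instead of failing the step, and then set the build result yourself:
stage('SoapUI tests') {
    steps {
        script {
            // returnStatus: true makes sh return the exit code instead of aborting the step
            def rc = sh(returnStatus: true,
                        script: 'mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml')
            // mark the build according to the test outcome
            currentBuild.result = (rc == 0) ? 'SUCCESS' : 'UNSTABLE'
        }
    }
}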

How to get execution time of each test in bazel?

When running bazel test, the output contains only a summary of all the tests, including the total run time.
Running Bazel with performance profiling does not help, because it does not show the time of each test.
So how can I get the execution time of each test?
Update:
I have a sample repo to reproduce my problem:
$ git clone https://github.com/MikhailTymchukFT/bazel-java
$ cd bazel-java
$ bazel test //:AllTests --test_output=all --test_summary=detailed
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (20 packages loaded, 486 targets configured).
INFO: Found 2 test targets...
INFO: From Testing //:GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:56 --
================================================================================
INFO: From Testing //:MainTest:
==================== Test output for //:MainTest:
JUnit4 Test Runner
.
Time: 0.016
OK (1 test)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:57 --
================================================================================
INFO: Elapsed time: 21.009s, Critical Path: 6.68s
INFO: 10 processes: 6 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 18 total actions
Test cases: finished with 3 passing and 0 failing out of 3 test cases
INFO: Build completed successfully, 18 total actions
I can see the total execution time of both tests in GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
but I cannot see the execution time of each individual test in this class/rule.
With --test_summary=short (the default value), the end of the output looks like this (lines for the other 325 tests truncated):
INFO: Elapsed time: 148.326s, Critical Path: 85.71s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 680 processes: 666 linux-sandbox, 14 worker.
INFO: Build completed successfully, 724 total actions
//third_party/GSL/tests:no_exception_throw_test (cached) PASSED in 0.4s
//third_party/GSL/tests:notnull_test (cached) PASSED in 0.5s
//aos/events:shm_event_loop_test PASSED in 12.3s
Stats over 5 runs: max = 12.3s, min = 2.4s, avg = 6.3s, dev = 3.7s
//y2018/control_loops/superstructure:superstructure_lib_test PASSED in 2.3s
Stats over 5 runs: max = 2.3s, min = 1.3s, avg = 1.8s, dev = 0.4s
Executed 38 out of 329 tests: 329 tests pass.
INFO: Build completed successfully, 724 total actions
Confusingly, --test_summary=detailed doesn't include the times, even though the name sounds like it should have strictly more information.
For sharded tests, that output doesn't quite have every single test execution, but it does give statistics about them as shown above.
If you want to access the durations programmatically, the build event protocol has a TestResult.test_attempt_duration_millis field.
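For example (a sketch: --build_event_json_file is the flag that writes the protocol as newline-delimited JSON, but the exact field names follow the protobuf-to-JSON rendering and may vary across Bazel versions), after bazel test //:AllTests --build_event_json_file=bep.json you could pull the durations out with:
import json

# each line of the BEP file is one JSON-encoded build event
with open("bep.json") as f:
    for line in f:
        event = json.loads(line)
        if "testResult" in event:
            label = event["id"]["testResult"]["label"]
            millis = event["testResult"].get("testAttemptDurationMillis")
            print(label, millis, "ms")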
Alternatively, using --test_output=all will print all the output from your actual test binaries, including the ones that pass. Many testing frameworks print a total execution time there.
There is a testlogs folder where you can find .xml files with the execution times of each testcase.
The bazel-testlogs symlink points to the same location.
For my example, these files will be located at /private/var/tmp/_bazel_<user>/<some md5 hash>/execroot/<project name>/bazel-out/<kernelname>-fastbuild/testlogs/GreetingTest/test.xml
The content of that file is like this:
<?xml version='1.0' encoding='UTF-8'?>
<testsuites>
  <testsuite name='com.company.core.GreetingTest' timestamp='2020-04-07T09:58:28.409Z' hostname='localhost' tests='2' failures='0' errors='0' time='0.01' package='' id='0'>
    <properties />
    <testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
    <testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
    <system-out />
    <system-err />
  </testsuite>
</testsuites>
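As a quick sketch (using the bazel-testlogs symlink mentioned above and the sample target from this question), the per-testcase times can be extracted with a few lines of Python:
import xml.etree.ElementTree as ET

# read Bazel's JUnit-style test.xml and print each testcase's duration
tree = ET.parse("bazel-testlogs/GreetingTest/test.xml")
for case in tree.iter("testcase"):
    print(case.get("classname"), case.get("name"), case.get("time"), "s")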

Called by Open Cover, Nunit Runner Console throws NullReferenceException and hangs forever

In my project, I use the NUnit 2 framework.
On the build server, I use OpenCover 4.6.519 to analyze code coverage with the following command:
"OpenCover.4.6.519\OpenCover.Console.exe"
-target:"..\..\Packages\NUnit.Runners.2.6.4\tools\nunit-console-x86.exe"
-targetargs:"/nologo /framework:net-4.0 /process:multiple /domain:multiple /noshadow /nothread <the_list_of_test_assemblies>"
-filter:"+[*]* -[*.Test]* -[*.Stub]* -[Deedle]* -[FSharp]*" -hideskipped:Filter -register -output:Build\OpenCoverResult.xml
This is the result I get:
Executing: D:\Jenkins\workspace\SC Nightly\Packages\NUnit.Runners.2.6.4\tools\nunit-console-x86.exe
ProcessModel: Multiple DomainUsage: Multiple
Execution Runtime: net-4.0
Unhandled Exception:
System.NullReferenceException: Object reference not set to an instance of an object.
at NUnit.Core.ProxyTestRunner.CountTestCases(ITestFilter filter)
at NUnit.Util.AggregatingTestRunner.CountTestCases(ITestFilter filter)
at NUnit.Util.AggregatingTestRunner.Run(EventListener listener, ITestFilter filter, Boolean tracing, LoggingThreshold logLevel)
at NUnit.ConsoleRunner.ConsoleUi.Execute(ConsoleOptions options)
at NUnit.ConsoleRunner.Runner.Main(String[] args)
Committing...
Visited Classes 0 of 617 (0)
Visited Methods 0 of 4030 (0)
Visited Points 0 of 11734 (0)
Visited Branches 0 of 6814 (0)
==== Alternative Results (includes all methods including those without corresponding source) ====
Alternative Visited Classes 0 of 671 (0)
Alternative Visited Methods 0 of 4663 (0)
I see a NullReferenceException thrown by the NUnit runner, but it seems that OpenCover hangs after the NUnit runner has stopped.
I am thinking of some potential reasons:
The number of test assemblies is large (59 DLL files).
The assemblies are built against .NET Framework 4.5, but the framework argument of the NUnit runner is net-4.0.
I am using NUnit 2, which is out of date (it would be very expensive for me to upgrade to NUnit 3).
There might be something wrong with my Jenkins setup. I use Jenkins to run an .msbuildproj file which contains a task to execute OpenCover. When I run the OpenCover command in a Windows Command Prompt, it works fine.

How to save MS Test's result in a Jenkins variable?

One of my Jenkins jobs executes MSTest. I am passing the following command to
Execute Windows batch command:
del TestResults.trx
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe" /testcontainer:D:\Projects\Jenkins\TestResultVerificationFromJenkins\TestResultVerificationFromJenkins\bin\Debug\TestResultVerificationFromJenkins.dll /resultsfile:TestResults.trx /nologo /detail:stdout
At the time of execution, Console Output is displaying the following values:
Starting execution...
Results Top Level Tests
------- ---------------
Passed TestResultVerificationFromJenkins.UnitTest1.PassTest
[stdout] = Test is passed*
1/1 test(s) Passed
Summary
Test Run Completed.
Passed 1
Total 1
Results file: C:\Program Files (x86)\Jenkins\jobs\JenkinsTestResultReader\workspace\TestResults.trx
Test Settings: Default Test Settings
In the post-build step, I have to pass the MSTest result "Test is passed" to an HTTP Request.
Is there any way to save this result in a Jenkins variable so that I can pass that to HTTP Request?
Since you are in the postbuild step, would parsing the console output for the test result and sending it off to the HTTP Request be an option for you?
For example, using Groovy Postbuild plugin, you could write a small script that could do this.
Perhaps something like:
if (manager.build.logFile.text.indexOf("Test Run Completed. Passed") >= 0) {
    manager.listener.logger.println(new URL("http://localhost?parameter=Test+is+passed").getText())
}

Prevent eunit from timing out when running Triq tests

How can I change the timeout for EUnit in the rebar3 config?
My EUnit runner is timing out when I run property-based Triq tests:
===> Verifying dependencies...
===> Compiling ierminer
===> Performing EUnit tests...
Pending:
test_ec:ec_prop_test/0
%% Unknown error: timeout
undefined
%% Unknown error: {blame,[3,1]}
Finished in ? seconds
3 tests, 0 failures, 3 cancelled
===> Error running tests
Here is my property specification:
-module(ec_property).
-include_lib("triq/include/triq.hrl").

prop_append() ->
    ?FORALL({Xs,Ys}, {list(int()),list(int())},
            lists:reverse(Xs++Ys)
            ==
            lists:reverse(Ys) ++ lists:reverse(Xs)).

prop_valid_started() ->
    ?FORALL({Type, Items, Size},
            {oneof([left,right]), non_empty(list(any())), pos_integer()},
            element(1, ec:start(Type, Items, Size)) == ok).
and here is how I call it from my eunit test function:
ec_prop_test() -> ?assert(ec_property:check()).
Use a test generator function to specify a timeout longer than the default 5 seconds:
ec_prop_test_() ->
    {timeout, 30, ?_assert(ec_property:check())}.
Note the trailing underscore added to the function name; that's how you create a test generator. Note also the leading underscore on ?_assert, which is one way to create a test object.
Change the 30 in the example to whatever number of seconds you need.
