Prevent eunit from timing out when running Triq tests - erlang

How can I change the timeout for eunit in the rebar3 config?
My eunit runner is timing out when I run property-based Triq tests:
===> Verifying dependencies...
===> Compiling ierminer
===> Performing EUnit tests...
Pending:
test_ec:ec_prop_test/0
%% Unknown error: timeout
undefined
%% Unknown error: {blame,[3,1]}
Finished in ? seconds
3 tests, 0 failures, 3 cancelled
===> Error running tests
Here is my property specification:
-module(ec_property).
-include_lib("triq/include/triq.hrl").

prop_append() ->
    ?FORALL({Xs, Ys}, {list(int()), list(int())},
            lists:reverse(Xs ++ Ys)
            ==
            lists:reverse(Ys) ++ lists:reverse(Xs)).

prop_valid_started() ->
    ?FORALL({Type, Items, Size},
            {oneof([left, right]), non_empty(list(any())), pos_integer()},
            element(1, ec:start(Type, Items, Size)) == ok).
and here is how I call it from my eunit test function:
ec_prop_test() -> ?assert(ec_property:check()).

Use a test generator function to specify a timeout longer than the default 5 seconds:
ec_prop_test_() ->
    {timeout, 30, ?_assert(ec_property:check())}.
Note the trailing underscore added to the function name; that is how you create a test generator. Note also the leading underscore in the ?_assert macro, which is one way to create a test object.
Change the 30 in the example to whatever number of seconds you need.
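If the properties need different budgets, the generator can also return a list of timed test objects rather than a single one. Below is a minimal sketch of that approach (the ec_props_test_ name and the timeout values are just for illustration); it assumes the prop_* functions are exported from ec_property and uses triq:check/1, which checks a single property:
%% Sketch: one timed test object per property, so a slow property
%% gets its own time budget instead of sharing one check() call.
%% Assumes prop_append/0 and prop_valid_started/0 are exported.
ec_props_test_() ->
    [{timeout, 30, ?_assert(triq:check(ec_property:prop_append()))},
     {timeout, 60, ?_assert(triq:check(ec_property:prop_valid_started()))}].
EUnit treats each element of the returned list as a separate test, so a timeout or failure in one property is reported on its own.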

Related

How to get SoapUI assertion result back in Jenkins script

In my Jenkinsfile, I am executing a Maven command and it runs fine.
mvn com.smartbear.soapui:soapui-maven-plugin:5.5.0:test -f src/main/resources/testcases/pom.xml
I can see the generated reports, and in the Jenkins log I can see the status of the test execution.
SoapUI 5.3.0 TestCaseRunner Summary
Time Taken: 3922ms
Total TestSuites: 1
Total TestCases: 1 (0 failed)
Total TestSteps: 1
Total Request Assertions: 3
Total Failed Assertions: 0
Total Exported Results: 1
What I want is to get the status of the test execution, such as success or failure. How can I get the test execution result back in the Jenkinsfile so I can mark the stage as a success or failure?

How to get execution time of each test in bazel?

When running bazel test, the output contains only a summary of all the tests, including the total run time.
Running Bazel with performance profiling does not help, because it does not show the time of each individual test.
So how can I get the execution time of each test?
Update:
I have a sample repo that reproduces my problem:
$ git clone https://github.com/MikhailTymchukFT/bazel-java
$ cd bazel-java
$ bazel test //:AllTests --test_output=all --test_summary=detailed
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (20 packages loaded, 486 targets configured).
INFO: Found 2 test targets...
INFO: From Testing //:GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:56 --
================================================================================
INFO: From Testing //:MainTest:
==================== Test output for //:MainTest:
JUnit4 Test Runner
.
Time: 0.016
OK (1 test)
BazelTestRunner exiting with a return value of 0
JVM shutdown hooks (if any) will run now.
The JVM will exit once they complete.
-- JVM shutdown starting at 2020-04-07 09:44:57 --
================================================================================
INFO: Elapsed time: 21.009s, Critical Path: 6.68s
INFO: 10 processes: 6 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 18 total actions
Test cases: finished with 3 passing and 0 failing out of 3 test cases
INFO: Build completed successfully, 18 total actions
I can see the execution time of both tests in GreetingTest:
==================== Test output for //:GreetingTest:
JUnit4 Test Runner
..
Time: 0.017
OK (2 tests)
but I cannot see the execution time of each individual test in this class/rule.
With --test_summary=short (the default value), the end of the output looks like this (lines for the other 325 tests truncated):
INFO: Elapsed time: 148.326s, Critical Path: 85.71s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%]
INFO: 680 processes: 666 linux-sandbox, 14 worker.
INFO: Build completed successfully, 724 total actions
//third_party/GSL/tests:no_exception_throw_test (cached) PASSED in 0.4s
//third_party/GSL/tests:notnull_test (cached) PASSED in 0.5s
//aos/events:shm_event_loop_test PASSED in 12.3s
Stats over 5 runs: max = 12.3s, min = 2.4s, avg = 6.3s, dev = 3.7s
//y2018/control_loops/superstructure:superstructure_lib_test PASSED in 2.3s
Stats over 5 runs: max = 2.3s, min = 1.3s, avg = 1.8s, dev = 0.4s
Executed 38 out of 329 tests: 329 tests pass.
INFO: Build completed successfully, 724 total actions
Confusingly, --test_summary=detailed doesn't include the times, even though the name sounds like it should have strictly more information.
For sharded tests, that output doesn't quite have every single test execution, but it does give statistics about them as shown above.
If you want to access the durations programmatically, the build event protocol has a TestResult.test_attempt_duration_millis field.
Alternatively, using --test_output=all will print all the output from your actual test binaries, including the ones that pass. Many testing frameworks print a total execution time there.
There is a testlogs folder where you can find .xml files with the execution times of each testcase.
The bazel-testlogs symlink points to the same location.
For my example, these files will be located at /private/var/tmp/_bazel_<user>/<some md5 hash>/execroot/<project name>/bazel-out/<kernelname>-fastbuild/testlogs/GreetingTest/test.xml
The content of that file is like this:
<?xml version='1.0' encoding='UTF-8'?>
<testsuites>
<testsuite name='com.company.core.GreetingTest' timestamp='2020-04-07T09:58:28.409Z' hostname='localhost' tests='2' failures='0' errors='0' time='0.01' package='' id='0'>
<properties />
<testcase name='sayHiIsString' classname='com.company.core.GreetingTest' time='0.01' />
<testcase name='sayHi' classname='com.company.core.GreetingTest' time='0.0' />
<system-out />
<system-err />
</testsuite>
</testsuites>

busted No test files found matching Lua pattern: spec

The contents of the file 'hhh.lua' are the same as those of 'btest_spec.lua' (see my directory layout).
When I run 'busted' (just the command 'busted'), it returns an error:
0 successes / 0 failures / 1 error / 0 pending : 0.00003 seconds
Error → No test files found matching Lua pattern: _spec
When I run 'busted btest_spec.lua', it succeeds and returns:
●●
2 successes / 0 failures / 0 errors / 0 pending : 0.003049 seconds
When I run 'busted *', it succeeds and returns:
●●●●
4 successes / 0 failures / 0 errors / 0 pending : 0.006815 seconds
So why does busted fail to find the file 'btest_spec.lua' when I run 'busted'?
I had the same error (macOS Sierra, fish shell) and solved it by running busted . instead of just busted. Note the period, which tells busted to look in the current working directory.
This is due to a breaking change in the dependency "penlight", which busted relies on.
See here - https://github.com/Olivine-Labs/busted/issues/528
The fixed version of penlight (1.4.1) is now on luarocks, which should fix your issue if you update busted.

PHPUnit Zend Framework 2: Failed asserting response code "302", actual status code is "200"

Is the data we try to insert actually inserted when testing with unit tests?
$ phpunit
PHPUnit 3.7.21 by Sebastian Bergmann.
Configuration read from C:\xampp\htdocs\health360\module\Album\test\phpunit.xml
F.........
Time: 0 seconds, Memory: 8.75Mb
There was 1 failure:
1) AlbumTest\Controller\AlbumControllerTest::testIndexActionCanBeAccessed
Failed asserting response code "302", actual status code is "200"
C:\xampp\htdocs\health360\vendor\zendframework\zendframework\library\Zend\Test\PHPUnit\Controller\AbstractControllerTestCase.php:418
C:\xampp\htdocs\health360\module\Album\test\AlbumTest\Controller\AlbumControllerTest.php:73
I am following the official Zend Framework documentation:
http://framework.zend.com/manual/2.2/en/tutorials/unittesting.html
What might be the issue?

How to set up SBT build to return zero exit code on test failure for Jenkins?

When I run my Specs2 tests in Jenkins via SBT, the build is marked as a failure as soon as one test fails. Since Jenkins usually distinguishes between a failure to build and test failures, I want to change this.
I know that the build failure in Jenkins is detected via the exit code of the call to SBT, which appears to return 1 as soon as at least one test fails.
What options do I have, assuming I want to avoid changing my build.sbt (or the project in general) just to fix this inconvenience?
Somehow I think it should be possible to put a standard sbt project into a standard Jenkins install and have it work as intended.
tl;dr Use testResultLogger with a custom test result logger that doesn't throw TestsFailedException, which is what sets the non-zero exit code.
Just noticed that I missed the requirement "to avoid changing build.sbt". You can use any other *.sbt file, say exitcodezero.sbt or ~/.sbt/0.13/default.sbt, with the custom testResultLogger.
It turns out that since sbt 0.13.5 there's a way to get this behaviour; see the release note "Added setting 'testResultLogger' which allows customisation of test reporting", where testResultLogger was introduced.
> help testResultLogger
Logs results after a test task completes.
As can be seen in the implementation of TestResultLogger.SilentWhenNoTests, which is the default value of testResultLogger:
results.overall match {
  case TestResult.Error | TestResult.Failed => throw new TestsFailedException
  case TestResult.Passed =>
}
This means that when there is an issue executing tests, a TestsFailedException is thrown, which is in turn caught and reported as follows:
[error] Failed: Total 3, Failed 1, Errors 0, Passed 2
[error] Failed tests:
[error] HelloWorldSpec
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
My idea is to not throw the exception at all, regardless of the outcome of the tests. Add the following to build.sbt to have the exit code always be 0:
testResultLogger in (Test, test) := new TestResultLogger {
  import sbt.Tests._
  def run(log: Logger, results: Output, taskName: String): Unit = {
    println("Exit code always 0...as you wish")
    // uncomment to have the default behaviour back
    // TestResultLogger.SilentWhenNoTests.run(log, results, taskName)
  }
}
Uncomment TestResultLogger.SilentWhenNoTests.run to have the default behaviour back.
➜ failing-tests-dont-break-build xsbt test; echo $?
JAVA_HOME=/Library/Java/JavaVirtualMachines/java8/Contents/Home
SBT_OPTS= -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -Dfile.encoding=UTF-8
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Set current project to failing-tests-dont-break-build (in build file:/Users/jacek/sandbox/failing-tests-dont-break-build/)
[info] HelloWorldSpec
[info]
[info] The 'Hello world' string should
[info] x contain 11 characters
[error] 'Hello world' doesn't have size 12 but size 11 (HelloWorldSpec.scala:7)
[info]
[info] + start with 'Hello'
[info] + end with 'world'
[info]
[info] Total for specification HelloWorldSpec
[info] Finished in 15 ms
[info] 3 examples, 1 failure, 0 error
Exit code always 0...as you wish
[success] Total time: 1 s, completed Sep 19, 2014 9:58:09 PM
0
You could run the part of the build that runs the tests in a wrapper script that always returns 0. (If you run both the compile and the tests in one invocation, you'd have to split them so you don't ignore build errors.)
Based on Jacek Laskowski's solution, you can do the following (at least in sbt >= 1.2.8):
testResultLogger in (Test, test) := TestResultLogger {
  (log, results, taskName) =>
    try {
      (testResultLogger in (Test, test)).value.run(log, results, taskName)
    } catch {
      case _: TestsFailedException =>
        println("Ignore TestsFailedException to get exit code 0")
    }
}
If you have a multi-module project, you can implement it as a plugin:
import sbt._
import sbt.Keys._
import sbt.plugins.JvmPlugin

object TestExitCodePlugin extends AutoPlugin {
  override def requires = JvmPlugin
  override def trigger = allRequirements

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    testResultLogger in (Test, test) := TestResultLogger {
      (log, results, taskName) =>
        try {
          (testResultLogger in (Test, test)).value.run(log, results, taskName)
        } catch {
          case _: TestsFailedException =>
            println("Ignore TestsFailedException to get exit code 0")
        }
    }
  )
}
