bazel `java_test` rule - test.xml only exists on failed shard?

I'm trying to run bazel test ... with remote execution and a java_test rule.
The test sometimes succeeds and sometimes fails, but that's another story.
What I want is a test.xml in all cases, so I can check the elapsed time of each test.
However, test.xml only exists in bazel-testlogs when some shard failed.
I tried
bazel test --test_output=all --test_summary=detailed ...
but it didn't work.
How can I get test.xml for every test, even when it succeeds?

bazel test --remote_download_toplevel ... worked for me.
remote_execution_test.sh from the Bazel repo gave me the clue!
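For reference, the full invocation looks roughly like this (a sketch; the executor address and target label are placeholders for your own setup):
$ bazel test --remote_executor=grpc://your-remote-executor:8980 --remote_download_toplevel --test_output=all //my/package:my_test
As I understand it, with remote execution Bazel doesn't necessarily download every output artifact to the local machine; --remote_download_toplevel tells it to fetch the outputs of top-level targets, which is why test.xml then shows up under bazel-testlogs even for passing tests.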

Related

How to specify "default" target labels when running bazel test

We're considering migrating to Bazel from Make. To make the transition easier I would like to have bazel test (no flags / options) run the current directory's tests, if any.
So instead of bazel test my_tests, bazel test would find the current directory's BUILD file, find any *_test rules, and run those.
If you want to do exactly what you said, you can use your own script.
When you run "bazel" it actually looks for a script named "bazel" under the tools directory of the current workspace. So if you have an executable at "$workspace/tools/bazel", Bazel will run that instead of the bazel binary itself.
This means you can write a script that checks whether the only argument is "test" and, if so, calls "bazel-real test :all".
It can also check the exit code to see if there were no tests (it's a specific error code) and return 0 instead, as in the sketch below.
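A minimal sketch of such a wrapper (this relies on Bazel exporting BAZEL_REAL, the path of the real binary, to the wrapper, and on exit code 4 meaning "no test targets found", as noted below):
#!/bin/sh
# $workspace/tools/bazel: make a bare "bazel test" run the current package's tests.
if [ "$#" -eq 1 ] && [ "$1" = "test" ]; then
  "${BAZEL_REAL:-bazel-real}" test :all
  code=$?
  # Exit code 4 means the build succeeded but no test targets were found;
  # treat that as success so packages without tests don't fail.
  if [ "$code" -eq 4 ]; then
    exit 0
  fi
  exit "$code"
fi
# Any other invocation passes through to the real binary unchanged.
exec "${BAZEL_REAL:-bazel-real}" "$@"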
You can use the all target pattern to match all targets in the current package: bazel test :all
You can read more about it here: https://docs.bazel.build/versions/master/user-manual.html#target-patterns
Note however that if there are no test targets in the current package, bazel will give an error ("ERROR: No test targets were found, yet testing was requested.") and exit with code 4: https://docs.bazel.build/versions/master/user-manual.html#what-exit-code-will-i-get
I recommend creating an alias called bazel-test for bazel test :all.
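In a shell rc file that would be:
alias bazel-test='bazel test :all'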

How can I ask Bazel to rerun a cached test?

I'm trying to analyze and fix a flaky test which is often green.
My problem is that once the test passes, Bazel doesn't re-run it until any of its inputs change.
I saw you can ask Bazel to retry a failing target, but AFAICT it only retries until the first green run (i.e. it mitigates a flaky test rather than helping solve it).
Is there a way to ask Bazel to run the test even if it passed?
I'd like something like bazel test --force-attempts=50 //my-package:my-target
There's a flag for it:
--cache_test_results=(yes|no|auto) (-t)
If this option is set to 'auto' (the default) then Bazel will only rerun a test if any of the following conditions applies:
Bazel detects changes in the test or its dependencies
the test is marked as external
multiple test runs were requested with --runs_per_test
the test failed.
If 'no', all tests will be executed unconditionally.
If 'yes', the caching behavior will be the same as auto except that it may cache test failures and test runs with --runs_per_test.
Note that test results are always saved in Bazel's output tree, regardless of whether this option is enabled, so you needn't have used --cache_test_results on the prior run(s) of bazel test in order to get cache hits. The option only affects whether Bazel will use previously saved results, not whether it will save results of the current run.
Users who have enabled this option by default in their .bazelrc file may find the abbreviations -t (on) or -t- (off) convenient for overriding the default on a particular run.
https://docs.bazel.build/versions/master/user-manual.html#flag--cache_test_results
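So for the flaky test in the question, disabling the cache for a single run is enough to force re-execution even though the last run was green:
$ bazel test --cache_test_results=no //my-package:my-target
or equivalently, using the abbreviation mentioned above:
$ bazel test -t- //my-package:my-target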
In addition to --cache_test_results, Bazel actually has a flag specifically designed for diagnosing flaky tests: --runs_per_test, which will rerun a test N times and only keep logs from the failing runs:
$ bazel test --runs_per_test=10 :flaker
INFO: Found 1 test target...
FAIL: //:flaker (run 10 of 10) (see /output/testlogs/flaker/test_run_10_of_10.log).
FAIL: //:flaker (run 4 of 10) (see /output/testlogs/flaker/test_run_4_of_10.log).
FAIL: //:flaker (run 5 of 10) (see /output/testlogs/flaker/test_run_5_of_10.log).
FAIL: //:flaker (run 9 of 10) (see /output/testlogs/flaker/test_run_9_of_10.log).
FAIL: //:flaker (run 3 of 10) (see /output/testlogs/flaker/test_run_3_of_10.log).
Target //:flaker up-to-date:
bazel-bin/flaker
INFO: Elapsed time: 0.828s, Critical Path: 0.42s
//:flaker FAILED
Executed 1 out of 1 tests: 1 fails locally.
You can use it to quickly figure out how flaky a test is and get some failing logs.

Executing test cases from Jenkins and getting an error

We have automated a few test cases and are trying to execute them from Jenkins, but we get the error below:
+ pybot -x junit.xml run.robot
==============================================================================
Run
==============================================================================
sip-001 | PASS |
------------------------------------------------------------------------------
sip-002 | PASS |
------------------------------------------------------------------------------
Run | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
Output: /opt/bitnami/apps/jenkins/jenkins_home/jobs/integration-test/workspace/output.xml
[ ERROR ] Reading XML source '/opt/bitnami/apps/jenkins/jenkins_home/jobs/integration-test/workspace/output.xml' failed: ImportError: No module named expat; use SimpleXMLTreeBuilder instead
Here the test cases pass, but the junit.xml results are not generated.
If we execute the same test cases from an Ubuntu machine (/path/run.robot), the test cases pass and the results (junit.xml, output.xml, etc.) are generated.
Run manually, the test cases execute fine and the results are generated.
Can anyone please guide me in resolving the error above, which we get when executing from Jenkins?
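A quick way to narrow this down (assuming the ImportError means what it says, that the Python interpreter Jenkins invokes is missing the pyexpat extension module) is to check expat from the same Jenkins shell step:
python -c "import xml.parsers.expat; print('expat OK')"
If that fails under Jenkins but succeeds on the Ubuntu machine, the two environments are resolving to different Python installations.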
You can use the Robot Framework plugin (https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin) to display results in Jenkins.
WORKAROUND: This won't help you keep using Jenkins, but I had a similar issue: test cases ran fine when I executed them manually from the command prompt on Windows, or when running Robot Framework in Eclipse, yet they failed when run from Jenkins, and I was never able to get an answer from anyone. I finally dumped Jenkins altogether and set up a Windows scheduled task to run the Robot Framework test cases instead. I'll continue using that until someone gives me a real fix.

gtest does not write the result XML file on test failure [Jenkins]

I invoke our gtest suite for iOS in Jenkins using this shell script:
#!/bin/sh
# Kill any running simulator, then launch the test app with gtest's XML output enabled.
pkill -a "iPhone Simulator"
ios-sim launch ${WORKSPACE}/source/apple/build/Debug-iphonesimulator/MyAppTest.app --args --gtest_output=xml:${WORKSPACE}/JUnitTestResultsIOS.xml
exit $?
This always successfully runs the tests, and when the tests pass the XML file is generated as expected. However, when the tests fail, no XML file is generated, and the "Execute shell command" build step terminates without failing the job. I echoed the exit code and it came back 0 even when the tests failed.
This is even more confusing because we have a basically identical script in the same job for running the tests on our OSX version, and that one always writes the XML and correctly fails the job when the tests fail.
This behavior seems totally arbitrary and everything about our configuration seems to be exactly as it should be. What am I missing?
Thanks!
There were two things at work here.
First, we had the break_on_failure gtest option enabled. It works great when running tests on a local machine, but under Jenkins it stops the run at the first failing assertion, so the XML report never gets written; we disabled it on the build machine.
The second issue was how we used the exit code. Since ios-sim launch ... always succeeds, we were always getting an exit code of 0, regardless of whether the tests passed or failed. I ended up using grep to determine whether the resulting XML file recorded any failures, and generated an exit code based on that.
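A sketch of that last step, appended to the Jenkins shell script (this assumes gtest's JUnit-style XML, where the root testsuites element carries a failures="N" count, and the same report path as above):
XML="${WORKSPACE}/JUnitTestResultsIOS.xml"
# Fail the build if the report is missing or records any failures.
if [ ! -f "$XML" ]; then
  exit 1
fi
if grep -q 'failures="[1-9]' "$XML"; then
  exit 1
fi
exit 0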

Why does clean compile "test-app -unit -integration" run tests twice?

In my Jenkins job, my Grails target does:
clean compile "test-app -unit -integration"
And it outputs the test results twice.
I checked .jenkins/jobs/myjob/target/test-reports
and there are XML files corresponding to the tests, with no duplication. So everything looks like the tests only executed once. The same goes for the console log - I can only see the tests execute once.
However, when I look at the build results in Jenkins, all the tests are duplicated.
If I go to
.jenkins/myjob/builds/buildnumber/junitResult.xml
I can see the tests duplicated there.
So it is as if Jenkins duplicates the tests when it creates the junitResult.xml file.
Any ideas why?
