Bazel caching of compilation / test failures?

When compilation succeeds or a test passes, Bazel caches the result, so if we repeat the build / test with the exact same code we get the result immediately.
That's great.
However, if the compilation fails and I repeat the build with the exact same code, Bazel will attempt to recompile the code (and will fail again, with the exact same result).
Same for tests - if a test fails, and I rerun the test with the exact same code - Bazel will repeat the test.
Is there a way to tell Bazel to cache test / compilation failures as well as successes?
Usecase example:
I changed a lot of code in multiple files
I run bazel test //...:all
100 tests run, 4 different tests fail
I fix the code of one of the tests and rerun bazel test //...:all
All the failing tests run again, even though 3 of the failing tests have no dependency change and there's no point rerunning them.
I have to wait 4x longer than necessary for the tests to finish, and I'm sad :(
Something similar goes for build failures. Sometimes a failed build can take many minutes to run on our codebase. If I rebuild without changing the files, it's a waste of time for Bazel to rerun the failing build when it could use the cache...

I can see why this might be useful, but I'm not aware of a flag or other option that does something like this. At least in the case of a failing test, Bazel re-runs the failed test even if the inputs haven't changed, to account for flaky tests. I suppose the same could be said for actions (a flaky compiler sounds pretty scary though). You might consider filing a feature request: https://github.com/bazelbuild/bazel/issues.
My advice would be to test only the specific target you're working on after running //...:all. You can also combine that with --test_filter to test only a specific test method within the test target.
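For example, with a hypothetical target //myproject/tests:login_test (the exact --test_filter syntax depends on the test framework):
# Re-run only the target you're iterating on, and only a specific test within it:
bazel test //myproject/tests:login_test --test_filter=LoginTest.testInvalidPassword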

Partial answer:
If we want to cache test failures, we can add --cache_test_results=yes (as opposed to the default auto which only caches successes). Personally, I've added it to my .bazelrc...
I haven't found a way to cache compilation failures yet, unfortunately...
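For reference, the flag works both as a one-off on the command line and permanently in .bazelrc:
# One-off, on the command line:
bazel test --cache_test_results=yes //...:all
# Or permanently, in your .bazelrc:
test --cache_test_results=yes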

Related

Run each test in a new container

I have a Google Test based test suite. Since the tests manipulate the filesystem and do other things that I don't want left behind in case of a test crash, besides just not playing nicely with running tests in parallel, I want to run each test case in a new container. I am currently using CTest (aka CMake test) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case manually to CTest with a call to docker run as part of the test command, but it is brittle and time-consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of specific test.
The format of the list is a bit awkward to parse though, so you'll need some shell scripting to extract the actual test names.
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
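A rough, untested sketch of that approach (the runner path, Docker image, and the parsing are all placeholders; value- or type-parameterized tests may need extra care):
# List tests as "Suite.Test" lines: suite lines have no indent, test lines are indented.
tests=$(./my_test_runner --gtest_list_tests | awk '/^[^ ]/ { suite=$1 } /^  / { print suite $1 }')
failed=0
for t in $tests; do
  # Run each test case in its own throwaway container.
  if ! docker run --rm -v "$PWD:/work" my-test-image /work/my_test_runner --gtest_filter="$t"; then
    echo "FAILED: $t"
    failed=1
  fi
done
exit $failed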

Run a test until it fails

In Bazel, you can re-run a test multiple times with:
bazel test --runs_per_test=<n> <target>
This is useful to reproduce conditions that lead a flaky test to fail.
The shortcoming of this approach, though, is that the tests will be run n times regardless of the results.
For tests that fail only under very rare conditions, you have to set n high, which means you may have to scroll through a lot of text before you find the failing test's output.
Is there a built-in way in Bazel to run a test until it fails? Right now I'm using a while loop in Bash, which is good enough for my use case but not portable:
while bazel test --test_output=errors -t- <target_name>; do :; done
Passing --notest_keep_going will cause Bazel to exit as soon as a test fails. So, you could use --notest_keep_going in combination with an absurdly high --runs_per_test.
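For example (the run count is arbitrary):
# Stop at the first failure instead of finishing all 10000 runs:
bazel test --runs_per_test=10000 --notest_keep_going --test_output=errors <target>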

How do you label flaky tests using JUnit?

How do I label flaky tests in the JUnit XML syntax that Jenkins uses to give us a report? Jenkins gives me a nice report of tests that succeeded and failed. I would like to know which tests are known to be flaky.
Certainly a flaky test is something that needs to be avoided and fixed. Tests should always pass and if not they need to be worked on.
However we don't live in a perfect world and thus it might be helpful to identify tests that have failed every now and then in the past.
Since Jenkins keeps track of test results you can step through the history of a particular test case (using the "Previous/Next build" links).
Additionally there are two Jenkins plugins that might be helpful:
Test Results Analyzer Plugin: this plugin lets you see the history of a particular test case (or test suite) at a glance (plus adds nice charts).
Flaky Test Handler Plugin: this plugin has deeper support for failing tests (i.e. rerunning them). It's a bit restricted to Maven and Git, though.
Labeling such tests is not helpful. Your primary goal should be to eliminate the "flakiness" of such tests: by identifying the root cause of the problem and either fixing production or test code, or both; or, worst case, by deleting or @Ignore'ing those tests.
Disclaimer of course: Jenkins can't tell you about flaky test cases. How could it?!
If you assume that you have flaky test cases, you could spend some time creating a Jenkins setup where all tests are run 5, 10, or 50 times against the same build output, and then compare statistics. But this is nothing that comes for free: you will have to implement it yourself.
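As a starting point, a minimal shell sketch of that idea for a Maven project (the paths and the run count are assumptions):
# Run the suite 10 times against the same sources and keep each set of JUnit XML reports.
for i in $(seq 1 10); do
  mvn test || true                       # keep looping even when tests fail
  mkdir -p flaky-reports/run-$i
  cp target/surefire-reports/*.xml flaky-reports/run-$i/
done
# See which test classes failed in some runs: higher counts mean more frequent failures.
grep -rl "<failure" flaky-reports | xargs -n1 basename | sort | uniq -c | sort -rn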

Test Impact Analysis & MSBuild: execute only impacted tests

I have a TFS build in VS2010. Following the build, unit tests are executed.
In the build summary it tells me that "1 test run(s) completed - 100% average pass rate", but below this it states "No tests were impacted".
I guess "impacted tests" relates to the functionality that provides the ability to only run tests that were impacted by checked-in code?
Or is there a way that I can run only the tests which were impacted, based on the result of Test Impact Analysis?
I have set "Analyze Test Impact" to True, but still no results come and it executes all test cases in the test projects.
The following worked for me:
Microsoft.TeamFoundation.TestImpact
http://scrumdod.blogspot.in/2011/03/tfs-2010-build-only-run-impacted-tests.html

Validating the number of failing JUnit tests in Ant

I would like to run a JUnit regression test suite from within an Ant build script. The test suite has a known number of failures which we are working towards fixing. I want the build to fail if the number of failing tests increases (or changes at all, if that's easier to code); this value can be hard-coded in the Ant script.
I'm looking for details on how to implement this.
Even though I'm a bit late to the party:
Another option would be to use JUnit 4 and annotate the failing tests with @Ignore.
That way they will not run, and not count as failures, but the number of ignored tests will still be printed, reminding you there is work left to do.
If a test fails for the first time, you'll get a regular failure; then you can decide whether to fix or to ignore it.
More information:
http://www.devx.com/Java/Article/31983/1954
The junit Ant task creates an XML file with the failing tests. There's no reason you couldn't use XMLTask to count the number of failures and have a conditional fail if that value is too high.
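The counting itself is simple; here is a rough shell sketch of the idea (the report path and threshold are hypothetical), which XMLTask or an exec call could replicate inside the Ant build:
# Fail if the number of <failure> elements in the JUnit reports exceeds the known baseline.
# (Note: this counts only <failure> elements, not <error> elements.)
KNOWN_FAILURES=12
actual=$(grep -o "<failure" build/test-reports/*.xml | wc -l)
if [ "$actual" -gt "$KNOWN_FAILURES" ]; then
  echo "Test failures increased: $actual > $KNOWN_FAILURES"
  exit 1
fi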
