Does OpenCover run tests in parallel? Is there an option to have it not?

I'm having some issues running OpenCover.console over some .NET assemblies.
We're seeing some MSTEST tests fail, only when run under OpenCover.
What's going on is that some of our tests read from and write to a LocalDB database. Yes, these are more properly called integration tests than unit tests, but they test functionality that we need tested.
The tests in question start by clearing the tables they work with, populating seed data, running some code that should insert records into the database, and then reading the database to ensure that the records exist.
What we're seeing is that everything runs correctly, except we're not seeing all of the records that we'd expect.
What we're hypothesizing is that, when run from OpenCover, the tests might be running in parallel, and that the table-clearing code of one test is running before the checking code of another test has run.
So, the questions:
Does OpenCover run tests in parallel?
Is there a way to configure it not to?
Better yet - is there a way to configure it not to for some tests but not for others? (Most of our tests don't access the database, and parallelism for them wouldn't be a bad thing.)
The command we're using:
<path>\OpenCover.Console.exe ^
-target:"<path>\vstest.console.exe" ^
-targetargs:"/Logger:trx <listofassemblies>" ^
-excludebyattribute:*.TestClass*;*.TestMethod*;*.ExcludeFromCoverage* ^
-output:TestResults\outputCoverage.xml ^
-filter:"<listoffilters>"

Related

Bazel caching of compilation / test failures?

When compilation succeeds or a test passes, Bazel caches the result so if we repeat the build / test with the exact same code we get the result immediately.
That's great.
However, if the compilation fails and I repeat the build with the exact same code, Bazel will attempt to recompile the code (and will fail again, with the exact same result).
Same for tests: if a test fails and I rerun the test with the exact same code, Bazel will repeat the test.
Is there a way to tell Bazel to cache test / compilation failures as well as successes?
Use case example:
I changed a lot of code in multiple files
I run bazel test //...:all
100 tests run, 4 different tests fail
I fix the code of one of the tests and rerun bazel test //...:all
All the failing tests run again, even though 3 of the failing tests have no dependency change and there's no point rerunning them
I have to wait 4x longer than necessary for the tests to finish, and I'm sad :(
Something similar goes for build failures. Sometimes a failed build can take many minutes to run on our codebase. If I rebuild without changing the files, it's a waste of time for Bazel to rerun the failing build if it could use the cache...
I can see why this might be useful, but I'm not aware of a flag or other option that does something like this. At least in the case of a failing test, Bazel re-runs the failed test even if the inputs haven't changed, to account for flaky tests. I suppose the same could be said for actions (a flaky compiler sounds pretty scary, though). You might consider filing a feature request: https://github.com/bazelbuild/bazel/issues.
My advice would be to test only the specific target you're working on after running //...:all. You can also combine that with --test_filter to test only a specific test method within the test target.
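For example, testing a single target and a single method might look like the following (the target and test names are made up for illustration, and the exact --test_filter syntax depends on your test framework):
bazel test //myproject/server:server_test --test_filter=ServerTest.testStartup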
Partial answer:
If we want to cache test failures, we can add --cache_test_results=yes (as opposed to the default, auto, which only caches successes). Personally, I've added it to my .bazelrc...
I haven't found a way to cache compilation failures yet, unfortunately...
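For reference, that suggestion as a .bazelrc entry (this is exactly the flag described in the partial answer above):
# .bazelrc
# Cache test results even when a test fails, instead of the default "auto".
test --cache_test_results=yes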

Run each test in a new container

I have a Google Test-based test suite. The tests manipulate the filesystem and do other things that I don't want left behind if a test crashes, and they don't play nicely with running tests in parallel, so I want to run each test case in a new container. I am currently using CTest (a.k.a. CMake's test driver) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case manually to CTest with a call to docker run as part of the test command, but it is brittle and time consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of a specific test.
The format of the list is a bit awkward to parse, though, so you need some shell scripting skills to get the actual test names.
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
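A rough sketch of that loop, assuming the gtest binary is baked into a Docker image; the image name and binary path below are placeholders, not from the original question:
#!/usr/bin/env bash
# Run every GTest case in its own container.
set -u

IMAGE=my-test-image      # hypothetical image containing the test binary
BINARY=/opt/my_tests     # hypothetical path of the gtest binary inside it

# --gtest_list_tests prints "Suite." lines followed by indented "  Case" lines.
# Strip trailing "# ..." comments and stitch "Suite.Case" names back together.
mapfile -t tests < <(docker run --rm "$IMAGE" "$BINARY" --gtest_list_tests \
    | sed 's/#.*//' \
    | awk '/^[^ ]/ { suite = $1 } /^  / { print suite $1 }')

failures=0
for test_name in "${tests[@]}"; do
    echo "=== $test_name"
    # A fresh container per test; the runner's exit code tells us pass/fail.
    if ! docker run --rm "$IMAGE" "$BINARY" --gtest_filter="$test_name"; then
        echo "FAILED: $test_name"
        failures=$((failures + 1))
    fi
done

echo "$failures test(s) failed"
[ "$failures" -eq 0 ]
To feed this back into CTest, each generated name could instead be registered with its own add_test() call wrapping the same docker run command.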

Run a test until it fails

In Bazel, you can re-run a test multiple times with:
bazel test --runs_per_test=<n> <target>
This is useful to reproduce conditions that lead a flaky test to fail.
The shortcoming of this approach, though, is that the tests will be run n times regardless of the results.
For tests that fail only under very rare conditions, this means you have to set n high, and then you may have to scroll through a lot of text before you find the failing test's output.
Is there a built-in way in Bazel to run a test until it fails? Right now I'm using a while loop in Bash, which is good enough for my use case but not portable:
while bazel test --test_output=errors -t- <target_name>; do :; done
Passing --notest_keep_going will cause Bazel to exit as soon as a test fails. So, you could use --notest_keep_going in combination with an absurdly high --runs_per_test.
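Combined, that looks something like this (the run count is arbitrary):
bazel test --runs_per_test=10000 --notest_keep_going --test_output=errors <target_name>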

Test failing when run with XCTool with logicTestBucketSize

I have a project with over 1000 unit tests and was thinking of speeding up the build by using xctool's parallelize option.
So I turned that on and set logicTestBucketSize to 50. The tests run, but some fail that don't fail when this option isn't used.
My question: are buckets run independently in their own sandboxes, or do they share global state that the unit tests might set up? That might explain some cross-contamination between the tests.
Yes. When running tests in parallel, xctool will run each bucket of tests in a single process, and run multiple buckets simultaneously in different processes. Additionally, you can select whether bucketing will be done on a case or class basis with -bucketBy class. You should probably use class unless you have very large test classes with many test cases.
Your tests may fail now, though they didn't before, because:
A test case relies on global state set up by a previous test case, even one from a different test class, as long as it is grouped into the same binary. Such a test can now fail because the tests may run in a different order, or the test it depends on may not run at all.
A test alters global state and causes later tests to fail. This may not have been a problem before because that test used to run after the tests it would otherwise have affected.
A good way of dealing with the first type of failure is to run with a bucket size of 1 (in either bucket-by-class or bucket-by-case mode, depending on which mode you'll be running later).
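For reference, that debugging run might be invoked like this (the workspace and scheme names are placeholders):
xctool -workspace MyApp.xcworkspace -scheme UnitTests run-tests -parallelize -bucketBy class -logicTestBucketSize 1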

Run JUnit tests in parallel

When running unit tests, Gradle can execute multiple tests in parallel without any changes to the tests themselves (e.g. special annotations, test runners, etc.). I'd like to achieve the same thing with Ant, but I'm not sure how.
I've seen this question, but none of the answers really appeal to me. They either involve hacks with ant-contrib, special runners set up with the @RunWith annotation, some other special annotations, etc. I'm also aware of TestNG, but I can't make the Eclipse plug-in migrate our tests, and we have around 10,000 of them, so I'm not doing it by hand!
Gradle doesn't need any of this stuff, so how do I do it in Ant? I guess Gradle uses a special runner, but if so, it's set up as part of the JUnit setup and not mentioned on every single test. If that's the case, then that's fine. I just don't really want to go and modify c. 10,000 unit tests!
Gradle doesn't use a special JUnit runner in the strict sense of the word. It "simply" has a sophisticated test task that knows how to spin up multiple JVMs, run a subset of test classes in each of them (by invoking JUnit), and report back the results to the JVM that executes the build. There the results get aggregated to make it look like a single-JVM, single-threaded test execution. This even works for builds that define their own test listeners.
To get parallel test execution in Ant, you would need an Ant task that supports this feature (not sure if one exists). An alternative is to import your Ant build into Gradle (ant.importBuild "build.xml") and add a test task on the Gradle side.
