I have a repo that uses Bazel as its build and test system. The repo contains both Python and Go code, and there are two types of tests: unit tests and integration tests. I would like to run them as two separate test steps in our CI, and I would like new tests to be discovered automatically as they are added to the repo. We are currently using bazel test ..., but this will not help me split the unit tests from the integration tests. Is there any rule or existing method to do this? Thanks.
Bazel doesn't really have a direct concept of unit vs integration testing, but it does have the concept of a test "size", i.e. how "heavy" a test is. This docs page gives an outline of the size attribute on test rules, while the Test Encyclopedia gives a great overview.
Once the tests are appropriately sized, you can use the --test_size_filters flag to run only the tests of a given size.
For example,
bazel test ... --test_size_filters=small for running unit tests
bazel test ... --test_size_filters=large for integration tests
You may want to add additional flags for unit tests vs integration tests, so adding a new config to your .bazelrc might be a good idea; you can then run, for example, bazel test ... --config=integration.
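A minimal sketch of what such a .bazelrc could look like (the config names and the extra timeout flag are just illustrative):
# .bazelrc
test:unit --test_size_filters=small
test:integration --test_size_filters=large
test:integration --test_timeout=900
With that in place, CI can run bazel test //... --config=unit and bazel test //... --config=integration as two separate steps, and new tests are picked up automatically as long as they are sized correctly.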
--test_size_filters is the best way because it is a widely used solution. If you need a different kind of separation, then tags are the way to go:
py_test(
    name = "unit_test",
    srcs = ["unit_test.py"],
    tags = ["unit"],
)
py_test(
    name = "integration_test",
    srcs = ["integration_test.py"],
    tags = ["integration"],
)
And then
bazel test --test_tag_filters=unit //...
bazel test --test_tag_filters=integration //...
bazel test --test_tag_filters=-integration,-unit //... # every test tagged neither "unit" nor "integration"
I have a Java-based test suite running on Bazel 4.2.2, and my goal is to collect code coverage regardless of test flakiness. I tried to add these options:
bazel coverage ... --runs_per_test=3 --cache_test_results=no ...
but it looks like if one of the three runs fails, the test is marked as failed and coverage data is not collected for failing tests.
Does Bazel have any flags to take the first passing result, and retry only on failures?
The full command I've tried is
bazel coverage --jobs=6 --runs_per_test=3 --cache_test_results=no --combined_report=lcov --coverage_report_generator="@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main" -- //$TARGET/... 2>&1
Thanks!
Answering my own question (can't accept it yet): I found this option in the documentation:
https://docs.bazel.build/versions/0.25.0/command-line-reference.html
--flaky_test_attempts=<a positive integer, the string "default", or test_regex@attempts. This flag may be passed more than once> multiple uses are accumulated
Each test will be retried up to the specified number of times in case of any test failure. Tests that required more than one attempt to pass would be marked as 'FLAKY' in the test summary. If this option is set, it should specify an int N or the string 'default'. If it's an int, then all tests will be run up to N times. If it is not specified or its value is 'default', then only a single test attempt will be made for regular tests and three for tests marked explicitly as flaky by their rule (flaky=1 attribute).
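Applied to the original command, the sketch below swaps --runs_per_test for --flaky_test_attempts so that only failing tests are retried (untested):
bazel coverage --jobs=6 --flaky_test_attempts=3 --cache_test_results=no --combined_report=lcov --coverage_report_generator="@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main" -- //$TARGET/... 2>&1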
Another option is using the flaky attribute on your test rules for the problematic tests. That will run them up to 3 times even with a plain bazel test, without needing any flags.
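A minimal sketch of that attribute (using java_test since the suite is Java-based; the target and file names are placeholders and other attributes such as deps are omitted):
java_test(
    name = "sometimes_flaky_test",
    srcs = ["SometimesFlakyTest.java"],
    flaky = True,  # bazel test retries this target up to 3 times before reporting failure
)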
I want to query all gtest cases via Bazel.
The --gtest_filter parameter can only be used with the bazel test command,
and I have tried bazel query //xxx:all, but that only shows the test targets defined in the BUILD file. I want to get the list of test cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
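Since Bazel only forwards such flags, one practical way to enumerate the cases defined in the .cc files is to run the test binary itself and let googletest list them, e.g. (assuming //xxx:some_test is a cc_test built against googletest):
bazel run //xxx:some_test -- --gtest_list_tests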
If you use CLion with the Bazel plugin you get the following view for googletest tests:
This works even with Catch2 (but for Catch2 the view is not so nice). I guess that's some IDE magic here - nevertheless, it gives you what you want. I assume you can also come up with some type of Bazel Aspect that produces this information for you.
I also tested this with Lavender (with minor modifications) and Visual Studio, which likewise gives me a list of all tests in the test overview:
I have many tests annotated with the [<Fact(Skip="~~")>] attribute.
However, when I run tests with the dotnet test command, they are not considered active test cases.
How can I easily convert them into active test cases?
I am working with Erlang and EUnit for unit testing, and I would like to write a test runner to automate running my unit tests. The problem is that eunit:test/1 seems to return only "error" or "ok" and not a list of the tests that ran and whether each passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also documentation for verbose option and structured report.
An alternative option would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test is weaker on the macro side but produces very comprehensible HTML reports.
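A minimal invocation might look like this, assuming the suites live under test/ and the HTML logs should go to ct_logs/ (directory names are just placeholders):
ct_run -dir test -logdir ct_logs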
There is no easy or documented way, but there are currently two ways you can do this. One is to pass the option 'event_log' when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
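For a rough idea of what such a listener can look like, here is an untested sketch that simply collects {Id, Status} pairs and prints them at the end. The callback names follow the eunit_listener behaviour; check src/eunit_listener.erl for the exact shape of the Data proplists:
%% my_listener_module.erl -- illustrative only
-module(my_listener_module).
-behaviour(eunit_listener).

-export([start/0, start/1]).
-export([init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

%% The state is a list of {Id, Status} pairs for finished test cases.
init(_Options) -> [].

handle_begin(_Kind, _Data, St) -> St.

%% Data is a property list; 'id' and 'status' are among its keys.
handle_end(test, Data, St) ->
    [{proplists:get_value(id, Data), proplists:get_value(status, Data)} | St];
handle_end(_Kind, _Data, St) -> St.

handle_cancel(_Kind, _Data, St) -> St.

terminate({ok, _Summary}, St) ->
    io:format("test results: ~p~n", [lists:reverse(St)]);
terminate({error, Reason}, _St) ->
    io:format("eunit aborted: ~p~n", [Reason]).
You would then run it as above, via eunit:test(my_module, [{report, my_listener_module}]).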
I've just pushed a very trivial listener to GitHub; it stores the EUnit results in a DETS table. This can be useful if you need to process that data further, since it is stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit-NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the Build Log. Then NCover kicks in, generating our code coverage results.
Two things could pose as disadvantages:
NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so (since NCover invokes NUnit) I have to run the unit tests twice: (1) to get the test results and (2) to get coverage results via NCover. Naturally, that makes my builds last longer.
Setting up the arguments for properly invoking NCover wasn't trivial. But once I had it installed, I never had to maintain it.
In any case, the resulting reporting (especially the Trend aspect) is very useful in monitoring how our code evolves over time. Especially if you're working on a Platform (as opposed to short-lived Projects), Trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that has been reworked in order to achieve our goals, in particular the section that was named "Run MSTest for Test Assemblies". The whole implementation is here, after some cleanups to make the flow easier to understand (the pic was too large to be inserted directly here).
First, some additional Arguments are implemented in the Build Process Template and are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" activity.
If this succeeds and we have enabled coverage for this build, we move to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following as Args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
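With the same hypothetical C:\drop\bin\Foo_nunit.dll, an NUnitPath of C:\NUnit\nunit-console.exe and a build number of MyBuild_1.0.0.1 (all placeholder values), the formatted NCover.Console.exe arguments come out roughly as:
"C:\NUnit\nunit-console.exe" "Foo_nunit.dll" //w "C:\drop\bin" //x "C:\drop\Foo.nccov" //literal //ias Foo //onlywithsource //p "MyBuild_1.0.0.1"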
All of these run in the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with the "Merge NCCovs" part, where NCover.Console.exe is executed again - this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
When this has run, all NCCOV files of this build have been merged into one NCCOV file named after the build, and the Trend file (which tracks the build definition throughout its life) has been updated with the data of the current build.
We now only have to generate the final HTML report; this is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)