I have many test cases annotated with a [<Fact(Skip="~~")>] attribute.
However, when I run tests with the dotnet test command, they are not treated as active test cases.
How can I easily convert them into active test cases?
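For reference, the change is just removing the Skip argument from the attribute. A minimal F#/xUnit sketch (the test names and skip reason are placeholders):

module Tests

open Xunit

// Skipped: dotnet test discovers this case but reports it as skipped, not run
[<Fact(Skip = "not ready yet")>]
let skippedExample () = Assert.True(true)

// Active: removing the Skip argument makes it a normal test case again
[<Fact>]
let activeExample () = Assert.True(true)

A project-wide search-and-replace on the attribute (minding the closing parenthesis) handles the bulk conversion.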
I have a Java-based test suite running under Bazel 4.2.2, and my goal is to collect code coverage regardless of test flakiness. I tried adding these options:
bazel coverage ... --runs_per_test=3 --cache_test_results=no ...
but it looks like if 1 of the 3 runs fails, the test is marked as failed and coverage data is not collected for failing tests.
Does Bazel have any flags to take the first passing result, and retry only on failures?
The full command I've tried is
bazel coverage --jobs=6 --runs_per_test=3 --cache_test_results=no --combined_report=lcov --coverage_report_generator="@bazel_tools//tools/test/CoverageOutputGenerator/java/com/google/devtools/coverageoutputgenerator:Main" -- //$TARGET/... 2>&1
Thanks!
Answering my own question (I can't accept it yet): I found an option in the documentation:
https://docs.bazel.build/versions/0.25.0/command-line-reference.html
--flaky_test_attempts=<a positive integer, the string "default", or test_regex@attempts. This flag may be passed more than once> multiple uses are accumulated
Each test will be retried up to the specified number of times in case of any test failure. Tests that required more than one attempt to pass would be marked as 'FLAKY' in the test summary. If this option is set, it should specify an int N or the string 'default'. If it's an int, then all tests will be run up to N times. If it is not specified or its value is 'default', then only a single test attempt will be made for regular tests and three for tests marked explicitly as flaky by their rule (flaky=1 attribute).
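Applied to the coverage command above, that would look like this (the attempt count is illustrative; --runs_per_test is dropped since retries now happen only on failure):

bazel coverage --jobs=6 --flaky_test_attempts=3 --cache_test_results=no --combined_report=lcov -- //$TARGET/...

With this flag, a test stops at its first passing attempt instead of always running three times.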
Another option is using the flaky attribute on your test rules for the problematic tests. That will run them up to 3 times even with normal bazel test, without needing any flags.
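A minimal sketch of that attribute on a BUILD target (the target and source names are placeholders):

java_test(
    name = "flaky_integration_test",
    srcs = ["FlakyIntegrationTest.java"],
    flaky = True,  # retried up to 3 times before being reported as failed
)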
I have a repo using Bazel as the build and test system. The repo contains both Python and Go. There are two types of tests, unit tests and integration tests, and I would like to run them in two separate test steps in our CI. I would also like new tests to be discovered automatically when they are added to the repo. We are currently using bazel test ..., but this does not help me split the unit tests from the integration tests. Is there any rule or existing method to do this? Thanks.
Bazel doesn't really have a direct concept of unit vs. integration testing, but it does have the concept of a test "size", i.e. how "heavy" a test is. The docs for the size attribute on test rules give an outline, while the Test encyclopedia gives a great overview.
When the tests are appropriately sized, it's then possible to use the --test_size_filters flag to run only the tests of that size.
For example,
bazel test ... --test_size_filters=small for running unit tests
bazel test ... --test_size_filters=large for integration tests
You may want to add additional flags for unit tests vs. integration tests, so adding a new config to .bazelrc might be a good idea; then run via bazel test ... --config=integration, for example.
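A sketch of such a .bazelrc (the config names and the extra timeout flag are illustrative):

# .bazelrc
test:unit --test_size_filters=small
test:integration --test_size_filters=large --test_timeout=900

CI can then run bazel test //... --config=unit and bazel test //... --config=integration as two separate steps, and the //... wildcard picks up newly added tests automatically.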
--test_size_filters is the best way, because it is a widely used solution. If you need a different separation, then tags are the way to go:
py_test(
    name = "unit_test",
    srcs = ["unit_test.py"],  # srcs is required; the file name here is illustrative
    tags = ["unit"],
)

py_test(
    name = "integration_test",
    srcs = ["integration_test.py"],
    tags = ["integration"],
)
And then
bazel test --test_tag_filters=unit //...
bazel test --test_tag_filters=integration //...
bazel test --test_tag_filters=-integration,-unit //... # every test that is neither "unit" nor "integration"
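The tag filters can also live in .bazelrc configs, in the same spirit as the size-filter answer above (the config names are illustrative):

# .bazelrc
test:unit --test_tag_filters=unit
test:integration --test_tag_filters=integration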
Using VS2015, the Test Explorer allows you to run a single scenario outline.
Now I need to do the same using the NUnit3 console tool (I'm using NUnit as the Unit Test Provider).
Currently I'm using the following command to run a test with the console tool.
"C:\NUnit-3.0.1\bin\nunit3-console.exe" Path.Scripts.dll --test:Fully.Qualified.Name.TestAAAFeature.TestAAA --x86
I could run a single row of a SpecFlow scenario outline example using the --testlist: option.
# list.txt
TestC112169Feature.TestCase112169("1","atomic",null)
# cmd
"C:\NUnit-3.0.1\bin\nunit3-console.exe" Path.Scripts.dll --testlist:"c:\list.txt" --x86
And that does the trick.
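If the exact parameterized test name is hard to guess, nunit3-console can list the discovered test names without running them; the names it prints can be pasted into the test list file:

"C:\NUnit-3.0.1\bin\nunit3-console.exe" Path.Scripts.dll --explore --x86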
First and foremost, I think you should rename your test cases to be more informative, as a best practice.
Coming to your question, you should use filters, which can be specified using a where clause. For running a specific test case, you can use either method or name to filter down to one or more target test case(s).
Just append the following to your command and you should be good to go.
--where "name == 'TestCase11257'"
OR
--where "method == 'TestCase11257'"
or you can even combine multiple filters like
--where "name == 'TestCase11257' || method == 'TestCase11257'"
You can read more about filters in the NUnit Test Selection Language documentation.
In FitNesse I tried to execute the command http://<host>:<port>/<suite path and test name>?responder=suite&startTest=TestTwo. It executes the test case that is passed in the URL. If we pass the suite path and remove the test name, it executes the whole suite. Is there any way we can run all tests coming after TestTwo?
No, you can run an entire suite or you can run an individual test. Perhaps you can break the suite into smaller sub-suites?
You can use the firstTest parameter when calling the URL. In your case, you can do:
http://<host>:<port>/<suite path>?responder=suite&firstTest=TestTwo
Note that this only works based on the alphabetical order of the full path names of the test cases, as mentioned in the FitNesse wiki:
firstTest: if present, only tests whose full path names are lexicographically greater (later in alphabetical order) will be run. This is of questionable use since tests are not guaranteed to be run in any particular order.
Alternatively, you can mark certain test cases with tags and execute them using the suiteFilter parameter. You can find the relevant documentation on the same wiki page.
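For example (the tag name is illustrative):

http://<host>:<port>/<suite path>?responder=suite&suiteFilter=smoke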
I have a test suite which has the init and end functions implemented in it.
When I run the suite it produces some html outputs to show the results of the test cases (pass and fail etc.) from the suite.
But in the log, init_per_suite and end_per_suite are also counted as test cases and their run results are shown. Is there a way to avoid this? I guess there might be a flag in Erlang Common Test which can be used to disable this.
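For context, a minimal Common Test suite with those callbacks looks like this (the module and case names are placeholders):

-module(example_SUITE).
-export([all/0, init_per_suite/1, end_per_suite/1, my_case/1]).
-include_lib("common_test/include/ct.hrl").

all() -> [my_case].

init_per_suite(Config) ->
    %% suite-wide setup; appears in the HTML log alongside the test cases
    Config.

end_per_suite(_Config) ->
    %% suite-wide teardown; also logged
    ok.

my_case(_Config) ->
    ok.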
No, you can't disable it. Besides, it may be important information whether init_per_suite/end_per_suite succeeded or failed.
Also, you can see that init_per_suite/end_per_suite are not included in the general numbering of test cases in the resulting HTML; that may help if you want to parse the HTML output. You can also sort cases by their numbers so that the unnumbered cases end up at the top/bottom.