How to run a single specflow scenario outline using nunit3 console tool - specflow

Using VS2015, the Test Explorer allows you to run a single scenario outline.
Now I need to do the same using the NUnit3 console tool (I'm using NUnit as the unit test provider).
Currently I'm using the following command to run a test with the console tool:
"C:\NUnit-3.0.1\bin\nunit3-console.exe" Path.Scripts.dll --test:Fully.Qualified.Name.TestAAAFeature.TestAAA --x86

I was able to run a single row of a SpecFlow scenario outline's examples using the --testlist option.
# list.txt
TestC112169Feature.TestCase112169("1","atomic",null)
# cmd
"C:\NUnit-3.0.1\bin\nunit3-console.exe" Path.Scripts.dll --testlist:"c:\list.txt" --x86
And that does the trick.

First and foremost, I think you should rename your test cases to be more informative, as a best practice.
Coming to your question, you should use test filters, which are specified with a where clause. To run a specific test case, you can filter on either method or name to narrow down to one or more target test cases.
Just append the following to your command and you should be good to go.
--where "name == 'TestCase11257'"
OR
--where "method == 'TestCase11257'"
or you can even combine multiple filters like
--where "name == 'TestCase11257' || method == 'TestCase11257'"
You can read more about filters here

Related

How to show all gtest cases with bazel without the "test" cmd

I want to query all gtest cases with bazel.
The parameter "--gtest_filter" can only be used with the "bazel test" cmd.
I tried to use "bazel query //xxx:all", but it only shows the test targets defined in the BUILD file; I want to get the list of cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through to the test runner; Bazel does not know what it represents.)
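If the goal is just to enumerate the cases defined in the .cc files, one workaround is to build the test binary and ask googletest itself for the list; a sketch, assuming //xxx:my_test is one of your cc_test targets (the target name here is illustrative):
bazel run //xxx:my_test -- --gtest_list_tests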
If you use CLion with the Bazel plugin you get the following view for googletest tests:
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there is some IDE magic here; nevertheless, it gives you what you want. I assume you could also come up with some kind of Bazel aspect that produces this information for you.
I tested this also with Lavender (with minor modifications) and Visual Studio, which likewise shows a list of all tests in the test overview.

Ranorex custom re-run component

Since Ranorex does not provide re-run functionality out of the box, I have to write my own, and before I start I just want to ask for advice from people who've done it, or about a possible existing solution on the market.
The goal is:
At the end of the run, re-run the failed test cases.
Requirements:
The number of re-run iterations should be configurable
If data binding is used, only the data-binding iterations that failed should be included
I would use the Ranorex command line argument possibilities to achieve this. The main thing would be to structure the suite so that each test case can be run separately.
During the test run I would log the failed test cases into a text file, a database or any other store that you can later read the data from (you can even parse them from the XML result if you want to).
And from that data you just insert the test case name as a command line argument when running the suite again:
testSuite.exe /testcase:TestCaseName
or
testSuite.exe /tc:TestCaseName
The full command line args reference can be found here:
https://www.ranorex.com/help/latest/lesson-4-ranorex-test-suite
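For the logging step, a minimal sketch of a helper that could be called from user code in a test case's tear-down (the failed.txt file name and the semicolon separator are placeholders, and the Ranorex.Core.Testing namespace is assumed for the TestSuite/TestCaseNode calls, which are the same ones used in the proposal below):

using System;
using System.IO;
using Ranorex.Core.Testing;

public static class RerunList
{
    // Appends "TestCaseName;RowIndex" to a simple text file so the case can be re-run later.
    public static void RecordCurrentTestCase()
    {
        string testCaseName = TestCaseNode.Current.Name;
        int rowIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex;

        File.AppendAllText("failed.txt", testCaseName + ";" + rowIndex + Environment.NewLine);
    }
}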
Possible solutions:
1a. Based on the report XML: parse the report and collect info about all failed test cases.
Cons:
Parsing will be tricky
or:
1b. Or create the list of failed test cases at runtime: if a failure occurs, add this iteration to the re-run list (could be a file or a DB table) during tear-down.
Using for example:
string testCaseName = TestCaseNode.Current.Name;
int testCaseIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex;
then:
2a. Based on the list, run the executable with parameters, looping through each record (see the sketch after this list).
like this:
testSuite.exe /tc:testCaseName /tcdr:testCaseIndex
or:
2b. Or generate a new test suite file (.rxtst) and recompile the solution to create an updated executable.
and last part:
3a. At the end, repeat the process from a CI script that runs the executable, checking that failedTestCases == 0 || currentRerunIterations < expectedRerunIterations
or:
3b. Or wrap the whole test suite into a re-run test module, do the same check for failedTestCases == 0 || currentRerunIterations < expectedRerunIterations, and run Ranorex from the test module
Please let me know what you think about it.
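To illustrate 2a, a minimal sketch of the re-run loop, assuming the failed cases were logged one per line as TestCaseName;RowIndex in failed.txt (the file name and separator are placeholders) and using the /tc and /tcdr arguments mentioned above:

using System;
using System.Diagnostics;
using System.IO;

class RerunFailedTestCases
{
    static void Main()
    {
        // Each line has the form "TestCaseName;RowIndex", written during the failed run.
        foreach (var line in File.ReadAllLines("failed.txt"))
        {
            var parts = line.Split(';');

            // Re-run only this test case and, if data binding is used, only the failed row.
            var startInfo = new ProcessStartInfo
            {
                FileName = "testSuite.exe",
                Arguments = "/tc:" + parts[0] + " /tcdr:" + parts[1],
                UseShellExecute = false
            };

            using (var process = Process.Start(startInfo))
            {
                process.WaitForExit();
            }
        }
    }
}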

Machine parseable error messages

(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures such that downstream tools can include them in UIs?
Example use case:
I ran bazel test //my:target, and one of the actions for //my:target failed because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The tool would also like to report that "username" is a valid variable and that this is a possible misspelling, and thus wants to suggest adding an "e" character.
One way I have thought of to do this is to have my action produce a separate file, //my:target.errors, that is in a separate output group, and to have it write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools.
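For illustration, a downstream tool could then request just those files when building; a sketch, assuming the output group is named "errors" (the group name is hypothetical):
bazel build //my:target --output_groups=errors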
Is there any prior work on this, or does everything just try to parse the human readable output?
I recommend running the error checkers as extra actions.
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new

In Fitnesse: command to run stating which test you would like to start at, running until the end of the test suite

In FitNesse commands: http://<host>:<port>/<suite path and test name>?responder=suite&startTest=TestTwo. I tried to execute this; it executes the test case that is passed in the URL. If we pass the suite path and remove the test name, it executes the whole suite. Is there any way we can run all tests coming after TestTwo?
No, you can run an entire suite or you can run an individual test. Perhaps you can break the suite into smaller sub-suites?
You can use the firstTest parameter when calling the URL. In your case, you can do:
http://<host>:<port>/<suite path>?responder=suite&firstTest=TestTwo
Note that this only works based on the alphabetical order of the full path names of the test cases, as mentioned in the FitNesse wiki here:
firstTest: if present, only tests whose full path names are lexicographically greater (later in alphabetical order) will be run. This is of questionable use since tests are not guaranteed to be run in any particular order.
Alternatively, you can tag certain test cases and execute them using the suiteFilter parameter. You can find the relevant documentation on the same wiki page.
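For example, if the tests you want to run were all given a tag such as rerun (the tag name is just illustrative), the call could look like:
http://<host>:<port>/<suite path>?responder=suite&suiteFilter=rerun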

Can we ignore a particular #tag using SpecFlow

I am using SpecFlow for writing my feature files. My feature files contain "#Tags" (like #Authentication, #Login, #Permission, etc.), and I want to run all of them except #Authentication.
So can we use a tag like:
~#Authentication, so that this will execute all test cases except the ones containing the #Authentication tag?
As you have stated that you are running the tests from the command line using MSTest.exe, you should be able to run tests that are not in a category (according to the command line options) like this:
"%VS100COMNTOOLS%..\IDE\MSTest.exe" /testcontainer:"Project.dll" /category:"!Authentication"
