I have a test filter criterion that follows the format namespace.subnamespace.testClass. I want a new MSTest test filter for just namespace.subnamespace that runs all the test classes in that subnamespace.
I originally tried dotnet test --filter FullyQualifiedName=namespace.subnamespace, which does not work for me. I also tried dotnet test --filter FullyQualifiedName~namespace.subnamespace (swapped = for ~), which does work.
I am curious: does the original approach work at all? It seems like it should, unless I'm misunderstanding what a "FullyQualifiedName" is.
FullyQualifiedName includes the namespace, the class name, and the method name. Hence the = (exact match) operator won't match a namespace prefix on its own, but the ~ (contains) operator will.
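For illustration (the namespace and test names here are invented): if a test's FullyQualifiedName is MyApp.Services.OrderTests.Creates_Order, then

dotnet test --filter "FullyQualifiedName=MyApp.Services"

matches nothing, because no test's full name equals the bare namespace, while

dotnet test --filter "FullyQualifiedName~MyApp.Services"

matches every test whose full name contains that prefix.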
The docs are a bit vague on the operators and their values. This blog post has come to my rescue on many occasions.
Also blogged on this topic:
https://jessehouwing.net/staged-execution-of-tests-in-azure-devops-pipelines/
We can select multiple scenarios by including the following on the command line:
-Dcucumber.options="--tags #S1,#S2,#S6"
And if I want to exclude #S6 I can do so with:
-Dcucumber.options="--tags ~#S6"
But if I want to include #S1 and #S2 and exclude #S6, all tags are ignored with:
-Dcucumber.options="--tags #S1,#S2,~#S6"
and all tags are also ignored if I try to double up on the options with:
-Dcucumber.options="--tags #S1,#S2" -Dcucumber.options="--tags ~#S6"
Is there a command line way to include and exclude in the one command line?
The reason I would like to do this is to run all of a type of test but exclude tests that use some external system that may be down temporarily.
EDIT: Turns out I was not fully aware of the difference between AND and OR in the Cucumber world. This SO answer and this article were good references; look for "Logical OR" and "Logical AND":
So to run (S1 OR S2) AND NOT S3:
mvn test -Dkarate.options="--tags #S1,#S2 --tags ~#S3"
Using the Java (or Runner) API, just use commas for OR and separate strings for AND:
Results results = Runner.path("classpath:com/myco/some.feature")
.tags("#S1,#S2", "~#S3")
.parallel(5);
For really advanced cases, also see: https://stackoverflow.com/a/67224240/143475
I want to query all gtest cases with Bazel. The "--gtest_filter" parameter can only be used with the "bazel test" command, and I tried "bazel query //xxx:all", but that only shows the test targets defined in the BUILD file. I want to get the list of cases from the xxx.cc files.
This is not a job that bazel query can do. Query operates on the graph structure of targets. A fundamental design decision of Bazel is that this graph can be computed by looking only at BUILD files and the .bzl files they depend on. In particular, parsing source files is not allowed.
(The argument to --test_filter is simply passed through the test runner; Bazel does not know what it represents.)
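If what you actually need is the list of cases, one workaround is to stay at the target level with query and then ask the test binary itself, since googletest can list its own cases. A sketch (the //xxx:my_test target name is invented):

# list the test targets Bazel knows about (BUILD-file level only)
bazel query 'tests(//xxx:all)'

# list the cases inside one test binary via googletest's own flag
bazel run //xxx:my_test -- --gtest_list_tests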
If you use CLion with the Bazel plugin, you get a test-tree view for googletest tests:
This works even with Catch2 (though for Catch2 the view is not as nice). I guess there is some IDE magic here; nevertheless, it gives you what you want. I assume you could also come up with some type of Bazel Aspect that produces this information for you.
I tested this also with Lavender (with minor modifications) and Visual Studio, which likewise gives me a list of all tests in the test overview:
I am working with Erlang and EUnit for unit tests, and I would like to write a test runner to automate running them. The problem is that eunit:test/1 seems to return only "ok" or "error", not a list of tests and what each returned in terms of pass or fail.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also the documentation for the verbose option and structured reports.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test lacks EUnit's convenient macros but produces very comprehensible HTML reports.
There is no easy or documented way, but there are currently two ways you can do this. One is to pass the option event_log when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
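For example (my_module is a placeholder), you can read the log back as plain Erlang terms:

> eunit:test(my_module, [event_log]).
> {ok, Events} = file:consult("eunit-events.log").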
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
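To give an idea of the shape, here is a minimal sketch of such a listener (the module name my_listener is hypothetical; the callbacks mirror those in eunit_tty.erl and eunit_surefire.erl). It prints each test's source and status as it finishes, and a summary at the end:

%% my_listener.erl: minimal eunit_listener sketch
-module(my_listener).
-behaviour(eunit_listener).

-export([start/0, start/1]).
-export([init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

%% the state is a list of per-test data, accumulated as tests end
init(_Options) -> [].

handle_begin(_Kind, _Data, St) -> St.

%% each finished test arrives here; Data is a property list whose
%% 'source' is {Module, Function, Arity} and 'status' is ok or an error
handle_end(test, Data, St) ->
    io:format("~p: ~p~n", [proplists:get_value(source, Data),
                           proplists:get_value(status, Data)]),
    [Data | St];
handle_end(_Kind, _Data, St) -> St.

handle_cancel(_Kind, _Data, St) -> St.

%% Data here is a summary property list with pass/fail/skip/cancel counts
terminate({ok, Data}, _St) ->
    io:format("passed ~p, failed ~p~n",
              [proplists:get_value(pass, Data, 0),
               proplists:get_value(fail, Data, 0)]),
    ok;
terminate({error, _Reason}, _St) ->
    ok.

You would then run eunit:test(my_module, [{report, {my_listener, []}}]) to activate it.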
I've just pushed to GitHub a very trivial listener which stores the EUnit results in a DETS table. This can be useful if you need to process those data further, since they're stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
I want to call an exe file in my FitNesse test case.
How can I call an exe file from my test cases?
With FitNesse, you'll need to write a fixture to run the EXE (and/or find a FitNesse plugin that does it for you). The easiest way is to write a simple fixture and just run
Runtime.getRuntime().exec(<cmd>);
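For illustration, a bare-bones Slim-style fixture along those lines might look like this (the class name RunExe and its methods are invented for this sketch, not part of FitNesse):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RunExe {
    private String command;

    public void setCommand(String command) {
        this.command = command;
    }

    // runs the command and reports its exit code to the test table
    public int exitCode() throws Exception {
        Process p = Runtime.getRuntime().exec(command);
        p.waitFor();
        return p.exitValue();
    }

    // runs the command and reports its trimmed standard output
    public String output() throws Exception {
        Process p = Runtime.getRuntime().exec(command);
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        p.waitFor();
        return sb.toString().trim();
    }
}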
While Steven Mastandrea's answer is right, it does require you to write a Java class extending one of the fixtures provided by FitNesse, compile it, put the class files on the FitNesse classpath, and then use it.
There is a much simpler way of doing it if you use Generic Fixture, like this:
!| Generic Fixture |
| exec | mycommand.exe | | expected output |
Disclaimer: Generic Fixture was written and distributed by me as open source two years ago on SourceForge.
With fitSharp on Windows, you can write this:
|with|type|System.Diagnostics.Process|
|with|start|C:\dev\myFileImporter.exe||-f c:\dev\data\file.txt|
|wait for exit|
I would suggest taking the CommandLineFixture as a baseline and expanding it from there. The CommandLineFixture has a lot of functionality, is well commented, and is easily extended should you wish to do so.
This fixture incorporates Steven's code, but has a lot more functionality than simply exec, including being able to asynchronously spawn processes, search output for expected results, etc.
Post a comment if you feel some examples of how to use it would be helpful!
I just caught myself doing something I do a lot, and wanted to generalize it, express it, share it and see who else is following this general practice, to find some other example situations where it might be relevant.
The general practice is getting something wrong first, on purpose, to establish that everything else is right before undertaking the current task.
What I was trying to do, specifically, was to find examples in our code base where the dojo TextArea widget was used. I knew (because I had it in front of me - existence proof) that the TextBox widget was present in at least one file. So I looked first for what I knew was there:
grep -r digit.form.TextBox | grep -v svn
This wasn't right - I had made a common (for me) mistake of leaving off the star, so I fixed that:
grep -r digit.form.TextBox * | grep -v svn
which found no results! Quick comparison with the file I was looking at showed me I had misspelled "dijit":
grep -r dijit.form.TextBox * | grep -v svn
And now I got results. Cool; doing it wrong first on purpose meant my query was correct except for looking for the wrong thing, so now I could construct the right query:
grep -r dijit.form.TextArea * | grep -v svn
and be confident that when it gave me no results, it was because there are no such files, and not because I had malformed the query.
I'll add three other examples as answers; please add any others you're aware of.
TDD
The red-green-refactor cycle of test-driven development may be the archetype of this practice. With red, demonstrate that the functionality doesn't exist; then make it exist and demonstrate that you've done so by witnessing the green bar.
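A tiny JUnit-flavored sketch (the Slug class and test names are invented for the example):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SlugTest {
    // red: written before Slug exists, so the failing (or non-compiling)
    // run demonstrates the functionality isn't there yet
    @Test
    public void lowercasesAndHyphenates() {
        assertEquals("hello-world", Slug.of("Hello World"));
    }
}

// green: the simplest code that turns the bar green
class Slug {
    static String of(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", "-");
    }
}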
http://support.microsoft.com/kb/275085
This VBA routine turns off the "subdatasheets" property for every table in your MS Access database. The user is instructed to make sure error-handling is set to "Break only on unhandled errors." The routine identifies tables needing the fix by the error that is thrown. I'm not sure this precisely fits your question, but it's always interesting to me that the error is being used in a non-error way.
Here's an example from VBA:
I also use camel case when I Dim my variables, e.g. ThisIsAnExampleOfCamelCase. As soon as I exit the VBA code line, if Access doesn't change the lower-case variable to camel case, then I know I've got a typo. [Or Option Explicit isn't set, which is the post topic.]
I also use this trick, several times an hour at least.
arrange - assert - act - assert
I sometimes like, in my tests, to add a counter-assertion before the action to show that the action is actually responsible for producing the desired outcome demonstrated by the concluding assertion.
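A small invented example of the shape, in JUnit:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class CartTest {
    @Test
    public void addingAnItemGrowsTheCart() {
        List<String> cart = new ArrayList<>();  // arrange
        assertTrue(cart.isEmpty());             // assert: outcome absent before acting
        cart.add("book");                       // act
        assertEquals(1, cart.size());           // assert: the act produced it
    }
}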
When in doubt of my spelling, and of my editor's spell-checking
We use many editors. Many of them highlight misspelled words as I type them; some do not. I rely on automatic spell checking, but I can't always remember whether the editor of the moment has that feature. So I'll enter, say, "circuitx" and hit space. If it highlights, I'll back up over the space and the "x" and type another space, and learn that I spelled "circuit" correctly; but if it doesn't highlight, I'll copy the word and paste it into a known spell-checker to see whether I did.
I'm not sure it's the best way to go, as it does not prevent you from misspelling the final command, for example typing "TestArea" or something like that instead of "TextArea" (your fingers just have to slip a little for such a mistake).
IMHO the best way is to run your "final" command, but on two sample files first: one containing the requested text, and another that doesn't.
In other words, instead of running a "similar" command, run the real one, but over "similar" data.
(Not sure if this would be a good idea to try for real!)
For example, you might give the system to the users for testing and tell them the password to get started is "Apple".
You know the users are fully up and ready to test (everything is installed and connections to databases working) when they contact you and say the password doesn't work (it's actually "Orange").