I'm just embarking on using a test framework for writing unit tests, and on the TDD approach. Having no prior experience, I felt it would be good to go for xUnit, with NUnit as the best alternative. I'm trying to transpose the MSTest methods I've been looking at in my MVC books to xUnit equivalents, and I'm already stumbling.
Specifically the following:
Testing a list of entries for a view model, as in an Index action (from the MVC Unleashed book):
CollectionAssert.AllItemsAreInstancesOfType((ICollection)result.ViewData.Model, typeof(MyObject));
How would you do this in xUnit, or can't it be done like this?
What puts me off is the lack of documentation for xUnit, and I'm wondering whether NUnit is the better option...
Also, it appears that testing code is almost its own language. Would it be fair to say that there is a common set of tests that can be run for all projects?
Regarding TDD: I understand the concept, but are the tests themselves the same as unit tests in what they contain and what they test? I'm not sure what the actual difference is, apart from when they get written!
I am a fan of MSpec.
Helpful links: MSpec installer.
It runs on top of NUnit. There are also MVC extension methods for things like:
result.ShouldBeAView().And().ShouldHaveModelOfType<T>()
A controller test can look like this:
[Subject(typeof(JobsController))]
public class when_viewing_index_page : specifications_for_jobs_controller
{
    static ActionResult result;

    Establish context = () =>
        result = controller.Index();

    It should_return_a_view_result = () =>
        result.ShouldBeAView();

    It should_return_a_view_with_formviewdata = () =>
        result.ShouldBeAView().And().ShouldHaveModelOfType<IList<Job>>();

    It should_contain_a_list_of_jobs = () =>
        result.Model<IList<Job>>().ShouldNotBeEmpty();
}
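The specifications_for_jobs_controller base class is not shown above; a minimal sketch of what it could look like (the parameterless JobsController constructor is an assumption, as a real app would typically wire in fakes here):

public abstract class specifications_for_jobs_controller
{
    protected static JobsController controller;

    Establish context = () =>
        controller = new JobsController();
}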
I don't know about xUnit, but NUnit has collection constraints; have a look at the NUnit documentation. For your example you could use this code:
Assert.That((ICollection)result.ViewData.Model, Is.All.InstanceOf<MyObject>());
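For the xUnit side of the original question: newer versions of xUnit.net ship an Assert.All that can express the same check. A minimal sketch, assuming xUnit.net 2.x (Cast requires a using System.Linq; directive):

// every item in the model collection must be a MyObject
Assert.All(
    ((System.Collections.IEnumerable)result.ViewData.Model).Cast<object>(),
    item => Assert.IsType<MyObject>(item));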
I think it would be unfair for me to comment on which testing framework you should use, having only used NUnit, but it's always been enough for me.
Regarding your second and third points, however: the majority of tests are very similar, but that is the point of TDD; start from the base and keep refactoring until you have "no more, and no less".
Test-Driven Development and Test-After Development are both forms of unit testing, but in TDD the tests are driving the development, ensuring that every line you write has a purpose and is fully tested.
There are disadvantages: sometimes you have to write the code before the tests (especially when starting out) so you can figure out how to test it, so it may be fair to say your development will contain a bit of both approaches.
TDD, and any form of automated testing, is definitely worth the effort, if only for the satisfaction of seeing hundreds of tests pass in your application on your final test run.
I'm writing tests with EUnit and some of the Units Under Test need to read a data file via file:consult/1. My tests make assumptions about the data that will be available in /priv, but the data will be different in production. What's the best way to accomplish this?
I'm a total newbie in Erlang and I have thought of a few solutions that feel a bit ugly to me. For example,
Put both files in /priv and use a macro (e.g., "-ifdef(EUNIT)") to determine which one to pass to file:consult/1. This seems too fragile/error-prone to me.
Get Rebar to copy the right file to /priv.
Also please feel free to point out if I'm trying to do something that is fundamentally wrong. That may well be the case.
Is there a better way to do this?
I think both of your solutions would work. It is rather a question of maintaining such tests, and both of them rely on external setup (the file existing, and containing the right data).
For me, the easiest way to keep the contents of such a file local to a given test is mocking, making file:consult/1 return the value you want (note the unstick option, needed because file is a sticky standard-library module):
7> meck:new(file, [unstick, passthrough]).
ok
8> meck:expect(file, consult, fun( _File ) -> {some, data} end).
ok
9> file:consult(any_file_at_all).
{some,data}
It will work, but there are two more things you could do.
First of all, you should not be testing file:consult/1 at all. It is part of the standard library, and you can assume it works correctly. Rather, you should test the functions that use the data read from this file, and of course pass them some "in-test-created" data. That gives you a nice separation between the data source and the parsing (acting on the data), and later it might be simpler to replace file:consult/1 with a call to an external service, or something like that.
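For example, a minimal sketch of that separation (module, tuple shapes, and function names are all made up for illustration):

%% entries.erl: reading is a thin wrapper, parsing is a pure function
-module(entries).
-export([load/1, parse/1]).

load(File) ->
    {ok, Terms} = file:consult(File),
    parse(Terms).

parse(Terms) ->
    %% keep only well-formed {entry, Name, Value} tuples
    [{Name, Value} || {entry, Name, Value} <- Terms].

%% entries_tests.erl: the test never touches the file system
-module(entries_tests).
-include_lib("eunit/include/eunit.hrl").

parse_test() ->
    Data = [{entry, a, 1}, {garbage, x}, {entry, b, 2}],
    ?assertEqual([{a, 1}, {b, 2}], entries:parse(Data)).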
The other thing is that a problem with testing something should be a sign of a bad smell for you; you might think a little about redesigning your system. I'm not saying that you have to, but such problems are a good indicator that it could be justified. If you are testing some functionality X and you would like it to behave one way in production and another way in tests (read one file or another), then maybe this behaviour should be injected. In other words, maybe the file your function reads should be a parameter of that function. If you would still like some "default file to read" behaviour in your production code, you could use something like this:
function_with_file_consult(A, B, C) ->
    function_with_file_consult(A, B, C, "default_file.dat").

function_with_file_consult(A, B, C, File) ->
    %% ... actual function logic ...
This allows you to use the shorter version in production, and the longer one just for your tests.
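An EUnit test could then simply inject a fixture file (the module name and path below are only examples):

function_with_file_consult_test() ->
    %% exercise the injectable variant against a known test file
    Result = my_module:function_with_file_consult(a, b, c, "test/fixture.dat"),
    ?assertMatch([_ | _], Result).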
I'm writing a set of wizards to enhance source code editing in the IDE, and the text manipulation is done through IOTAEditPosition, which I get from BorlandIDEServices.
How can I automate tests against my methods so I can assert that the text manipulation was done right? IOTAEditPosition can't be stubbed or mocked, because I want to simulate the same environment as the production code (the real IOTAEditPosition does some automatic work with indentation and line breaks; it has several methods for things like checking whether a character is an identifier or a word separator, text search mechanisms, and so on; it really does a lot of things that are specific to the IDE editor).
In this case, I don't care about purism like unit vs. integration tests. Call it an integration test if you will; the fact is that my code depends totally on, and is very sensitive to, the behavior of IOTAEditPosition, and I need to test them together.
Just to make it a bit clearer: in my mind, the ideal assertion would be something like:
// DUnit test case
MyObject.LoadText(SampleText);
MyObject.ManipulateText;
CheckEquals(ExpectedManipulatedText, MyObject.GetManipulatedText);
As the actual underlying text manipulation is done by IOTAEditPosition, I need an implementation of it that behaves exactly like the one I get in production code.
I'm going to be doing a presentation on Spock next week and as part of the presentation I need to give a demonstration. I have used Spock a bit before for a project but haven't used it in about a year or so.
The demonstration needs to be more than just a "hello world" type demonstration. I'm looking for ideas of cool things I can demonstrate using Spock ... Any ideas?
The only thing I have now is the basic example that is included in the "getting started" section of the Spock website.
def "length of Spock's and his friends' names"() {
expect:
name.size() == length
where:
name << ["Kirk", "Spock", "Scotty"]
length << [4,5,6]
/*
name | length
"Spock" | 5
"Kirk" | 4
"Scotty" | 6
*/
}
Spock can be the same tool for end-to-end testing as well as unit testing. Since it is based on Groovy, you can build your own simple domain-specific-language-based automation framework on top of it. I have around 5000 automated tests running as part of CI using such a framework.
For acceptance testing:
- power asserts: focus on how easy it is to interpret the failed assertions
- BDD with given-when-then blocks
- data-driven specs and unrolling
- business-friendly reporting
- powerful UI automation by marrying it with Geb
For unit and integration testing (a sketch follows this list):
- interaction-based testing and mocking
- simplified XML (and similar) testing thanks to Groovy goodies
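As a demo-sized sketch of the interaction-based style (Publisher and Subscriber are made-up classes, close to the example in the Spock documentation):

import spock.lang.Specification

interface Subscriber {
    void receive(String message)
}

class Publisher {
    List<Subscriber> subscribers = []
    void send(String message) { subscribers*.receive(message) }
}

class PublisherSpec extends Specification {
    def publisher = new Publisher()
    def subscriber = Mock(Subscriber)

    def setup() {
        publisher.subscribers << subscriber
    }

    def "delivers messages to registered subscribers"() {
        when:
        publisher.send("hello")

        then:
        1 * subscriber.receive("hello")   // exactly one call, with this argument
    }
}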
Get more ideas from their documentation
I am working with Erlang and EUnit to do unit tests, and I would like to write a test runner to automate the running of my unit tests. The problem is that eunit:test/1 seems to only return "error" or "ok" and not a list of tests and what they returned in terms of what passed or failed.
So is there a way to run tests and get back some form of a data structure of what tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as EUnit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also the documentation for the verbose option and structured reports.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test is weaker on available assertion macros but produces very comprehensible HTML reports.
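For example, a typical invocation (directory names are only illustrative):

ct_run -dir test -logdir ct_logs

This compiles and runs every *_SUITE module under test/ and writes the HTML reports to ct_logs/.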
There is no easy or documented way, but there are currently two ways you can do this. One is to pass the option event_log when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
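A skeleton following the pattern of eunit_tty.erl might look like this (the module name is arbitrary; the five callbacks are those of the eunit_listener behaviour):

-module(my_listener_module).
-behaviour(eunit_listener).
-export([start/0, start/1,
         init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

init(_Options) -> [].                          %% initial state: an empty result list
handle_begin(_Kind, _Data, St) -> St.          %% a test or group is starting
handle_end(_Kind, Data, St) -> [Data | St].    %% collect each result proplist
handle_cancel(_Kind, Data, St) -> [Data | St]. %% collect cancellations as well
terminate({ok, _Summary}, St) ->               %% the run finished; St holds all results
    io:format("~p~n", [lists:reverse(St)]);
terminate({error, Reason}, _St) ->
    io:format("run failed: ~p~n", [Reason]).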
I've just pushed to GitHub a very trivial listener which stores the EUnit results in a DETS table. This can be useful if you need to process the results further, since they're stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit and NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the build log. Then NCover kicks in, generating our code coverage results.
Two things could pose as disadvantages:
- NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so, since NCover invokes NUnit, I have to run the unit tests twice: once to get the test results and once to get coverage results out of NCover. Naturally, that makes my builds last longer.
- Setting up the arguments for properly invoking NCover wasn't trivial. But since I set it up, I have never had to maintain it.
In any case, the resulting reporting (especially the trend aspect) is very useful for monitoring how our code evolves over time. Especially if you're working on a platform (as opposed to short-lived projects), trend reports are of great value.
EDIT
I'll try to present, in a quick & dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that has been reworked to achieve our goals, in particular the section that was named "Run MSTest for Test Assemblies". The whole implementation is here, after some cleanups to make the flow easier to understand (the pic was too large to be inserted directly here).
First, some additional arguments are implemented in the build process template and are then available to be set in each build definition:
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" activity.
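To make that concrete: for a hypothetical test assembly C:\Builds\1\Bin\123_nunit.dll and a test-results directory of C:\TestResults, the formatted arguments come out roughly as:

C:\Builds\1\Bin\123_nunit.dll /xml=C:\TestResults\123_nunit.xml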
If this succeeds and coverage is enabled for this build, we move on to "Generate NCover NCCOV" (which produces the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
NUnitPath,
Path.GetFileName(nunitDLL),
Path.GetDirectoryName(nunitDLL),
Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
BuildDetail.BuildNumber)
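To illustrate with entirely hypothetical values (a test assembly C:\Builds\1\Bin\123_nunit.dll, NUnit at C:\Tools\NUnit\nunit-console.exe, and a build number of MyProj_20130101.1), this formats to roughly:

"C:\Tools\NUnit\nunit-console.exe" "123_nunit.dll" //w "C:\Builds\1\Bin" //x "C:\Builds\1\123.nccov" //literal //ias 123 //onlywithsource //p "MyProj_20130101.1"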
All of this runs inside the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with "Merge NCCovs", where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject
)
Once this has run, all NCCOV files of the build have been merged into one NCCOV file named after the build, and the trend file (which monitors the build definition throughout its life) has been updated with the elements of the current build.
Now we only have to generate the final HTML report; this is done in "Generate final NCover rep", where we invoke NCover.Reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
BuildDetail.BuildNumber,
PathForNCoverResults,
NCoverDropLocation,
BuildDetail.BuildDefinition.TeamProject,
BuildType
)