I love the way you can write clean, concise code in Dart, but it appears that Dart is one of those languages that is easy to write but hard to test!
For example, given the following fairly simple method, how does one go about unit testing it?
typedef void HandleWebSocket(WebSocket webSocket);

Router createWebSocketRouter(HttpServer server, String context, HandleWebSocket handler) {
  var router = new Router(server);
  router.serve(context).transform(new WebSocketTransformer()).listen(handler);
  return router;
}
You need to somehow replace the new Router() with some sort of factory method that returns a mock. The mock then needs to return a mock when serve is called. That then needs to have a mock transform method that returns a mock stream... and at that point most people will give up!
I have managed to write a unit test using the above approach, but as it required 80-odd lines and polluted the actual class with a factory method, I can hardly say I am happy with it!
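To illustrate the kind of seam involved, here's a sketch (my actual code used a class-level factory; this version uses an optional parameter instead, and the names are mine):

typedef Router RouterFactory(HttpServer server);

Router createWebSocketRouter(HttpServer server, String context,
    HandleWebSocket handler, [RouterFactory createRouter]) {
  // Default to the real Router when no factory is supplied; tests pass a
  // factory that returns a mock instead.
  if (createRouter == null) {
    createRouter = (s) => new Router(s);
  }
  var router = createRouter(server);
  router.serve(context).transform(new WebSocketTransformer()).listen(handler);
  return router;
}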
Is there a better way of doing this?
I have a SpecFlow test that fails if you run it enough times. How can I take the existing SpecFlow test and make it run repeatedly until it fails? (Ideally I'd also like to count how many attempts it takes.)
My initial guess was to just call the binding methods that the test script ultimately calls, but I keep getting null reference exceptions. Apparently SpecFlow is initialising something that I'm not.
My next guess was to try to launch the auto-generated code for the test feature, but it seems to want all sorts of data from the SpecFlow framework that I don't know how to generate.
All I want to do is run the test multiple times. Surely there must be some way to accomplish this utterly trivial task?
It seems I was trying too hard. Here's what I came up with:
using System;
using NUnit.Framework;

[TestFixture]
public sealed class StressTest
{
    [Test]
    public void Test()
    {
        // Drive the SpecFlow-generated fixture directly.
        var thing = new FoobarFeature();
        thing.FeatureSetup();
        thing.TestInitialize();

        var n = 0;
        while (true)
        {
            Console.WriteLine("------------ Attempt {0} ----------------", n);
            thing.Scenario1();
            thing.Scenario2();
            thing.Scenario3();
            n++;
        }
    }
}
I've got a Foobar.feature file, which autogenerates a Foobar.feature.cs file containing the FoobarFeature class. The names of the scenario methods obviously change depending on what's in your feature file.
I'm not 100% sure this works for tests having complex setup / teardown, but it works for my specific case...
I have some integration tests written for MsTest. The integration tests have the following structure:
[TestClass]
public class When_Doing_Some_Stuff
{
    [TestInitialize]
    public void TestInitialize()
    {
        // create the integration test context
        EstablishContext();

        // trigger the behaviour under test
        When();
    }

    protected void EstablishContext()
    {
        // call services to set up the context
    }

    protected void When()
    {
        // call the service method being tested
    }

    [TestMethod]
    public void Then_Result_Is_Correct()
    {
        // assert against the result
    }
}
I need to filter the code coverage results of a function by who is calling it. Namely, I want the coverage to count only if the function is called from a function named "When", or from one that has a certain attribute applied to it.

As things stand, if a method in the system is called during the EstablishContext part of some test, it is marked as visited, even though it is not what the test is exercising.
I believe there is no filter for this, and I would like to make the changes myself, as OpenCover is... well... open. But I really have no idea where to start. Can anyone point me in the right direction?
You might be better off addressing this with the OpenCover developers; hmm... that would be me, then. If you look on the wiki you will see that coverage by test is one of the eventual aims of OpenCover.
If you look at the forks you will see a branch from mancau. He initially indicated that he was going to try to implement this feature, but I do not know how far he has progressed or whether he has abandoned the attempt (what he has submitted so far is just a small re-introduction of code to allow the tracing of calls).
OpenCover tracks coverage by emitting a visit identifier and updating the next element of an array that resides in shared memory, shared between the profiler (C++/native/32- and 64-bit) and the console (C#/managed/AnyCPU). What I suggested to him, and this will be my approach when I get round to it if no one else does (it is also why I emit the visit data in this way), was to add markers into the sequence to indicate that a particular test method has been entered or left (filtered on the [TestMethod] attribute, perhaps); when the results are processed in the console, this information can then be added to the model in some way. Threading may also be a concern, as tests run in parallel could interleave their visit points.
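To make that concrete, the console-side processing might look something like this sketch (the flag values and layout are mine and purely illustrative, not OpenCover's actual format; ids are assumed to fit in the low 30 bits):

using System.Collections.Generic;

// Ordinary entries in the shared-memory sequence are visit-point ids;
// sentinel entries bracket the test method that produced the visits.
static class CoverageByTestSketch
{
    const uint TestEnterFlag = 0x80000000; // "entered a [TestMethod]" marker
    const uint TestLeaveFlag = 0xC0000000; // "left a [TestMethod]" marker

    // Map each visit-point id to the set of test ids that covered it.
    public static Dictionary<uint, HashSet<uint>> Process(IEnumerable<uint> stream)
    {
        var coverage = new Dictionary<uint, HashSet<uint>>();
        uint? currentTest = null;

        foreach (var entry in stream)
        {
            if ((entry & TestLeaveFlag) == TestLeaveFlag)
            {
                currentTest = null;                   // left the test method
            }
            else if ((entry & TestEnterFlag) == TestEnterFlag)
            {
                currentTest = entry & ~TestEnterFlag; // entered test with this id
            }
            else if (currentTest != null)
            {
                HashSet<uint> tests;
                if (!coverage.TryGetValue(entry, out tests))
                {
                    tests = new HashSet<uint>();
                    coverage[entry] = tests;
                }
                tests.Add(currentTest.Value);         // attribute visit to test
            }
        }
        return coverage;
    }
}

This assumes a single-threaded stream; tests run in parallel would need the markers to carry a thread identifier as well.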
Perhaps you will think of a different approach and I look forward to hearing your ideas.
So I think I'm perhaps not fully understanding how you would use an IoC container for integration tests.
Let's assume I have a couple of classes:
public class EmailComposer : IComposer
{
    private readonly IEmailFormatter _formatter;

    public EmailComposer(IEmailFormatter formatter)
    {
        _formatter = formatter;
    }

    // ...

    public string Write(string message)
    {
        // ...
        return _formatter.Format(message);
    }
}
OK, so for use in the real application (I'm using Autofac here) I'd create a module and do something like:
protected override void Load(ContainerBuilder containerBuilder)
{
    containerBuilder.RegisterType<HtmlEmailFormatter>().As<IEmailFormatter>();
}
Makes perfect sense and works great.
When it comes to unit tests I wouldn't use the IoC container at all; I'd just mock out the formatter in the tests. Again, works great.
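For example, with Moq (any isolation framework would do; the expected value is made up):

using Moq;
using NUnit.Framework;

[TestFixture]
public class EmailComposerTests
{
    [Test]
    public void Write_DelegatesToTheFormatter()
    {
        // Mock the dependency directly; no container involved.
        var formatter = new Mock<IEmailFormatter>();
        formatter.Setup(f => f.Format("hi")).Returns("<p>hi</p>");

        var composer = new EmailComposer(formatter.Object);

        Assert.AreEqual("<p>hi</p>", composer.Write("hi"));
        formatter.Verify(f => f.Format("hi"), Times.Once());
    }
}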
OK now when it comes to my integration tests...
Ideally I'd be running the full stack during integration tests, obviously, but let's pretend the HtmlEmailFormatter is some slow external web service, so I decide it's in my best interest to use a test double instead.

But... I don't want to use the test double in all of my integration tests, just a subset (a set of smoke-test-style tests that are quick to run).

At this point I want to inject a mock version of the web service, so that I can validate that the correct methods were still called on it.
So, the real question is:
If I have a class whose constructor takes multiple parameters, how do I make one of those parameters resolve to a specific instance (i.e. the correctly set-up mock) while the rest get populated by Autofac?
I would say you use SetUp and TearDown (NUnit) or ClassInitialize and ClassCleanup (MSTest) for this. In the initialize method you register your temporary test double, and in the cleanup you restore the normal state.

Having the DI container resolve all the dependencies for you has the benefit of getting an entire object graph of dependencies resolved. However, if there's only a single test in which you want a different implementation, I would use a mocking framework instead.
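As a sketch of that (Autofac, Moq and NUnit; EmailModule is an assumed name for your production module):

using Autofac;
using Moq;
using NUnit.Framework;

[TestFixture]
public class EmailComposerSmokeTests
{
    private IContainer _container;
    private Mock<IEmailFormatter> _formatterMock;

    [SetUp]
    public void SetUp()
    {
        var builder = new ContainerBuilder();

        // Register the production components as usual.
        builder.RegisterModule(new EmailModule());
        builder.RegisterType<EmailComposer>();

        // In Autofac the last registration for a service wins, so this
        // overrides the production IEmailFormatter for this fixture only.
        _formatterMock = new Mock<IEmailFormatter>();
        builder.RegisterInstance(_formatterMock.Object).As<IEmailFormatter>();

        _container = builder.Build();
    }

    [TearDown]
    public void TearDown()
    {
        // Dispose the container so the next fixture starts from a clean state.
        _container.Dispose();
    }

    [Test]
    public void Write_StillCallsTheFormatter()
    {
        var composer = _container.Resolve<EmailComposer>();
        composer.Write("hello");
        _formatterMock.Verify(f => f.Format("hello"), Times.Once());
    }
}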
So I'm starting to catch the TDD bug but I'm wondering if I'm really doing it right... I seem to be writing A LOT of tests.
The more tests the better, sure, but I've got a feeling that I'm overdoing it. And to be honest, I don't know how long I can keep up writing these simple, repetitive tests.
For instance, these are the LogOn actions from my AccountController:
public ActionResult LogOn(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        returnUrl = "/";

    var viewModel = new LogOnForm()
    {
        ReturnUrl = returnUrl
    };

    return View("LogOn", viewModel);
}
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult LogOn(LogOnForm logOnForm)
{
    try
    {
        if (ModelState.IsValid)
        {
            AccountService.LogOnValidate(logOnForm);
            FormsAuth.SignIn(logOnForm.Email, logOnForm.RememberMe);
            return Redirect(logOnForm.ReturnUrl);
        }
    }
    catch (DomainServiceException ex)
    {
        ex.BindToModelState(ModelState);
    }
    catch
    {
        ModelState.AddModelError("*", "There was a server error trying to log on, try again. If your problem persists, please contact us.");
    }

    return View("LogOn", logOnForm);
}
Pretty self explanatory.
I then have the following suite of tests
public void LogOn_Default_ReturnsLogOnView()
public void LogOn_Default_SetsViewDataModel()
public void LogOn_ReturnUrlPassedIn_ViewDataReturnUrlSet()
public void LogOn_ReturnUrlNotPassedIn_ViewDataReturnUrDefaults()
public void LogOnPost_InvalidBinding_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_InvalidBinding_DoesntCallAccountServiceLogOnValidate()
public void LogOnPost_ValidBinding_CallsAccountServiceLogOnValidate()
public void LogOnPost_ValidBindingButAccountServiceThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingButAccountServiceThrows_DoesntCallFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModelButFormsAuthThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingAndValidModel_CallsFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModel_RedirectsToReturnUrl()
Is that overkill? I haven't even shown the services tests!
Which ones (if any) can I cull?
TIA,
Charles
It all depends on how much coverage you need / want and how much dependability is an issue.
Here are the questions you should ask yourself:
Does this unit test help implement a feature / code change that I don't already have?
Will this unit test help regression test/debug this unit if I make changes later?
Is the code needed to satisfy this unit test non-trivial enough to deserve a test?
Regarding the third one, I remember when I started writing unit tests (I know, not the same thing as TDD) I would have tests that went like:
string expected = "some value";
TypeUnderTest target = new TypeUnderTest();
target.PropertyToTest = expected;
string actual = target.PropertyToTest;
Assert.AreEqual<string>(expected, actual);
I could have done something more productive with my time like choose a better wallpaper for my desktop.
I recommend this article by ASP.NET MVC book author Steve Sanderson:
http://blog.codeville.net/2009/08/24/writing-great-unit-tests-best-and-worst-practises/
I'd say you are doing a little more than you probably have to. While it is nice to test every possible path your code can take, some paths just aren't very important or don't result in real differences in behavior.
In your example, take LogOn(string returnUrl).

The first thing you do there is check the returnUrl parameter and reassign it to a default value if it is null or empty. Do you really need a whole unit test just to make sure that one line of code happens as expected? It isn't a line likely to break easily.

Most changes that might break that line would be things that throw a compile error. A change in the default value is possible (maybe you decide later that "/" isn't a good default), but in your unit test I bet you hard-coded the check for "/", didn't you? So a change in the value necessitates a change in your test, which means you aren't testing your behaviour; you are testing your data.

You can test the behaviour of the method by simply having one test that does NOT supply a parameter. That will hit the "set default" part of the routine while still verifying that the rest of the code behaves well too.
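For example, something along these lines (construction details depend on your controller's dependencies):

using System.Web.Mvc;
using NUnit.Framework;

[TestFixture]
public class AccountControllerLogOnTests
{
    [Test]
    public void LogOn_NoReturnUrl_RendersLogOnViewWithADefault()
    {
        // Supply whatever test doubles your controller needs; the GET action
        // in the question touches neither AccountService nor FormsAuth.
        var controller = new AccountController();

        // The cast picks the GET overload; null exercises the "set default" line.
        var result = (ViewResult)controller.LogOn((string)null);

        Assert.AreEqual("LogOn", result.ViewName);
        var model = (LogOnForm)result.ViewData.Model;

        // Assert the behaviour (a default was applied), not the literal value.
        Assert.IsFalse(string.IsNullOrEmpty(model.ReturnUrl));
    }
}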
That looks about right to me. Yes, you will write a lot of unit tests and, initially, it will seem like overkill and (TBH) a waste of time, but stick with it; it'll be worth it. What you should be aiming for (rather than just 100% code coverage) is 100% function coverage. However, if you find that you're writing a lot of UTs for the same method, it's possible that that method is doing too much. Try separating your concerns more. In my experience the body of an Action should do little more than new-up a class to do the real work, and it's that class that you should really be targeting with UTs.
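To make that concrete, here's the shape I mean, reusing the question's services (LogOnProcessor and the two interfaces are names I've invented):

using System;

// The action would new-up (or be injected with) this worker, and the
// worker is what you target with unit tests.
public class LogOnProcessor
{
    private readonly IAccountService _accountService;   // assumed interfaces for
    private readonly IFormsAuthService _formsAuth;      // the question's services

    public LogOnProcessor(IAccountService accountService, IFormsAuthService formsAuth)
    {
        _accountService = accountService;
        _formsAuth = formsAuth;
    }

    // Returns true when sign-in succeeded; validation errors are reported
    // through the callback so the controller can bind them to ModelState.
    public bool Process(LogOnForm form, Action<string, string> addModelError)
    {
        try
        {
            _accountService.LogOnValidate(form);
            _formsAuth.SignIn(form.Email, form.RememberMe);
            return true;
        }
        catch (DomainServiceException ex)
        {
            addModelError("*", ex.Message);
            return false;
        }
    }
}

The POST action then shrinks to calling Process (passing ModelState.AddModelError as the callback) and either redirecting or redisplaying the view, and all the interesting branches live in a plain class you can test without any MVC plumbing.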
Chris
100% coverage is the ideal; it's really helpful if you ever have to massively refactor your code, as the tests will then act as a specification and make sure the behaviour stays correct.
I am personally not 100% TDD (sometimes I'm too lazy), but if you intend to do 100%, maybe you should write some test helpers to take the burden out of these repetitive tests. For example, a helper that tests all your CRUD actions in a standard POST structure, with a callback that lets you pass in extra assertions, might save you a lot of time.
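For example, a minimal sketch of such a helper (all names invented):

using System;
using System.Web.Mvc;
using NUnit.Framework;

public static class CrudTestHelper
{
    // Drives a standard POST action with a valid model and asserts the
    // redirect; the callback lets each test add scenario-specific checks.
    public static void AssertValidPostRedirects<TModel>(
        Func<TModel, ActionResult> action,
        TModel validModel,
        string expectedUrl,
        Action<ActionResult> extraAssertions = null)
    {
        var result = action(validModel);

        var redirect = result as RedirectResult;
        Assert.IsNotNull(redirect, "expected a redirect for a valid POST");
        Assert.AreEqual(expectedUrl, redirect.Url);

        if (extraAssertions != null)
            extraAssertions(result);
    }
}

A call like CrudTestHelper.AssertValidPostRedirects(controller.LogOn, validForm, "/") then replaces two or three of the hand-written tests above.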
I unit test only the code I'm unsure about. Sure, you can never know what will stab you in the back, but writing tests for trivial things seems like overkill to me.
I'm not a unit-testing/TDD guru, but I think it's fine if you do NOT write tests just to have them. They must be useful. Once you are experienced enough with unit testing, you start to feel when they are going to be valuable and when not.
You might like this book.
Edit:
Actually, I just found a quote about this in the chapter on isolation frameworks. It's about overspecifying one particular test, but I guess the idea holds on a more global scope:
Overspecifying the tests
If your test has too many expectations, you may create a test that breaks down with even the lightest of code changes, even though the overall functionality still works. Consider this a more technical way of not verifying the right things. Testing interactions is a double-edged sword: test it too much, and you start to lose sight of the big picture—the overall functionality; test it too little, and you'll miss the important interactions between objects.