Run a SpecFlow test infinitely many times - specflow

I have a SpecFlow test that fails if you run it enough times. How can I take the existing SpecFlow test and make it run repeatedly until it fails? (Ideally I'd like to count how many attempts it takes.)
My initial guess was to just call the binding methods that the test script ultimately calls, but I keep getting null reference exceptions. Apparently SpecFlow is initialising something that I'm not.
My next guess was to try to launch the auto-generated code for the test feature, but it seems to want all sorts of data from the SpecFlow framework that I don't know how to generate.
All I want to do is run the test multiple times. Surely there must be some way to accomplish this utterly trivial task?

It seems I was trying too hard. Here's what I came up with:
using System;
using NUnit.Framework;

[TestFixture]
public sealed class StressTest
{
    [Test]
    public void Test()
    {
        // FoobarFeature is the class auto-generated from Foobar.feature.
        var thing = new FoobarFeature();
        thing.FeatureSetup();
        thing.TestInitialize();

        var n = 0;
        while (true)
        {
            Console.WriteLine("------------ Attempt {0} ----------------", n);
            thing.Scenario1();
            thing.Scenario2();
            thing.Scenario3();
            n++;
        }
    }
}
I've got a Foobar.feature file, which auto-generates a Foobar.feature.cs file containing the FoobarFeature class. The names of the scenario methods obviously change depending on what's in your feature file.
I'm not 100% sure this works for tests with complex setup/teardown, but it works for my specific case...
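If you also want the failing run reported by the test itself rather than just logged to the console, a small variation (a sketch; it keeps the setup calls and placeholder scenario names from above) catches the first failure and fails the test with the attempt count:

using System;
using NUnit.Framework;

[TestFixture]
public sealed class StressTestWithCount
{
    [Test]
    public void Test()
    {
        var thing = new FoobarFeature();
        thing.FeatureSetup();
        thing.TestInitialize();

        var n = 0;
        try
        {
            while (true)
            {
                n++;
                thing.Scenario1();
                thing.Scenario2();
                thing.Scenario3();
            }
        }
        catch (Exception ex)
        {
            // Report how many attempts it took before the first failure.
            Assert.Fail(string.Format("Failed on attempt {0}: {1}", n, ex));
        }
    }
}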

Related

Response.BinaryWrite() in ASP.Net MVC Test Project

I am stuck in a situation: I am creating a test project in ASP.NET MVC, and I am testing a method that is used to download a file. Whenever I try to test this method it gives
OutputStream is not available when a custom TextWriter is used
as the error in Response.BinaryWrite(). Everything else is fine. Can anyone tell me how to resolve this exception? I am using the Moq DLL for mocking.
HttpContext.Current.Response.BinaryWrite()
This is the line that actually generates the exception. My question is: is it worthwhile to test a download method at all, or should I leave it? If it is worthwhile, how do I resolve this issue?
Thanks.
If you are writing a unit test, you generally don't want it to have dependencies you can't control (e.g. writing to the database, the filesystem or an output stream). You can also assume that Response.BinaryWrite does what it is supposed to do.
You could do something like this to get around the error you are seeing.
using System.Web;

public interface IBinaryWriter
{
    void BinaryWrite(byte[] buffer);
}

public class ResponseBinaryWriteWrapper : IBinaryWriter
{
    public void BinaryWrite(byte[] buffer)
    {
        HttpContext.Current.Response.BinaryWrite(buffer);
    }
}
This gives you the ability to inject IBinaryWriter into the class you want to test as a mock, and then check that BinaryWrite is called with the correct byte array. In your production code you inject the concrete ResponseBinaryWriteWrapper class instead.
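For illustration, a rough sketch of what that verification might look like with Moq and NUnit; FileController and its Download() method are hypothetical stand-ins for the class you are actually testing:

using Moq;
using NUnit.Framework;

[TestFixture]
public class FileControllerTests
{
    [Test]
    public void Download_WritesFileBytesToResponse()
    {
        var writer = new Mock<IBinaryWriter>();
        var controller = new FileController(writer.Object); // hypothetical class under test
        var expectedBytes = new byte[] { 1, 2, 3 };

        controller.Download(expectedBytes);

        // Verify the class under test handed the bytes to the injected writer.
        writer.Verify(w => w.BinaryWrite(expectedBytes), Times.Once());
    }
}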

Understanding Mock Unit Testing

I am trying to understand mock-based unit testing, and I started with Moq. This question can be answered in general terms as well.
I am just trying to reuse the code given in How to setup a simple Unit Test with Moq?
[TestInitialize]
public void TestInit() {
    // Arrange.
    List<string> theList = new List<string>();
    theList.Add("test3");
    theList.Add("test1");
    theList.Add("test2");

    _mockRepository = new Mock<IRepository>();
    // The line below returns a null reference...
    _mockRepository.Setup(s => s.list()).Returns(theList);

    _service = new Service(_mockRepository.Object);
}

[TestMethod]
public void my_test()
{
    // Act.
    var myList = _service.AllItems();
    Assert.IsNotNull(myList, "myList is null.");

    // Assert.
    Assert.AreEqual(3, myList.Count());
}
Here are my questions:
1. In TestInit we set theList to contain 3 strings and return it via Moq, and in the line below we get the same list back:
var myList = _service.AllItems(); // Which we know will return 3
So what are we testing here?
2. What are the possible scenarios in which the unit test fails? Yes, we can give a wrong value such as 4 and fail the test, but in real terms I don't see any possibility of it failing.
I guess I'm a little behind in understanding these concepts. I do understand the code, but I'm trying to get the insight behind it. I hope somebody can help me!
The system under test (SUT) in your example is the Service class. Naturally, the field _service uses the true implementation and not a mock. The method tested here is AllItems, not to be confused with the list() method of IRepository. That interface is a dependency of your SUT Service, so it is mocked and passed to the Service class via the constructor. I think you are confused by the fact that the AllItems method seems to only return the result of the list() call on its dependency IRepository, so there is not a lot of logic involved. Maybe reconsider this example and add more expected logic to the AllItems method. For example, you might assert that AllItems returns the same elements provided by the list() method but reordered.
I hope I can help you with this one.
1.) As for this one, you're basically testing the count. Sometimes in a collection the data accumulates, so it doesn't necessarily mean that each time you execute the code the count is always 3. The next time you run, it adds 3 more, so it becomes 6, then 9, and so on.
2.) For unit testing, there are a lot of ways to fail, like wrong computations, arithmetic overflow errors and such. Here's a good article.
The test is supposed to verify that the Service talks to its Repository correctly. We do this by setting up the mock Repository to return a canned answer that is easy to verify. However, with the test as it is now:
Service could perfectly well return any list of 3 made-up strings without communicating with the Repository, and the test would still pass. Suggestion: use Verify() on the mock to check that list() was really called.
3 is basically a magic number here. Changes to theList could put that number out of sync and break the test. Suggestion: use theList.Count instead of 3. Better: instead of checking the number of elements in the list, verify that AllItems() returns exactly what was passed to it by the Repository. You can use a CollectionAssert for that (see the sketch after this answer).
This means getting theList and _mockRepository out of TestInit() to make them accessible in a wider scope, or moving them directly inside the test method, which is probably better anyway (there is not much use for a TestInitialize here).
The test would fail if the Service somehow stopped talking to its Repository, if it stopped returning exactly what the Repository gives it, or if the Repository's contract changed. More importantly, it wouldn't fail if there was a bug in the real implementation for IRepository - testing small units allows you to point your finger at the exact object that is failing and not its neighbors.
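As a rough illustration of those two suggestions (a sketch, assuming MSTest and Moq as in the question; Service, IRepository and list() are the types from the question, and the ToList() call assumes AllItems() returns an IEnumerable<string>):

[TestMethod]
public void AllItems_ReturnsExactlyWhatTheRepositoryProvides()
{
    // Arrange: same canned data as in TestInit above.
    var theList = new List<string> { "test3", "test1", "test2" };
    var mockRepository = new Mock<IRepository>();
    mockRepository.Setup(s => s.list()).Returns(theList);
    var service = new Service(mockRepository.Object);

    // Act.
    var myList = service.AllItems();

    // Assert: the repository was really consulted...
    mockRepository.Verify(s => s.list(), Times.Once());
    // ...and the service returned exactly what the repository gave it.
    CollectionAssert.AreEqual(theList, myList.ToList());
}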

Filtering code coverage by calling function in OpenCover

I have some integration tests written for MsTest. The integration tests have the following structure:
[TestClass]
public class When_Doing_Some_Stuff
{
    [TestInitialize]
    public void TestInitialize()
    {
        // create the Integration Test Context
        EstablishContext();

        // trigger the Integration Test
        When();
    }

    protected void EstablishContext()
    {
        // call services to set up context
    }

    protected void When()
    {
        // call service method to be tested
    }

    [TestMethod]
    public void Then_Result_Is_Correct()
    {
        // assert against the result
    }
}
I need to filter the code coverage results of a function by who is calling it. Namely, I want the coverage to be counted only if the function is called from a function named "When" or from one that has a certain attribute applied to it.
At the moment, even if a certain method in the system is called only in the EstablishContext part of some tests, the method is marked as visited.
I believe there is no filter for this and I would like to make the changes myself, as OpenCover is... well.. open. But I really have no idea where to start. Can anyone point me in the right direction?
You might be better off addressing this with the OpenCover developers; hmmm... that would be me then. If you look on the wiki you will see that coverage by test is one of the eventual aims of OpenCover.
If you look at the forks you will see a branch from mancau - he initially indicated that he was going to try to implement this feature, but I do not know how far he has progressed or whether he has abandoned his attempt (what he has submitted so far is just a small re-introduction of code to allow the tracing of calls).
OpenCover tracks coverage by emitting a visit identifier and updating the next element in an array that resides in shared memory (shared between the profiler (C++/native/32-64 bit) and the console (C#/managed/any-cpu)). What I suggested to him (and this will be my approach when I get round to it, if no one else does first, and it is why I emit the visit data in this way) was that he may want to add markers into the sequence to indicate that execution has entered/left a particular test method (filtered on the [TestMethod] attribute, perhaps), and then, when processing the results in the console, this can be added to the model in some way. Threading may also be a concern, as it could cause the interleaving of visit points for tests run in parallel.
Perhaps you will think of a different approach and I look forward to hearing your ideas.

Overriding IOC Registration for use with Integration Testing

So I think I'm perhaps not fully understanding how you would use an IoC container for integration tests.
Let's assume I have a couple of classes:
public class EmailComposer : IComposer
{
    public EmailComposer(IEmailFormatter formatter)
    {
        ...
    }

    ...

    public string Write(string message)
    {
        ...
        return _formatter.Format(message);
    }
}
OK, so for use during the real application (I'm using Autofac here) I'd create a module and do something like:
protected override void Load(ContainerBuilder containerBuilder)
{
    containerBuilder.RegisterType<HtmlEmailFormatter>().As<IEmailFormatter>();
}
Makes perfect sense and works great.
When it comes to Unit Tests I wouldn't use the IOC container at all and would just mock out the formatter when I'm doing my tests. Again works great.
OK now when it comes to my integration tests...
Ideally I'd be running the full stack during integration tests obviously, but let's pretend the HtmlEmailFormatter is some slow external WebService so I decide it's in my best interest to use a Test Double instead.
But... I don't want to use the Test Double on all of my integration tests, just a subset (a set of smoke-test style tests that are quick to run).
At this point I want to inject a mock version of the webservice, so that I can validate the correct methods were still called on it.
So, the real question is:
If I have a class with a constructor that takes multiple parameters, how do I make one of the parameters resolve to a specific instance (i.e. the correctly set-up mock) while the rest get populated by Autofac?
I would say you use SetUp and TearDown (NUnit) or ClassInitialize and ClassCleanup (MSTest) for this. In the initialize method you register your temporary test double, and in the cleanup you restore the normal state.
Having the DI container supply all the dependencies for you has the benefit of getting an entire object graph of dependencies resolved. However, if there's a single test in which you want to use a different implementation, I would use a mocking framework instead.
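To answer the concrete question of overriding a single constructor parameter, one approach (a sketch, assuming Autofac and Moq; EmailModule is a hypothetical stand-in for your normal production module) is to register everything as usual and then register the mock for just that one service afterwards, since Autofac resolves the most recently registered component for a service by default:

var builder = new ContainerBuilder();
builder.RegisterModule(new EmailModule());              // hypothetical: your normal production registrations
builder.RegisterType<EmailComposer>().As<IComposer>();

// Override only the formatter with a Moq test double; the last registration wins.
var formatterMock = new Mock<IEmailFormatter>();
formatterMock.Setup(f => f.Format(It.IsAny<string>())).Returns("formatted");
builder.RegisterInstance(formatterMock.Object).As<IEmailFormatter>();

var container = builder.Build();

// Every other constructor parameter is still resolved by Autofac as usual.
var composer = container.Resolve<IComposer>();
composer.Write("hello");
formatterMock.Verify(f => f.Format("hello"), Times.Once());

If you only want the override for a subset of tests, container.BeginLifetimeScope(b => b.RegisterInstance(formatterMock.Object).As<IEmailFormatter>()) applies the same idea in a nested scope without touching the root container.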

ASP.NET MVC - Unit testing overkill? (TDD)

So I'm starting to catch the TDD bug but I'm wondering if I'm really doing it right... I seem to be writing A LOT of tests.
The more tests the better, sure, but I've got a feeling that I'm overdoing it. And to be honest, I don't know how long I can keep up writing these simple repetitive tests.
For instance, these are the LogOn actions from my AccountController:
public ActionResult LogOn(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        returnUrl = "/";

    var viewModel = new LogOnForm()
    {
        ReturnUrl = returnUrl
    };

    return View("LogOn", viewModel);
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult LogOn(LogOnForm logOnForm)
{
    try
    {
        if (ModelState.IsValid)
        {
            AccountService.LogOnValidate(logOnForm);
            FormsAuth.SignIn(logOnForm.Email, logOnForm.RememberMe);
            return Redirect(logOnForm.ReturnUrl);
        }
    }
    catch (DomainServiceException ex)
    {
        ex.BindToModelState(ModelState);
    }
    catch
    {
        ModelState.AddModelError("*", "There was server error trying to log on, try again. If your problem persists, please contact us.");
    }

    return View("LogOn", logOnForm);
}
Pretty self-explanatory.
I then have the following suite of tests:
public void LogOn_Default_ReturnsLogOnView()
public void LogOn_Default_SetsViewDataModel()
public void LogOn_ReturnUrlPassedIn_ViewDataReturnUrlSet()
public void LogOn_ReturnUrlNotPassedIn_ViewDataReturnUrDefaults()
public void LogOnPost_InvalidBinding_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_InvalidBinding_DoesntCallAccountServiceLogOnValidate()
public void LogOnPost_ValidBinding_CallsAccountServiceLogOnValidate()
public void LogOnPost_ValidBindingButAccountServiceThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingButAccountServiceThrows_DoesntCallFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModelButFormsAuthThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingAndValidModel_CallsFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModel_RedirectsToReturnUrl()
Is that overkill? I haven't even shown the services' tests!
Which ones (if any) can I cull?
TIA,
Charles
It all depends on how much coverage you need / want and how much dependability is an issue.
Here are the questions you should ask yourself:
Does this unit test help implement a feature / code change that I don't already have?
Will this unit test help regression test/debug this unit if I make changes later?
Is the code needed to satisfy this unit test non-trivial, i.e. does it deserve a unit test?
Regarding the 3rd one, I remember when I started writing unit tests (I know, not the same thing as TDD) I would have tests that would go like:
string expected = "some value";
string actual;

TypeUnderTest target = new TypeUnderTest();
target.PropertyToTest = expected;
actual = target.PropertyToTest;

Assert.AreEqual<string>(expected, actual);
I could have done something more productive with my time, like choosing a better wallpaper for my desktop.
I recommend this article by the ASP.NET MVC book author Steve Sanderson:
http://blog.codeville.net/2009/08/24/writing-great-unit-tests-best-and-worst-practises/
I'd say you are doing a little more than you probably have to. While it is nice to test every possible path your code can take, some paths just aren't very important or don't result in real differences in behavior.
In your example take LogOn(string returnUrl)
The first thing you do in there is check the returnUrl parameter and re-assign it to a default value if it is null/empty. Do you really need a whole unit test just to make sure that one line of code happens as expected? It isn't a line likely to break easily.
Most changes that might break that line would be things that would throw a compile error. A change in the default value being assigned is possible in that line (maybe you decide later that "/" isn't a good default value)... but in your unit test, I bet you hard-coded it to check for "/", didn't you? So a change in the value will necessitate a change in your test... which means you aren't testing your behavior; instead you are testing your data.
You can achieve a test for the behavior of the method by simply having one test that does NOT supply a parameter. That will hit your "set default" part of the routine while still testing that the rest of the code behaves well too.
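For example, a single test along these lines (a sketch using NUnit and the AccountController from the question; it assumes the controller can be constructed for the GET action, with mocked dependencies substituted where needed) exercises the default-value branch together with the rest of the action, without hard-coding the default itself:

[Test]
public void LogOn_NoReturnUrlSupplied_DefaultsReturnUrlAndReturnsLogOnView()
{
    var controller = new AccountController();

    var result = controller.LogOn(null) as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("LogOn", result.ViewName);

    // The view model should have been given *some* default,
    // without the test caring what the exact value is.
    var model = result.ViewData.Model as LogOnForm;
    Assert.IsNotNull(model);
    Assert.IsFalse(string.IsNullOrEmpty(model.ReturnUrl));
}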
That looks about right to me. Yes, you will write a lot of Unit Tests and, initially, it will seem like overkill and TBH a waste of time; but stick with it, it'll be worth it. What you should be aiming for (rather than just 100% code coverage) is 100% function coverage. However... if you find that you're writing a lot of UTs for the same method it's possible that that method is doing too much. Try separating your concerns more. In my experience the body of an Action should be doing little more than newing-up a class to do the real work. It's that class that you should really be targeting with UTs.
Chris
100% coverage is the ideal; it's really helpful if you have to massively refactor your code, as the tests act as the specification that keeps your code correct.
I am personally not 100% TDD (sometimes too lazy to be), but if you intend to do 100%, maybe you should write some test helpers to take away some of the burden of these repetitive tests. For example, writing a helper that tests all your CRUD actions in a standard POST structure, with a callback that lets you pass in some evaluation, might save you a lot of time.
I unit test only the code that I'm unsure about. Sure, you can never know what will stab you in the back, but writing tests for trivial things seems like overkill to me.
I'm not a unit-testing/TDD guru, but I think it's fine if you do NOT write tests just to have them. They must be useful. If you are experienced enough with unit testing, you start to feel when they are going to be valuable and when they are not.
You might like this book.
Edit:
Actually, I just found a quote about this in the isolation frameworks chapter. It's about overspecifying tests (one particular test), but I guess the idea applies more broadly:
Overspecifying the tests
If your test has too many expectations, you may create a test that breaks down with even the lightest of code changes, even though the overall functionality still works. Consider this a more technical way of not verifying the right things. Testing interactions is a double-edged sword: test it too much, and you start to lose sight of the big picture—the overall functionality; test it too little, and you'll miss the important interactions between objects.
