So I'm starting to catch the TDD bug but I'm wondering if I'm really doing it right... I seem to be writing A LOT of tests.
The more tests the better, sure, but I've got a feeling that I'm overdoing it. And to be honest, I don't know how long I can keep up writing these simple, repetitive tests.
For instance, these are the LogOn actions from my AccountController:
public ActionResult LogOn(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        returnUrl = "/";

    var viewModel = new LogOnForm()
    {
        ReturnUrl = returnUrl
    };

    return View("LogOn", viewModel);
}
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult LogOn(LogOnForm logOnForm)
{
    try
    {
        if (ModelState.IsValid)
        {
            AccountService.LogOnValidate(logOnForm);
            FormsAuth.SignIn(logOnForm.Email, logOnForm.RememberMe);
            return Redirect(logOnForm.ReturnUrl);
        }
    }
    catch (DomainServiceException ex)
    {
        ex.BindToModelState(ModelState);
    }
    catch
    {
        ModelState.AddModelError("*", "There was a server error trying to log on, try again. If your problem persists, please contact us.");
    }
    return View("LogOn", logOnForm);
}
Pretty self-explanatory.
I then have the following suite of tests:
public void LogOn_Default_ReturnsLogOnView()
public void LogOn_Default_SetsViewDataModel()
public void LogOn_ReturnUrlPassedIn_ViewDataReturnUrlSet()
public void LogOn_ReturnUrlNotPassedIn_ViewDataReturnUrlDefaults()
public void LogOnPost_InvalidBinding_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_InvalidBinding_DoesntCallAccountServiceLogOnValidate()
public void LogOnPost_ValidBinding_CallsAccountServiceLogOnValidate()
public void LogOnPost_ValidBindingButAccountServiceThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingButAccountServiceThrows_DoesntCallFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModelButFormsAuthThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingAndValidModel_CallsFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModel_RedirectsToReturnUrl()
Is that overkill? I haven't even shown the service tests!
Which ones (if any) can I cull?
TIA,
Charles
It all depends on how much coverage you need / want and how much dependability is an issue.
Here are the questions you should ask yourself:
Does this unit test help implement a feature / code change that I don't already have?
Will this unit test help regression test/debug this unit if I make changes later?
Is the code needed to satisfy this unit test non-trivial enough to deserve a unit test?
Regarding the 3rd one, I remember when I started writing unit tests (I know, not the same thing as TDD) I would write tests like:
string expected = "some value";
TypeUnderTest target = new TypeUnderTest();

target.PropertyToTest = expected;
string actual = target.PropertyToTest;

Assert.AreEqual<string>(expected, actual);
I could have done something more productive with my time like choose a better wallpaper for my desktop.
I recommend this article by ASP.NET MVC book author Steve Sanderson:
http://blog.codeville.net/2009/08/24/writing-great-unit-tests-best-and-worst-practises/
I'd say you are doing a little more than you probably have to. While it is nice to test every possible path your code can take, some paths just aren't very important or don't result in real differences in behavior.
In your example take LogOn(string returnUrl)
The first thing you do in there is check the returnUrl parameter and re-assign it to a default value if it is null/empty. Do you really need a whole unit test just to make sure that one line of code happens as expected? It isn't a line likely to break easily.
Most changes that might break that line would be changes that throw a compile error. A change in the default value is possible in that line (maybe you decide later that "/" isn't a good default value), but in your unit test, I bet you hard-coded it to check for "/", didn't you? So a change in the value necessitates a change in your test, which means you aren't testing your behavior; you are testing your data.
You can test the behavior of the method with a single test that does NOT supply a parameter. That will hit the "set default" part of the routine while still testing that the rest of the code behaves well too.
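For instance, a minimal sketch of such a test, assuming the AccountController from the question can be constructed with its dependencies defaulted or mocked:

[TestMethod]
public void LogOn_NoReturnUrl_AppliesADefaultAndReturnsLogOnView()
{
    var controller = new AccountController(); // assumes dependencies can be defaulted or mocked

    // The cast to string disambiguates between the GET and POST overloads.
    var result = (ViewResult)controller.LogOn((string)null);

    // One test hits the "set default" branch and the rest of the action:
    Assert.AreEqual("LogOn", result.ViewName);
    var model = (LogOnForm)result.ViewData.Model;
    // Assert the behavior (a default was applied) rather than the data ("/").
    Assert.IsFalse(string.IsNullOrEmpty(model.ReturnUrl));
}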
That looks about right to me. Yes, you will write a lot of unit tests and, initially, it will seem like overkill and, TBH, a waste of time; but stick with it, it'll be worth it. What you should be aiming for (rather than just 100% code coverage) is 100% function coverage. However, if you find that you're writing a lot of UTs for the same method, it's possible that the method is doing too much. Try separating your concerns more. In my experience the body of an Action should do little more than new up a class to do the real work. It's that class that you should really be targeting with UTs.
Chris
100% coverage is the ideal; it's really helpful if you ever have to massively refactor your code, as the tests act as a spec to make sure the code stays correct.
I am personally not 100% TDD (sometimes too lazy), but if you intend to do 100%, maybe you should write some test helpers to take away some of the burden of these repetitive tests. For example, a helper that exercises all your CRUD actions in a standard POST structure, with a callback that lets you pass in your own assertions, might save you a lot of time; a sketch follows.
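A hypothetical helper along these lines (all names here are made up) runs the standard POST plumbing and hands the result to a callback for the scenario-specific assertions:

// Hypothetical helper: invokes a POST-style action with a model and
// passes the result to a caller-supplied assertion callback.
public static void AssertPostAction<TModel>(
    Func<TModel, ActionResult> action,
    TModel model,
    Action<ActionResult> verify)
{
    ActionResult result = action(model);
    Assert.IsNotNull(result);
    verify(result);
}

// Usage: only the interesting assertion varies from test to test.
AssertPostAction<LogOnForm>(controller.LogOn, logOnForm,
    result => Assert.IsInstanceOfType(result, typeof(RedirectResult)));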
I only unit test code that I'm unsure about. Sure, you can never know what will stab you in the back, but writing tests for trivial things seems like overkill to me.
I'm not a unit-testing/TDD guru, but I think it's fine if you do NOT write tests just to have them. They must be useful. Once you are experienced enough with unit testing, you start to feel when they are going to be valuable and when not.
You might like this book.
Edit:
Actually, I just found a quote about this in the isolation frameworks chapter. It's about overspecifying one particular test, but I guess the idea holds in a more global scope:
Overspecifying the tests
If your test has too many expectations, you may create a test that breaks down with even the lightest of code changes, even though the overall functionality still works. Consider this a more technical way of not verifying the right things. Testing interactions is a double-edged sword: test it too much, and you start to lose sight of the big picture (the overall functionality); test it too little, and you'll miss the important interactions between objects.
I am a newbie at testing, so I am stumbling over how to test internal functionality in some code parts. How can I test ONLY the privateParseAndCheck and/or privateFurtherProcessing functionality with different inputs, without making them public functions?
- (BOOL)publicFunction
{
    // some stuff with network
    NSError *error;
    NSData *data = load(&error);

    // now I got the data; parse and check it
    BOOL result = privateParseAndCheck(data, error, ...);
    if (result) {
        privateFurtherProcessing();
    }
    return result;
}
Is rewriting the code the solution? I am also interested in experiences with tips/solutions for Xcode Server.
If there is a straightforward way to test what you want only from public methods, do so.
If not, you have a choice: you can expose the method only to test code. This is common practice, but I do not recommend it, because it inhibits the other option…
Or, expose the method completely. If this makes you feel uncomfortable, there is probably a class trying to get out. Extract the class (the methods you want to test and whatever else makes sense to go with them). You can now test that class.
This is especially helpful when coming up with different conditions (such as different errors) is difficult. Extract, and it's easy.
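The idea is language-agnostic; a minimal C# sketch of the extraction, with hypothetical names:

// Before: parsing and checking hid inside a private method of the
// networking class. After: it lives in its own small class.
public class ResponseParser
{
    // Formerly the private parse-and-check step; now directly testable
    // with any input, including hard-to-produce error cases.
    public bool ParseAndCheck(byte[] data)
    {
        return data != null && data.Length > 0; // stand-in for the real checks
    }
}

The original class keeps a ResponseParser and delegates to it, so its public behavior is unchanged while the tests target ResponseParser directly.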
Further reading: Testability, Information Hiding, and the Class Trying to Get Out
I am stuck in a situation in which I am creating a test project in ASP.NET MVC. I am testing a method that is used to download a file, and whenever I try to test this method I get
OutputStream is not available when a custom TextWriter is used
as an error from Response.BinaryWrite(). Everything else is fine. Can anyone tell me how to resolve this exception? I am using the Moq DLL for mocking. Please suggest how to get out of this situation.
HttpContext.Current.Response.BinaryWrite()
This is the line which actually generates the exception. I also have a question: is it good to test a download method, or should I leave it untested? If it is good, how do I resolve this issue?
Thanks.
If you are writing a unit test, you generally don't want to be writing a test that has dependencies you can't control (e.g. writing to the database, filesystem or output stream). You can also assume that Response.BinaryWrite does what it is supposed to.
You could do something like this to get around the error you are seeing.
public interface IBinaryWriter
{
    void BinaryWrite(byte[] buffer);
}

public class ResponseBinaryWriteWrapper : IBinaryWriter
{
    public void BinaryWrite(byte[] buffer)
    {
        // In production, delegate to the real response stream.
        HttpContext.Current.Response.BinaryWrite(buffer);
    }
}
This gives you the ability to inject the IBinaryWriter into the class you want to test as a mock, and then you can check that BinaryWrite is called with the correct byte array. In your production code you then inject your concrete ResponseBinaryWriteWrapper class.
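A sketch of such a test; DownloadController and its constructor are hypothetical names standing in for your class under test:

[TestMethod]
public void Download_WritesFileBytesToResponse()
{
    var writer = new Mock<IBinaryWriter>();
    var fileBytes = new byte[] { 1, 2, 3 };
    var controller = new DownloadController(writer.Object); // hypothetical class under test

    controller.Download(fileBytes);

    // The mock records the call, so no real HttpContext/OutputStream is needed.
    writer.Verify(w => w.BinaryWrite(fileBytes), Times.Once());
}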
I am trying to understand mock unit testing and I started with Moq; this question can be answered in general as well.
I am just trying to reuse the code given in How to setup a simple Unit Test with Moq?
private Mock<IRepository> _mockRepository;
private Service _service;

[TestInitialize]
public void TestInit()
{
    // Arrange.
    List<string> theList = new List<string>();
    theList.Add("test3");
    theList.Add("test1");
    theList.Add("test2");

    _mockRepository = new Mock<IRepository>();
    // The line below returns a null reference...
    _mockRepository.Setup(s => s.list()).Returns(theList);

    _service = new Service(_mockRepository.Object);
}
[TestMethod]
public void my_test()
{
    // Act.
    var myList = _service.AllItems();
    Assert.IsNotNull(myList, "myList is null.");

    // Assert.
    Assert.AreEqual(3, myList.Count());
}
Here are my questions:
1. In TestInit we set theList to contain 3 strings and return it using Moq, and the line below will get the same list back:
var myList = _service.AllItems(); // Which we know will return 3
So what are we testing here?
2. What are the possible scenarios where this unit test fails? Yes, we can give a wrong value such as 4 and fail the test, but in real life I don't see any possibility of it failing.
I guess I'm a little behind in understanding these concepts. I do understand the code, but I'm trying to get the insights! I hope somebody can help me!
The system under test (SUT) in your example is the Service class. Naturally, the field _service uses the true implementation and not a mock. The method tested here is AllItems; do not confuse it with the list() method of IRepository. The latter interface is a dependency of your SUT Service, therefore it is mocked and passed to the Service class via its constructor. I think you are confused by the fact that the AllItems method seems to only return the result of the list() call on its dependency IRepository; hence, there is not a lot of logic involved. Maybe reconsider this example and add more expected logic to the AllItems method. For example, you might assert that AllItems returns the same elements provided by the list() method but reordered.
I hope I can help you with this one.
1.) As for this one, you're basically testing the count. Sometimes in a collection the data accumulates, so it doesn't necessarily mean that each time you execute the code the count is always 3. The next time you run, it adds 3, so it becomes 6, then 9, and so on.
2.) For unit testing, there are a lot of ways to fail, like wrong computations, arithmetic overflow errors and such. Here's a good article.
The test is supposed to verify that the Service talks to its Repository correctly. We do this by setting up the mock Repository to return a canned answer that is easy to verify. However, with the test as it is now:
Service could perfectly well return any list of 3 made-up strings without communicating with the Repository, and the test would still pass. Suggestion: use Verify() on the mock to check that list() was really called.
3 is basically a magic number here. Changes to theList could put that number out of sync and break the test. Suggestion: use theList.Count instead of 3. Better: instead of checking the number of elements in the list, verify that AllItems() returns exactly what the Repository passed to it. You can use a CollectionAssert for that.
This means getting theList and _mockRepository out of TestInit() to make them accessible in a wider scope, or moving them directly inside the TestMethod, which is probably better anyway (there's not much use having a TestInitialize here).
The test would fail if the Service somehow stopped talking to its Repository, if it stopped returning exactly what the Repository gives it, or if the Repository's contract changed. More importantly, it wouldn't fail if there was a bug in the real implementation of IRepository; testing small units lets you point your finger at the exact object that is failing, and not at its neighbors.
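Putting both suggestions together, the whole test might look like this; a sketch reusing the names from the question, with everything moved inside the test method:

[TestMethod]
public void AllItems_ReturnsExactlyWhatTheRepositoryProvides()
{
    // Arrange: everything lives in the test, no TestInitialize needed.
    var theList = new List<string> { "test3", "test1", "test2" };
    var mockRepository = new Mock<IRepository>();
    mockRepository.Setup(r => r.list()).Returns(theList);
    var service = new Service(mockRepository.Object);

    // Act.
    var myList = service.AllItems().ToList();

    // Assert: the Service really called the Repository...
    mockRepository.Verify(r => r.list(), Times.Once());
    // ...and returned exactly what the Repository gave it.
    CollectionAssert.AreEqual(theList, myList);
}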
When I design MVC apps, I typically try to keep almost all logic (as much as possible) out of my controllers. I try to abstract it into a service layer which interfaces with my repositories and domain entities.
So, my controller methods end up looking something like this:
public ActionResult Index(int id)
{
    return View(Mapper.Map<User, UserModel>(_userService.GetUser(id)));
}
So, assuming that I have good coverage testing my services, and my action methods are as simple as the above example, is it overkill to unit test these controller methods?
If you do build unit tests for methods that look like this, what value are you getting from your tests?
If you do build unit tests for methods that look like this, what value are you getting from your tests?
You can have unit tests (sketched in code after this list) that assert:
That the GetUser method of the _userService was invoked, passing the same int that was passed to the controller.
That the result returned was a ViewResult, instead of a PartialViewResult or something else.
That the result's model is a UserModel instance, not a User instance (which is what gets returned from the service).
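All three can be covered in one short test; a sketch, assuming the service is injected via the constructor, that IUserService and the controller name are stand-ins, and that AutoMapper mappings are configured in the test setup:

[TestMethod]
public void Index_GetsUserFromServiceAndReturnsViewWithUserModel()
{
    var userService = new Mock<IUserService>(); // assumed service interface
    var user = new User();
    userService.Setup(s => s.GetUser(42)).Returns(user);
    var controller = new UsersController(userService.Object); // hypothetical name

    var result = controller.Index(42) as ViewResult;

    userService.Verify(s => s.GetUser(42), Times.Once());   // same id passed through
    Assert.IsNotNull(result);                               // a ViewResult, not a PartialViewResult
    Assert.IsInstanceOfType(result.ViewData.Model, typeof(UserModel)); // mapped model, not the entity
}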
Unit tests are as much a help in refactoring as in asserting the correctness of the application. They help you ensure that the results remain the same even after you change the code.
For example, say a change came in requiring the action to return a PartialView or JsonResult when the request is async/ajax. It wouldn't be much code to change in the controller, but your unit tests would probably fail as soon as you changed it, because it's likely you didn't mock the controller's context to indicate whether or not the request is ajax. This tells you to expand your unit tests to maintain the assertions of correctness.
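A sketch of that expanded test, assuming the ajax behavior has been added to the controller and reusing the hypothetical names from above; Moq fakes the request value that IsAjaxRequest() inspects:

[TestMethod]
public void Index_AjaxRequest_ReturnsPartialView()
{
    // IsAjaxRequest() checks the X-Requested-With value on the request.
    var request = new Mock<HttpRequestBase>();
    request.Setup(r => r["X-Requested-With"]).Returns("XMLHttpRequest");

    var httpContext = new Mock<HttpContextBase>();
    httpContext.Setup(c => c.Request).Returns(request.Object);

    var userService = new Mock<IUserService>(); // assumed, as above
    userService.Setup(s => s.GetUser(42)).Returns(new User());
    var controller = new UsersController(userService.Object); // hypothetical name
    controller.ControllerContext =
        new ControllerContext(httpContext.Object, new RouteData(), controller);

    var result = controller.Index(42);

    Assert.IsInstanceOfType(result, typeof(PartialViewResult));
}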
Definitely value added IMO for 3 very simple methods which shouldn't take you longer than a couple of minutes each to write.
I have some integration tests written for MsTest. The integration tests have the following structure:
[TestClass]
public class When_Doing_Some_Stuff
{
    [TestInitialize]
    public void TestInitialize()
    {
        // create the Integration Test Context
        EstablishContext();
        // trigger the Integration Test
        When();
    }

    protected void EstablishContext()
    {
        // call services to set up context
    }

    protected void When()
    {
        // call service method to be tested
    }

    [TestMethod]
    public void Then_Result_Is_Correct()
    {
        // assert against the result
    }
}
I need to filter the code-coverage results of a function by who is calling it. Namely, I want coverage to be counted only if the function is called from a function named "When" or one that has a certain attribute applied to it.
Right now, even if a certain method in the system is called in the EstablishContext part of some test, the method is marked as visited.
I believe there is no filter for this, and I would like to make the changes myself, as OpenCover is... well... open. But I really have no idea where to start. Can anyone point me in the right direction?
You might be better off addressing this with the OpenCover developers; hmmm... that would be me, then. If you look on the wiki you will see that coverage by test is one of the eventual aims of OpenCover.
If you look at the forks you will see a branch from mancau; he initially indicated that he was going to try to implement this feature, but I do not know how far he has progressed or whether he has abandoned his attempt (what he has submitted is just a small re-introduction of code to allow the tracing of calls).
OpenCover tracks by emitting a visit identifier and updating the next element in an array that resides in shared memory, shared between the profiler (C++/native/32-64 bit) and the console (C#/managed/any-CPU). What I suggested to him (and this will be my approach when I get round to it, if no one else does; it is why I emit the visit data in this way) was that he may want to add markers into the sequence to indicate that he has entered/left a particular test method (filtered on the [TestMethod] attribute, perhaps), and then, when processing the results in the console, this can be added to the model in some way. Threading may also be a concern, as it could cause the interleaving of visit points for tests run in parallel.
Perhaps you will think of a different approach and I look forward to hearing your ideas.