I'm working on a new project where I'm actively trying to honor persistence ignorance. As an example, in my service layer I retrieve an entity from my ORM and I call a routine defined on the entity that may or may not make changes to the entity. Then I rely on my ORM to detect whether or not the entity was modified and it makes the necessary inserts/updates/deletes.
When I run the application it works as intended and it's really neat to see this in action. My business logic is very isolated and my service layer is really thin.
Of course, now I'm adding unit tests, and I have noticed that I can no longer write unit tests that verify whether or not certain properties were modified. In my previous solution, I determined whether or not a repository call was made with the object in its expected state:
mockRepository.Verify(mr =>
mr.SaveOrUpdate(It.Is<MyEntity>(x =>
x.Id == 123 && x.MyProp == "newvalue")), Times.Once());
Am I approaching persistence ignorance correctly? Is there a better way to unit test the post-operational state of my entities when I don't explicitly call the repository's save method?
If it helps, I'm using ASP.NET MVC 3, WCF, NHibernate, and NUnit/Moq. My unit tests make calls to my controller actions passing instances of my service classes (which are instantiated with mocked repositories).
You are approaching this correctly in that you have an interface representing your repository and you pass a fake into your tests. I prefer to use an in-memory simulator for my repositories instead of mocks, because I find that stub implementations make my tests less brittle than mock/verify (as per the "mocks aren't stubs" article linked above). If your repository has Add/Find/Delete methods, my in-memory implementation forwards those to a member list, and then Save sets a property called SavedList that I can assert on in my tests.
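A minimal sketch of such an in-memory simulator (the interface shape and the SavedList name follow the description above; everything else is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IRepository<T> where T : class
{
    void Add(T item);
    void Delete(T item);
    T Find(Func<T, bool> predicate);
    void Save();
}

// Hand-rolled stub: Add/Find/Delete work against a member list, and
// Save snapshots that list into SavedList so tests can assert on it.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items = new List<T>();

    // Exposed so tests can assert on what was "persisted".
    public List<T> SavedList { get; private set; }

    public void Add(T item) { _items.Add(item); }
    public void Delete(T item) { _items.Remove(item); }
    public T Find(Func<T, bool> predicate) { return _items.FirstOrDefault(predicate); }
    public void Save() { SavedList = new List<T>(_items); }
}
```

A test then exercises the service with this stub and asserts directly on SavedList, with no Verify() expressions to keep in sync with the code under test.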
Oddly enough I just stumbled upon a solution and it is really simple.
[Test]
public void Verify_Widget_Description_Is_Updated()
{
// arrange
var widget = new Widget { };
mockWidgetRepo.Setup(mr => mr.GetWidget()).Returns(widget);
var viewModel = new WidgetVM { };
viewModel.Description = "New Desc";
// act
var result = (ViewResult)controller.UpdateWidget(viewModel);
// assert
Assert.AreEqual("New Desc", widget.Description);
}
It's not perfect, but I can assume that if widget.Description matches the value I assigned to my view model, then the ORM will save that change unless Evict was called.
UPDATE:
Just came up with another alternative solution. I created an ObjectStateAssertionForTests(Object obj) method on my base repository that does nothing. I can call this method in my code and then verify it in my unit tests:
mockRepository.Verify(mr =>
mr.ObjectStateAssertionForTests(It.Is<MyEntity>(x =>
x.Id == 123 && x.MyProp == "newvalue")), Times.Once());
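For reference, the hook itself is just a no-op on the base repository (a sketch; only the method name comes from the post above):

```csharp
// The method body is deliberately empty: the call itself is the
// observable event that Moq's Verify() checks, so production
// behavior is unaffected.
public abstract class BaseRepository<T> where T : class
{
    public virtual void ObjectStateAssertionForTests(object obj)
    {
        // intentionally does nothing
    }
}
```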
Related
I am trying to understand mock-based unit testing, and I started with Moq. This question can be answered in general terms as well.
I am just trying to reuse the code given in How to setup a simple Unit Test with Moq?
[TestInitialize]
public void TestInit() {
//Arrange.
List<string> theList = new List<string>();
theList.Add("test3");
theList.Add("test1");
theList.Add("test2");
_mockRepository = new Mock<IRepository>();
//The line below returns a null reference...
_mockRepository.Setup(s => s.list()).Returns(theList);
_service = new Service(_mockRepository.Object);
}
[TestMethod]
public void my_test()
{
//Act.
var myList = _service.AllItems();
Assert.IsNotNull(myList, "myList is null.");
//Assert.
Assert.AreEqual(3, myList.Count());
}
Here are my questions:
1. In TestInit we set theList to contain 3 strings and return it using Moq, and in the line below we get that same list back:
var myList = _service.AllItems(); //Which we know will return 3
So what are we actually testing here?
2. What are the possible scenarios in which this unit test fails? Yes, we can assert a wrong value such as 4 and fail the test, but in real use I don't see any possibility of it failing.
I guess I'm a little behind in understanding these concepts. I understand the code, but I'm trying to get the insight behind it. I hope somebody can help me!
The system under test (SUT) in your example is the Service class. Naturally, the field _service uses the true implementation and not a mock. The method tested here is AllItems, not to be confused with the list() method of IRepository. That interface is a dependency of your SUT Service, therefore it is mocked and passed to the Service class via its constructor. I think you are confused by the fact that the AllItems method seems to only return the result of calling the list() method of its dependency IRepository; hence, there is not a lot of logic involved. Maybe reconsider this example and add more expected logic to the AllItems method. For example, you might assert that AllItems returns the same elements provided by the list() method but reordered.
I hope I can help you with this one.
1.) As for this one, you're basically testing the count. Sometimes in a collection the data accumulates, so it doesn't necessarily mean the count is 3 every time you execute the code. The next time you run, it adds 3 more, so it becomes 6, then 9, and so on.
2.) For unit testing, there are a lot of ways to fail, like wrong computations, arithmetic overflow errors, and such. Here's a good article.
The test is supposed to verify that the Service talks to its Repository correctly. We do this by setting up the mock Repository to return a canned answer that is easy to verify. However, with the test as it is now:
Service could perfectly well return any list of 3 made-up strings without communicating with the Repository, and the test would still pass. Suggestion: use Verify() on the mock to check that list() was really called.
3 is basically a magic number here; changes to theList could put that number out of sync and break the test. Suggestion: use theList.Count instead of 3. Better: instead of checking the number of elements in the list, verify that AllItems() returns exactly what the Repository passed to it. You can use a CollectionAssert for that.
This means getting theList and _mockRepository out of TestInit() to make them accessible in a wider scope, or moving them directly inside the TestMethod, which is probably better anyway (there's not much use for a TestInitialize here).
The test would fail if the Service somehow stopped talking to its Repository, if it stopped returning exactly what the Repository gives it, or if the Repository's contract changed. More importantly, it wouldn't fail if there was a bug in the real implementation for IRepository - testing small units allows you to point your finger at the exact object that is failing and not its neighbors.
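Putting those suggestions together, the test might look like this (a sketch using the Moq and MSTest APIs from the question; the Service and IRepository shapes are assumed):

```csharp
[TestMethod]
public void AllItems_ReturnsExactlyWhatTheRepositoryProvides()
{
    // Arrange: canned data lives next to the assertions, not in a TestInitialize.
    var theList = new List<string> { "test3", "test1", "test2" };
    var mockRepository = new Mock<IRepository>();
    mockRepository.Setup(r => r.list()).Returns(theList);
    var service = new Service(mockRepository.Object);

    // Act
    var myList = service.AllItems();

    // Assert: the contents must match (no magic number), and the
    // repository must actually have been consulted.
    CollectionAssert.AreEquivalent(theList, myList.ToList());
    mockRepository.Verify(r => r.list(), Times.Once());
}
```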
I fully understand the idea behind the Repository pattern. But why do we need to implement the IRepository interface? What particular purpose does it serve?
The repository class itself works without the interface.
I expect someone will answer that it's for decoupling the business logic from the data logic.
But even without the interface, isn't the data logic already decoupled?
It is so that you can inject a test double of the IRepository interface when you are unit testing the business layer. This has the following benefits:
It allows you to easily pinpoint failing tests as being caused by the business layer rather than the repository layer;
It makes your business logic layer tests fast, as they depend neither on data access, which tends to be slow, nor set-up of a database structure and test data, which tends to be very slow.
One way to inject the test doubles when unit testing is by Constructor Injection. Suppose your Repository has the following methods:
void Add(Noun noun);
int NumberOfNouns();
And this is the code of your business class:
public class BusinessClass {
private IRepository _repository;
public BusinessClass(IRepository repository) {
_repository = repository;
}
// optionally, you can make your default constructor create an instance
// of your default repository
public BusinessClass() {
_repository = new Repository();
}
// method which will be tested
public void AddNoun(string noun) {
_repository.Add(new Noun(noun));
}
}
To test AddNoun without needing a real Repository, you need to set up a test double. Usually you would do this by using a mocking framework such as Moq, but I'll write a mock class from scratch just to illustrate the concept.
public class MockRepository : IRepository {
private List<Noun> nouns = new List<Noun>();
public void Add(Noun noun) {
nouns.Add(noun);
}
public int NumberOfNouns() {
return nouns.Count();
}
}
Now one of your tests could be this.
[Test]
public void AddingNounShouldIncreaseNounCountByOne() {
// Arrange
var mockRepository = new MockRepository();
var businessClassToTest = new BusinessClass(mockRepository);
// Act
businessClassToTest.AddNoun("cat");
// Assert
Assert.AreEqual(1, mockRepository.NumberOfNouns(), "Number of nouns in repository should have increased after calling AddNoun");
}
What this has achieved is that you have now tested the functionality of your BusinessClass.AddNoun method without needing to touch the database. This means that even if there's a problem with your Repository layer (a problem with a connection string, say) you have assurance that your Business layer is working as expected. This covers point 1 above.
As for point 2 above, whenever you're writing tests which test the database you should make sure it's in a known state before each test. This usually involves deleting all the data at the beginning of every test and re-adding test data. If this isn't done then you can't run assertions against, say, the number of rows in a table, because you won't be sure what that's supposed to be.
Deleting and re-adding test data would normally be done by running SQL scripts, which are slow and vulnerable to breakage whenever the database structure changes. Therefore it's advisable to restrict the use of the database only to the tests of the repository itself, and use mocked out repositories when unit testing other aspects of the application.
As for the use of abstract classes - yes, this would provide the same ability to supply test doubles. I'm not sure which code you would choose to put in the abstract base and which in the concrete implementation, though. This answer to an SO question has an interesting discussion on abstract classes vs interfaces.
First, you must understand what the Repository pattern is: an abstraction layer so that the rest of the application does not have to care where the data comes from.
Abstractions in .NET are typically represented by interfaces, as no logic (code) can be attached to an interface.
As a bonus, that interface also makes it easier to test your application, since you can mock the interface easily (or create a stub).
The interface also allows you to evolve your data layer. You might, for instance, start by using a database for all repository classes, but later want to move some logic behind a web service; then you only have to replace the DB repository with a WCF repository. You might also discover that a repository is slow and want to implement a simple memory cache within it (using memcached or something else).
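The cache idea, for instance, can be a decorator over the same interface, so callers never notice the swap (a sketch; all names here are illustrative):

```csharp
using System.Collections.Generic;

public class User { public int Id; public string Name; }

public interface IUserRepository
{
    User GetById(int id);
}

// Wraps any IUserRepository (DB-backed, WCF-backed, ...) and memoizes
// results; the rest of the application keeps depending on the interface.
public class CachingUserRepository : IUserRepository
{
    private readonly IUserRepository _inner;
    private readonly Dictionary<int, User> _cache = new Dictionary<int, User>();

    public CachingUserRepository(IUserRepository inner) { _inner = inner; }

    public User GetById(int id)
    {
        User user;
        if (!_cache.TryGetValue(id, out user))
        {
            user = _inner.GetById(id);  // only hits the slow repository once per id
            _cache[id] = user;
        }
        return user;
    }
}
```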
I found a very useful MSDN page demonstrating the idea of Repository and Test-Driven Development:
http://blogs.msdn.com/b/adonet/archive/2009/12/17/walkthrough-test-driven-development-with-the-entity-framework-4-0.aspx
I'm building a relatively simple webapp in ASP.NET MVC 4, using Entity Framework to talk to MS SQL Server. There's lots of scope to expand the application in future, so I'm aiming for a pattern that maximises reusability and adaptability in the code, to save work later on. The idea is:
Unit of Work pattern, to save problems with the database by only committing changes at the end of each set of actions.
Generic repository using BaseRepository<T> because the repositories will be mostly the same; the odd exception can extend and add its additional methods.
Dependency injection to bind those repositories to the IRepository<T> that the controllers will be using, so that I can switch data storage methods and such with minimal fuss (not just for best practice; there is a real chance of this happening). I'm using Ninject for this.
I haven't really attempted something like this from scratch before, so I've been reading up and I think I've got myself muddled somewhere. So far, I have an interface IRepository<T> which is implemented by BaseRepository<T>, which contains an instance of the DataContext which is passed into its constructor. This interface has methods for Add, Update, Delete, and various types of Get (single by ID, single by predicate, group by predicate, all). The only repository that doesn't fit this interface (so far) is the Users repository, which adds User Login(string username, string password) to allow login (the implementation of which handles all the salting, hashing, checking etc).
From what I've read, I now need a UnitOfWork class that contains instances of all the repositories. This unit of work will expose the repositories, as well as a SaveChanges() method. When I want to manipulate data, I instantiate a unit of work, access the repositories on it (which are instantiated as needed), and then save. If anything fails, nothing changes in the database because it won't reach the single save at the end. This is all fine. My problem is that all the examples I can find seem to do one of two things:
Some pass a data context into the unit of work, from which they retrieve the various repositories. This negates the point of DI by having my Entity-Framework-specific DbContext (or a class inherited from it) in my unit of work.
Some call a Get method to request a repository, which is the service locator pattern, which is at least unpopular, if not an antipattern, and either way I'd like to avoid it here.
Do I need to create an interface for my data source and inject that into the unit of work as well? I can't find any documentation on this that's clear and/or complete enough to explain.
EDIT
I think I've been overcomplicating it; I'm now folding my repository and unit of work into one - my repository is entirely generic so this just gives me a handful of generic methods (Add, Remove, Update, and a few kinds of Get) plus a SaveChanges method. This gives me a worker class interface; I can then have a factory class that provides instances of it (also interfaced). If I also have this worker implement IDisposable then I can use it in a scoped block. So now my controllers can do something like this:
using (var worker = DataAccess.BeginTransaction())
{
Product item = worker.Get<Product>(p => p.Name == prodName);
//stuff...
worker.SaveChanges();
}
If something goes wrong before the SaveChanges(), then all changes are discarded when it exits the scope block and the worker is disposed. I can use dependency injection to provide concrete implementations to the DataAccess field, which is passed into the base controller constructor. Business logic is all in the controller and works with IQueryable objects, so I can switch out the DataAccess provider object for anything I like as long as it implements the IRepository interface; there's nothing specific to Entity Framework anywhere.
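The shape of that worker, spelled out as interfaces (a sketch; IDataWorker and IDataWorkerFactory are my names, not from the original):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// Generic repository and unit of work folded into one disposable worker:
// nothing here mentions Entity Framework, so the EF-backed implementation
// can be swapped for anything else via dependency injection.
public interface IDataWorker : IDisposable
{
    void Add<T>(T entity) where T : class;
    void Remove<T>(T entity) where T : class;
    void Update<T>(T entity) where T : class;
    T Get<T>(Expression<Func<T, bool>> predicate) where T : class;
    IQueryable<T> GetAll<T>() where T : class;
    void SaveChanges();
}

public interface IDataWorkerFactory
{
    // Named to match the DataAccess.BeginTransaction() call in the snippet above.
    IDataWorker BeginTransaction();
}
```

Because the worker is IDisposable, the using block above discards any unsaved changes automatically when an exception skips past SaveChanges().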
So, any thoughts on this implementation? Is this on the right track?
I prefer to have the UnitOfWork or a UnitOfWorkFactory injected into the repositories; that way I don't need to touch it every time a new repository is added. The responsibility of the UnitOfWork is just to manage the transaction.
Here is an example of what I mean.
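In other words, something along these lines (a sketch with assumed names):

```csharp
using System;

// The unit of work only manages the transaction...
public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}

// ...and each repository receives the unit of work it participates in.
// Adding a new repository later requires no change to IUnitOfWork.
public class ProductRepository
{
    private readonly IUnitOfWork _unitOfWork;

    public ProductRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    // Data access methods would enlist their work in _unitOfWork here.
}
```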
I have started thinking about adding some unit tests around some business logic in my project.
The first method that I would like to test is a method in my service layer that returns a list of child nodes for a given node.
The method looks like this:
public List<Guid> GetSubGroupNodes(string rootNode)
{
    List<Tree> tree = ssdsContext.Trees.ToList();
    Tree root = tree.Where(x => x.UserId == new Guid(rootNode)).FirstOrDefault();
    return GetChildNodes(root, tree).Select(t => t.UserId).ToList();
}

private List<Tree> GetChildNodes(Tree rootNode, List<Tree> tree)
{
    var kids = new List<Tree> { rootNode };
    foreach (Tree t in FindChildren(rootNode, tree))
    {
        kids.AddRange(GetChildNodes(t, tree));
    }
    return kids;
}
The way I'd imagine testing something like this is to provide a fake Tree structure and then test that providing a node returns the correct subnodes.
ssdsContext is an ObjectContext.
I've seen that it's possible to extract an interface for the ObjectContext (How to mock ObjectContext or ObjectQuery<T> in Entity Framework?), but I've also read that mocking a DbContext is a waste of time (Unit Testing DbContext).
I have also read that, since Entity Framework is already an implementation of the repository and unit-of-work patterns (Generic Repository With EF 4.1), there may be no point.
This has all left me a bit confused... Is the only real way to test a method like this to create a repository layer? Is it even worth unit testing this method?
Wrap the ObjectContext class in a wrapper class -- let's call it ContextWrapper for fun -- that only exposes what you need from it. Then you can inject an interface of this (IContextWrapper) into the class containing your method. A wrapper can be mocked with no hooks attached to the outside world. The tree structure, as you say, is easy to create and return from your mock object, thus making your tests true unit tests instead of a kind of integration test.
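A sketch of that wrapper idea, using the Tree entity from the question (the members of IContextWrapper are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Tree { public Guid UserId; public Guid? ParentId; }

// Expose only what GetSubGroupNodes actually needs from the ObjectContext.
public interface IContextWrapper
{
    IQueryable<Tree> Trees { get; }
}

// The production ContextWrapper would delegate Trees to the real
// ObjectContext; tests use a canned tree structure with no database.
public class FakeContextWrapper : IContextWrapper
{
    private readonly List<Tree> _trees;

    public FakeContextWrapper(IEnumerable<Tree> trees)
    {
        _trees = trees.ToList();
    }

    public IQueryable<Tree> Trees
    {
        get { return _trees.AsQueryable(); }
    }
}
```

The service class then takes an IContextWrapper in its constructor instead of newing up the ObjectContext itself.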
Greetings,
Trying to sort through the best way to provide access to my Entity Manager while keeping the context open through the request to permit late loading. I am seeing a lot of examples like the following:
public class SomeController
{
MyEntities entities = new MyEntities();
}
The problem I see with this setup is that if you have a layer of business classes that you want to make calls into, you end up having to pass the manager as a parameter to these methods, like so:
public static Series GetEntity(MyEntities entityManager, int id)
{
return entityManager.Series.FirstOrDefault(s => s.SeriesId == id);
}
Obviously I am looking for a good, thread-safe way to provide the entityManager to the method without passing it as a parameter. The approach also needs to be unit-testable; my previous attempts at putting it in Session did not work for unit tests.
I am actually looking for the recommended way of dealing with the Entity Framework in ASP .NET MVC for an enterprise level application.
Thanks in advance
Entity Framework v1.0 excels in Windows Forms applications, where you can use the object context for as long as you like. In ASP.NET, and MVC in particular, it's a bit harder. My solution was to make the repositories or entity managers more like services that MVC could communicate with. I created a sort of generic, all-purpose base repository I could use whenever I felt like it and just stopped bothering too much about doing it "right". In a web application, I would avoid leaving the object context open for even a millisecond longer than is absolutely needed.
Have a look at EF4. I started using EF in production environment when that was in beta 0.75 or something similar and had no real issues with it except for it being "hard work" sometimes.
You might want to look at the Repository pattern (here's a write up of Repository with Linq to SQL).
The basic idea would be that instead of creating a static class, you instantiate a version of the Repository. You can pass in your EntityManager as a parameter to the class in the constructor -- or better yet, a factory that can create your EntityManager for the class so that it can do unit of work instantiation of the manager.
For MVC I use a base controller class. In this class you can create your entity manager factory and make it a property, so deriving classes have access to it. Allow it to be injected via a constructor, but create it with the proper default if the instance passed in is null. Whenever a controller method needs to create a repository, it can pass this factory into the Repository so that it can create the manager it requires.
In this way, you get rid of the static methods and allow mock instances to be used in your unit tests. By passing in a factory -- which ought to create instances that implement interfaces, btw -- you decouple your repository from the actual manager class.
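A sketch of that base controller arrangement (all names are assumed; in the real app the class would derive from System.Web.Mvc.Controller):

```csharp
using System;

public interface IEntityManager : IDisposable { /* Series, SaveChanges(), ... */ }

public interface IEntityManagerFactory
{
    IEntityManager Create();
}

public abstract class BaseController // : Controller in the real app
{
    // Deriving controllers read this property to build their repositories.
    protected IEntityManagerFactory ManagerFactory { get; private set; }

    // Constructor injection with a production default: unit tests pass a
    // mock factory, the default MVC construction path passes null.
    protected BaseController(IEntityManagerFactory factory)
    {
        ManagerFactory = factory ?? new DefaultEntityManagerFactory();
    }
}

// Placeholder for the factory that would create the real EntityManager.
public class DefaultEntityManagerFactory : IEntityManagerFactory
{
    public IEntityManager Create()
    {
        throw new NotImplementedException("wraps the real EntityManager");
    }
}
```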
Don't lazy load entities in the view. Don't make business layer calls in the view. Load all the entities the view will need up front in the controller, compute all the sums and averages the view will need up front in the controller, etc. After all, that's what the controller is for.