How to configure Unit of Work - asp.net-mvc

I am making progress but still struggling with Unit of Work in a multi-layer MVC app. In the example here: http://www.asp.net/entity-framework/tutorials/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application the UoW wraps all of the Repositories and provides each with the same DbContext instance. The controller can then use the Repositories with something like:
var courses = unitOfWork.CourseRepository.Get(includeProperties: "Department");
Now suppose you have a Service layer which accesses the Repositories instead. You could configure it so that it has a dependency on an IUnitOfWork implementation, then pass in an EfUnitOfWork implementation via Unity. When the Service completes some task it can call unitOfWork.context.SaveChanges(). But this approach hides the Service's real dependencies: the repositories it needs. It also means that testing the Service requires building a full UoW.
So I was thinking there must be a different approach and am wondering if one of the following or what I mentioned above (or something else!) is the correct approach:
Service takes in the same repository arguments and also an IUnitOfWork. The repositories are wired up with a DbContext instance courtesy of Unity, and the EfUnitOfWork is wired with the same instance. The Service can then use the Repositories as before and, once finished, use the EfUnitOfWork to commit (see the sketch after these options).
Service just takes in an IUnitOfWork and sets up its required Repositories itself by passing them the injected IUnitOfWork's DbContext.
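To make the first option concrete, here is a rough sketch of what I have in mind (interface names like ICourseRepository and the Commit method are placeholders, not anything from the tutorial):

// Sketch of option 1: the Service declares the repositories it needs plus an
// IUnitOfWork; Unity wires all of them with the same underlying DbContext.
public class CourseService
{
    private readonly ICourseRepository _courses;
    private readonly IDepartmentRepository _departments;
    private readonly IUnitOfWork _unitOfWork;

    public CourseService(ICourseRepository courses,
                         IDepartmentRepository departments,
                         IUnitOfWork unitOfWork)
    {
        _courses = courses;
        _departments = departments;
        _unitOfWork = unitOfWork;
    }

    public void MoveCourse(int courseId, int departmentId)
    {
        var course = _courses.GetById(courseId);   // placeholder repository method
        course.DepartmentID = departmentId;
        _unitOfWork.Commit();                      // single commit for the whole task
    }
}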
Please help!
James

A service layer is normally designed to have each method doing a complete operation. The service layer method is responsible for handling the unit of work. Using this approach the unit of work should not span multiple calls to the service layer.
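For example, a service layer method might look roughly like this (a sketch only; ShopContext, the entity names and OrderStatus are placeholders for whatever your application uses):

public class OrderService
{
    private readonly ShopContext _context; // placeholder EF context

    public OrderService(ShopContext context)
    {
        _context = context;
    }

    // One complete operation: everything it changes is committed together.
    public void CancelOrder(int orderId)
    {
        var order = _context.Orders.Find(orderId);
        order.Status = OrderStatus.Cancelled;
        _context.SaveChanges(); // the unit of work begins and ends inside this method
    }
}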
If you have larger blocks of updates that you want to do together you can use transactions. Create a new TransactionScope and then call several service layer methods within it:
// TransactionScope lives in the System.Transactions namespace/assembly.
using (TransactionScope ts = new TransactionScope())
{
    ServiceLayer.DoSomething();
    ServiceLayer.DoSomethingElse();
    ts.Complete(); // TransactionScope uses Complete(), not Commit()
}

Right, so having explored the issue further I have come to the following conclusion, so I thought I would document it here to help others, or so that I can be corrected should my findings be wrong.
DbContext is a Unit of Work. I only need to pass this Unit of Work into the implemented EFRepository classes; it does not need to go into the Service classes. So how does a Service class ensure all related changes are coordinated by context.SaveChanges() when it does not have an instance of the DbContext? It calls EFRepository.Save(), which looks like the following:
public void Save()
{
    context.SaveChanges();
}
With this approach, Service classes depend only on Repositories. These dependencies are explicit and can be mocked for testing. When Unity injects the required Repository objects into a Service, it can provide each Repository with the same DbContext instance. In addition, only the Repositories have access to the DbContext.
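A minimal sketch of the Unity wiring that gives every repository the same context (type names are placeholders; HierarchicalLifetimeManager is one way to make repositories resolved together share a context, a per-request lifetime manager from the Unity MVC bootstrapper is another):

// Register the DbContext so that repositories resolved together share one instance.
var container = new UnityContainer();
container.RegisterType<SchoolContext>(new HierarchicalLifetimeManager());
container.RegisterType<ICourseRepository, EfCourseRepository>();
container.RegisterType<IStudentRepository, EfStudentRepository>();

// Both repositories injected into the service receive the same SchoolContext.
var service = container.Resolve<CourseService>();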
All of this may be obvious but it had me stumped. Or it may be plain wrong, in which case please let me know!
James

Related

What particular use is the interface for repository pattern?

I fully understand the idea behind the design of the Repository pattern. But why do we need to implement the IRepository interface? What particular use is it?
The repository class itself works without the interface class.
I think someone is going to answer that it's for decoupling the business logic from the data logic.
But even without the interface, isn't the data logic already decoupled?
It is so that you can inject a test double implementing IRepository when you are unit testing the business layer. This has the following benefits:
It allows you to easily pinpoint failing tests as being caused by the business layer rather than the repository layer;
It makes your business logic layer tests fast, as they depend neither on data access, which tends to be slow, nor set-up of a database structure and test data, which tends to be very slow.
One way to inject the test doubles when unit testing is by Constructor Injection. Suppose your Repository has the following methods:
public interface IRepository {
    void Add(Noun noun);
    int NumberOfNouns();
}
And this is the code of your business class:
public class BusinessClass {
    private IRepository _repository;

    public BusinessClass(IRepository repository) {
        _repository = repository;
    }

    // optionally, you can make your default constructor create an instance
    // of your default repository
    public BusinessClass() {
        _repository = new Repository();
    }

    // method which will be tested
    public void AddNoun(string noun) {
        _repository.Add(new Noun(noun));
    }
}
To test AddNoun without needing a real Repository, you need to set up a test double. Usually you would do this by using a mocking framework such as Moq, but I'll write a mock class from scratch just to illustrate the concept.
public class MockRepository : IRepository {
    private List<Noun> nouns = new List<Noun>();

    public void Add(Noun noun) {
        nouns.Add(noun);
    }

    public int NumberOfNouns() {
        return nouns.Count();
    }
}
Now one of your tests could be this.
[Test]
public void AddingNounShouldIncreaseNounCountByOne() {
    // Arrange
    var mockRepository = new MockRepository();
    var businessClassToTest = new BusinessClass(mockRepository);

    // Act
    businessClassToTest.AddNoun("cat");

    // Assert
    Assert.AreEqual(1, mockRepository.NumberOfNouns(), "Number of nouns in repository should have increased after calling AddNoun");
}
What this has achieved is that you have now tested the functionality of your BusinessClass.AddNoun method without needing to touch the database. This means that even if there's a problem with your Repository layer (a problem with a connection string, say) you have assurance that your Business layer is working as expected. This covers point 1 above.
As for point 2 above, whenever you're writing tests which test the database you should make sure it's in a known state before each test. This usually involves deleting all the data at the beginning of every test and re-adding test data. If this isn't done then you can't run assertions against, say, the number of rows in a table, because you won't be sure what that's supposed to be.
Deleting and re-adding test data would normally be done by running SQL scripts, which are slow and vulnerable to breakage whenever the database structure changes. Therefore it's advisable to restrict the use of the database only to the tests of the repository itself, and use mocked out repositories when unit testing other aspects of the application.
As for the use of abstract classes - yes, this would provide the same ability to supply test doubles. I'm not sure which code you would choose to put in the abstract base and which in the concrete implementation, though. This answer to an SO question has an interesting discussion on abstract classes vs interfaces.
First, you must understand what the Repository pattern is: an abstraction layer so that the rest of the application does not have to care where the data comes from.
Abstractions in .NET are typically represented by interfaces, as no logic (code) can be attached to an interface.
As a bonus, the interface also makes it easier for you to test your application, since you can mock the interface easily (or create a stub).
The interface also allows you to evolve your data layer. You might, for instance, start by using a database for all repository classes, but later want to move some logic behind a web service. Then you only have to replace the DB repository with a WCF repository. You might also discover that a repository is slow and want to implement a simple in-memory cache within it (using memcached or something else).
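As an illustration of that last point, a cache can be added as a decorator around the existing repository without touching its consumers (ICustomerRepository and Customer are made-up types for the example; requires System.Collections.Generic):

// A caching decorator: same interface, wraps the real repository.
public class CachedCustomerRepository : ICustomerRepository
{
    private readonly ICustomerRepository _inner;
    private readonly Dictionary<int, Customer> _cache = new Dictionary<int, Customer>();

    public CachedCustomerRepository(ICustomerRepository inner)
    {
        _inner = inner;
    }

    public Customer GetById(int id)
    {
        Customer customer;
        if (!_cache.TryGetValue(id, out customer))
        {
            customer = _inner.GetById(id); // fall through to the DB (or WCF) repository
            _cache[id] = customer;
        }
        return customer;
    }
}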
I found a very useful MSDN page demonstrating the idea of Repository and Test Driven Development:
http://blogs.msdn.com/b/adonet/archive/2009/12/17/walkthrough-test-driven-development-with-the-entity-framework-4-0.aspx

What is the right granularity for dependencies while doing constructor or setter injection?

I am trying to define some dependency injection guidelines for myself. What should be the right granularity while defining dependencies for a class that are to be injected either via constructor or setter injection? The class could be a service, repository, etc. Suppose there is a repository class, which looks like following:
public class ProductRepository
{
    //Option-A
    public ProductRepository(DataSource dataSource)
    {
    }

    //Option-B
    public ProductRepository(SqlSession sqlSession)
    {
    }

    //Option-C
    public ProductRepository(SqlSessionTemplate sqlSessionTemplate)
    {
    }
}
The minimum dependency required by the above class is DataSource interface. The repository class internally makes use of the SqlSessionTemplate (implementation of the SqlSession interface). As shown in the code, there are 3 choices for constructor for doing constructor injection. The following is my understanding:
Option-A (DataSource dependency)
This is the minimum dependency of the repository class. From the consumer's point of view this constructor is the right choice, but it is not suitable from a unit testing point of view because the DataSource is internally consumed by the SqlSessionTemplate in the repository implementation.
Option-B (SqlSession dependency)
This is the right choice from a unit testing point of view but not from the consumer's point of view. Additionally, the repository implementation becomes tightly coupled to a specific implementation of the interface, namely SqlSessionTemplate. Hence it will not work if the consumer passes some SqlSession implementation other than SqlSessionTemplate.
Option-C (SqlSessionTemplate dependency)
SqlSessionTemplate, being an implementation and not an interface, does not seem good for unit testing. It is also not good for the consumer, as instantiating a SqlSessionTemplate is more involved than providing a DataSource. Hence I am discarding this option.
Option-A and Option-B seem to be the viable choices, but there is a trade-off between the consumer's point of view and the unit testing point of view.
I am new to dependency injection. I seek advice from the DI experts. What is the right solution (if any)? What would you do in the above situation?
Thanks.
This is the minimum dependency of the repository class.
I think this is the starting point for figuring out the right amount of coupling. You should be injecting no more or less than is needed to fulfill the requirements.
That's a very general guideline, which is almost the equivalent of "it depends", but it's a good way to start thinking about it. I don't know enough about DataSource, SqlSession, or SqlSessionTemplate to answer in context.
The repository class internally makes use of the SqlSessionTemplate (implementation of the SqlSession interface)
Why can't the repository simply use the interface as a dependency? Does the interface not cover all the public methods of the implementation? If it doesn't, is the interface even a useful abstraction?
I can't completely piece together what you are trying to do and how the dependencies work, but my best guess is either of the following (the second option is sketched after this list):
You need both SqlSession and DataSource injected via the constructor, or
You need SqlSession injected via the Repository's constructor, and DataSource injected into the SqlSessionTemplate's constructor
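A minimal sketch of the second arrangement, reusing the question's type names; how SqlSessionTemplate is actually constructed from a DataSource is an assumption here, not a documented API:

// Sketch only: the repository depends on the SqlSession interface, while the
// concrete SqlSessionTemplate is built from the DataSource in the composition
// root (or by the DI container), not inside the repository.
public class ProductRepository
{
    private readonly SqlSession _session;

    public ProductRepository(SqlSession session)
    {
        _session = session;
    }
}

// Composition root (assumed constructor signature):
// SqlSession session = new SqlSessionTemplate(dataSource);
// var repository = new ProductRepository(session);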
You are talking about unit testing your repository, but that would typically be rather useless because a repository is your gateway to the database and has a strong coupling with it. Unit testing should be done in isolation, but a repository can only be tested with the database. Thus: an integration test.
If you were able to abstract the database specific logic from the repository (as you seem to be doing) there would be nothing left to test, since the responsibility of the repository is to communicate with the database. And if there still IS a lot left to test, well... in that case your repository classes are probably violating the Single Responsibility Principle, which makes your repositories hard to maintain.
So since you would typically test a repository itself using a database, from a testing perspective it doesn't really matter what you inject, since you will have to construct the repository in such a way that it can connect to a database anyway.

Unit of Work with Dependency Injection

I'm building a relatively simple webapp in ASP.NET MVC 4, using Entity Framework to talk to MS SQL Server. There's lots of scope to expand the application in future, so I'm aiming for a pattern that maximises reusability and adaptability in the code, to save work later on. The idea is:
Unit of Work pattern, to save problems with the database by only committing changes at the end of each set of actions.
Generic repository using BaseRepository<T> because the repositories will be mostly the same; the odd exception can extend and add its additional methods.
Dependency injection to bind those repositories to the IRepository<T> that the controllers will be using, so that I can switch data storage methods and such with minimal fuss (not just for best practice; there is a real chance of this happening). I'm using Ninject for this.
I haven't really attempted something like this from scratch before, so I've been reading up and I think I've got myself muddled somewhere. So far, I have an interface IRepository<T> which is implemented by BaseRepository<T>, which contains an instance of the DataContext which is passed into its constructor. This interface has methods for Add, Update, Delete, and various types of Get (single by ID, single by predicate, group by predicate, all). The only repository that doesn't fit this interface (so far) is the Users repository, which adds User Login(string username, string password) to allow login (the implementation of which handles all the salting, hashing, checking etc).
From what I've read, I now need a UnitOfWork class that contains instances of all the repositories. This unit of work will expose the repositories, as well as a SaveChanges() method. When I want to manipulate data, I instantiate a unit of work, access the repositories on it (which are instantiated as needed), and then save. If anything fails, nothing changes in the database because it won't reach the single save at the end. This is all fine. My problem is that all the examples I can find seem to do one of two things:
Some pass a data context into the unit of work, from which they retrieve the various repositories. This negates the point of DI by having my Entity-Framework-specific DbContext (or a class inherited from it) in my unit of work.
Some call a Get method to request a repository, which is the service locator pattern, which is at least unpopular, if not an antipattern, and either way I'd like to avoid it here.
Do I need to create an interface for my data source and inject that into the unit of work as well? I can't find any documentation on this that's clear and/or complete enough to explain.
EDIT
I think I've been overcomplicating it; I'm now folding my repository and unit of work into one - my repository is entirely generic so this just gives me a handful of generic methods (Add, Remove, Update, and a few kinds of Get) plus a SaveChanges method. This gives me a worker class interface; I can then have a factory class that provides instances of it (also interfaced). If I also have this worker implement IDisposable then I can use it in a scoped block. So now my controllers can do something like this:
using (var worker = DataAccess.BeginTransaction())
{
    Product item = worker.Get<Product>(p => p.ID == prodName);
    //stuff...
    worker.SaveChanges();
}
If something goes wrong before the SaveChanges(), then all changes are discarded when it exits the scope block and the worker is disposed. I can use dependency injection to provide concrete implementations to the DataAccess field, which is passed into the base controller constructor. Business logic is all in the controller and works with IQueryable objects, so I can switch out the DataAccess provider object for anything I like as long as it implements the IRepository interface; there's nothing specific to Entity Framework anywhere.
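For reference, the worker and factory interfaces behind that block look roughly like this (still a sketch; the exact Get overloads and names may change):

// Combined generic repository / unit of work ("worker"), plus the factory that
// creates it. Expression<> requires System.Linq.Expressions.
public interface IRepository : IDisposable
{
    void Add<T>(T item) where T : class;
    void Remove<T>(T item) where T : class;
    void Update<T>(T item) where T : class;
    T Get<T>(Expression<Func<T, bool>> predicate) where T : class;
    IQueryable<T> GetAll<T>() where T : class;
    void SaveChanges();
}

public interface IDataAccessFactory
{
    IRepository BeginTransaction();
}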
So, any thoughts on this implementation? Is this on the right track?
I prefer to have a UnitOfWork or a UnitOfWorkFactory injected into the repositories; that way I don't have to touch it every time a new repository is added. The responsibility of the UnitOfWork is just to manage the transaction.
Here is an example of what I mean.
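Something along these lines (a sketch with placeholder names; the unit of work owns the EF context and exposes only transaction control):

public interface IUnitOfWork
{
    void Commit();
}

// Wraps the EF context; its only job is to manage the transaction.
public class EfUnitOfWork : IUnitOfWork
{
    private readonly MyDbContext _context; // placeholder EF context

    public EfUnitOfWork(MyDbContext context)
    {
        _context = context;
    }

    public MyDbContext Context
    {
        get { return _context; }
    }

    public void Commit()
    {
        _context.SaveChanges();
    }
}

// Repositories receive the unit of work; adding a repository never changes it.
// Taking the concrete EfUnitOfWork here is just one way to expose the context.
public class ProductRepository
{
    private readonly EfUnitOfWork _unitOfWork;

    public ProductRepository(EfUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Add(Product product)
    {
        _unitOfWork.Context.Set<Product>().Add(product);
        // No SaveChanges here; the caller commits through IUnitOfWork.
    }
}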

Repository Pattern in asp.net mvc with linq to sql

I have been reading through the code of the NerdDinner app, and specifically the Repository pattern...
I have one simple question though, regarding this block
public DinnersController()
    : this(new DinnerRepository()) {
}

public DinnersController(IDinnerRepository repository) {
    dinnerRepository = repository;
}
What if each Dinner also had, say, a Category... my question is
Would you also initialize the Category repository in the constructor of the class?
I'm sure it would work, but I'm not sure whether the correct way is to initialize the repository inside the method that is going to use it or in the constructor of the class.
I would appreciate some insight on this issue
Thanks.
What you're looking at here is actually not so much to do with the repository pattern, per se, and more to do with "dependency injection," where the outside things on which this class depends are "injected" from without, rather than instantiated within (by calling new Repository(), for example).
This specific example shows "constructor injection," where the dependencies are injected when the object is created. This is handy because you can always know that the object is in a particular state (that it has a repository implementation). You could just as easily use property injection, where you provide a public setter for assigning the repository or other dependency. This forfeits the stated advantage of constructor injection, and is somewhat less clear when examining the code, but an inversion-of-control container can handle the work of instantiating objects and injecting dependencies in the constructor and/or properties.
This fosters proper encapsulation and improves testability substantially.
The fact that you aren't instantiating collaborators within the class is what improves testability (you can isolate the behaviour of a class by injecting stub or mock instances when testing).
The key word here when it comes to the repository pattern is encapsulation. The repository pattern takes all that data access stuff and hides it from the classes consuming the repository. Even though an ORM might be hiding all the actual CRUD work, you're still bound to the ORM implementation. The repository can act as a facade or adapter -- offering an abstract interface for accessing objects.
So, when you take these concepts together, you have a controller class that does not handle data access itself and does not instantiate a repository to handle it. Rather the controller accepts an injected repository, and knows only the interface. What is the benefit? That you can change your data access entirely and never ever touch the controller.
Getting further to your question, the repository is a dependency, and it is being provided in the constructor for the reasons outlined above. If you have a further dependency on a CategoryRepository, then yes, by all means inject that in the constructor as well.
Alternatively, you can provide factory classes as dependencies -- again classes that implement some factory interface, but instead of the dependency itself, this is a class that knows how to create the dependency. Maybe you want a different IDinnerRepository for different situations. The factory could accept a parameter and return an implementation according to some logic, and since it will always be an IDinnerRepository, the controller needs be none the wiser about what that repository is actually doing.
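For illustration, a factory dependency could look something like this (IDinnerRepositoryFactory, ArchiveDinnerRepository and the selection rule are invented for the example, and a FindAllDinners-style query method on the repository is assumed):

// A factory that decides which IDinnerRepository implementation to hand out;
// the controller only ever sees the interfaces.
public interface IDinnerRepositoryFactory
{
    IDinnerRepository Create(bool archived);
}

public class DinnerRepositoryFactory : IDinnerRepositoryFactory
{
    public IDinnerRepository Create(bool archived)
    {
        // Invented rule: archived dinners come from a read-only repository.
        return archived ? (IDinnerRepository)new ArchiveDinnerRepository()
                        : new DinnerRepository();
    }
}

public class DinnersController : Controller
{
    private readonly IDinnerRepositoryFactory _factory;

    public DinnersController(IDinnerRepositoryFactory factory)
    {
        _factory = factory;
    }

    public ActionResult Index(bool archived)
    {
        IDinnerRepository repository = _factory.Create(archived);
        return View(repository.FindAllDinners());
    }
}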
To keep your code decoupled and your controllers easily testable you need to stick with dependency injection so either:
public DinnersController()
    : this(new DinnerRepository(), new CategoryRepository()) {
}
or the less elegant
public DinnersController()
    : this(new DinnerRepository(new CategoryRepository())) {
}
I would have my dinner categories in my dinner repository personally. But if they had to be separate then I'd put them both in the ctor.
You'd want to pass it in to the constructor. That said, I probably wouldn't create any concrete class like it's being done there.
I'm not familiar with the NerdDinner app, but I think the preferred approach is to define an IDinnerRepository (and ICategoryRepository). If you code against interfaces and want to switch to, say, an XML file, a MySQL database or a web service, you would not need to change your controller code.
Pushing this out just a little further, you can look at IoC containers like Ninject. The gist of it is that you map your IDinnerRepository to a concrete implementation application-wide. Then whenever a controller is created, the concrete repository (or any other dependency you might need) is provided for you even though you're coding against an interface.
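With Ninject, that application-wide mapping is a short module (the binding shown here is a typical setup, not the only one; CategoryRepository mirrors the other answers' naming):

using Ninject.Modules;

// Maps repository interfaces to concrete implementations application-wide.
public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        Bind<IDinnerRepository>().To<DinnerRepository>();
        Bind<ICategoryRepository>().To<CategoryRepository>();
    }
}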
It depends on whether you will be testing your controllers (which you should be doing). Passing the repositories in via the constructor, and having them automatically injected by your IoC container, combines convenience with straightforward testing. I would suggest putting all needed repositories in the constructor.
If you seem to have a lot of different repositories in your constructors, it might be a sign that your controller is trying to do too many unrelated things. It might; sometimes using multiple repositories is legitimate.
Edit in response to comment:
A lot of repositories in one controller constructor might be considered a bad code smell, but a bad smell is not something wrong; it is something to look at because there might be something wrong. If you determine that having these activities handled in the same controller makes for the highest overall simplicity in your solution, then do that, with as many repositories as you need in the constructor.
I can use myself as an example of why many repositories in a controller is a bad smell. I tend to get too cute, trying to do too many things on a page or controller. I always get suspicious when I see myself putting a lot of repositories in the constructor, because I sometimes do try to cram too much into a controller. That doesn't mean it's necessarily bad. Or maybe the code smell does indicate a deeper problem, but one that is not too horrible; maybe you can fix it right away, maybe you won't ever fix it: not the end of the world.
Note: It can help minimize repositories when you have one repository per Aggregate root, rather than per Entity class.

Access to Entity Manager in ASP .NET MVC

Greetings,
Trying to sort through the best way to provide access to my Entity Manager while keeping the context open through the request to permit lazy loading. I am seeing a lot of examples like the following:
public class SomeController
{
    MyEntities entities = new MyEntities();
}
The problem I see with this setup is that if you have a layer of business classes that you want to make calls into, you end up having to pass the manager as a parameter to these methods, like so:
public static Series GetEntity(MyEntities entityManager, int id)
{
    return entityManager.Series.FirstOrDefault(s => s.SeriesId == id);
}
Obviously I am looking for a good, thread-safe way to provide the entityManager to the method without passing it. The approach also needs to be unit testable; my previous attempts at putting it in Session did not work for unit tests.
I am actually looking for the recommended way of dealing with the Entity Framework in ASP .NET MVC for an enterprise level application.
Thanks in advance
Entity Framework v1.0 excels in Windows Forms applications where you can use the object context for as long as you like. In ASP.NET, and MVC in particular, it's a bit harder. My solution to this was to make the repositories or entity managers more like services that MVC could communicate with. I created a sort of generic, all-purpose base repository I could use whenever I felt like it and stopped worrying too much about doing it "right". I would try to avoid leaving the object context open for even a millisecond longer than is absolutely needed in a web application.
Have a look at EF4. I started using EF in a production environment when it was in beta 0.75 or something similar and had no real issues with it, except for it being "hard work" sometimes.
You might want to look at the Repository pattern (here's a write up of Repository with Linq to SQL).
The basic idea would be that instead of creating a static class, you instantiate a version of the Repository. You can pass in your EntityManager as a parameter to the class in the constructor -- or better yet, a factory that can create your EntityManager for the class so that it can do unit of work instantiation of the manager.
For MVC I use a base controller class. In this class you could create your entity manager factory and make it a property of the class so deriving classes have access to it. Allow it to be injected from a constructor but created with the proper default if the instance passed in is null. Whenever a controller method needs to create a repository, it can use this instance to pass into the Repository so that it can create the manager required.
In this way, you get rid of the static methods and allow mock instances to be used in your unit tests. By passing in a factory -- which ought to create instances that implement interfaces, btw -- you decouple your repository from the actual manager class.
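A rough sketch of that shape, with IEntityManagerFactory invented for the example and MyEntities standing in for the generated context:

public interface IEntityManagerFactory
{
    MyEntities Create();
}

public class DefaultEntityManagerFactory : IEntityManagerFactory
{
    public MyEntities Create()
    {
        return new MyEntities();
    }
}

// Base controller: accepts an injected factory, falls back to the default.
public abstract class BaseController : Controller
{
    protected IEntityManagerFactory EntityManagerFactory { get; private set; }

    protected BaseController(IEntityManagerFactory factory)
    {
        EntityManagerFactory = factory ?? new DefaultEntityManagerFactory();
    }
}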
Don't lazy load entities in the view. Don't make business layer calls in the view. Load all the entities the view will need up front in the controller, compute all the sums and averages the view will need up front in the controller, etc. After all, that's what the controller is for.
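For example, rather than letting the view lazy-load, the controller can load and compute everything up front (a sketch reusing the question's MyEntities context; the Episodes navigation property is invented):

public ActionResult Details(int id)
{
    using (var entities = new MyEntities())
    {
        // Eagerly load the related data the view will render.
        var series = entities.Series
                             .Include("Episodes")
                             .FirstOrDefault(s => s.SeriesId == id);

        // Compute values in the controller, not the view.
        ViewData["EpisodeCount"] = (series != null) ? series.Episodes.Count : 0;

        return View(series);
    }
}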
