Taking my MVC to the next level: DI and Unit of Work - asp.net-mvc

I have looked at simpler applications like NerdDinner and ContactManager as well as more complicated ones like Kigg. I understand the simpler ones and now I would like to understand the more complex ones.
Usually the simpler applications have repository classes and interfaces (as loosely coupled as they can get) on top of either LINQtoSQL or the Entity Framework. The repositories are called from the controllers to do the necessary data operations.
One common pattern I see when I examine more complicated applications like Kigg or Oxite is the introduction of (I am only scratching the surface here but I have to start somewhere):
IoC/DI (in Kigg's case, Unity)
Web Request Lifetime Manager
Unit of Work
Here are my questions:
I understand that in order to truly have a loosely coupled application you have to use something like Unity. But it also seems like the moment you introduce Unity to the mix, you also have to introduce a Web Request Lifetime Manager. Why is that? Why do sample applications like NerdDinner not have a Web Request Lifetime Manager? What exactly does it do? Is it a Unity-specific thing?
A second pattern I notice is the introduction of Unit of Work. Again, same question: why do NerdDinner and ContactManager not use Unit of Work? Instead, these applications use repository classes on top of Linq2Sql or Entity Framework to do the data manipulation, with no sign of any Unit of Work. What exactly is it and why should it be used?
Thanks
Below is an example of DI in NerdDinner, at the DinnersController level:
public DinnersController()
    : this(new DinnerRepository()) {
}

public DinnersController(IDinnerRepository repository) {
    dinnerRepository = repository;
}
So am I right to assume that, because of the first constructor, the controller "owns" the DinnerRepository, and its lifetime will therefore depend on the lifetime of the controller since it is created there?

When Linq-to-SQL is used directly, your controller owns the reference to the data context. It's usually a private reference inside the controller, and so is created as part of its construction. There's no need for lifetime management, since it's all in one place.
However, when you use an IoC container, your data repositories are created outside your controller. Since the IoC container that creates them for you doesn't know how, and for how long, you're going to use the created object, a lifetime strategy is introduced.
For example, a data context (repository) is usually created at the beginning of the web request and destroyed at the end. However, for components that work with an external web service, or some static mapper (e.g. a logger), there's no need to create them each time, so you may want them created only once (i.e. a singleton lifestyle).
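To make this concrete, here is a minimal sketch of how such lifetimes are declared in Unity (the interface and class names are illustrative; PerRequestLifetimeManager ships with Unity's ASP.NET MVC integration package, while older setups used a custom HttpContext-based lifetime manager instead):

var container = new UnityContainer();

// A fresh repository per web request; the same instance is reused
// within a single request.
container.RegisterType<IDinnerRepository, DinnerRepository>(
    new PerRequestLifetimeManager());

// One logger for the whole application (singleton lifestyle).
container.RegisterType<ILogger, FileLogger>(
    new ContainerControlledLifetimeManager());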
All this happens because IoC containers (like Unity) are designed to handle many situations, and they don't know your specific needs. For example, some applications use "conversation" transactions, where an NHibernate session (or perhaps an Entity Framework context) may last over several pages / web requests. IoC containers allow you to tweak object lifetimes to suit your needs. But as said, this comes at a price: since there's no single predefined strategy, you have to select one yourself.
The reason NerdDinner and other sample applications do not use more advanced techniques is simply that they are intended to demonstrate MVC features, not advanced usage of other libraries. I remember an article written to demonstrate one IoC container's advanced functionality; it broke some approved design principles like separation of concerns, but that wasn't important because design patterns were not the goal of the article. It's the same with simple MVC demonstration applications: they do not want you, the MVC newcomer, to be lost in IoC labyrinths.
And I would not recommend looking at Oxite as a design reference example:
http://codebetter.com/blogs/karlseguin/archive/2008/12/15/oxite-oh-dear-lord-why.aspx
http://ayende.com/Blog/archive/2008/12/19/oxite-open-exchangable-informative-troubled-engine.aspx

Most, if not all, of the DI containers touch the concept of lifetimes, I believe. Depending on the scenario involved, you may want the container to always return the same instance of a registered component, while for another component you may want it to always return a new instance. Most containers also allow you to specify that within a particular context you want it to return the same instance, etc.
I don't know Unity very well (so far I have used Windsor and Autofac), but I suspect the web request lifetime manager is an implementation of a lifetime strategy where the same instance is provided by the container for the duration of a single web request. You will find similar strategies in containers like Windsor.
Finally, I suppose you are referring to Unit of Work. A Unit of Work is in essence a group of actions that you want to succeed or fail as one atomic business transaction. For a more formal description, look at Martin Fowler's definition. It is a concept that has gained popularity in the context of Domain-Driven Design. A unit of work keeps track of the changes you apply in such a transaction, and when the time is right, it commits these changes in one ACID transaction. In NHibernate, for example, the session supports the notion of unit of work and, more specifically, the change tracking, while in Linq2SQL it is the DataContext.
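A minimal sketch of the idea using the LINQ to SQL DataContext as the unit of work (the Northwind-style names are just for illustration):

using (var db = new NorthwindDataContext())
{
    var order = new Order { CustomerID = "ALFKI" };
    db.Orders.InsertOnSubmit(order);          // tracked, not yet persisted

    var customer = db.Customers.First(c => c.CustomerID == "ALFKI");
    customer.ContactName = "Maria Anders";    // this change is tracked too

    db.SubmitChanges();                       // both changes commit together
}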

Related

Entity Framework and ASP.NET MVC - how to directly make use of DbContext and DbSet to behave as Repository and Unit of Work patterns?

I am in the process of making project decisions on development patterns for a solution which involves the use of Entity Framework 6 as the ORM choice, and ASP.NET MVC 5.
I need insight on how transactions and business logic will be implemented. In respect to layers, I came to an initial assumption for the design where Entity Framework on top of SQL Server can be considered the Data Access Layer (DAL). On top of Entity Framework, there will be a Service Layer, where business logic and validation will be implemented. On top of the Service Layer, I will have ASP.NET MVC Controllers consuming what the service layer offers.
Let me elaborate on this initial conclusion, drawn as a starting point for defining the architecture:
I want to follow principles that achieve the minimum complexity possible in respect to layers, abstractions, and the responsibilities of all the solution components. As an exercise, with this "simplicity" in mind, I could just embrace the template "proposed" by Microsoft when you create a new Visual Studio ASP.NET MVC web application, but I believe that does not fit the minimum design needed for an enterprise application: in Microsoft's template, the controller directly makes use of the Entity Framework DbContext and consumes the Data Access Layer, and no service layer is present.
This leads to several issues, such as extremely tight coupling between the presentation and data access layers, as well as the so-called "fat controller" problem, where controllers become the bloated piece of the software, carrying all the added responsibilities of business logic, transactions, and so on. That makes maintainability a mess, with, for example, the most basic principle of DRY (don't repeat yourself) being violated, since you would get duplicated code and logic all over your fat controllers.
Ramping up to the next stage on the path from simplicity to complexity, I assume it is fair to add a Service Layer to the design. This way, ASP.NET MVC controllers would talk only to the service layer, which would be responsible for all CRUD operations and their validation, plus all other more complex business logic operations. This Service Layer would in turn talk to the data access layer, represented by Entity Framework.
I would stop there and say the design with these proposed layers is enough, but that's where I need more insight on how to proceed. I need to resolve the question of how transactions would be implemented, if you think of them as a wrapper for a series of individual operations performed by methods responsible for validation and business logic residing in classes inside the service layer. In terms of implementation using Entity Framework, if I have every individual operation performed by a service layer method issue its own .SaveChanges(), I lose the ability to have the DbContext behave like a Unit of Work, wrapping a single .SaveChanges() around many individual DbSet operations. In this arrangement, the DbSets behave like repositories; many people argue that Entity Framework's DbContext and DbSet are implementations of the Unit of Work and Repository patterns, respectively.
To put it more objectively: how can I implement these patterns using DbContext and DbSet directly, without any further abstraction into new generic or entity-specific repository classes and a generic unit-of-work class? The implementation needs to rely on the consumption of a service layer, for the reasons I have already stated.
I think an answer to that would be the last complexity leap I feel is necessary to reach my "least complex viable design".
Let me put a more concrete example to illustrate:
In my service layer, I have 2 methods implementing the validation logic for 2 insert operations, each with a programmer-defined method Insert, such as:
EntityOneService.Insert
EntityTwoService.Insert
Each of these methods, in its corresponding service layer class, would have access to a DbContext and use DbSet.Add to signal that an entity should be persisted, in case all validation and/or business logic passes. The desired scenario is that I can use each service layer method call in an isolated way, and/or in groups, such as in a new, different service layer class method like:
OperationOnePlusTwoService.Insert
This new method would implement calls to EntityOneService.Insert and EntityTwoService.Insert, IN A TRANSACTION-LIKE FASHION.
By transaction-like I mean that all calls must succeed, without violating any validation or business rule, in order for the persistence layer to commit the operations.
DbContext.SaveChanges() apparently would have to be called only once for this to happen, OUTSIDE of any service layer Insert method implementation. In the context of an ASP.NET controller consuming service layer methods, how could that be achieved without an actual implementation of Unit of Work and Repository abstractions over DbContext and DbSet?
Any advice would be very much appreciated. I am not posting this to argue the value of a real abstraction and implementation of the Repository and Unit of Work patterns, or whether Entity Framework's DbContext and DbSet are or are not equivalent to proper Repository and Unit of Work implementations; that's not the point. My project requirements do not in any way involve the need to decouple the application from Entity Framework, or to fully and ideally promote testability.
These are not concerns, and I am well aware of the consequences and future maintainability impacts of not adopting full-fledged implementations of half a dozen layers and every design pattern possible to make a big, world-class enterprise solution.
desired scenario is that I can use each service layer method call in an isolated way ... but that behave IN A TRANSACTION-LIKE FASHION.
This is rather simple with EF, assuming your services are not remote (in which case transactions are not advisable in the first place).
In the simplest implementation, each service instance requires a DbContext to be passed in its constructor, and contributes its changes to that DbContext. The orchestrating controller code can control the lifetime of the DbContext and control its use of transactions.
In practice, interfaces rather than concrete service types are typically used, and Dependency Injection is often used instead of direct construction. But the pattern is the same.
For example:
using (var db = new MyDbContext())
using (var s1 = new SomeService(db))
using (var s2 = new SomeOtherService(db))
using (var tran = db.Database.BeginTransaction())
{
    s1.DoStuff();
    s2.DoStuff();
    tran.Commit();
}
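If the goal is the question's literal "single SaveChanges outside of any service Insert method", a variation on the same pattern (all names illustrative; a sketch rather than a prescription) is to let the services only stage changes and have the orchestrating code commit once:

public class EntityOneService
{
    private readonly MyDbContext _db;
    public EntityOneService(MyDbContext db) { _db = db; }

    public void Insert(EntityOne entity)
    {
        // run validation / business rules here; throw to abort the whole unit
        _db.EntityOnes.Add(entity); // staged only, nothing hits the database yet
    }
}

// In the controller, or in an OperationOnePlusTwoService:
using (var db = new MyDbContext())
{
    new EntityOneService(db).Insert(one);
    new EntityTwoService(db).Insert(two);
    db.SaveChanges(); // single atomic commit for both inserts
}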
David

Designing repositories for DI (constructor injection) for service layer

I'm building an MVC3 app, trying to use IoC and constructor injection. My database has (so far) about 50 tables. I am using EF4 (w/ POCO T4 template) for my data access code. I am using the repository pattern, and each table has its own repository. My service classes in my service layer are injected with these repositories.
Problem: My service classes are growing in the number of repositories they need. In some cases, I am approaching 10 repositories, and it's starting to smell.
Is there a common approach for designing repositories and service classes such that the services don't require so many repositories?
Here are my thoughts, I'm just not sure which one is right:
1) This is a sign I should consider combining/grouping my repositories into related sections of tables, reducing the number of dependent repositories per service class. The problem with this approach, though, is that it will bloat and complicate my repositories, and will keep me from being able to use a common interface for all repositories (standard methods for data retrieval/update).
2) This is a sign I should consider breaking my services into groups based on my repositories (tables). The problem with this is that some of my service methods share common implementation, and breaking these across classes may complicate my dependencies.
3) This is a sign that I don't know what I'm doing, and have something fundamentally wrong that I'm not even able to see.
UPDATE: For an idea of how I'm implementing EF4 and repositories, check out this sample app on CodePlex (I used version 1). However, looking at some of the comments there (and here), it looks like I need to do a bit more reading to make sure this is the route I want to take -- it sounds like it may not be.
Chandermani is right that some of your tables might not be core domain classes. This means you would never search for that data except in terms of a single type of parent entity. In those cases you can reference them as "complex types" rather than full-blown entities, and EF will still take care of you.
I am using the repository pattern, and each table has its own repository
I hope you're not writing these yourself from scratch.
EF 4.1 already implements the Repository pattern (DbSet) and the Unit of Work pattern (DbContext). The older versions do too, and the DbContext template can easily be tweaked to provide a clean, mockable implementation by changing those properties to IDbSet.
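A minimal sketch of that tweak (entity and context names are assumed) exposes the sets as IDbSet so a fake context can stand in during tests:

public interface IMyDbContext
{
    IDbSet<Dinner> Dinners { get; }
    int SaveChanges();
}

public class MyDbContext : DbContext, IMyDbContext
{
    // EF initializes IDbSet properties just like DbSet ones
    public IDbSet<Dinner> Dinners { get; set; }
}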
I've seen several tutorial articles where people still write their own, though. It is strange to me, because they usually don't provide a justification, other than the fact that they are "implementing the Repository Pattern".
Writing wrappers around these repositories for access methods like FindById makes access slightly easier, but as you've seen it's a big amount of effort for potentially little payback. Personally, unless I find that there is interesting domain logic or complex queries to be encapsulated, I don't even bother, and just use LINQ directly against the IDbSet.
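In that spirit, querying the set directly is usually short enough that a wrapper adds little (a sketch; Dinner and its properties are illustrative):

var dinner = db.Dinners.SingleOrDefault(d => d.DinnerId == id);

var upcoming = db.Dinners
    .Where(d => d.EventDate >= DateTime.Now)
    .OrderBy(d => d.EventDate)
    .ToList();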
My service classes in my service layer are injected w/ these repositories.
Even if you choose to use custom query wrappers, you might choose to simply inject the DbContext and let the service code instantiate the wrappers it needs. You'd still be able to mock your data access layer; you just wouldn't be able to mock the wrapper code. I'd still recommend injecting the less generic wrappers, though, because complex implementation is exactly the type of thing you'd like to be able to factor out in maintenance, or replace with mocks.
If you look at the DDD Aggregate Root pattern and try to see your data from this perspective, you will realize that many of the tables do not have an independent existence at all. Their data is only valid in the context of their parent, and most operations on them require you to fetch the parent as well. If you can group such tables and find the parent entity/repository, all the other child repositories can be removed. The complexity of associating parent and child, which until now you would handle in your business layer (assuming you retrieve parent and child using independent repositories), now shifts to the DAL.
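A sketch of what that grouping looks like in code (all names invented for illustration): the child type loses its repository and is reached only through its aggregate root.

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }
    public OrderLine(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }
}

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public IEnumerable<OrderLine> Lines { get { return _lines; } }

    public void AddLine(string product, int quantity)
    {
        // the parent-child association lives inside the aggregate,
        // not in the business layer
        _lines.Add(new OrderLine(product, quantity));
    }
}

// One repository for the whole aggregate; no IOrderLineRepository exists.
public interface IOrderRepository
{
    Order FindById(int id);   // loads the order together with its lines
    void Add(Order order);    // persists the whole aggregate
}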
Refactoring the service interface is also a viable option: any common functionality can be moved into a base class and/or can itself be defined as a service which is consumed by all your existing services (Is-A vs Has-A).
@Chandermani has a good point about aggregate roots. Repositories should not necessarily have a 1:1 mapping to tables.
Getting large numbers of dependencies injected in is a good sign your services are doing too much. Follow the Single Responsibility Principle, and refactor them into more manageable pieces.
Are your services writing to all of the repositories? I find that my services line up pretty closely with repositories, in that they provide the business logic around the CRUD operations that the repositories expose.

IOC Container type resolution and injection location

Is it best practice to resolve and inject concrete types at the edge of the domain model and then have these fall down through the domain? For example, having the container inject concrete types into MVC controller constructors in a web app, or into service endpoints in a service-based app?
My understanding of container object graph wire up is a little ropey.
Is it ever appropriate to do the equivalent of Container.Resolve() within the domain?
DI is really only a means to an end: loose coupling. It is a way to enable loose coupling by injecting interfaces (or base classes) into consumers so that you can vary both independently of each other.
As a general rule, nothing much is gained by injecting a concrete type. You can't swap the type with another type, so the main advantage of DI is lost.
You could argue that this means you might just as well create the concrete instances from within the consumers, but a better alternative is to extract interfaces from those types (and then inject them).
And no: it's never appropriate to pull from the container from within the Domain Model. That is the Service Locator anti-pattern. The Hollywood Principle applies here as well:
Don't call the container; it'll call you
(That said, even with a concrete type there are some secondary benefits from injecting it. If it's non-sealed and has one or more virtual members, you can still override a bit of its behavior, and even if it's sealed, you still get to control its lifetime if you inject it - e.g. you can share the same instance between multiple consumers. However, these benefits are purely secondary and usually not the main reason we decide to inject anything.)
Another question (and the one you seem to be actually asking) is whether it's appropriate to inject services just to be passing them on to other services. No, it's not, since it would violate the Single Responsibility Principle and lead to Constructor Over-Injection.
It's better to wrap fine-grained services in more coarse-grained services. I call these Aggregate Services or Abstract Facades. While these will themselves have dependencies (like the service endpoints you mention), those are implementation details. From the point of view of the top-level consumer, they don't exist.
Not only does this nicely solve the issue around too many dependencies in the constructor, it also helps you have better isolation between application layers.
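A bare-bones sketch of such an Aggregate Service (every name here is invented for illustration): the consumer sees one coarse-grained dependency, and the fine-grained services become implementation details.

public class Order { }

public interface IPaymentService { void Charge(Order order); }
public interface IShippingService { void Ship(Order order); }
public interface INotificationService { void NotifyCustomer(Order order); }

public interface IOrderFulfillment
{
    void Fulfill(Order order);
}

public class OrderFulfillment : IOrderFulfillment
{
    private readonly IPaymentService _payments;
    private readonly IShippingService _shipping;
    private readonly INotificationService _notifications;

    public OrderFulfillment(IPaymentService payments,
                            IShippingService shipping,
                            INotificationService notifications)
    {
        _payments = payments;
        _shipping = shipping;
        _notifications = notifications;
    }

    public void Fulfill(Order order)
    {
        // the orchestration of the fine-grained services is hidden
        // behind the single coarse-grained interface
        _payments.Charge(order);
        _shipping.Ship(order);
        _notifications.NotifyCustomer(order);
    }
}

A controller then takes a single IOrderFulfillment in its constructor instead of three separate services.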
Check out Krzysztof Koźmic's blog post(s) about the subject - I think he has some great opinions about this, and they pretty much sum up what seems to be the current "best practice".

Different ways to inject dependencies in ASP.NET MVC Controllers?

In most samples I have seen on the web, DI in MVC controllers is done like this:
public ProductController(IProductRepository rep)
{
    this._rep = rep;
}
A custom ControllerFactory is used and it utilizes the DI framework of choice and the repository is injected.
Why is the above considered better than
public ProductController()
{
    this._rep = ObjectFactory.GetInstance<IProductRepository>();
}
This will get the same results but doesn't require a custom controller factory.
As far as testing is concerned, the test app can have a separate BootStrapper. That way, when the controllers are being tested they can get the fake repositories, and when they are used for real they will get the real ones.
Constructor injection (the first approach) is better than the service locator pattern (the second approach) for several reasons.
First, service locator hides dependencies. In your second example, looking at the public interface alone, there's no way to know that ProductControllers need repositories.
What's more, I've got to echo OdeToCode. I think
IProductRepository repository = Mockery.NewMock<IProductRepository>();
IProductController controller = new ProductController(repository);
is clearer than
ObjectFactory.SetFactory(typeof(IProductRepository), new MockRepositoryFactory());
IProductController controller = new ProductController();
Especially if the ObjectFactory is configured in a test fixture's SetUp method.
Finally, the service locator pattern is demonstrably sub-optimal in at least one particular case: when you're writing code that will be consumed by people writing applications outside of your control. I wager that people generally prefer constructor injection (or one of the other DI methods) because it's applicable for every scenario. Why not use the method that covers all cases?
(Martin Fowler offers a much more thorough analysis in "Inversion of Control Containers and the Dependency Injection Pattern", particularly the section "Service Locator vs Dependency Injection").
The primary drawback of the second constructor is that now your IoC container has to be properly configured for each test. This setup can become a real burden as the code base grows and the test scenarios become more varied. The tests are generally easier to read and maintain when you explicitly pass in a test double.
Another concern is coupling a huge number of classes to a specific DI/IoC framework. There are ways to abstract it away, of course, but you still have code littered throughout your classes to retrieve dependencies. Since all the good frameworks can figure out what dependencies you need by looking at the constructor, it’s a lot of wasted effort and duplicated code.
When you use the second approach, disadvantages are:
Huge and unreadable test setup/context methods are needed
The container is coupled to the controller
You will need to write a lot more code
Why do you want to use an IoC container anyway if you don't want dependency injection?

When to use Dependency Injection

I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple, or you need to do a lot more analysis of the problem domain to discover which components are most likely to change and which components carry a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is in creating an application that can be deployed for several clients, using DI to inject customisations for each client - which could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for the services it needs, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
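For instance, a minimal sketch of that configuration with a container (Unity syntax; IClock, SystemClock and the other names are made up):

var container = new UnityContainer();

// One instance, application-wide - no hand-written singleton boilerplate.
container.RegisterType<IClock, SystemClock>(
    new ContainerControlledLifetimeManager());

// In a test there is no container at all; just hand in an alternative:
var service = new ReportService(new FakeClock());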
Before I was using DI I didn't really understand its worth, but trying it was a real eye-opener to me: my designs are a lot more object-oriented than they were before.
By the way, with my current application I DON'T unit-test (bad, bad me), but I STILL couldn't live without DI anymore. It is so much easier to move things around and keep classes small and simple.
While I semi-agree with you on the DB example, one of the big things I found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing, or possibly for demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd-party components and frameworks in this way will also help you.
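A sketch of that XML-instead-of-database demo idea (every name is assumed, not from the original demo):

// requires System.Collections.Generic, System.Linq and System.Xml.Linq
public class Customer { public string Name { get; set; } }

public interface ICustomerSource
{
    IEnumerable<Customer> LoadCustomers();
}

public class XmlCustomerSource : ICustomerSource
{
    private readonly string _path;
    public XmlCustomerSource(string path) { _path = path; }

    public IEnumerable<Customer> LoadCustomers()
    {
        // demo data comes from an XML file instead of the database
        return XDocument.Load(_path)
            .Descendants("customer")
            .Select(x => new Customer { Name = (string)x.Attribute("name") })
            .ToList();
    }
}

// The domain logic neither knows nor cares which source it was given:
public class CustomerReport
{
    private readonly ICustomerSource _source;
    public CustomerReport(ICustomerSource source) { _source = source; }
    public int CustomerCount() { return _source.LoadCustomers().Count(); }
}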
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you inject into it. While it may not truly "process" them, it runs the methods, which have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say, for example, I have a class Foo that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking an instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
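As a sketch of such a test, assume we extract an IBar interface from Bar and change Foo's constructor to accept it (the [Test] attribute is NUnit-style; a mocking library could produce the fake instead of this hand-rolled stub):

public interface IBar
{
    PropertyEnum SomeProperty { get; }
}

// hand-rolled stub: we control the property, no datasource involved
public class StubBar : IBar
{
    public PropertyEnum SomeProperty { get; set; }
}

[Test]
public void IsPropertyOfBarValid_ReturnsTrue_ForValidProperty()
{
    var stub = new StubBar { SomeProperty = PropertyEnum.ValidProperty };
    var foo = new Foo(stub);
    Assert.IsTrue(foo.IsPropertyOfBarValid());
}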
There are two types of fake object, mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test so it can decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy, or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file, and you need only the XML file, you will have to instantiate the object and configure a lot of things if you don't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.
