Different ways to inject dependencies in ASP.NET MVC Controllers? - asp.net-mvc

In most samples I have seen on the web, DI in MVC Controllers is done like this
public ProductController(IProductRepository rep)
{
    this._rep = rep;
}
A custom ControllerFactory is used; it hooks into the DI framework of choice, which injects the repository (a sketch of such a factory follows).
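For context, here is a minimal sketch of such a factory, assuming StructureMap and the GetControllerInstance override point on ASP.NET MVC's DefaultControllerFactory:

public class StructureMapControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(
        RequestContext requestContext, Type controllerType)
    {
        // No matching route gives a null controller type; let the base handle it.
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);

        // The container builds the controller, resolving constructor
        // arguments such as IProductRepository along the way.
        return (IController)ObjectFactory.GetInstance(controllerType);
    }
}

It would be registered once at startup with ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory()).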
Why is the above considered better than
public ProductController()
{
    this._rep = ObjectFactory.GetInstance<IProductRepository>();
}
This will get the same results but doesn't require a custom controller factory.
As far as testing is concerned, the test app can have a separate bootstrapper. That way, when the controllers are under test they get the fake repositories, and when they are used for real they get the real ones.

Constructor injection (the first approach) is better than the service locator pattern (the second approach) for several reasons.
First, the service locator hides dependencies. In your second example, looking at the public interface alone, there's no way to know that a ProductController needs a repository.
What's more, I've got to echo OdeToCode. I think
IProductRepository repository = Mockery.NewMock<IProductRepository>();
IProductController controller = new ProductController(repository);
is clearer than
ObjectFactory.SetFactory(typeof(IProductRepository), new MockRepositoryFactory());
IProductController controller = new ProductController();
Especially if the ObjectFactory is configured in a test fixture's SetUp method.
Finally, the service locator pattern is demonstrably sub-optimal in at least one particular case: when you're writing code that will be consumed by people writing applications outside of your control. I wager that people generally prefer constructor injection (or one of the other DI methods) because it's applicable for every scenario. Why not use the method that covers all cases?
(Martin Fowler offers a much more thorough analysis in "Inversion of Control Containers and the Dependency Injection Pattern", particularly the section "Service Locator vs Dependency Injection").

The primary drawback of the second constructor is that your IoC container now has to be properly configured for each test. This setup can become a real burden as the code base grows and the test scenarios become more varied. The tests are generally easier to read and maintain when you explicitly pass in a test double.
Another concern is coupling a huge number of classes to a specific DI/IoC framework. There are ways to abstract it away, of course, but you still have code littered throughout your classes to retrieve dependencies. Since all the good frameworks can figure out what dependencies you need by looking at the constructor, it’s a lot of wasted effort and duplicated code.

When you use the second approach, the disadvantages are:
Huge and unreadable test setup/context methods are needed
The container is coupled to the controller
You will need to write a lot more code
Why do you want to use an IoC container at all when you don't want dependency injection?

Related

What's the point of Dependency Injection Frameworks?

I am sure that I am somewhat lost in this area... my understanding is that dependency injection means initializing something that is required by a class. So, for instance:
If my controller is going to need a service, and I want to be able to test it, then I should define two constructors for it... so my question is: why do people use frameworks to achieve this? I'm lost.
public class CompaniesController : Controller
{
    private ICompaniesService _service;

    public CompaniesController()
    {
        _service = new CompaniesService();
    }

    public CompaniesController(ICompaniesService service)
    {
        _service = service;
    }
}
A major reason is to better support unit testing and mocking out objects to create controlled tests.
By not specifying the implementation inside the class, you can 'inject' an implementation at run time. (In your example, the ICompaniesService interface).
At runtime, using an inversion of control/dependency injection container such as StructureMap, Unity or Castle Windsor, you can say "hey, anytime someone wants an instance of ICompaniesService give them a new CompaniesService object".
To unit test this class, you can mock out an ICompaniesService and supply it yourself to the constructor. This allows you to set up controlled methods on the mock object. If you couldn't do this, your unit tests for CompaniesController would be limited to using the one real implementation of your companies service, which could hit a live database, making your unit tests both slow and inconsistent.
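As a sketch of both halves (using Moq for the test side; the GetAll method and Company type are invented for illustration):

// Production wiring, e.g. with StructureMap:
ObjectFactory.Initialize(x =>
    x.For<ICompaniesService>().Use<CompaniesService>());

// Unit test: a mock stands in for the real service.
var mockService = new Mock<ICompaniesService>();
mockService.Setup(s => s.GetAll()).Returns(new List<Company>());

var controller = new CompaniesController(mockService.Object);
// ...exercise the controller and assert, without touching a database.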
People don't use a Dependency Injection Framework to generate the code that you provided in your example. That's still the work of the developer.
The Dependency Injection Framework is used when somebody calls the constructor. The framework injects the concrete implementation of ICompaniesService rather than the developer explicitly constructing it.
While it is a specific product, the Ninject homepage actually has some really good examples.
From Wikipedia:
Without the concept of dependency injection, a consumer who needs a particular service "ICompaniesService" in order to accomplish a certain task would be responsible for handling the life-cycle (instantiating, opening and closing streams, disposing, etc.) of that service. Using the concept of dependency injection, however, the life-cycle of a service is handled by a dependency provider/framework (typically a container) rather than the consumer. The consumer would thus only need a reference to an implementation of the service "ICompaniesService" that it needed in order to accomplish the necessary task.
Read this one too:
What is dependency injection?
People use Dependency Injection frameworks because otherwise you have to write a metric ton of boring, repetitive factory classes (if using dependency injection, that is).
It can be done, it's just very, very annoying.
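To make "boring, repetitive factory classes" concrete, this is the kind of hand-rolled wiring a container writes for you (reusing the names from the question; multiply it by every class in the application):

public static class CompaniesControllerFactory
{
    public static CompaniesController Create()
    {
        // Each dependency is newed up by hand, here and in every
        // other factory like this one.
        ICompaniesService service = new CompaniesService();
        return new CompaniesController(service);
    }
}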
I think your understanding is only partly correct. Dependency injection is "injecting" a dependency of a component into it. It's a more specific form of inversion of control. During this process, some dependencies can also be initialized before injection.
With DI, the onus of looking up the dependency is not on the component (as in the ServiceLocator pattern) but on the container in which the component is running. The pattern has the following advantages:
Dependency lookup code can be eliminated from the component.
Some frameworks provide auto-wiring, saving you from manually creating your component hierarchies (this answers one of your questions)
It allows you to interchange implementations of your dependencies (without using factories)
Testing (replacing dependencies with mock implementations)
In your code example, a dependency can be injected by a DI container at runtime via the second constructor. Other forms of injection are also possible, depending on the DI container, e.g. field injection and setter injection.
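For example, setter (property) injection would look something like this sketch; whether the container fills the property automatically, and how you mark it, depends on the container:

public class CompaniesController : Controller
{
    // No constructor parameter; the container assigns this property
    // after it creates the controller.
    public ICompaniesService Service { get; set; }
}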
There's a good article by Martin Fowler, who coined the DI term.
You don't have to have a DI framework, but at some point in your codebase a concrete implementation is going to need to be instantiated and injected into your constructors/properties. It can get very messy if you don't have a DI framework. I recommend looking at Castle Windsor, although as mentioned there are others that will perform the same functionality. Or you could roll your own... :)
I'll pitch in:
You can accomplish dependency injection simply by having a parameterized function definition.
However, in order to make that work consistently, everyone has to actually do it. Many find that it's easier to enforce the convention by using a factory design pattern.
Dependency injection frameworks solve the problem of reducing the boilerplate of writing those factories.
I would say that dependency injection through a factory is non-ideal. In my opinion, factories add an extra layer of indirection and, in a sense, make deterministic functions non-deterministic: they become a function of their inputs plus state from the rest of the program (your DI setup) that you cannot see from the function definition alone. I'd argue that in many cases it's really not too hard to follow the convention and manually inject classes via the function arguments. Then again, for a large codebase, it's probably easier to enforce the rule through a factory.
...that being said, sometimes I do wonder whether these large codebases would even be this large if they didn't write so many things that try to preemptively solve problems they don't have in the first place.

Utility of IoC and Dependency Injection

There are some cases in which unit tests don't work for the project.
I'm studying the utility of Inversion of Control and Dependency Injection, and I would like to know whether there are good reasons to use them other than making unit tests easier.
--update
Ok, let's analyze one of the cited advantages: less coupling.
You take the coupling out of the child type, and you add coupling to the handler type that needs to create the objects to inject.
Without unit testing, what's the advantage of this coupling transfer (it is a transfer, not an elimination)?
IoC/DI brings some very important features to your application:
Pluggability: with DI you can inject a dependency into the code without the consumer knowing how the functionality is actually implemented.
For example: your class might get an ILog interface injected so that it can write logs. Since the class works against the ILog interface, you could implement a FileLog, MemoryLog or DatabaseLog and inject any of them into the class. Each of these implementations will work fine as long as it implements the ILog interface (see the sketch after this list).
Testability: with DI in your class, you can inject mock objects to test its behaviour without needing the concrete implementations.
For example: consider a controller class which needs a repository to perform data operations. Here the repository can be injected into the controller, so if you need to write tests for the controller class, you can pass in a mocked repository without having to work with the actual repository class.
Configurability: some of the common DI frameworks, like Castle Windsor, Unity and Spring, support DI along with lifetime management of the objects they create. This is a very powerful feature and allows you to manage dependencies and their lifetimes via configuration. For example, suppose your application needs an ICache dependency. Via lifetime and object-management configuration, you can make the cache per-application, per-session or per-request, without having to explicitly bake that choice into your code.
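Here is the pluggability example as a sketch (ILog, FileLog and the consuming class are illustrative names, not a specific library):

public interface ILog
{
    void Write(string message);
}

public class FileLog : ILog
{
    public void Write(string message)
    {
        // Appends to a file; a MemoryLog or DatabaseLog could be
        // swapped in without touching any consumer.
        System.IO.File.AppendAllText("app.log", message + Environment.NewLine);
    }
}

public class OrderProcessor
{
    private readonly ILog _log;

    public OrderProcessor(ILog log)
    {
        _log = log;
    }

    public void Process()
    {
        _log.Write("Processing order");
    }
}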
HTH
IoC reduces coupling, which has a correlation with defect rates in some studies. (If that really long link doesn't work, it's to Software Engineering Quality Practices by Ronald Kirk Kandt.)
Sure, here are a few reasons:
Dynamic generation of proxies for remoting and transactions
Aspect oriented programming
Layering using interfaces and separation of implementation
Enough?
From the IoC wikipedia article:
There is a decoupling of the execution of a certain task from implementation.
Every system can focus on what it is designed for.
Every system does not make assumptions about what other systems do or should do.
Replacing systems will have no side effect on other systems.
While I would call the above feature list a bit vague, you see most of the above benefits, even without testing.
If I had to say it in a nutshell, I would say that IoC significantly improves separation of concerns which is a valuable goal in software development.
Yes, dependency injection helps you make your classes more focused, clearer*, and easier to change, because it makes it easier to adhere to the single-responsibility principle.
It also makes it easier to vary parts of your application independently of one another.
When you use constructor injection in particular, it's easier to tell what your code needs to do its job. If the WeatherUpdater class requires an IWeatherRepository in its constructor, no one is surprised that it uses a database.
* Again, constructor injection only.

Taking my MVC to the next level: DI and Unit of Work

I have looked at simpler applications like Nerddinner and ContactManager as well as more complicated ones like Kigg. I understand the simpler ones and now I would like to understand the more complex ones.
Usually the simpler applications have repository classes and interfaces (as loosely coupled as they can get) on top of either LINQtoSQL or the Entity Framework. The repositories are called from the controllers to do the necessary data operations.
One common pattern I see when I examine more complicated applications like Kigg or Oxite is the introduction of (I am only scratching the surface here but I have to start somewhere):
IoC/DI (in Kigg's case, Unity)
Web Request Lifetime manager
Unit of Work
Here are my questions:
I understand that in order to truly have a loosely coupled application you have to use something like Unity. But it also seems like the moment you introduce Unity to the mix you also have to introduce a Web Request Lifetime Manager. Why is that? Why is it that sample applications like Nerddinner do not have a Web Request Lifetime Manager? What exactly does it do? Is it a Unity specific thing?
A second pattern I notice is the introduction of Unit of Work. Again, same question: Why does Nerddinner or ContactManager not use Unit of Work? Instead these applications use the repository classes on top of Linq2Sql or Entity Framework to do the data manipulation. No sign of any Unit of Work. What exactly is it and why should it be used?
Thanks
Below is an example of DI in NerdDinner at the DinnersController level:
public DinnersController()
: this(new DinnerRepository()) {
}
public DinnersController(IDinnerRepository repository) {
dinnerRepository = repository;
}
So am I right to assume that, because of the first constructor, the controller "owns" the DinnerRepository, and its lifetime will therefore depend on the lifetime of the controller, since it is created there?
When Linq-to-SQL is used directly, your controller owns the reference to the data context. It's usually a private reference inside the controller, created as part of the controller's construction. There's no need for lifetime management, since it all lives in one place.
However, when you use an IoC container, your data repository is created outside your controller. Since the IoC container that creates it for you doesn't know how, or for how long, you're going to use the created object, a lifetime strategy is introduced.
For example, a data context (repository) is usually created at the beginning of the web request and destroyed at the end. However, for components that work with an external web service, or some static mapper (e.g. a logger), there's no need to create them each time. So you may want to say they are created only once (i.e. a singleton lifestyle).
All this happens because IoC containers (like Unity) are designed to handle many situations, and they don't know your specific needs. For example, some applications use "conversation" transactions where an NHibernate session (or Entity Framework context, perhaps) lasts across several pages / web requests. IoC containers allow you to tweak object lifetimes to suit your needs, but as said, this comes at a price: since there's no single predefined strategy, you have to select one yourself.
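As a sketch of what choosing a lifetime looks like (assuming Unity; PerRequestLifetimeManager comes from Unity's ASP.NET MVC integration package, and the logger types are invented):

var container = new UnityContainer();

// A new repository per web request, disposed when the request ends.
container.RegisterType<IDinnerRepository, DinnerRepository>(
    new PerRequestLifetimeManager());

// One instance for the whole application (singleton lifestyle).
container.RegisterType<ILogger, FileLogger>(
    new ContainerControlledLifetimeManager());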
Why NerdDinner and the other sample applications do not use more advanced techniques is simply because they are intended to demonstrate MVC features, not advanced usages of other libraries. I remember an article written to demonstrate one IoC container's advanced functionality; it broke some approved design patterns, like separation of concerns, but that wasn't important because design patterns were not the goal of the article. It is the same with simple MVC demonstration applications: they do not want you, the MVC newcomer, to be lost in IoC labyrinths.
And I would not recommend looking at Oxite as a design reference example:
http://codebetter.com/blogs/karlseguin/archive/2008/12/15/oxite-oh-dear-lord-why.aspx
http://ayende.com/Blog/archive/2008/12/19/oxite-open-exchangable-informative-troubled-engine.aspx
Most if not all of the DI containers touch the concept of lifetimes, I believe. Depending on the scenario involved, you may want the container to always return the same instance of a registered component, while for another component you may want it to always return a new instance. Most containers also allow you to specify that within a particular context you want it to return the same instance, etc.
I don't know Unity very well (so far I have used Windsor and Autofac), but I suspect the web request lifetime manager is an implementation of a lifetime strategy where the same instance is provided by the container for the duration of a single web request. You will find similar strategies in containers like Windsor.
Finally, I suppose you are referring to Unit of Work. A Unit of Work is in essence a group of actions that you want to succeed or fail as one atomic business transaction. For a more formal description, look at Martin Fowler's definition. It is a concept that has gained popularity in the context of Domain-Driven Design. A unit of work keeps track of the changes you apply in such a transaction, and when the time is right, it commits those changes in one ACID transaction. In NHibernate, for example, the session supports the notion of unit of work and, more specifically, the change tracking, while in Linq2SQL it is the DataContext.
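A hand-rolled sketch of the idea (the interface and class names are illustrative, not from NerdDinner or Kigg; the actual change tracking is done by Linq2SQL's DataContext):

public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class LinqToSqlUnitOfWork : IUnitOfWork
{
    private readonly System.Data.Linq.DataContext _context;

    public LinqToSqlUnitOfWork(System.Data.Linq.DataContext context)
    {
        _context = context;
    }

    public void Commit()
    {
        // Every pending insert, update and delete tracked by the
        // context is submitted as one transaction.
        _context.SubmitChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}

With a per-web-request lifetime, every repository in a request can share the same unit of work, and the controller (or an action filter) calls Commit() once at the end.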

When to use Dependency Injection

I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). Often it is said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis of the problem domain to discover which components are most likely to change and which components in your system carry a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is in creating an application that can be deployed for several clients, using DI to inject each client's customisations; this could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for its services, and you're freed from having to provide all the wiring manually.
This really helps me concentrate on the interaction of things in the software design, and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener to me: my designs are a lot more object-oriented than they were before.
Also, with the current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier moving things around and keeping classes small and simple.
While I semi-agree with you on the DB example, one of the big things I have found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you inject into it. While it may not truly "process" them, it runs the methods, which have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private readonly Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated with its properties set from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set from. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking an instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
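A sketch of such a test with Moq (this assumes SomeProperty is declared virtual on Bar, since Moq can only override virtual members; injecting an IBar interface instead would remove that requirement):

var fakeBar = new Mock<Bar>();
fakeBar.SetupGet(b => b.SomeProperty)
       .Returns(PropertyEnum.ValidProperty);

var foo = new Foo(fakeBar.Object);
Assert.IsTrue(foo.IsPropertyOfBarValid());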
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test so it can decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood this tonight.
For me, dependency injection is a method for instantiating objects that require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection when you instantiate an object in a static way. For example, if you use a class which can convert objects into XML or JSON files and you need only the XML output, you will have to instantiate the object and configure a lot of things if you don't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.

When do you use dependency injection?

I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses that I've found helpful for an Inversion of Control pattern:
A plugin architecture. By making the right entry points you can define the contract for the service that must be provided.
A workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
A per-client application. Say you have various clients who pay for a set of "features" in your project. By using dependency injection you can easily provide just the core components plus the "added" components that implement only the features the client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime (a sketch follows). By doing this, you make it trivial to test the two parts separately.
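As a sketch of that refactoring (all names are invented for illustration), a class that statically depends on DateTime.Now is hard to test; extracting an IClock interface breaks the static dependency:

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class InvoiceChecker
{
    private readonly IClock _clock;

    // The DI framework injects SystemClock at runtime;
    // a test injects a fake clock with a fixed time.
    public InvoiceChecker(IClock clock)
    {
        _clock = clock;
    }

    public bool IsOverdue(DateTime dueDate)
    {
        return _clock.Now > dueDate;
    }
}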
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). At a lot of companies, the amount of time it would take to test your logic would be almost prohibitive if your tests depended on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The work involved in changing resources should be small (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, across projects I might provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all the other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate that may be replaced in the near future, use dependency injection. If I see "class A and its dependencies" as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/MicroKernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want a dependency through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (e.g. a logger), or b) find that, because of the number of constructor parameters or a significant amount of logic in the constructor, the class is otherwise difficult to mock.
