Configuring mapping behavior that's non-framework specific

I'm trying to configure my Object Mapper without knowing which mapper I'm using. :/
This might sound a bit strange. The reason for this is that I'm trying out the Onion Architecture so my UI cannot know about my Object Mapper located in my Infrastructure. See this solution for an example.
I'm having some trouble figuring out how I should "delegate" the non-default mapping behavior.
Stuff like:
Mapper
    .CreateMap<MyModel, MyDestViewModel>()
    .ForMember(
        dest => dest.SomeDestinationProperty,
        opt => opt.MapFrom(src => src.SomeSourceProperty)
    );
I've setup a class in my MVC project which is called from Global.asax and this is where I want to configure my mappings.
public static class MapConfig
{
    public static void RegisterMaps()
    {
    }
}
I was thinking I could do something like the following (IMapper is a self-defined interface located in Domain):
public static void RegisterMaps(HttpConfiguration config)
{
    var mapper = (IMapper)config.DependencyResolver.GetService(typeof(IMapper));
    mapper.CreateMap<MyModel, MyViewModel>();
}
Now... how would I go about setting up special behavior like the .ForMember? Keeping in mind that it cannot be AutoMapper specific.
I was thinking something along these lines: mapper.CreateMap<MyModel, MyViewModel>(Expression<Func<T>>), where the Func would do some black magic that I cannot figure out right now :( - Am I on the right path, or have I missed something essential?

Onion Architecture isn't about the configuration being implementation-agnostic, it's about the execution.
Just create an IMapper interface for the execution of mappings, but don't worry about the configuration. This applies to your ORM, IoC container and everything else.
Also, Onion Architecture isn't about project structure, it's about the direction of your dependencies. Just call CreateMap in your UI. You can then define an IMapper interface all the way down in Core, with the other pieces implementing a version that delegates to AutoMapper.
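For illustration, a minimal sketch of what that delegation might look like (the adapter name is invented, and the call assumes AutoMapper's classic static Mapper API):

// In Core/Domain: no reference to AutoMapper anywhere.
public interface IMapper
{
    TDestination Map<TSource, TDestination>(TSource source);
}

// In Infrastructure (or the UI): the AutoMapper-backed adapter.
public class AutoMapperAdapter : IMapper
{
    public TDestination Map<TSource, TDestination>(TSource source)
    {
        // Execution is delegated; configuration (CreateMap, ForMember)
        // stays AutoMapper-specific and lives wherever you compose the app.
        return AutoMapper.Mapper.Map<TSource, TDestination>(source);
    }
}

Consumers depend only on IMapper, so the execution of mappings stays implementation-agnostic even though the configuration is not.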

You're abstracting away useful functionality that will cost you more time than you initially realize. Why not spend the time choosing a mapper and sticking with it?
Why is it so important that your UI doesn't know about your mapper? Assuming you're using MVC, you're going to be flexing a lot of your chosen mapper's functionality to flatten out your domain models into view models anyway.
It's the same kind of nonsense where people use generic repository implementations 'just in case' they decide to switch ORMs mid-project.
Choose your infrastructure carefully and stick with it.

Related

Dependency injection: Is it ok to instantiate a concrete object from a concrete factory

I am fairly new to Dependency Injection, and I wrote a great little app that worked exactly like Mark Seemann told me it would and the world was great. I even added some extra complexity to it just to see if I could handle that using DI. And I could, happy days.
Then I took it to a real world application and spent a long time scratching my head. Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects, and I should instead let the IoC do this for me.
However, say that I have a repository and I want it to be able to return me a list of things, thusly:
public interface IThingRepository
{
    IEnumerable<IThing> GetThings();
}
Surely at least one implementation of this interface will have to instantiate some Things? And it doesn't seem so bad to allow ThingRepository to new up some Things, as they are related anyway.
I could pass a POCO around instead, but at some point I'm going to have to convert the POCO into a business object, which would require me to new something up.
This situation seems to occur every time I want a number of things which is not knowable in the Composition Root (i.e. we only find out this information later, for example when querying the database).
Does anyone know what the best practice is in these kinds of situations?
In addition to Steven's answer, I think it is ok for a specific factory to new up the specific matching implementation that it was created for.
Update
Also, check this answer, specifically the comments, which say something about new-ing up instances.
Example:
public interface IContext {
    T GetById<T>(int id) where T : class;
}

public interface IContextFactory {
    IContext Create();
}

public class EntityContext : DbContext, IContext {
    public T GetById<T>(int id) where T : class {
        // Retrieve from the db, e.g. via EF's Set<T>().Find
        return Set<T>().Find(id);
    }
}

public class EntityContextFactory : IContextFactory {
    public IContext Create() {
        // I think this is ok, since the factory was specifically created
        // to return the matching implementation of IContext.
        return new EntityContext();
    }
}
Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects
That's not what Mark Seemann tells you, or what he means. You must make a clear separation between services (controlled by your composition root) on one side and primitives, entities, DTOs, view models and messages on the other. Services are injectables; all other types are newables. You should only prevent using new on service types. It would be silly to prevent newing up strings, for instance.
Since in your example the service is a repository, it seems reasonable to assume that the repository returns domain objects. Domain objects are newables and there's no reason not to new them manually.
Thanks for the answers everybody, they led me to the following conclusions.
Mark makes a distinction between stable and unstable dependencies in the book I am reading ("Dependency Injection in .NET"). Stable dependencies (e.g. strings) can be created at will. Unstable dependencies should be moved behind a seam / interface.
A dependency is anything that is in a different assembly from the one that we are writing.
An unstable dependency is any of the following
It requires a run time environment to be set up such as a database, web server, maybe even the file system (otherwise it won't be extensible or testable, and it means we couldn't do late binding if we wanted to)
It doesn't exist yet (otherwise we can't do parallel development)
It requires something that isn't installed on all machines (otherwise it can cause test difficulties)
It contains non deterministic behaviour (otherwise impossible to test well)
So this is all well and good.
However, I often hide things behind seams within the same assembly. I find this extremely helpful for testing. For example, if I am doing a complex calculation, it is impossible to test the entire calculation well in one go. If I split the calculation up into lots of smaller classes and hide these behind seams, then I can easily inject arbitrary intermediate results into a calculating class, as the sketch below shows.
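As a purely hypothetical illustration of such an intra-assembly seam (all names here are invented):

public interface ISubCalculation
{
    decimal Calculate(decimal input);
}

public class Calculator
{
    private readonly ISubCalculation _step;

    public Calculator(ISubCalculation step)
    {
        _step = step;
    }

    public decimal Run(decimal input)
    {
        // In a test, a stub ISubCalculation can inject any
        // intermediate result we like.
        var intermediate = _step.Calculate(input);
        return intermediate * 2; // the rest of the calculation
    }
}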
So, having had a good old think about it, these are my conclusions:
It is always OK to create a stable dependency
You should never create unstable dependencies directly
It can be useful to use seams within an assembly, particularly to break up big classes and make them more easily testable.
And in answer to my original question, it is ok to instantiate a concrete object from a concrete factory.

Why does one use dependency injection?

I'm trying to understand dependency injection (DI), and once again I have failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon) and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note that I have never needed to use object either, but I understand what it is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get the point: Often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger, then Database has a dependency to Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's quite a good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is:
ICanLog logger = new Logger();
Now the declared type no longer changes; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you move the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data saying which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. You could also put this mapping inside an XML file, for example. This allows you to change the actual class in use even after compile time (!), that is, dynamically, without recompiling!
To give you a useful example for this: think of software that does not normally log, but when your customer calls and asks for help because he has a problem, all you send him is an updated XML config file; now he has logging enabled, and your support can use the log files to help your customer.
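A bare-bones sketch of such a generalized factory (invented code; the mapping is registered in code here rather than loaded from XML, and real containers do far more):

using System;
using System.Collections.Generic;

public static class TypeFactory
{
    // Interface type -> concrete type. In practice this mapping
    // could be loaded from an XML/config file at startup.
    private static readonly Dictionary<Type, Type> Map = new Dictionary<Type, Type>();

    public static void Register<TInterface, TImplementation>()
        where TImplementation : TInterface, new()
    {
        Map[typeof(TInterface)] = typeof(TImplementation);
    }

    public static T Create<T>()
    {
        return (T)Activator.CreateInstance(Map[typeof(T)]);
    }
}

With this in place, TypeFactory.Register<ICanLog, TcpLogger>() at startup makes every subsequent TypeFactory.Create<ICanLog>() call return a TcpLogger.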
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces the dependencies in your code, but now all your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of the various classes asking the service locator for an implementation of a specific interface, you - once again - invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you can not do without a service locator (e.g., if you do not know in advance how many instances you do need of a specific interface: A DI framework always injects only one instance per parameter, but you can call a service locator inside a loop, of course), hence most often each DI framework also provides a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where not constructor parameters but properties are used for defining and resolving dependencies. Think of property injection as an optional dependency, and of constructor injection as a mandatory dependency. But a full discussion of this is beyond the scope of this question.
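A quick sketch of the two styles side by side (ReportGenerator and NullLogger are invented for illustration; the interfaces echo the ones above):

public interface ICanLog { void Log(string message); }
public interface ICanPersistData { }
public class NullLogger : ICanLog { public void Log(string message) { } }

public class ReportGenerator
{
    private readonly ICanPersistData _database;

    // Constructor injection: a mandatory dependency; the object
    // cannot even be built without it.
    public ReportGenerator(ICanPersistData database)
    {
        _database = database;
    }

    // Property injection: an optional dependency with a safe default
    // that callers may overwrite.
    public ICanLog Logger { get; set; } = new NullLogger();
}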
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
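For instance, assuming B were extracted behind an interface IB (hypothetical here) and A's constructor widened to accept it, a hand-rolled fake can be passed straight in - no framework required:

public interface IB { void DoIt(); }

public class FakeB : IB
{
    public bool WasCalled;
    public void DoIt() { WasCalled = true; }
}

public class ATests
{
    public void DoSomeStuff_UsesB()
    {
        var fake = new FakeB();
        var a = new A(fake);   // assumes A now takes an IB
        a.DoSomeStuff();
        // assert that fake.WasCalled is true
    }
}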
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses it. You inject them from the outside, and take control about their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of control (IoC) principle.
IoC is the principle; DI is the pattern. The "you might need more than one logger" scenario is never actually met, as far as my experience goes, but the real reason is that you genuinely need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = new Formdata(); // contents elided in the original
    // System under test
    var weasel = new OfferWeasel();
    // Act
    var offer = weasel.Create(formdata);
    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds you an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now into the test code, it may be off by a couple of milliseconds and will therefore always fail. A better solution is to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other allows you to fake the time wherever it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = new Formdata(); // contents elided in the original
    var time = new CannedTime { Now = date };
    // System under test
    var weasel = new OfferWeasel(time);
    // Act
    var offer = weasel.Create(formdata);
    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you have applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing, but there are other ways of achieving it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same thing (see the sketch below). Or, if you take a dynamic approach, you can pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
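A minimal sketch of that Func<DateTime> variant (same Offer and Formdata types as in the example above):

public class OfferWeasel
{
    private readonly Func<DateTime> _now;

    public OfferWeasel(Func<DateTime> now)
    {
        _now = now;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _now();
        return offer;
    }
}

// Production: new OfferWeasel(() => DateTime.Now)
// Test:       new OfferWeasel(() => new DateTime(2013, 1, 13, 13, 1, 0))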
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you need to maintain all of this code in a single class because it's not decoupled properly: for every new processor you support, you'll need to add a new if / switch case to every method, and this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you can achieve something very neat and maintainable.
interface PaymentProcessor {
    void authorize();
}

class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException(type);
        }
    }
}
** The code won't compile, I know :)
The main reason to use DI is that you want to put the responsibility for knowledge of the implementation where that knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time; code pieces like this get created many times:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, so the front end knows which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the knowledge of the interface.
In between DbI and DI sits the pattern of a service locator, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation of each interface at a time. If you need several implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem, though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or on a set of behaviours. The latter is crucial here.
I prefer administering service contracts together with D(b)I in coding. They should go together. Using D(b)I as a technical solution without organizational administration of service contracts is not very beneficial in my view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time, not at a single point in time. :-)
Quite frankly, I believe people use these Dependency Injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like
var logger = new Logger(); // sane, simple code
and the appropriate logger will be instantiated, because the JVM (or whatever other runtime or .so loader you have) will fetch it from the class configured via the environment variable mentioned above.
No need to make everything an interface, no need for the insanity of spawning half-built objects just to have stuff injected into them, no need for insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using, instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class at load time, and skip the mocking-framework madness.

Dependency Injection with Ninject, MVC 3 and using the Service Locator Pattern

Something that has been bugging me since I read an answer on another stackoverflow question (the precise one eludes me now) where a user stated something like "If you're calling the Service Locator, you're doing it wrong."
It was someone with a high reputation (in the hundred thousands, I think) so I tend to think this person might know what they're talking about. I've been using DI for my projects since I first started learning about it and how well it relates to Unit Testing and what not. It's something I'm fairly comfortable with now and I think I know what I'm doing.
However, there are a lot of places where I've been using the Service Locator to resolve dependencies in my project. One prime example comes from my ModelBinder implementations.
Example of a typical model binder.
public class FileModelBinder : IModelBinder {
    public object BindModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext) {
        ValueProviderResult value = bindingContext.ValueProvider.GetValue("id");
        IDataContext db = Services.Current.GetService<IDataContext>();
        return db.Files.SingleOrDefault(i => i.Id == value.AttemptedValue);
    }
}
not a real implementation - just a quick example
Since the ModelBinder implementation requires a new instance when a Binder is first requested, it's impossible to use Dependency Injection on the constructor for this particular implementation.
It's this way in a lot of my classes. Another example is that of a Cache Expiration process that runs a method whenever a cache object expires in my website. I run a bunch of database calls and what not. There too I'm using a Service Locator to get the required dependency.
Another issue I had recently (that I posted a question on here about) was that all my controllers required an instance of IDataContext which I used DI for - but one action method required a different instance of IDataContext. Luckily Ninject came to the rescue with a named dependency. However, this felt like a kludge and not a real solution.
I thought I, at least, understood the concept of Separation of Concerns reasonably well but there seems to be something fundamentally wrong with how I understand Dependency Injection and the Service Locator Pattern - and I don't know what that is.
The way I currently understand it - and this could be wrong as well - is that, at least in MVC, the ControllerFactory looks for a Constructor for a Controller and calls the Service Locator itself to get the required dependencies and then passes them in. However, I can understand that not all classes and what not have a Factory to create them. So it seems to me that some Service Locator pattern is acceptable...but...
When is it not acceptable?
What sort of pattern should I be on the look out for when I should rethink how I'm using the Service Locator Pattern?
Is my ModelBinder implementation wrong? If so, what do I need to learn to fix it?
In another question along the lines of this one, user Mark Seemann recommended an Abstract Factory. How does this relate?
I guess that's it - I can't really think of any other question to help my understanding but any extra information is greatly appreciated.
I understand that DI might not be the answer to everything and I might be going overboard in how I implement it, however, it seems to work the way I expect it to with Unit Testing and what not.
I'm not looking for code to fix my example implementation - I'm looking to learn, looking for an explanation to fix my flawed understanding.
I wish stackoverflow.com had the ability to save draft questions. I also hope whoever answers this question gets the appropriate amount of reputation for answering this question as I think I'm asking for a lot. Thanks, in advance.
Consider the following:
public class MyClass
{
    IMyInterface _myInterface;
    IMyOtherInterface _myOtherInterface;

    public MyClass(IMyInterface myInterface, IMyOtherInterface myOtherInterface)
    {
        // Foo
        _myInterface = myInterface;
        _myOtherInterface = myOtherInterface;
    }
}
With this design I am able to express the dependency requirements for my type. The type itself isn't responsible for knowing how to instantiate any of the dependencies; they are given to it (injected) by whatever resolving mechanism is used, typically an IoC container. Whereas:
public class MyClass
{
    IMyInterface _myInterface;
    IMyOtherInterface _myOtherInterface;

    public MyClass()
    {
        // Bar
        _myInterface = ServiceLocator.Resolve<IMyInterface>();
        _myOtherInterface = ServiceLocator.Resolve<IMyOtherInterface>();
    }
}
Our class is now dependent on creating the specific instances, albeit via delegation to a service locator. In this sense, Service Location can be considered an anti-pattern: you're not exposing dependencies, and you're allowing problems that could be caught at compile time to bubble up into runtime. (A good read is here.) You're hiding complexity.
The choice between one or the other really depends on what you're building on top of and the services it provides. Typically, if you're building an application from scratch, I would choose DI all the time. It improves maintainability, promotes modularity and makes testing types a whole lot easier. But, taking ASP.NET MVC3 as an example, you could easily implement SL, as it's baked into the design.
You can always go for a composite design where you use IoC/DI together with SL, much like using the Common Service Locator. Your component parts could be wired up through DI but exposed through SL. You could even throw composition into the mix and use something like the Managed Extensibility Framework (which itself supports DI, but can also be wired to other IoC containers or service locators). It's a big design choice to make; generally my recommendation would be IoC/DI where possible.
I wouldn't say your specific design is wrong. In this instance, your code is not responsible for creating an instance of the model binder itself; that's up to the framework, so you have no control over that. But your use of the service locator could probably be changed quite easily to access an IoC container. Then again, the act of calling resolve on the IoC container... would you not consider that service location?
With the abstract factory pattern, the factory is specialised in creating specific types. You don't register types for resolution; you essentially register an abstract factory, and it builds any types that you may require. With a service locator, the design is to locate services and return those instances. They are similar from a convention point of view, but very different in behaviour, as the sketch below contrasts.
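To make that behavioural difference concrete, compare the two shapes (both interfaces invented for illustration; IDataContext echoes the model binder example above):

// Abstract factory: narrow and compile-time checked; it can build
// exactly one family of types and nothing else.
public interface IDataContextFactory
{
    IDataContext Create();
}

// Service locator: wide open; any type can be requested, and a
// missing registration only surfaces at runtime.
public interface IServiceLocator
{
    T Resolve<T>();
}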

Which dependencies should I inject?

When using dependency injection which dependencies do you inject?
I have previously injected all dependencies but have found when doing TDD there are typically two types of dependency:
Those which are genuine external dependencies which may change e.g. ProductRepository
Those which exist purely for testability, e.g. part of the behaviour of the class that has been extracted and injected just for testability
One approach is to inject ALL dependencies like this
public ClassWithExternalDependency(IExternalDependency external,
    IExtractedForTestabilityDependency internalDependency)
{
    // assign dependencies ...
}
but I've found this can cause dependency bloat in the DI registry.
Another approach is to hide the "testability dependency" like this
public ClassWithExternalDependency(IExternalDependency external)
    : this(external, new ConcreteClassOfInternalDependency())
{
}

internal ClassWithExternalDependency(IExternalDependency external,
    IExtractedForTestabilityDependency internalDependency)
{
    // assign dependencies ...
}
This is more effort but seems to make a lot more sense. The downside being not all objects are configured in the DI framework, thereby breaking a "best practice" that I've heard.
Which approach would you advocate and why?
I believe you're better off injecting all of your dependencies. If it starts to get a little unwieldy, that's probably an indication that you need to simplify things a bit or move the dependencies into another object. Feeling the "pain" of your design as you go can be really enlightening.
As for dependency bloat in the registry, you might consider using some sort of convention-based binding technique rather than registering each dependency by hand. Some IoC containers have convention-based type-scanning bindings built into them. For example, here's part of a module I use in a Caliburn WPF application that uses Ninject:
public class AppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IShellPresenter>().To<ShellPresenter>().InSingletonScope();
        BindAllResults();
        BindAllPresenters();
    }

    /// <summary>
    /// Automatically bind all presenters that haven't already been manually bound
    /// </summary>
    public void BindAllPresenters()
    {
        Type[] types = Assembly.GetExecutingAssembly().GetTypes();
        IEnumerable<Type> presenterImplementors =
            from t in types
            where !t.IsInterface
                  && t.Name.EndsWith("Presenter")
            select t;

        presenterImplementors.Run(
            implementationType =>
            {
                if (!Kernel.GetBindings(implementationType).Any())
                    Bind(implementationType).ToSelf();
            });
    }
}
Even though I have dozens of results and presenters running around, I don't have to register them explicitly.
I certainly won't inject all dependencies, because where would you stop? Do you want to inject your string dependencies? I only invert the dependencies that I need for unit testing. I want to stub my database (see this example, for instance). I want to stub the sending of e-mail messages. I want to stub the system clock. I want to stub writing to the file system.
The thing about inverting as many dependencies as you can, even those you don't need for testing, is that it makes unit testing a lot harder, and the more you stub out, the less you really test how the system actually behaves. This makes your tests much less reliable. It also complicates your DI configuration in the application root.
I would wire all my non-external dependencies by hand and 'register' only external dependencies. By non-external I mean the objects that belong to my component and that were extracted out to interfaces purely for the sake of single responsibility/testability; I would never have any other implementations of such interfaces. External dependencies are things like DB connections, web services, and interfaces which don't belong to my component. I register those as interfaces because their implementations can be switched to stubs for integration testing. Having a small number of components registered in the DI container makes the DI code easier to read and bloat-free.
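A sketch of that split (OrderService, OrderParser and OrderValidator are invented names; the container calls assume a Windsor-style API):

using System.Data;
using Castle.Windsor;

public static class CompositionRoot
{
    public static OrderService Compose(IWindsorContainer container)
    {
        // External dependency: resolved from the container so it can
        // be swapped for a stub during integration testing.
        var connection = container.Resolve<IDbConnection>();

        // Internal, single-implementation dependencies: wired by hand.
        var parser = new OrderParser();
        var validator = new OrderValidator();

        return new OrderService(connection, parser, validator);
    }
}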

ASP.NET MVC IoC usability

How often do you use IoC for controllers/DAL in real projects?
IoC lets you abstract the application from a concrete implementation behind an additional layer of interfaces that must be implemented. But how often does the concrete implementation actually change? Should we really do the job twice, adding a method to the interface and then to the implementation, if the implementation will hardly ever be changed? I took part in about 10 ASP.NET projects, and the DAL (ORM-like and not) was never rewritten completely.
Watching lots of videos, I clearly understand that IoC "is cool" and a really nice way to program, but is it really needed?
Added a bit later:
Yes, IoC allows you to prepare a better testing environment, but we also have a nice way to test the DAL without IoC: we wrap DAL calls to the database in uncommitted transactions, with no risk of making the data unstable.
IoC isn't a pattern only for writing modular programs; it also allows for easier testing, by being able to swap in mock objects that implement the same interface as the components they stand in for.
Plus, it actually makes code much easier to maintain down the road.
It's not IoC that allows you to abstract the application from the concrete implementation with an additional layer of interfaces; that is simply how you should design your application in order to be more modular and reusable. Another important benefit is that, once you've designed your application this way, it becomes much easier to test the different parts in isolation, without depending on concrete database access, for example.
There's much more to IoC than the ability to change implementations:
testing
explicit dependencies - not hidden inside private DataContext
automatic instantiation - you declare in the constructor that you need something, and you get it, with all deeply nested dependencies resolved
separation of assemblies - take a look at S#arp Architecture to see how IoC lets you avoid referencing NHibernate and other specific assemblies which you would otherwise have to reference
management of lifetime - the ability to specify per-request / singleton / transient lifetimes for objects and to change this in one place, instead of in dozens of controllers (see the sketch after this list)
the ability to do dynamic stuff, like getting the correct data context in model binders, because with IoC you now have metadata about your dependencies; this suggests that IoC does for your object dependencies what reflection does for C# programming - it opens up a lot of new possibilities you never even thought about
And so on; I'm sure I missed a lot of positive stuff. The only "bad" thing I can think of (and that you mentioned) is the duplication of interfaces, which is a non-issue with modern IDEs' support for refactoring.
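To make the lifetime point concrete, here is what that one place might look like with a Windsor-style container (component names invented; the registration style echoes the examples further down):

// All lifetimes declared in one place; changing "per web request"
// to "singleton" is a one-line edit instead of a hunt through controllers.
container.Register(
    Component.For<NHSessionBuilder>().LifeStyle.Singleton,
    Component.For<IDataContext>().ImplementedBy<SqlDataContext>().LifeStyle.PerWebRequest,
    Component.For<IMailSender>().ImplementedBy<SmtpMailSender>().LifeStyle.Transient);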
Well, if your data interfaces change every day, and you have hundreds of them - you may want to avoid IoC.
But, do you avoid good design practices just because it's harder to follow them? Do you copy and paste code instead of extracting a method/class, just because it takes more time and more code to do so? Do you place business logic in views just because it's harder to create view models and sync them with domain models? If yes, then you can avoid IoC, no problem.
You're arguing that using IOC takes MORE code than not using it. I disagree.
Here is the entire DAL IoC configuration for one of my projects using LinqToSql. The ContextProvider class is simply a thread-safe LinqToSql context factory.
container.Register(Component.For<IContextProvider<LSDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<LSDataContext>>());
container.Register(Component.For<IContextProvider<WorkSheetDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<WorkSheetDataContext>>());
container.Register(Component.For<IContextProvider<OffersReadContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<OffersReadContext>>());
Here is the entire DAL configuration for one of my projects using NHibernate and the repository pattern:
container.Register(Component.For<NHSessionBuilder>().LifeStyle.Singleton);
container.Register(Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHRepositoryBase<>)));
Here is how I consume the DAL in my BLL (w/ dependency injection):
public class ClientService
{
    private readonly IRepository<Client> _Clients;

    public ClientService(IRepository<Client> clients)
    {
        _Clients = clients;
    }

    public IEnumerable<Client> GetClientsWithGoodCredit()
    {
        return _Clients.Where(c => c.HasGoodCredit);
    }
}
Note that my IRepository<> interface inherits IQueryable<> so this code is very trivial!
Here's how I can test my BLL without connecting to a DB:
public void GetClientsWithGoodCredit_ReturnsClientWithGoodCredit()
{
    var clientWithGoodCredit = new Client() { HasGoodCredit = true };
    var clientWithBadCredit = new Client() { HasGoodCredit = false };
    var clients = new List<Client>() { clientWithGoodCredit, clientWithBadCredit }.ToTestRepository();
    var service = new ClientService(clients);

    var clientsWithGoodCredit = service.GetClientsWithGoodCredit();

    Assert.IsTrue(clientsWithGoodCredit.Count() == 1);
    Assert.IsTrue(clientsWithGoodCredit.First() == clientWithGoodCredit);
}
ToTestRepository() is an extension method that returns a fake IRepository<> that uses an in-memory list.
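For reference, a minimal sketch of what such a ToTestRepository() could look like, assuming IRepository<T> adds nothing beyond IQueryable<T> (as noted above):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class TestRepository<T> : IRepository<T>
{
    private readonly IQueryable<T> _items;

    public TestRepository(IEnumerable<T> items)
    {
        _items = items.AsQueryable();
    }

    // IQueryable<T> members, all delegated to the in-memory list.
    public Type ElementType { get { return _items.ElementType; } }
    public Expression Expression { get { return _items.Expression; } }
    public IQueryProvider Provider { get { return _items.Provider; } }
    public IEnumerator<T> GetEnumerator() { return _items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public static class TestRepositoryExtensions
{
    public static IRepository<T> ToTestRepository<T>(this IEnumerable<T> items)
    {
        return new TestRepository<T>(items);
    }
}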
There is no possible way you can argue that this is more complicated than newing up your DAL all over your BLL.
The only way you could ever have written the above test otherwise is by connecting to a DB, saving some test clients, and then querying. I guarantee that would take 100+ times longer to execute than this did. (Multiply that by 1,000 tests and you can go get some coffee while you're waiting.)
Also, by using uncommitted transactions for testing you introduce debugging nightmares resulting from ORMs that don't query over uncommitted entities.
