Dependency injection just for testing or production?

Are dependency injection frameworks used just for testing, or are they used in production code too? It seems that once you start using a framework like Ninject, you would not want to reverse everything out. Also, is there a performance hit to using something like Ninject?

Dependency Injection is an architectural pattern, not a testing one, so it is intended to be used when building production code. In fact, a really good framework introduces very little overhead, if any. I can't say for sure whether Ninject or Unity are like that; I used to implement my own.
The real downsides of DI are mostly around the coding process, not production performance. One that should probably be mentioned: when using DI you lose the ability to walk through the code with 'Go to Definition' - it always takes you to the interface, which is logical, but still inconvenient.
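For the avoidance of doubt, here is a minimal constructor-injection sketch in C# (all types are hypothetical, purely for illustration):

public interface IOrderRepository
{
    void Save(string order);
}

public class SqlOrderRepository : IOrderRepository
{
    public void Save(string order) { /* write to the database */ }
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    // The dependency is handed in; OrderService never news it up itself.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void Place(string order) => _repository.Save(order);
}

'Go to Definition' on _repository.Save lands on IOrderRepository, not SqlOrderRepository - exactly the navigation annoyance described above.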

You definitely use DI in a production environment. In a testing environment it is ok to just wire things up for the purposes of the test, indeed, if you are using mock objects, you might have to wire the classes up yourself.

Use it for everything! Why reverse it out? The performance hit for a quality framework should only happen at startup. Plus there might be a minor penalty until the VM can inline the extra layers. Either way, you see a decrease in bugs and an increase in developer productivity.
DI is especially nice for production. In our software, we use the exact same configuration in production as we do in tests.
When the tests are run, the test configuration just flags certain services as being mocked out. This means that the test configuration only has to:
list the services to be mocked out
specify return values for the various method calls on the mocked-out services (a rough sketch follows the lists below).
Test Benefits:
No config differences between production and test means no possibility of a latent config-dependent bug
Services are injected into the test class as well.
Test configuration is trivial (only really needed for cases where services are mocked out)
Test configuration is always the same as production (easier to maintain)
Service initialization is never manual or hacked together.
Production Benefits:
Clean startup code - no hard-coded dependencies
Circular dependencies are not an issue.
Ability to "turn off" a local copy of a service and route the requests to a different box (used for background batch tasks)

Related

How best to organise my Visual Studio solution for Unit Tests and Dependency Injection

I have a fresh Visual Studio 2012 solution which consists of the following projects:
1x ASP.NET Web API project (this holds my MVC/API controllers)
1x Services project (based on the ASP.NET Empty Web Application template)
I've created two additional projects for the above (based on the ASP.NET Empty Web Application template) which I've labelled as my test projects.
I'm slowly getting my head around the whole TDD and DI approach to development hence my brain is a little swiss cheesed at this late hour.
I would like guidance on where I should set up DI and my unit tests. The approach I'm taking is to put as many of my methods as possible into my 'services' project. My main Web API project has a reference to my services project, so I can access all my public methods from that class library (I think that's the correct terminology).
I've now hit a brain block! I want to unit test, as much as possible, all the methods I expose in my services project. However, my DI container (Castle Windsor) is only set up in my main API project, given that it has controllers etc.
So, my questions:
For my services project, do I just forget DI and write my unit tests directly against the concrete classes/interfaces? The reason I ask is that the Castle Windsor examples I've seen have been about setting up a container for my controllers/MVC web application and then instantiating the container via the application startup in global.asax. If I should be using DI in my services project, how would I instantiate a container, and where?
Following on from question 1, I wanted to gauge people's opinions on where/how to structure all my unit tests. The easiest place to stick my tests would be in my main Web API project, given that everything I'm doing is either directly coded in that project or pulled in as a reference (my services project in this case).
That said, I would imagine it would be better to have tests written exclusively for my services project, contained in their own project. That way I can test my methods without the Web API project being involved at any level. Who knows, the Web API project might get canned but the services project recycled for a different platform (hopefully you can see where I'm going with this).
The end goal of all this is to have most of my methods held in a services project, reusable for other projects in whatever form they may come; to have my Web API and other future MVC web applications pull functionality from my services library; and all the while to have maintainable, manageable and above all 'testable' projects.
I'm at the start of my development cycle and doing all that I can in reading articles and reaching out to those in the know, to get the best advice so I can create solid foundations for my project.
To answer your questions: use an IoC container like Castle Windsor where you wire up your components. If you don't wire up components in your services project, there's no need for an IoC container there. It IS, however, a good idea to apply DI in your services project.
Sidebar: I think you might be confusing DI with IoC containers. DI is just a principle that states that you don't just "new up" dependencies, but rather ask for them nicely (through a constructor, setter, whatever). IoC containers are a step further and will do much of the heavy lifting once manually managing your dependencies becomes a burden. I like to hand-write my unit tests and NOT use a container like Castle Windsor there, so I really feel the pain of having too many dependencies in a class and can do something about that smell, instead of having the container act like a deodorant. There are some cases where I make an exception and do use an IoC container in my unit tests, for example for legacy code with huuuuuge lists of difficult dependencies.
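A quick sketch of the DI-versus-container distinction (all types hypothetical; Windsor's registration API as I know it):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IOrderRepository { void Save(string order); }
public class SqlOrderRepository : IOrderRepository { public void Save(string order) { } }
public class InMemoryOrderRepository : IOrderRepository { public void Save(string order) { } }
public class OrderService
{
    public OrderService(IOrderRepository repository) { }
}

public static class Example
{
    public static void Main()
    {
        // Plain DI, no container: a test just hands the dependency over by hand.
        var byHand = new OrderService(new InMemoryOrderRepository());

        // An IoC container (Castle Windsor shown) takes over that wiring instead.
        var container = new WindsorContainer();
        container.Register(
            Component.For<IOrderRepository>().ImplementedBy<SqlOrderRepository>(),
            Component.For<OrderService>());
        var resolved = container.Resolve<OrderService>();
    }
}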
On your unit test question: some people like to stick them in the same project. I personally like to put them in a separate project that contains only tests, so I can't "cheat". By cheating I mean the following: one of the reasons to write unit tests is to highlight painful parts in your code (this is what's called "listening to your tests"). By putting tests in the same project as your production code, you can more easily reach into the internals of your objects (through the "internal" visibility keyword, for example). That's almost always a design smell, and this smell is made clearer when your tests can't access your production code through its internals and have to use the public API.
Finally: it's super awesome that you're thinking about this stuff at the start of your project. This is when it's easiest to introduce practices like unit testing because you don't have a large -untestable- legacy codebase yet. Don't give up, it's definitely worth the effort once you get the hang of it!
You wire up your IOC container during application startup. Generally speaking, the code that initialises the container should be the only production code that knows what container you’re using. There can be exceptions, but they should be few and far between. Since you’re creating a binary dependency between your MVC project and your service project, you should only have a single container, created in your MVC project.
There are two main approaches you can follow for setting up your container. You can either setup all of your mappings/mapping rules within the MVC project, or you can split the responsibility so that your MVC project calls out to each library it depends on with the library having responsibility for the mappings within it. Which approach is best, is going to depend on various things including the complexity of your required mappings.
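As a concrete sketch of the second approach, Castle Windsor lets each library ship its own installer (the IUserService/UserService pair here is hypothetical):

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// Lives inside the services project, so the library owns its own mappings.
public class ServicesInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(Component.For<IUserService>().ImplementedBy<UserService>());
    }
}

// The MVC project's startup code then just runs the installers:
// var container = new WindsorContainer();
// container.Install(new ServicesInstaller());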
As I've said, with either approach to registering your mappings, the majority of your code shouldn't know about your IoC container. You should be writing your code to depend upon interfaces, which in production will be injected by your container. When you are writing your unit tests, you then get to decide what makes the most sense to inject. Depending on what you're trying to test, you can inject a real dependency or a mocked/stubbed dependency in order to have more control over the flow through your code under test.
I would strongly recommend not putting your tests in the same project as your production code. Whilst it may be slightly easier to get up and running, it comes with some downsides/risks. Your assembly will need to add references to anything that your tests rely on (testing frameworks, mocking frameworks etc.). The last thing I would want to do is deploy test code to a production server: at the very least it adds bulk to your binary; at worst it introduces security vulnerabilities. You also run the risk of accidentally creating a dependency on one of your test classes (such as a helper method). If you put your tests in a separate class library that depends on your production code, then these dependency issues go away.
I tend to create a unit test project for each production project. Depending on what I'm writing, I might make use of the InternalsVisibleTo attribute to allow my test project to see internal classes. I might also create one or more integration test projects. Having an integration test project that uses something like Selenium would allow you to validate that your MVC project actually works end to end, which is an important aspect of testing to consider. It's very easy to get carried away with dependency injection and unit testing: you write a whole bunch of code, paired with unit tests, and then it falls over the first time you run it because you haven't set your container up properly.
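For reference, the InternalsVisibleTo mechanism mentioned above is a single assembly-level attribute (the test assembly name below is a placeholder):

// Typically placed in the production project's AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Services.Tests")]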

Zend Di vs ServiceManager dependency injection containers

What is DI for and what is its use case, when we have ServiceManager?
They appear to be similar since in configuration files for both zend-di and zend-servicemanager we can set up some options such as aliases and invokables.
I am trying to get a better understanding of what is happening behind the scenes with these components, and the documentation did not give me enough info.
Could you please tell me what the difference is and when I should use Di instead of ServiceManager?
Zend\Di relies on magic, like reflection, to detect and inject dependencies, while the ServiceManager uses user-provided factories. That is the main difference.
Di is more or less deprecated in the community in favor of the ServiceManager, due to complexity, debugging and performance issues.
It is supposed to be good for RAD, but you need above-average knowledge to use it properly.
On the other hand, the ServiceManager has pretty verbose and explicit wiring: you can open your code a year later and easily figure out what is going on.
Zend\Di takes care of wiring your classes together, whereas with Zend\ServiceManager you have to wire things manually and write a factory closure for every class you want to instantiate.
Zend\ServiceManager is much faster since it does not rely on the slow Reflection API. On the other hand, writing closures for large applications with hundreds of classes becomes very tedious. Keeping your closures up-to-date will get trickier as your application grows.
To address this problem, I have written a Zend Framework 2 module called ZendDiCompiler. It relies on Zend\Di to scan your code and auto-generates factory code to instantiate your classes. You get the best of both components: the power of Zend\Di and the performance of Zend\ServiceManager.
I have put quite a bit of work into the documentation of ZendDiCompiler and some easy and more advanced usage examples are provided, too.
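The factory-vs-reflection trade-off itself is not PHP-specific. A rough illustration in C# (toy types, both resolution styles hand-rolled purely to show the idea):

using System;
using System.Collections.Generic;
using System.Linq;

public interface IMailer { }
public class Mailer : IMailer { }
public class Newsletter { public Newsletter(IMailer mailer) { } }

public static class Demo
{
    public static void Main()
    {
        // Factory-driven (ServiceManager style): explicit, fast, but verbose.
        var factories = new Dictionary<Type, Func<object>>();
        factories[typeof(IMailer)] = () => new Mailer();
        factories[typeof(Newsletter)] =
            () => new Newsletter((IMailer)factories[typeof(IMailer)]());

        // Reflection-driven (Zend\Di style): automatic, slower, "magic".
        var ctor = typeof(Newsletter).GetConstructors().Single();
        var args = ctor.GetParameters()
                       .Select(p => factories[p.ParameterType]())
                       .ToArray();
        var auto = (Newsletter)ctor.Invoke(args);
    }
}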
Basically the difference is as follows:
Zend\ServiceManager = Factory-driven IoC Container
Zend\Di = Autowiring IoC implementation
Zend\Di was refactored for version 3. Its behaviour is now more solid and predictable than v2, and it is designed to integrate seamlessly into zend-servicemanager to provide auto-wiring capabilities (no more odd magic). Since it uses PHP's reflection API to resolve dependencies, it is slower than a factory-driven approach. Therefore version 3 comes with an AoT compiler to create a pre-resolved injector that omits the use of reflection. An additional benefit: the generated factories can also be used with Zend\ServiceManager directly.
There is a guide for using AoT with both components: https://zendframework.github.io/zend-di/cookbook/aot-guide/

What's wrong with doing Dependency Injection configuration in code?

XML seems to be the language of the day, but it's not type-safe (without an external tool to detect problems) and you end up doing logic in XML. Why not just do it in the same language as the rest of the project? If it's Java, you could just build a config jar and put it on the classpath.
I must be missing something deep.
The main downside to configuring DI in code is that you force a recompilation in order to change your configuration. By using external files, reconfiguring becomes a runtime change. XML files also provide extra separation between your code and configuration, which many people value highly.
This can make it easier for testing, maintainability, updating on remote systems, etc. However, with many languages, you can use dynamic loading of the code in question and avoid some of the downsides, in which case the advantages diminish.
Martin Fowler covered this decision pretty well here:
http://martinfowler.com/articles/injection.html
Avoiding the urge to plagiarize... just go read the section "Code or configuration files".
There's nothing intrinsically wrong with doing the configuration in code, it's just that the tendency is to use XML to provide some separation.
There's a widespread belief that somehow having your configuration in XML protects you from having to rebuild after a change. The reality in my experience is that you need to repackage and redeploy the application to deliver the changed XML files (in the case of web development anyway), so you could just as easily change some Java "configuration" files instead. You could just drop the XML files onto the web server and refresh, but in the environment I work in, audit would have a fit if we did.
The main thing that using XML configuration achieves, in my opinion, is forcing developers to think about dependency injection and separation of concerns. In Spring (amongst others), it also provides a convenient hook to hang your AOP proxies and suchlike on. Both of these can be achieved in Java configuration; it is just less obvious where the lines are drawn, and the tendency may be to reintroduce direct dependencies and spaghetti code.
For information, there is a Spring project to allow you to do the configuration in code.
The Spring Java Configuration project (JavaConfig for short) provides a type-safe, pure-Java option for configuring the Spring IoC container. While XML is a widely-used configuration approach, Spring's versatility and metadata-based internal handling of bean definitions means alternatives to XML config are easy to implement.
In my experience, close communication between the development team and the infrastructure team can be fostered by releasing frequently. The more you release, the more you actually know what the variability between your environments are. This also allows you to remove unnecessary configurability.
A corollary to Conway's law applies here - your config files will resemble the variety of environments your app is deployed to (planned or actual).
When I have a team deploying internal applications, I tend to drive towards config in code for all architectural concerns (connection pools, etc.), and config in files for all environmental config (usernames, connection strings, IP addresses). If there are different architectural concerns across different environments, then I'll encapsulate those into one class and make that class name part of the config files - e.g.
container.config=FastInMemoryConfigurationForTesting
container.config=ProductionSizedConfiguration
Each one of these will use some common configuration, but will override/replace those parts of the architecture that need replacing.
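A hedged C# sketch of that technique (ConfigurationManager is the standard .NET config API; the two configuration classes are just the hypothetical names from the example above):

using System;
using System.Configuration; // requires a reference to System.Configuration

public interface IContainerConfiguration { void Apply(); }
public class FastInMemoryConfigurationForTesting : IContainerConfiguration { public void Apply() { /* test-sized pools */ } }
public class ProductionSizedConfiguration : IContainerConfiguration { public void Apply() { /* full-sized pools */ } }

public static class Bootstrap
{
    public static void Main()
    {
        // The per-environment config file carries only the class name, e.g.:
        //   <add key="container.config" value="ProductionSizedConfiguration" />
        var typeName = ConfigurationManager.AppSettings["container.config"];
        var config = (IContainerConfiguration)Activator.CreateInstance(Type.GetType(typeName));
        config.Apply();
    }
}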
This is not always appropriate however. There are several things that will affect your choice:
1) how long it takes after releasing a new drop before it is deployed successfully in each production environment and you receive feedback on that environment (cycle time)
2) The variability in deployed environments
3) The accuracy of feedback garnered from the production environments.
So, when you have a customer who distributes your app to their dev teams for deployment, you are going to have to make your app much more configurable than if you push it live yourself. You could still rely on config in code, but that requires the target audience to understand your code. If you use a common configuration approach (e.g. Spring), you make it easier for the end users to adapt and work around issues in their production environment.
But a rubric is: configurability is a substitute for communication.
XML is not meant to contain logic, and it's far from being a programming language.
XML is used to store DATA in a way that is easy to understand and modify.
As you say, it's often used to store definitions, not business logic.
You mentioned Spring in a comment to your question, so that suggests you may be interested in the fact that Spring 3 lets you express your application contexts in Java rather than XML.
It's a bit of a brain-bender, but the definition of your beans, and their inter-dependencies, can be done in Java. It still keeps a clean separation between configuration and logic, but the line becomes that bit more blurred.
XML is mostly a data (noun) format. Code is mostly a processing (verb) format. From the design perspective, it makes sense to have your configuration in XML if it's mostly nouns (addresses, value settings, etc) and code if it's mostly verbs (processing flags, handler configurations, etc).
It's bad because it makes testing harder.
If you're writing code and using methods like getApplicationContext() to obtain the dependencies, you're throwing away some of the benefits of dependency injection.
When your objects and services don't need to know how to create or acquire the resources on which they depend, they're more loosely coupled to those dependencies.
Loose coupling means easier unit testing. It's hard to get something into a JUnit test if you need to instantiate all its dependencies. When a class makes no assumptions about its dependencies, it's easy to use mock objects in place of real ones for the purpose of testing.
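To make that concrete, a small hand-rolled example in C# (all types hypothetical):

public interface IRateProvider { decimal CurrentRate(); }
public class FixedRateStub : IRateProvider { public decimal CurrentRate() => 0.05m; }

public class InterestCalculator
{
    private readonly IRateProvider _rates;
    public InterestCalculator(IRateProvider rates) { _rates = rates; } // injected, never looked up
    public decimal Interest(decimal principal) => principal * _rates.CurrentRate();
}

// In a unit test, no container or context lookup is needed:
// var calc = new InterestCalculator(new FixedRateStub());
// Assert.AreEqual(5m, calc.Interest(100m));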
Also, if you can resist the urge to use getApplicationContext() and other code-based DI techniques, then you can (sometimes) rely on Spring autowiring, which means even less configuration work. Configuration work, whether in code or in XML, is tedious, right?

Dependency Injection Frameworks: Why do I care?

I was reading over Injection by Hand and Ninjection (as well as Why use Ninject). I encountered two pieces of confusion:
The inject-by-hand technique I am already familiar with, but I am not familiar with Ninjection, and thus am not sure how the complete program would work. Perhaps it would help to provide a complete program rather than, as is done on that page, showing a program broken up into pieces.
I still don't really get how this makes things easier. I think I'm missing something important. I can kind of see how an injection framework would be helpful if you were creating a group of injections and then switching between two large groups all at once (this is useful for mocking, among other things), but I think there is more to it than that. But I'm not sure what. Or maybe I just need more examples of why this is exciting to drive home the point.
When injecting your dependencies without a DI framework you end up with arrow code all over your application telling classes how to build their dependencies.
public class Contact
{
    // "Poor man's DI": the default constructor hard-codes the concrete dependency.
    public Contact() : this(new DataGateWay()) { }

    public Contact(IDataGateway dataGateway) { /* store and use the gateway */ }
}
But if you use something like Ninject, all the arrow code is in one spot making it easier to change a dependency for all the classes using it.
internal class ProductionModule : StandardModule
{
public override void Load()
{
Bind<IDataGateway>().To<DataGateWay>();
}
}
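To address the "complete program" confusion above: with the Ninject 1.x-era API that this StandardModule syntax comes from, the remaining piece would look roughly like this (a hedged sketch, not verified against that exact version):

// At application startup:
var kernel = new StandardKernel(new ProductionModule());

// Anywhere you need one; IDataGateway is resolved and injected for you:
var contact = kernel.Get<Contact>();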
I still don't really get how this makes things easier. I think I'm missing something important.
Wouldn't it be great if we only had to develop discrete components, each providing distinct functionality we could easily understand, re-use and maintain - where we only ever worked on components?
What prevents us from doing so is that we need some infrastructure that can combine and manage these components into a working application automatically. Infrastructure that does this is available to us: an IoC framework.
So an IoC framework isn't about managing dependencies or testing or configuration. Instead, it is about managing complexity, by enabling you to only work and think about components.
It allows you to easily test your code by mocking the interfaces that you need for a particular code block. It also allows you to easily swap functionality without breaking other parts of the code.
It's all about cohesion and coupling.
You probably won't see the benefit on small projects, but once you get past small it becomes really apparent when you have to make changes to the system. It's a breeze when you've used DI.
I really like the autowiring aspect of some frameworks... when you do not have to care about what your types need in order to be instantiated.
EDIT:
I read this article by Ayende @ Rahien, and I really support his point.
Dependency injection using most frameworks can be configured at runtime, without requiring a recompile.
Dependency injection can get really interesting if you get your code to the point where there are very few dependencies in the code at all. Some dependency injection frameworks will allow you define your dependencies in a configuration file. This can be very useful if you need a really flexible piece of software that needs to be changed without modifying the code. For example, workflow software is a prime candidate for this type of solution.
Dependency Injection is essential for Component-Driven Development. The latter allows you to build really complex applications in a much more efficient and reliable manner.
Also, it allows you to separate common cross-cutting concerns cleanly from the other code (this results in a more reusable and flexible codebase).
Related links:
Inversion of Control and Dependency Injection - Wiki
Component-Driven Development - Wiki
Cross-cutting concerns - Wiki

IoC Container Configuration/Registration

I absolutely need to use an IoC container for decoupling dependencies in an ever increasingly complex system of enterprise services. The issue I am facing is one related to configuration (a.k.a. registration). We currently have 4 different environments -- development to production and in between. These environments have numerous configurations that slightly vary from environment to environment; however, in all cases that I can currently think of, dependencies between components do not differ from environment to environment, though I could have missed something and/or this could obviously change.
So, the ultimate question is, does anybody have a similar experience using an IoC framework? Or, can anybody recommend one framework over another that would provide flexible registration be it through some sort of convention or simplified configuration information? Would I still be able to benefit from a fluent interface or am I stuck with XML -- I'd like to avoid XML-hell.
Edit: This is a .Net environment and I have been looking at Windsor, Ninject and Autofac. They all seem to now support both methods of registration (fluent and XML), though Autofac's support for lambda expressions seems to be a little different than the others. Anybody use that in a similar multi-deployment environment?
If you want to abstract your container, and be able to use different ones, look into making it injectable in the way I tried to do it here.
I use Ninject. I like the fact that I don't have to use Xml to configure the dependencies. I can just use straight up C# code. There are multiple ways of doing it also. I know other libraries have that feature, but Ninject offers fast instantiation, it is pretty lightweight, it has conditional binding, supports compact framework, and it supports Silverlight, 2.0. I also use a wrapper on top of it, in case I do switch it out for another framework in the future. You should definitely try Ninject when deciding on a framework.
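A hedged sketch of what that code-based configuration looks like with Ninject 2.x-style modules (the logger types are hypothetical):

using Ninject;
using Ninject.Modules;

public interface ILogger { }
public class FileLogger : ILogger { }
public class AuditLogger : ILogger { }
public class AuditService { public AuditService(ILogger logger) { } }

public class Bindings : NinjectModule
{
    public override void Load()
    {
        Bind<ILogger>().To<FileLogger>();
        // Conditional binding: a different implementation in one specific context.
        Bind<ILogger>().To<AuditLogger>().WhenInjectedInto<AuditService>();
    }
}

// var kernel = new StandardKernel(new Bindings());
// var service = kernel.Get<AuditService>();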
I'm not sure whether it will suit your particular case - you didn't mention what platform you're working in - but I've had great success with Castle Windsor's IoC framework. The dependencies are set up in the config file (it's a .NET framework).
Look at Ayende's Rhino Commons. He uses an abstraction over the IoC container, so that you can switch containers whenever you want. Something like container.Resolve is always there in every container.
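A minimal sketch of that abstraction idea (hand-rolled here; Rhino Commons' own static IoC class is more elaborate):

public interface IContainerAdapter
{
    T Resolve<T>();
}

public class WindsorAdapter : IContainerAdapter
{
    private readonly Castle.Windsor.IWindsorContainer _container;
    public WindsorAdapter(Castle.Windsor.IWindsorContainer container) { _container = container; }
    public T Resolve<T>() => _container.Resolve<T>();
}

// Application code depends only on IContainerAdapter, so swapping Windsor
// for Ninject or StructureMap means writing one new adapter, nothing more.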
I use StructureMap to do the dirty work; it has a fluent interface and the XML options, and it is powerful enough for most things you want to do. Each one has its own pros and cons, so a little abstraction so you can easily switch (you never know how long they are going to be around) is good. For the rest, I think Spring.NET, Castle Windsor, Ninject and StructureMap aren't that far apart anymore.
