What is the preferred way to wire dependencies using an IoC container? - dependency-injection

I believe that most IoC containers allow you to wire dependencies with an XML configuration file. What are the pros and cons of using a configuration file vs. registering dependencies in code?

These pros and cons are based on my work with Spring. They may differ slightly for other containers.
XML
pro
flexible
more powerful than annotations in some areas
very explicit modelling of the dependencies of your classes
con
verbose
difficulties with refactoring
switching between several files
Annotations
pro
less file switching
auto configuration for simple wirings
less verbose than XML
con
more deployment-specific data in your code (usually you can override this with XML configs)
XML is almost always needed, at least to set up the annotation-based config
annotation magic may lead to confusion when searching for the class that is used as a dependency
Code
pro
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't always need to specify the whole assembly-qualified name (when talking .NET))
con
wiring via code may lead to complex wirings
hard dependencies to IOC container in the codebase
I am using a mix of XML + annotations. Some things, especially regarding database access, are always configured via XML, while things like controllers or services are mostly configured via annotations in the code.
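As a rough sketch of that mix (the package, bean, and property names here are made up for illustration, and the XML namespace declarations are omitted): the data-access infrastructure stays in XML, while services are picked up by component scanning and wired via annotations.

    <!-- applicationContext.xml: infrastructure configured explicitly in XML -->
    <context:component-scan base-package="com.example.app"/>
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="url" value="jdbc:postgresql://localhost/app"/>
        <property name="username" value="app"/>
    </bean>

    // OrderService.java: wired via annotations instead of XML
    import javax.sql.DataSource;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderService {
        private final DataSource dataSource;

        @Autowired
        public OrderService(DataSource dataSource) {  // the container injects the XML-defined bean
            this.dataSource = dataSource;
        }
    }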
[EDIT: I have borrowed Mausch's Code pros]

XML pros:
Can change wiring and parameters without recompiling. Sometimes this is nice to have when switching environments (e.g. you can switch a fake email sender used in dev to the real email sender in production)
Code pros:
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Refactorable using regular refactoring tools.
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't always need to specify the whole assembly-qualified name (when talking .NET))
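As a small sketch of the "no recompile" point (the bean id and class names are invented): the two environments ship different XML, but the same compiled code.

    <!-- dev.xml -->
    <bean id="emailSender" class="com.example.mail.FakeEmailSender"/>

    <!-- production.xml: same bean id, different implementation, no recompilation needed -->
    <bean id="emailSender" class="com.example.mail.SmtpEmailSender"/>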

I concur. I have found IoC containers to give me very little, but they can very easily make it harder to do something. I can solve most of the problems I face just by using my programming language of choice, which has always turned out to be simpler, easier to maintain, and easier to navigate.

I'm assuming that by "registering dependencies in code" you mean "use new".
'new' is an extraordinarily powerful dependency injection framework. It allows you to "inject" your "dependencies" at the time of object creation - meaning no forgotten parameters, or half-constructed objects.
The other major potential benefit is that when you use refactoring tools (say ReSharper or IntelliJ), the calls to new change too.
Otherwise you can use some XML nonsense and refactor with XSL.
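For comparison, "registering dependencies in code" in this sense is just hand-wiring at a composition root; here is a minimal, self-contained sketch using hypothetical classes:

    // Main.java: the "container" is an ordinary main method.
    interface EmailSender { void send(String to, String body); }

    class SmtpEmailSender implements EmailSender {
        public void send(String to, String body) { /* talk to the SMTP server */ }
    }

    class OrderService {
        private final EmailSender sender;
        OrderService(EmailSender sender) { this.sender = sender; }  // injected at construction time
        void placeOrder(String id) { sender.send("ops@example.com", "order " + id); }
    }

    public class Main {
        public static void main(String[] args) {
            // 'new' as the injector: dependencies are supplied explicitly and checked by the compiler.
            OrderService service = new OrderService(new SmtpEmailSender());
            service.placeOrder("42");
        }
    }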

Related

Castle Windsor - How can a low-level Container install from top-level Installers?

I have several different WCF service host processes. Each of these is dependent on a single, lower level, business logic DLL.
The DLL currently makes use of a home-grown dependency injection mechanism that's based on XML files. There is a lot of variance in the component implementations between the top-level processes.
I would like to replace this DI mechanism with a proper DI tool: Castle Windsor. I would also like to shift from XML based configuration to explicit configuration in code via WindsorInstallers.
One last constraint is that it is not feasible (at this time) to lift the DI container from the low-level DLL to the top-level processes.
So my question is - Given that a WindsorContainer will reside in the low-level DLL, how can it discover implementations of IWindsorInstaller provided by a top-level process that will run it?
I would prefer a solution based on Windsor's API but I've not been able to make successful use of its FromAssembly feature.
By far the preferred option would be to lift it into the host process, but if that's not feasible at this time, as you said, I'd probably recommend the FromAssembly approach.
Alternatively, although I'd normally not recommend it, it might be worthwhile to look at XML configuration for installers.

When using a DI framework, how does a new service know what other services are available?

In a large project that is using a DI framework (such as Ninject in my case), what options exist when implementing a new "service" to find out what other "services" are available to be used as dependencies? Before using DI, I noticed a tendency in our code base to get a reference to a "god" object that pretty much gave access to all the available functionality; Visual Studio's IntelliSense would then become very helpful for discovering what was available (obviously this approach was only possible because of the poor architectural decision of having such an object in the first place).
I can see some possible answers and am interested in what has worked for others:
You should know the overall system you are working in well enough to know what other classes/services exist (for example, if I had static classes I would just have to know that they exist to be able to use them).
You maintain good external documentation of your code base so all classes/services are understood by all developers (this imposes a large documentation burden, it would seem to me).
Create an API to query the DI container (Ninject kernel) for a list of all bindings to see what services are available (perhaps only singletons). This could also be done as part of the build system to generate a document automatically upon each build that developers could reference.
Has this ever been an issue for other developers?
Usually you don't want to see all the services that exist in a system and then choose one of them. You want to access a piece of functionality. Structure your classes with namespaces so that it is obvious where to look for it.
E.g. if I want to know what collections are available in .NET, I type System.Collections.Generic. and IntelliSense gives me a list of options.
I tend to organise my codebase so that I have a central 'Interface' project to which all other projects have a reference. Then my Logger is available only through the ILogger interface, and the logging module can choose which concrete ILogger to provide. You should not be requesting concrete classes - this defeats the purpose of DI.
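A small sketch of that layout (the type and module names are illustrative, not from a real project): consumers only ever see the interface from the central project, and discover collaborators by browsing that project rather than a god object.

    // In the central 'Interface' project:
    public interface ILogger {
        void info(String message);
    }

    // In a consuming project: depend only on the interface; the logging module
    // decides (via its DI registration) which concrete ILogger gets injected here.
    public class ReportGenerator {
        private final ILogger logger;

        public ReportGenerator(ILogger logger) { this.logger = logger; }

        public void run() { logger.info("report generated"); }
    }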
In general, when you are implementing a new service you should already know what dependencies you need. If you don't know what you should use, ask someone who does. This is equivalent to having adequate documentation - relying on IntelliSense will give you a very shallow idea of what you should take as a dependency. Instead you should consult either the documentation or someone who understands the area.

Zend Di vs ServiceManager dependency injection containers

What is DI for and what is its use case, when we have ServiceManager?
They appear to be similar since in configuration files for both zend-di and zend-servicemanager we can set up some options such as aliases and invokables.
I am trying to get a better understanding of what is happening behind the scenes with these components, and documentation did not give me enough info.
Could you please tell me what the difference is and when I should use Di instead of ServiceManager?
Zend\Di relies on magic, like reflection, to detect and inject dependencies, while the service manager uses user-provided factories. That is the main difference.
Di is sort of deprecated in the community in favor of SM due to complexity, debugging and performance issues.
It is supposed to be good for RAD, but you need above-average knowledge to use it properly.
On the other hand, SM has pretty verbose and explicit wiring; you can open your code a year later and easily figure out what is going on.
Zend\Di takes care of wiring your classes together, whereas with Zend\ServiceManager you have to wire things manually and write a factory closure for every class you want to instantiate.
Zend\ServiceManager is much faster since it does not rely on the slow Reflection API. On the other hand, writing closures for large applications with hundreds of classes becomes very tedious. Keeping your closures up-to-date will get trickier as your application grows.
To address this problem, I have written a Zend Framework 2 module called ZendDiCompiler. It relies on Zend\Di to scan your code and auto-generates factory code to instantiate your classes. You get the best of both components: the power of Zend\Di and the performance of Zend\ServiceManager.
I have put quite a bit of work into the documentation of ZendDiCompiler and some easy and more advanced usage examples are provided, too.
Basically the difference is as follows:
Zend\ServiceManager = factory-driven IoC container
Zend\Di = autowiring IoC implementation
Zend\Di was refactored for version 3. Its behaviour is now more solid and predictable than in v2, and it is designed to integrate seamlessly into zend-servicemanager to provide auto-wiring capabilities (no more odd magic). Since it uses PHP's reflection API to resolve dependencies, it is slower than a factory-driven approach. Therefore version 3 comes with an AoT compiler to create a pre-resolved injector that avoids the use of reflection. An additional benefit: the generated factories can also be used with Zend\ServiceManager directly.
There is a guide for using AoT with both components: https://zendframework.github.io/zend-di/cookbook/aot-guide/
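The factory-driven vs. autowiring distinction itself is language-agnostic; the following is a rough Java-flavoured sketch of the two styles (this is not Zend's actual API, and the Mailer/SmtpTransport classes are invented):

    import java.lang.reflect.Constructor;
    import java.util.function.Supplier;

    public class WiringStyles {
        // Autowiring (Zend\Di-style): resolve constructor parameters recursively via reflection.
        static Object autowire(Class<?> type) throws Exception {
            Constructor<?> ctor = type.getConstructors()[0];
            Object[] args = new Object[ctor.getParameterCount()];
            for (int i = 0; i < args.length; i++) {
                args[i] = autowire(ctor.getParameterTypes()[i]);  // convenient, but slower and more "magic"
            }
            return ctor.newInstance(args);
        }

        public static void main(String[] args) throws Exception {
            // Factory-driven (ServiceManager-style): an explicit, hand-written recipe per service.
            Supplier<Mailer> mailerFactory = () -> new Mailer(new SmtpTransport());
            Mailer fromFactory = mailerFactory.get();             // fast and explicit, but tedious at scale

            Mailer fromReflection = (Mailer) autowire(Mailer.class);
            System.out.println(fromFactory + " / " + fromReflection);
        }
    }

    class SmtpTransport {
        public SmtpTransport() {}                                 // public ctor so reflection can find it
    }

    class Mailer {
        private final SmtpTransport transport;
        public Mailer(SmtpTransport transport) { this.transport = transport; }
    }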

What's wrong with doing Dependency Injection configuration in code?

XML seems to be the language of the day, but it's not type-safe (without an external tool to detect problems) and you end up doing logic in XML. Why not just do it in the same language as the rest of the project? If it's Java, you could just build a config JAR and put it on the classpath.
I must be missing something deep.
The main downside to configuring DI in code is that you force a recompilation in order to change your configuration. By using external files, reconfiguring becomes a runtime change. XML files also provide extra separation between your code and configuration, which many people value highly.
This can make it easier for testing, maintainability, updating on remote systems, etc. However, with many languages, you can use dynamic loading of the code in question and avoid some of the downsides, in which case the advantages diminish.
Martin Fowler covered this decision pretty well here:
http://martinfowler.com/articles/injection.html
Avoiding the urge to plagiarize... just go read the section "Code or configuration files".
There's nothing intrinsically wrong with doing the configuration in code, it's just that the tendency is to use XML to provide some separation.
There's a widespread belief that somehow having your configuration in XML protects you from having to rebuild after a change. The reality in my experience is that you need to repackage and redeploy the application to deliver the changed XML files (in the case of web development anyway), so you could just as easily change some Java "configuration" files instead. You could just drop the XML files onto the web server and refresh, but in the environment I work in, audit would have a fit if we did.
The main thing that using XML configuration achieves, in my opinion, is forcing developers to think about dependency injection and separation of concerns. In Spring (amongst others), it also provides a convenient hook to hang your AOP proxies and suchlike. Both of these can be achieved in Java configuration; it is just less obvious where the lines are drawn, and the tendency may be to reintroduce direct dependencies and spaghetti code.
For information, there is a Spring project to allow you to do the configuration in code.
The Spring Java Configuration project (JavaConfig for short) provides a type-safe, pure-Java option for configuring the Spring IoC container. While XML is a widely-used configuration approach, Spring's versatility and metadata-based internal handling of bean definitions means alternatives to XML config are easy to implement.
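A minimal sketch of that style in current Spring (the EmailSender/SmtpEmailSender/OrderService classes are the same hypothetical ones as in the earlier 'new' example):

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AppConfig {
        @Bean
        public EmailSender emailSender() {
            return new SmtpEmailSender();               // plays the role of a <bean> element, but type-safe
        }

        @Bean
        public OrderService orderService() {
            return new OrderService(emailSender());     // dependencies expressed as ordinary method calls
        }

        public static void main(String[] args) {
            new AnnotationConfigApplicationContext(AppConfig.class)
                    .getBean(OrderService.class)
                    .placeOrder("42");
        }
    }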
In my experience, close communication between the development team and the infrastructure team can be fostered by releasing frequently. The more you release, the more you actually know about the variability between your environments. This also allows you to remove unnecessary configurability.
A corollary to Conway's law applies here - your config files will resemble the variety of environments your app is deployed to (planned or actual).
When I have a team deploying internal applications, I tend to drive towards config in code for all architectural concerns (connection pools, etc.), and config in files for all environmental config (usernames, connection strings, IP addresses). If there are different architectural concerns across different environments, then I'll encapsulate those into one class, and make that class name part of the config files - e.g.
container.config=FastInMemoryConfigurationForTesting
container.config=ProductionSizedConfiguration
Each one of these will use some common configuration, but will override/replace those parts of the architecture that need replacing.
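A rough sketch of how such a property can drive the choice (the ContainerConfiguration interface and property file name are hypothetical; the two class names are the ones from the example above):

    import java.io.FileInputStream;
    import java.util.Properties;

    public class Bootstrap {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("container.properties"));  // contains the container.config line

            // The environment-specific file names the architectural configuration class to use.
            String className = props.getProperty("container.config");
            ContainerConfiguration config = (ContainerConfiguration)
                    Class.forName(className).getDeclaredConstructor().newInstance();
            config.apply();  // e.g. sets up connection pools sized for that environment
        }
    }

    // Hypothetical contract implemented by FastInMemoryConfigurationForTesting,
    // ProductionSizedConfiguration, and so on.
    interface ContainerConfiguration {
        void apply();
    }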
This is not always appropriate however. There are several things that will affect your choice:
1) how long it takes after releasing a new drop before it is deployed successfully in each production environment and you receive feedback on that environment (cycle time)
2) The variability in deployed environments
3) The accuracy of feedback garnered from the production environments.
So, when you have a customer who distributes your app to their dev teams for deployment, you are going to have to make your app much more configurable than if you push it live yourself. You could still rely on config in code, but that requires the target audience to understand your code. If you use a common configuration approach (e.g. Spring), you make it easier for the end users to adapt and work around issues in their production environments.
But a rubric is: configurability is a substitute for communication.
XML is not meant to contain logic, and it's far from being a programming language.
XML is used to store data in a way that is easy to understand and modify.
As you say, it's often used to store definitions, not business logic.
You mentioned Spring in a comment to your question, so that suggests you may be interested in the fact that Spring 3 lets you express your application contexts in Java rather than XML.
It's a bit of a brain-bender, but the definition of your beans, and their inter-dependencies, can be done in Java. It still keeps a clean separation between configuration and logic, but the line becomes that bit more blurred.
XML is mostly a data (noun) format. Code is mostly a processing (verb) format. From the design perspective, it makes sense to have your configuration in XML if it's mostly nouns (addresses, value settings, etc) and code if it's mostly verbs (processing flags, handler configurations, etc).
It's bad because it makes testing harder.
If you're writing code and using methods like getApplicationContext() to obtain the dependencies, you're throwing away some of the benefits of dependency injection.
When your objects and services don't need to know how to create or acquire the resources on which they depend, they're more loosely coupled to those dependencies.
Loose coupling means easier unit testing. It's hard to get something into a JUnit test if you need to instantiate all of its dependencies. When a class makes no assumptions about how its dependencies are created, it's easy to use mock objects in place of real ones for the purpose of testing.
Also, if you can resist the urge to use getApplicationContext() and other code-based DI techniques, then you can (sometimes) rely on Spring autowiring, which means even less configuration work. Configuration work, whether in code or in XML, is tedious, right?
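A short sketch of the difference (PaymentGateway and the other names are hypothetical): with constructor injection a unit test can hand in a fake, whereas the service-locator style discouraged above would require a running container.

    // Service-locator style (the kind of call being discouraged above):
    //   PaymentGateway gateway = (PaymentGateway) getApplicationContext().getBean("gateway");

    // Constructor-injection style: the dependency is explicit and replaceable.
    interface PaymentGateway { boolean charge(double amount); }

    class CheckoutService {
        private final PaymentGateway gateway;
        CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean checkout(double amount) { return gateway.charge(amount); }
    }

    // A JUnit 4 test: a hand-rolled fake stands in for the real gateway, no container needed.
    public class CheckoutServiceTest {
        @org.junit.Test
        public void chargesTheGateway() {
            PaymentGateway fake = amount -> true;  // lambda works: one abstract method
            org.junit.Assert.assertTrue(new CheckoutService(fake).checkout(9.99));
        }
    }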

IoC Container Configuration/Registration

I absolutely need to use an IoC container for decoupling dependencies in an increasingly complex system of enterprise services. The issue I am facing is one related to configuration (a.k.a. registration). We currently have 4 different environments -- development to production and in between. These environments have numerous configurations that vary slightly from environment to environment; however, in all cases that I can currently think of, dependencies between components do not differ from environment to environment, though I could have missed something and/or this could obviously change.
So, the ultimate question is: does anybody have a similar experience using an IoC framework? Or can anybody recommend one framework over another that would provide flexible registration, be it through some sort of convention or simplified configuration information? Would I still be able to benefit from a fluent interface, or am I stuck with XML -- I'd like to avoid XML hell.
Edit: This is a .Net environment and I have been looking at Windsor, Ninject and Autofac. They all seem to now support both methods of registration (fluent and XML), though Autofac's support for lambda expressions seems to be a little different than the others. Anybody use that in a similar multi-deployment environment?
If you want to abstract your container, and be able to use different ones, look into making it injectable, in the way I tried to do it here
I use Ninject. I like the fact that I don't have to use XML to configure the dependencies. I can just use straight-up C# code. There are multiple ways of doing it, too. I know other libraries have that feature, but Ninject offers fast instantiation, it is pretty lightweight, it has conditional binding, supports the Compact Framework, and it supports Silverlight and .NET 2.0. I also use a wrapper on top of it, in case I do switch it out for another framework in the future. You should definitely try Ninject when deciding on a framework.
I'm not sure whether it will suit your particular case (you didn't mention what platform you're working in), but I've had great success with Castle Windsor's IoC framework. The dependencies are set up in the config file (it's a .NET framework).
Look at Ayende's Rhino Commons. He uses an abstraction over the IoC container, so that you can switch containers whenever you want. Something like container.Resolve is always there in every container.
I use StructureMap to do the dirty work; it has a fluent interface and the XML things, and it is powerful enough for most things you want to do. Each one has its own pros and cons, so a little abstraction so you can easily switch (you never know how long they are going to be around) is good. For the rest, I think Spring.NET, Castle Windsor, Ninject and StructureMap aren't that far apart anymore.
