What's wrong with doing Dependency Injection configuration in code? - dependency-injection

XML seems to be the language of the day, but it's not type safe (without an external tool to detect problems) and you end up doing logic in XML. Why not just do it in the same language as the rest of the project? If it's Java, you could just build a config JAR and put it on the classpath.
I must be missing something deep.

The main downside to configuration DI in code is that you force a recompilation in order to change your configuration. By using external files, reconfiguring becomes a runtime change. XML files also provide extra separation between your code and configuration, which many people value highly.
This can make it easier for testing, maintainability, updating on remote systems, etc. However, with many languages, you can use dynamic loading of the code in question and avoid some of the downsides, in which case the advantages diminish.
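To make that trade-off concrete, here is a minimal Java sketch (all class and property names are invented for illustration): the first wiring is fixed at compile time, so changing it forces a rebuild, while the second reads the implementation class name from properties-style text, making reconfiguration a data change rather than a code change.

```java
import java.io.StringReader;
import java.util.Properties;

// A hypothetical service interface with two interchangeable implementations.
interface Greeter { String greet(); }
class EnglishGreeter implements Greeter { public String greet() { return "hello"; } }
class FrenchGreeter implements Greeter { public String greet() { return "bonjour"; } }

public class ExternalConfigDemo {
    // Wiring in code: swapping the implementation means editing and recompiling this line.
    static Greeter wiredInCode() { return new EnglishGreeter(); }

    // Wiring from external configuration: the class name is just data,
    // so it can change at runtime without touching the compiled code.
    static Greeter wiredFromConfig(String propertiesText) {
        try {
            Properties props = new Properties();
            props.load(new StringReader(propertiesText));
            String className = props.getProperty("greeter.class");
            return (Greeter) Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(wiredInCode().greet());
        System.out.println(wiredFromConfig("greeter.class=FrenchGreeter").greet());
    }
}
```

The same mechanism is what XML-driven containers use under the hood: the "configuration file" ultimately resolves to class names instantiated reflectively.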

Martin Fowler covered this decision pretty well here:
http://martinfowler.com/articles/injection.html
Avoiding the urge to plagiarize... just go read the section "Code or configuration files".

There's nothing intrinsically wrong with doing the configuration in code, it's just that the tendency is to use XML to provide some separation.
There's a widespread belief that having your configuration in XML somehow protects you from having to rebuild after a change. In my experience, the reality is that you need to repackage and redeploy the application to deliver the changed XML files (in the case of web development, anyway), so you could just as easily change some Java "configuration" files instead. You could just drop the XML files onto the web server and refresh, but in the environment I work in, audit would have a fit if we did.
The main thing that using XML configuration achieves, in my opinion, is forcing developers to think about dependency injection and separation of concerns. In Spring (amongst others), it also provides a convenient hook to hang your AOP proxies and suchlike. Both of these can be achieved in Java configuration; it is just less obvious where the lines are drawn, and the tendency may be to reintroduce direct dependencies and spaghetti code.
For information, there is a Spring project to allow you to do the configuration in code.
The Spring Java Configuration project (JavaConfig for short) provides a type-safe, pure-Java option for configuring the Spring IoC container. While XML is a widely-used configuration approach, Spring's versatility and metadata-based internal handling of bean definitions means alternatives to XML config are easy to implement.
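For illustration, a bean definition in the JavaConfig style looks roughly like this. The bean and class names below are hypothetical (loosely following the examples in Spring's own documentation), and the snippet assumes the Spring context jars are on the classpath:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {
    @Bean
    public TransferService transferService() {
        // Dependencies are expressed as plain Java method calls, checked by the compiler.
        return new TransferServiceImpl(accountRepository());
    }

    @Bean
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository();
    }
}

// Bootstrapping the container from the Java configuration:
// ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
// TransferService service = ctx.getBean(TransferService.class);
```

The separation between configuration and logic is preserved; only the notation changes from XML elements to type-checked Java.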

In my experience, close communication between the development team and the infrastructure team can be fostered by releasing frequently. The more you release, the more you actually know what the variability between your environments are. This also allows you to remove unnecessary configurability.
A corollary to Conway's law applies here - your config files will come to resemble the variety of environments your app is deployed to (planned or actual).
When I have a team deploying internal applications, I tend to drive towards config in code for all architectural concerns (connection pools, etc.) and config in files for all environmental config (usernames, connection strings, IP addresses). If there are different architectural concerns across different environments, I'll encapsulate those in one class and make that class name part of the config files - e.g.
container.config=FastInMemoryConfigurationForTesting
container.config=ProductionSizedConfiguration
Each one of these will use some common configuration, but will override/replace those parts of the architecture that need replacing.
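A minimal sketch of that pattern, with invented names: the architectural choices live in ordinary classes, and the environment's config file only supplies the class name.

```java
// Hypothetical sketch: architectural settings live in code,
// and only the class name is environmental configuration.
interface ContainerConfig {
    int connectionPoolSize();
}

class FastInMemoryConfigurationForTesting implements ContainerConfig {
    public int connectionPoolSize() { return 1; }   // tiny pool, fast startup
}

class ProductionSizedConfiguration implements ContainerConfig {
    public int connectionPoolSize() { return 50; }  // production-sized pool
}

public class ConfigLoader {
    // The 'container.config' value from the environment's config file selects the class.
    static ContainerConfig load(String configClassName) {
        try {
            return (ContainerConfig) Class.forName(configClassName)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Unknown config class: " + configClassName, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(load("ProductionSizedConfiguration").connectionPoolSize());
    }
}
```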
This is not always appropriate however. There are several things that will affect your choice:
1) how long it takes after releasing a new drop before it is deployed successfully in each production environment and you receive feedback on that environment (cycle time)
2) The variability in deployed environments
3) The accuracy of feedback garnered from the production environments.
So, when you have a customer who distributes your app to their dev teams for deployment, you are going to have to make your app much more configurable than if you push it live yourself. You could still rely on config in code, but that requires the target audience to understand your code. If you use a common configuration approach (e.g. Spring), you make it easier for end users to adapt and work around issues in their production environment.
But a rubric is: configurability is a substitute for communication.

XML is not meant to contain logic, and it's far from being a programming language.
XML is used to store data in a way that is easy to understand and modify.
As you say, it's often used to store definitions, not business logic.

You mentioned Spring in a comment on your question, which suggests you may be interested in the fact that Spring 3 lets you express your application contexts in Java rather than XML.
It's a bit of a brain-bender, but the definition of your beans, and their inter-dependencies, can be done in Java. It still keeps a clean separation between configuration and logic, but the line becomes that bit more blurred.

XML is mostly a data (noun) format. Code is mostly a processing (verb) format. From the design perspective, it makes sense to have your configuration in XML if it's mostly nouns (addresses, value settings, etc) and code if it's mostly verbs (processing flags, handler configurations, etc).

It's bad because it makes testing harder.
If you're writing code and using methods like getApplicationContext() to obtain the dependencies, you're throwing away some of the benefits of dependency injection.
When your objects and services don't need to know how to create or acquire the resources on which they depend, they're more loosely coupled to those dependencies.
Loose coupling means easier unit testing. It's hard to get something into a JUnit test if you need to instantiate all of its dependencies. When a class makes no assumptions about how its dependencies are created, it's easy to substitute mock objects for real ones in tests.
Also, if you can resist the urge to use getApplicationContext() and other code-based lookup techniques, then you can (sometimes) rely on Spring autowiring, which means even less configuration work. Configuration work, whether in code or in XML, is tedious, right?
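A small illustration of that testing benefit, using hypothetical names and a hand-rolled fake in place of a mocking library: because the service receives its dependency through the constructor, the test can hand it a recorder instead of a real mail server.

```java
// Hypothetical service interface and names, for illustration only.
interface MailSender {
    void send(String to, String body);
}

class WelcomeService {
    private final MailSender mailSender;

    // The service asks for its dependency instead of looking it up itself
    // (no getApplicationContext() call, no 'new SmtpMailSender()' inside).
    WelcomeService(MailSender mailSender) { this.mailSender = mailSender; }

    void welcome(String user) { mailSender.send(user, "Welcome, " + user + "!"); }
}

public class WelcomeServiceTest {
    // A fake that just records the last message, standing in for a mock object.
    static class RecordingMailSender implements MailSender {
        String lastTo, lastBody;
        public void send(String to, String body) { lastTo = to; lastBody = body; }
    }

    public static void main(String[] args) {
        RecordingMailSender fake = new RecordingMailSender();
        new WelcomeService(fake).welcome("alice");
        System.out.println(fake.lastTo + ": " + fake.lastBody);
    }
}
```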


require all namespaces by prefix

I have an application that can be extended with defmethod calls. The application should be extensible at runtime by adding new namespaces to the classpath that contain additional defmethod calls.
I am looking for a dependency injection solution. The question is: how will my application know what namespaces it should require so that the defmethod calls can take effect?
One solution is to have a central configuration file that contains the names of the namespaces that can be required. A drawback is that I need to edit the configurations by hand when I want to enable a plugin.
Another way is to somehow dynamically scan the classpath for additional namespaces and require them based on a predicate (for example, a namespace name prefix).
I found only these two solutions but I wonder what other ways may be around to do runtime dependency injection in Clojure. And what libraries are commonly used for this purpose?
Thank you in advance.
There are 3 dependency injection frameworks commonly used in Clojure land:
Component
Mount
Integrant
Of these, Integrant will probably fit your way of thinking best. However, in the past I thought I had the problem you are describing, and went with the scan-for-namespaces-to-require approach. But in the fullness of time I realised that I was structuring my code in a sub-optimal way, and thinking about it differently made the code easier to follow and fixed this backwards dependency problem at the same time. Your situation may well be different. The searching for namespaces to load does work though ;)

How best to organise my Visual Studio solution for Unit Tests and Dependency Injection

I have a fresh Visual Studio 2012 solution which consists of the following projects:
x1 Asp.Net Web API project (This holds my MVC/API controllers)
x1 Services project (based on the ASP.NET Empty Web Application template)
I've created two additional projects for the above (based on the ASP.NET Empty Web Application template) which I've labelled as my test projects.
I'm slowly getting my head around the whole TDD and DI approach to development hence my brain is a little swiss cheesed at this late hour.
I would like guidance on where I should setup DI and my unit tests. The approach I'm taking is to put as many of my methods in to my 'services' project. My main Web API project has a reference to my services project, so I can access all my public methods from that class library (I think that's the correct terminology).
I've now hit a brain block! I want to unit test as much as possible all the methods I expose in my services project. However my DI container (Castle Windsor) is only setup in my main API project, given that it has controllers etc.
So question:
For my services project do I just forget DI and write my unit tests direct to the concrete classes/interfaces? The reason I ask is because the Castle Windsor examples I've seen have been around setting up a container for my controllers/MVC web application and then instantiating the container via the application startup in global.ascx. If I should be using DI in my services project, how would I instantiate a container and where?
Following on from question 1, I wanted to gauge people's opinions on where/how to structure all my unit tests. The easiest place to stick my tests would be in my main Web API project, given that everything I'm doing is either directly coded in that project or pulled in as a reference (my services project in this case).
That said, I would imagine it would be better to have tests written exclusively for my services project, contained in their own project. That way I can test my methods without the Web API project being involved at any level. Who knows, the Web API project might get canned but the services project recycled for a different platform (hopefully you can see where I'm going with this).
The end goal of all this is to have most of my methods held in a services project, reusable for other projects in whatever form they may come. To have my Web API and other future MVC web applications pull functionality from my services library. All the while I have maintainable, manageable and above all 'testable' projects.
I'm at the start of my development cycle and doing all that I can in reading articles and reaching out to those in the know to get best advise so I can create solid foundations for my project.
To answer your questions: use an IOC container like Castle Windsor where you wire up your components. If you don't wire up components in your services project, there's no need for an IOC container there. It IS however a good idea to apply DI in your services project.
Sidebar: I think you might be confusing DI with IoC containers. DI is just a principle stating that you don't just "new up" dependencies, but rather ask for them nicely (through a constructor, a setter, whatever). IoC containers are a step further and do much of the heavy lifting once manually managing your dependencies becomes a burden. I like to hand-write my unit tests and NOT use a container like Castle Windsor there, so I really feel the pain of having too many dependencies in a class and can do something about that smell, instead of having the container act like a deodorant. There are some cases where I make an exception and do use an IoC container in my unit tests, for example for legacy code with huuuuuge lists of difficult dependencies.
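A minimal sketch of that distinction (all names invented): the first class news up its own dependency, while the second asks for it through its constructor, which is DI with no container in sight - the wiring is just an ordinary constructor call.

```java
// A tiny hypothetical dependency, written as a functional interface.
interface Clock { long now(); }

// Hard-wired: the class decides its own dependency with 'new' (well, a method reference here).
class HardWiredStamper {
    private final Clock clock = System::currentTimeMillis;
    long stamp() { return clock.now(); }
}

// Dependency-injected: the class asks nicely through its constructor.
class InjectedStamper {
    private final Clock clock;
    InjectedStamper(Clock clock) { this.clock = clock; }
    long stamp() { return clock.now(); }
}

public class DiPrincipleDemo {
    public static void main(String[] args) {
        // Manual wiring: this is DI without any IoC container.
        // A fixed clock also shows why the injected version is trivially testable.
        InjectedStamper stamper = new InjectedStamper(() -> 42L);
        System.out.println(stamper.stamp());
    }
}
```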
On your unit test question: some people like to stick them in the same project. I personally like to put them in a separate project that only contains tests so I can't "cheat". By cheating I mean the following: one of the reasons to write unit tests is to highlight painful parts in your code (this is what's called "listening to your tests"). By putting tests in the same project as your production code you can more easily reach into the internals of your objects (through the "internal" visibility keyword, for example). That's almost always a design smell and this smell is made more clear when your tests can't access your production code through its internals and have to use the public API.
Finally: it's super awesome that you're thinking about this stuff at the start of your project. This is when it's easiest to introduce practices like unit testing because you don't have a large -untestable- legacy codebase yet. Don't give up, it's definitely worth the effort once you get the hang of it!
You wire up your IOC container during application startup. Generally speaking, the code that initialises the container should be the only production code that knows what container you’re using. There can be exceptions, but they should be few and far between. Since you’re creating a binary dependency between your MVC project and your service project, you should only have a single container, created in your MVC project.
There are two main approaches you can follow for setting up your container. You can either setup all of your mappings/mapping rules within the MVC project, or you can split the responsibility so that your MVC project calls out to each library it depends on with the library having responsibility for the mappings within it. Which approach is best, is going to depend on various things including the complexity of your required mappings.
As I’ve said, with either approach to registering your mappings, the majority of your code shouldn’t know about your IOC. You should be writing your code to depend upon interfaces, which in production will be injected in by your container. When you are writing your unit tests then you get to decide what it makes the most sense to inject. Depending what you’re trying to test, you can then inject a real dependency or a Mocked/Stubbed dependency in order to allow you to have more control over the flow through your code under test.
I would strongly recommend not putting your tests in the same project as your production code. While it may be slightly easier to get up and running, it comes with downsides and risks. Your assembly will need references to anything your tests rely on (testing frameworks, mocking frameworks, etc.). The last thing I would want to do is deploy test code to a production server: at the very least it adds bulk to your binary, and at worst it introduces security vulnerabilities. You also run the risk of accidentally creating a dependency on one of your test classes (such as a helper method). If you put your tests in a separate class library that depends on your production code, these dependency issues go away.
I tend to create a unit test project for each production project. Depending on what I'm writing, I might use the InternalsVisibleTo attribute to let my test project see internal classes. I might also create one or more integration test projects. Having an integration test project that uses something like Selenium allows you to validate that your MVC project actually works end to end, which is an important aspect of testing to consider. It's very easy to get carried away with dependency injection and unit testing: you write a whole bunch of code, paired with unit tests, and then it falls over the first time you run it because you haven't set your container up properly.

Dependency injection just for testing or production?

Are dependency injection frameworks used just for testing, or are they used in production code? It seems that once you start using a framework like Ninject, you would not want to reverse everything out. Also, is there a performance hit to using something like Ninject?
Dependency Injection is an architecture pattern, not a testing one, so it is intended to be used when building production code. In fact, a really good framework introduces only tiny overhead, if any. I can't say for sure whether Ninject or Unity are like that; I used to implement my own.
The possible real downsides of DI are mostly around the coding process, not production performance. One that should probably be mentioned: when using DI you lose the ability to walk through the code with 'Go to Definition' - it always brings you to the interface, which is perfectly logical but still unhelpful.
You definitely use DI in a production environment. In a testing environment it is ok to just wire things up for the purposes of the test, indeed, if you are using mock objects, you might have to wire the classes up yourself.
Use it for everything! Why reverse it out? The performance hit for a quality framework should only happen on start up. Plus there might be a minor penalty until the VM can inline the extra layers. Either way you see a decrease in bugs and an increase in developer productivity.
DI is especially nice for production. In our software, we use the exact same configuration in production as we do in tests.
When the tests are run, the test configuration just flags certain services as being mocked out. This means that the test configuration:
lists out the mocked-out services
specifies return values for the various method calls on the mocked-out services' methods.
Test Benefits:
No config differences between production and test means no possibility of a latent config dependent bug
Services are injected in to the Test class as well.
Test configuration is trivial (only needed really for cases where services are mocked out)
Test configuration is always same as production. ( easier to maintain )
Service initialization is never manual and hacked together.
Production Benefits:
Clean start up code - no hard coded dependencies
Circular dependencies are not an issue.
Ability to "turn off" a local copy of a service and route the requests to a different box. ( Used for background batch tasks )
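The scheme above might be sketched like this (names and string stand-ins are invented for illustration): the test wiring starts from the production wiring and only overrides the services explicitly flagged as mocked, so the two configurations can never silently drift apart.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: strings stand in for real service objects.
class Wiring {
    final Map<String, Object> services = new HashMap<>();

    static Wiring production() {
        Wiring w = new Wiring();
        w.services.put("payments", "RealPaymentGateway");
        w.services.put("audit", "RealAuditLog");
        return w;
    }

    static Wiring test() {
        Wiring w = production();                          // identical to production...
        w.services.put("payments", "MockPaymentGateway"); // ...except the flagged mocks
        return w;
    }

    public static void main(String[] args) {
        System.out.println(test().services);
    }
}
```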

Comparison of Liferay ServiceBuilder to other code generation tools like AndroMDA

I started digging into the Liferay 6.x ServiceBuilder framework and really liked its code generation approach. A simple service.xml file can generate ready-to-use, powerful services without writing a single line of code.
I also tried looking into AndroMDA, which can generate similar services from a UML model. That sounds even more interesting, since it would link my business model directly without me needing to learn a new XML config (service.xml, in the case of Liferay ServiceBuilder).
Now I am in the process of deciding which tool I should use. Based on your experience with either of these tools, please let me know the pros and cons of each.
I am interested in these aspects, along with your own thoughts:
Which is better for keeping my development productive in the long term?
If I use ServiceBuilder, will I be able to use the services outside the portal environment (say, running the same service from a non-portal app server)?
Is a UML-driven approach always good, or are there practical cons/challenges to it?
Do you know of any other code generation library that is better than these two for Liferay 6.x development? I also checked these SO threads:
Do You Use Code Generators
Java Code Generation
A few problems I have experienced with ServiceBuilder (I am using Liferay 5.2.3):
Not able to make use of an ORM framework. There is no way to generate relations among objects, so I am effectively working with just an object mapper; it does not generate one-to-many style relations.
Cannot use basic object-oriented features like inheritance with domain objects or services.
It is quite hard to write unit test cases.
I still don't understand the need for the complex domain structure.
I feel the code it generates could be quickly written using an IDE.
But it definitely has its own benefits; like Egar said, it is specifically made for Liferay, so it can quickly generate everything that is needed for Liferay. I have heard that in the latest versions of Liferay a few of the above problems are fixed.
Overall it depends on your requirements. If you need more control over your ORM layer and you have complex business logic that needs quite a lot of unit testing, go for normal Spring services, which can be exposed as web services or REST services to your portlets.
Otherwise, ServiceBuilder is also good for simple portlets. Another approach is to use both: all complex services as a separate project, and the simple ones with ServiceBuilder.
There is an important fact that you should be aware of: ServiceBuilder has been used to help build the portal itself, and it is tightly integrated into it. You cannot use it outside of Liferay... I mean, it could probably be taken and modified for general usage, but I doubt that would make sense.
Most importantly, the Portal and each plugin that you develop have their own web application context in the servlet container - each has its own classloader. Plugins use the Portal classloader, portal services, etc.
Simply put, ServiceBuilder-generated code and its Spring context can exist only if there is a webapp/ROOT/ which is Liferay Portal, with the portal classloader and so on.
AndroMDA is an MDA framework for general usage. I don't know it well, so I'm not going to make comparisons. The power of ServiceBuilder is that it is not a framework for general usage - that is precisely what makes it so powerful for Liferay plugin development.

What preferred way to wire dependencies using IoC container?

I believe that most IoC containers allow you to wire dependencies with an XML configuration file. What are the pros and cons of using a configuration file vs. registering dependencies in code?
These pros and cons are based on my work with Spring. It may be slightly different for other containers.
XML
pro
flexible
more powerful than annotations in some areas
very explicit modelling of the dependencies of your classes
con
verbose
difficulties with refactoring
switching between several files
Annotations
pro
less file switching
auto configuration for simple wirings
less verbose than xml
con
more deployment-specific data in your code (usually you can override this with XML configs)
XML is almost(?) always needed, at least to set up the annotation-based config
annotation magic may lead to confusion when searching for the class that is used as a dependency
Code
pro
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't need to always specify the whole assembly qualified name (when talking .net))
con
wiring via code may lead to complex wirings
hard dependencies to IOC container in the codebase
I am using a mix of XML+Annotation. Some things especially regarding database access are always configured via xml, while things like the controllers or services are mostly configured via annotations in the code.
[EDIT: I have borrowed Mausch's Code pros]
XML pros:
Can change wiring and parameters without recompiling. Sometimes this is nice to have when switching environments (e.g. you can switch a fake email sender used in dev to the real email sender in production)
Code pros:
Can take advantage of strongly-typed languages (e.g. C#, Java)
Some compile-time checking (can't statically check dependencies, though)
Refactorable using regular refactoring tools.
Can take advantage of DSLs (e.g. Binsor, Fluent interfaces)
Less verbose than XML (e.g. you don't need to always specify the whole assembly qualified name (when talking .net))
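As a sketch of the fluent-interface point above, here is a toy code-based registration API (entirely invented; real containers such as Castle Windsor or Guice are far richer). The compiler checks every binding, which is the "strongly-typed" pro in action:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy fluent registration container, for illustration only.
class TinyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> TinyContainer bind(Class<T> type, Supplier<? extends T> factory) {
        bindings.put(type, factory);
        return this; // returning 'this' is what makes the API fluent
    }

    <T> T resolve(Class<T> type) {
        return type.cast(bindings.get(type).get());
    }
}

interface Repository { String find(); }
class InMemoryRepository implements Repository { public String find() { return "row"; } }

public class FluentWiringDemo {
    public static void main(String[] args) {
        // A typo in either side of the binding fails at compile time, not at startup.
        TinyContainer c = new TinyContainer()
                .bind(Repository.class, InMemoryRepository::new);
        System.out.println(c.resolve(Repository.class).find());
    }
}
```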
I concur. I have found IoC containers to give me very little, while they can very easily make things harder. I can solve most of the problems I face just by using my programming language of choice, which has always turned out to be simpler, easier to maintain, and easier to navigate.
I'm assuming that by "registering dependencies in code" you mean "use new".
'new' is an extraordinarily powerful dependency injection framework. It allows you to "inject" your "dependencies" at the time of object creation - meaning no forgotten parameters, or half-constructed objects.
The other major potential benefit is that when you use refactoring tools (say in ReSharper or IntelliJ), the calls to new change too.
Otherwise you can use some XML nonsense and refactor with XSL.
