Java EE 6: @Inject and Instance<T> - dependency-injection

I have a question about the @Inject annotation in Java EE 6:
What is the difference between:
@Inject
private TestBean test;
@Inject
private Instance<TestBean> test2;
To obtain the reference:
test2.get();
Some info about Instance: http://docs.oracle.com/javaee/6/api/javax/enterprise/inject/Instance.html
Maybe it doesn't create the object until get() is called? I just wanted to know which one is better for JVM memory. I think a direct @Inject will create an instance of the object right away, even if it's never used by the application...
Thank you!

The second is what's called deferred injection or initialization. In most circumstances, your container will defer the work of locating, initializing, and injecting the proper object for TestBean until you call get().
As far as "which one is better" goes, you should defer to the rules of optimization: don't optimize until you have a problem, and use a profiler.
In other words, use the first one unless you can definitively prove the second one is saving you significant amounts of memory and CPU.
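A minimal sketch of the two styles side by side (the Client class and its method names are made up for illustration; TestBean is assumed to be a plain CDI managed bean):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

@ApplicationScoped
public class Client {

    // Eager: the container wires this up when Client is created
    // (for normal-scoped beans it actually injects a client proxy).
    @Inject
    private TestBean test;

    // Deferred: only the Instance<TestBean> handle is injected up front;
    // the container does not resolve a TestBean until get() is called.
    @Inject
    private Instance<TestBean> test2;

    public void doWork() {
        TestBean bean = test2.get(); // resolution happens here
        // ... use bean ...
    }
}

Note that for normal-scoped beans the eagerly injected field holds a client proxy, and the underlying instance is usually created lazily on first use anyway; the memory difference matters most for @Dependent beans.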
Let me know if that answers your question!

Further information on use cases for Instance can be found in the documentation:
In certain situations, injection is not the most convenient way to obtain a contextual reference. For example, it may not be used when:
the bean type or qualifiers vary dynamically at runtime
there may be no bean which satisfies the type and qualifiers
we would like to iterate over all beans of a certain type
This is pretty cool, so you can do something like:
@Inject @MyQualifier Instance<MyType> allMyCandidates;
So you can obtain an Iterator from allMyCandidates and iterate over all the qualified objects.
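A short sketch of that iteration (@MyQualifier and MyType come from the snippet above; the class and method names are made up):

import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class Consumer {

    @Inject
    @MyQualifier
    Instance<MyType> allMyCandidates;

    public void processAll() {
        // Instance<T> implements Iterable<T>, so this visits every bean
        // matching the type MyType and the @MyQualifier qualifier.
        for (MyType candidate : allMyCandidates) {
            candidate.doSomething(); // hypothetical method on MyType
        }
    }
}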

Related

Can CDI dependency injection be optional?

In Spring DI, declaring an autowired field as Optional enables a client to not inject any value into it. Is this possible using Java EE's CDI? I tried Optional and it fails. I want to know if there is an equivalent mechanism I can use.
Here is what I tried:
public class OmeletteMaker implements EggMaker {
    public static void main(String[] args) {
        WeldContainer container = new Weld().initialize();
        OmeletteMaker omeletteMaker = container.instance().select(OmeletteMaker.class).get();
    }

    @Inject
    Optional<Vegetable> vegetable;
}
I get an error message:
Exception in thread "main" org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Optional] with qualifiers [@Default] at injection point [[BackedAnnotatedField] @Inject cafeteria.OmeletteMaker.vegetable]
There are many questions lurking in this seemingly simple question. I'll try to answer them bearing in mind the spirit of the question.
First, as a general rule, if you @Inject a Fred, that Fred cannot be null unless Fred is in @Dependent scope, and even then a producer method or custom bean will have to explicitly be written to return null. There are edge cases but in all modern CDI implementations this is a good rule of thumb to bear in mind.
Second, Optional isn't special. From the standpoint of CDI, an Optional is just another Java object, so see my first statement above. If you have something that produces an Optional (like a producer method) then it cannot make a null Optional (unless, again, the production is defined to be in the @Dependent scope—and if you were writing such a method to make Optional instances and returning null you are definitely going to confuse your users). If you are in control of producing Optional instances, then you can make them any way you like.
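To make the first point concrete, here is a sketch of the one common way to end up with a null injection: a @Dependent-scoped producer written to return null on purpose (FredProducer and the configuration check are made up):

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Produces;

public class FredProducer {

    // Only because the scope is @Dependent may this producer return null;
    // returning null from a producer for a normal-scoped bean results in
    // an IllegalProductException at runtime instead.
    @Produces
    @Dependent
    public Fred produceFred() {
        return isFredConfigured() ? new Fred() : null;
    }

    private boolean isFredConfigured() {
        return false; // stand-in for a real configuration lookup
    }
}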
Third, in case you want to test to see if there is a managed bean or a producer of some kind for a Fred, you can, as one of the comments on your question indicates, inject a Provider<Fred> or an Instance<Fred>. These are "made" by the container automatically: you don't have to write anything special to produce them yourself. A Provider<Fred> is an accessor of Fred instances and does not attempt to acquire an instance until its get() method is called.
An Instance is a Provider and an Iterable of all known Freds and can additionally tell you whether (a) it is "unsatisfied"—there are no producers of Fred at all—and (b) it is "resolvable"—i.e. there is exactly one producer of Fred.
Fourth, the common idiom in cases where you want to see if something is there is to inject an Instance parameterized with the type you want, and then check its isResolvable() method. If that returns true, then you can call its get() method and trust that its return value will be non-null (assuming the thing it makes is not in @Dependent scope).
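A sketch of that idiom, with Fred again standing in for your bean type (FredClient and findFred() are made-up names):

import java.util.Optional;
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

public class FredClient {

    @Inject
    private Instance<Fred> freds;

    public Optional<Fred> findFred() {
        // isResolvable() is true when exactly one bean matches the type
        // and qualifiers, so get() is then safe to call.
        if (freds.isResolvable()) {
            return Optional.of(freds.get());
        }
        return Optional.empty();
    }
}

This is also about as close as CDI gets to Spring's optional autowiring: the Optional is constructed by your own code rather than injected by the container.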
I hope this is helpful!

CDI and multiple instances

I'm just investigating the possibilities of DI frameworks, and I made a simple example for it. I have a simple service.
public class Service implements ServiceI {
    private Source source;
    private Translator translator;

    @Inject
    public Service(Translator translator, Source source) {
        this.translator = translator;
        this.source = source;
    }
}
I want to have two instances of this service: one initialized with TranslatorA and SourceA, and a second injected with different values.
How can one have two instances with different beans injected inside?
I'm interested in how to achieve this in both Guice and Weld CDI.
So far I have created multiple Guice modules and specified the bind-to mappings in each as I like. But I'm not completely sure this is the correct way. And it completely fails in CDI, as there are no modules.
I think that having multiple instances must be a pretty common case, or am I wrong?
The way you would do this with CDI is by setting up producers for the translator and the source. It's the only way to control which implementations are used for injection at runtime. The implementation details may vary based on your exact needs, but something like this should get you on the right track:
@Produces
public Translator produceTranslator(@Dependent TranslatorA implA, @Dependent TranslatorB implB) {
    return checkRuntimeCondition() ? implA : implB;
}
And the same for the source. That way, when you inject Service, CDI will call the producer method for each parameter and use a runtime condition to select the implementation. YMMV on the details; you may need to set up additional qualifiers to avoid ambiguity.
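If you instead need both wirings available side by side rather than one picked by a runtime condition, qualifiers are the usual CDI approach. A sketch, where the @VariantA qualifier and the producer class are made up (a @VariantB twin would mirror them):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.enterprise.inject.Produces;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
public @interface VariantA {}

public class ServiceProducers {

    // One concrete Translator/Source combination per qualifier.
    @Produces
    @VariantA
    public ServiceI produceServiceA(TranslatorA translator, SourceA source) {
        return new Service(translator, source);
    }
}

An injection point then picks the wiring it wants: @Inject @VariantA ServiceI serviceA;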

Mark Seemann's conflicting statements about Bastard Injection. Need some clarifications

I'm reading his book, Dependency Injection in .NET.
1) Here he's saying that Bastard Injection occurs only when we use a Foreign Default.
But in his book, the illustration on page 148 shows that Bastard Injection occurs when the default implementation of the dependency is either a Foreign Default or a Local Default:
So does the Bastard Injection anti-pattern also occur when the default implementation of the dependency is a Local Default?
2) Here (and also in his book) he notes that it's OK for a class to have an optional dependency, provided that the default implementation of this dependency is a good Local Default:
But in the next article he seems to object to having optional dependencies at all, even if the default implementation is a Local Default:
private readonly ILog log;

public MyConsumer(ILog log)
{
    this.log = log ?? LogManager.GetLogger("My");
}
In terms of encapsulation, the main problem with such an approach is
that it seems like the MyConsumer class can't really make up its mind
whether or not it controls the creation of its log dependency. While
this is a simplified example, this could become a problem if the ILog
instance returned by LogManager wraps an unmanaged resource which
should be disposed when it's no longer needed.
Are his arguments in the above excerpt also valid when the default implementation of the dependency is local? If so, should optional dependencies with local defaults also be avoided?
3)
Pg. 147:
The main problem with Bastard Injection is its use of a FOREIGN
DEFAULT ... , we can no longer freely reuse the class because it drags
along a dependency we may not want. It also becomes more difficult to
do parallel development because the class depends strongly on its
DEPENDENCY.
Foreign Default is an implementation of a dependency that's used as a default and is defined in a different assembly than its consumer. Thus, with a Foreign Default, the consumer's assembly will also drag along the dependency's assembly.
Is he also implying that a Foreign Default makes parallel development more difficult, while a Local Default doesn't? If he is, then that doesn't make sense to me, since I would assume that what makes parallel development difficult is not so much that the consumer's assembly has a hard reference to the dependency's assembly, but rather the fact that the consumer class depends on a concrete implementation of a dependency?
Thanks!
Since there are many questions here, I'll first attempt to provide a synthesis of my view on the subject, and then answer each question explicitly based on this material.
Synthesis
When I wrote the book, I first and foremost attempted to describe the patterns and anti-patterns I'd witnessed in the wild. Thus, the patterns and anti-patterns in the book are first and foremost descriptive, and only to a lesser degree prescriptive. Obviously, dividing them into patterns and anti-patterns imply a certain degree of judgement :)
There are problems with Bastard Injection on multiple levels:
Package dependency
Encapsulation
Ease of use
The most dangerous problem is related to package dependencies. This is the concept I've attempted to make more actionable by the introduction of the terms Foreign Default versus Local Default. The problem with Foreign Defaults is that they drag along hard-coupled dependencies, which makes (de/re)composition impossible. A good resource that deals more explicitly with package management is Agile Principles, Patterns, and Practices.
On the level of encapsulation, code like this is difficult to reason about:
private readonly ILog log;

public MyConsumer(ILog log)
{
    this.log = log ?? LogManager.GetLogger("My");
}
Although it protects the class' invariants, the problem is that in this case, null is an acceptable input value. This is not always the case. In the example above, LogManager.GetLogger("My") may only introduce a Local Default. From this code snippet, we have no way to know if this is true, but for the sake of argument, let's assume this for now. If the default ILog is indeed a Local Default, a client of MyConsumer can pass in null instead of ILog. Keep in mind that encapsulation is about making it easy for a client to use an object without understanding all the implementation details. This means that this is all a client sees:
public MyConsumer(ILog log)
In C# (and similar languages) it's possible to pass null instead of ILog, and it's going to compile:
var mc = new MyConsumer(null);
With the above implementation, not only will this compile, but it also works at run-time. According to Postel's law, that's a good thing, right?
Unfortunately, it isn't.
Consider another class with a required dependency; let's call it a Repository, simply because this is such a well-known (albeit overused) pattern:
private readonly IRepository repository;

public MyOtherConsumer(IRepository repository)
{
    if (repository == null)
        throw new ArgumentNullException("repository");
    this.repository = repository;
}
In keeping with encapsulation, a client only sees this:
public MyOtherConsumer(IRepository repository)
Based on previous experience, a programmer may be inclined to write code like this:
var moc = new MyOtherConsumer(null);
This still compiles, but fails at runtime!
How do you distinguish between these two constructors?
public MyConsumer(ILog log)
public MyOtherConsumer(IRepository repository)
You can't, but currently, you have inconsistent behaviour: in one case, null is a valid argument, but in another case, null will cause a runtime exception. This will decrease the trust that every client programmer will have in the API. Being consistent is a better way forward.
In order to make a class like MyConsumer easier to use, you must stay consistent. This is the reason why accepting null is a bad idea. A better approach is to use constructor chaining:
private readonly ILog log;

public MyConsumer() : this(LogManager.GetLogger("My")) {}

public MyConsumer(ILog log)
{
    if (log == null)
        throw new ArgumentNullException("log");
    this.log = log;
}
The client now sees this:
public MyConsumer()
public MyConsumer(ILog log)
This is consistent with MyOtherConsumer because if you attempt to pass null instead of ILog, you will get a runtime error.
While this is technically still Bastard Injection, I can live with this design for Local Defaults; in fact, I sometimes design APIs like this because it's a well-known idiom in many languages.
For many purposes, this is good enough, but still violates an important design principle:
Explicit is better than implicit
While constructor chaining enables a client to use MyConsumer with a default ILog, there's no easy way to figure out what the default instance of ILog would be. Sometimes, that's important too.
Additionally, the presence of a default constructor exposes a risk that a piece of code is going to invoke that default constructor outside of the Composition Root. If that happens, you've prematurely coupled two objects to each other, and once you've done that, you can't decouple them from within the Composition Root.
Thus, there's less risk involved in using plain Constructor Injection:
private readonly ILog log;

public MyConsumer(ILog log)
{
    if (log == null)
        throw new ArgumentNullException("log");
    this.log = log;
}
You can still compose MyConsumer with the default logger:
var mc = new MyConsumer(LogManager.GetLogger("My"));
If you want to make the Local Default more discoverable, you can expose it as a Factory somewhere, e.g. on the MyConsumer class itself:
public static ILog CreateDefaultLog()
{
    return LogManager.GetLogger("My");
}
All this sets the stage for answering the specific sub-questions in this question.
1. Does Bastard Injection anti-pattern also occur when default implementation of dependency is a Local Default?
Yes, technically, it does, but the consequences are less severe. Bastard Injection is first and foremost a description that will enable you to easily identify it when you encounter it.
Please note that the above illustration from the book describes how to refactor away from Bastard Injection; not how to identify it.
2. [Should] optional dependencies with local defaults [...] also be avoided?
From a package dependency perspective, you don't need to avoid those; they are relatively benign.
From a usage perspective, I still tend to avoid them, but it depends on what I'm building.
If I create a reusable library (e.g. an OSS project) that many people will use, I may still choose Constructor Chaining in order to make it easier to get started with the API.
If I'm creating a class only for use within a particular code base, I tend to entirely avoid optional dependencies, and instead compose everything explicitly in the Composition Root.
3. Is he also implying that Foreign Default makes parallel development more difficult, while Local Default doesn't?
No, I don't. If you have a default, the default must be in place before you can use it; it doesn't matter whether it's Local or Foreign.

How to instantiate a MEF exported object using Ninject?

My application is using MEF to export some classes from an external assembly. These classes are set up for constructor injection. The issue I am facing is that MEF is attempting to instantiate the classes when I try to access them. Is there a way to have Ninject take care of the instantiation of the classes?
IEnumerable<Lazy<IMyInterface>> controllers =
    mefContainer.GetExports<IMyInterface>();

// The following line throws an error because MEF is
// trying to instantiate a class that requires 5 parameters
IMyInterface firstClass = controllers.First().Value;
Update:
There are multiple classes that implement IMyInterface and I would like to select the one that has a specific name and then have Ninject create an instance of it. I'm not really sure if I want laziness.
[Export(typeof(IMyInterface))]
public class MyClassOne : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassOne(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}

[Export(typeof(IMyInterface))]
public class MyClassTwo : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassTwo(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
Using MEF, I would like to get either MyClassOne or MyClassTwo and then have Ninject provide an instance of MyRepository and YourRepository (Note, these two are bound in a Ninject module in the main assembly and not the assembly they are in)
You could use the Ninject Load mechanism to get the exported classes into the mix, and then you either:
kernel.GetAll<IMyInterface>()
The creation is lazy (i.e., each implementation of IMyInterface is created on the fly as you iterate over the above), IIRC, but have a look at the tests in the source (which is very clean and readable, so you have no excuse :P) to be sure.
If you don't need the laziness, use LINQ's ToArray or ToList to get an IMyInterface[] or a List<IMyInterface>.
Or you can use the low-level Resolve() family of methods (again, have a look in the tests for samples) to get the eligible services [if you want to do some filtering or something other than just using an instance - though binding metadata is probably the solution there].
Finally, it would help if you could edit in an explanation of whether you need the laziness per se or are using it to illustrate a point (and have a search for Lazy<T> here and in general with respect to both Ninject and Autofac for some samples - I can't recall if there are any examples in the source; I think not, as it's still on .NET 3.5).
EDIT: In that case, you want a binding like:
Bind<IMyInterface>().To<MyClassOne>().InSingletonScope().Named("x"); // scope as appropriate
in the registrations in your modules in the child assembly.
Then when you're resolving in the parent assembly, you use the Kernel.Get<> overload that takes a name parameter to indicate the one you want (no need for laziness, arrays or IEnumerable). The Named mechanism is a specific application of the binding metadata concept in Ninject (just one or two helper extensions implement it in terms of the generalised concept) - there's plenty of room to customise it if a simple name is insufficient.
If you're using MEF to construct the objects, you could use the Kernel.Inject() mechanism to inject properties. The problem is that either MEF or Ninject:
- has to find the types (Ninject: generally via Bind() in Modules or via scanning extensions, after which one can do a Resolve to subset the bindings before instantiation - though this isn't something you normally do)
- has to instantiate the types (Ninject: typically via Kernel.Get(), but if you discovered the types via e.g. MEF, you might use the Kernel.Get(Type) overloads)
- has to inject the types (Ninject: typically via Kernel.Inject(), or implicitly as part of Kernel.Get())
What's not clear to me yet is why you feel you need to mix and mangle the two - ultimately sharing duties during construction and constructor injection is not a core use case for either lib, even if they're both quite composable libraries. Do you have a constraint, or do you have critical benefits on both sides?
You can use ExportFactory to create instances.
See the docs here:
http://mef.codeplex.com/wikipage?title=PartCreator
Your case would be slightly different.
I would also use metadata and a custom attribute:
[ImportMany(AllowRecomposition = true)]
IEnumerable<ExportFactory<IMyInterface, IMyInterfaceMetaData>> Controllers { get; set; }

public IMyInterface CreateControllerFor(string parameter)
{
    var controller = Controllers
        .Where(v => v.Metadata.ControllerName == parameter)
        .FirstOrDefault()
        .CreateExport()
        .Value;
    return controller;
}
Or use Controllers.First() without the metadata.
Then you can code the Ninject parts around that, or even stick with MEF.
Hope this helps

How should I order my ctor parameters for DI/IOC?

I'm a bit of a DI newbie, so forgive me if this is the wrong approach or a silly question.
Let's say I have a form which creates/updates an order, and I know it's going to need to retrieve a list of products and customers to display. I want to pass in the Order object that it's editing, but I also want to inject the ProductsService and CustomersService as dependencies.
So I will want my IoC container (whichever one I go with) to supply the services, but it'll be up to the calling code to supply the Order object to edit.
Should I declare the constructor as taking the Order object as the first parameter and the ProductsService and CustomersService after that, e.g.:
public OrderForm(Order order, ProductsService prodsSvc, CustomersService custsSvc)
... or should the dependencies come first and the Order object last, e.g.:
public OrderForm(ProductsService prodsSvc, CustomersService custsSvc, Order order)
Does it matter? Does it depend on which IoC container I use? Or is there a "better" way?
Matt, you shouldn't mix normal parameters with dependencies. Since your object will be created in the internals of the IoC container, how are you going to specify the necessary arguments?
Mixing dependencies and normal arguments will make the logic of your program more complicated.
In this case it would be better to declare dependency properties (i.e. remove the dependencies from the constructor), or to initialize the order field after the IoC container has constructed OrderForm and resolved its dependencies (i.e. remove the normal parameters from the constructor).
Alternatively, you can declare all of your parameters, including order, as dependencies.
I disagree with @aku's answer.
I think what you're doing is fine and there are also other ways to do it that are no more or less right. For instance, one may question whether this object should be depending on services in the first place.
Regardless of DI, I feel it is helpful to clarify in your mind at least the kind of state each object holds, such as the real state (Order), derived state (if any), and dependencies (services):
http://tech.puredanger.com/2007/09/18/spelunking/
On any constructor or method, I prefer the real data to be passed first and dependencies or external stuff to be passed last. So in your example I'd prefer the first.
I feel a bit uneasy about allowing an instance of OrderForm to be instantiated without the required reference to an Order instance. One reason might be that this would prevent me from doing upfront checking for null orders. Any further thoughts?
I suppose I could take some comfort in knowing that OrderForm objects will only be instantiated by a Factory method that ensures the Order property is set after making the call to the IoC framework.
I am just a bit late to the party, but I would suggest using a factory in this case: the factory constructor would take the required *Service dependencies, which the DI system would resolve and inject, while its Build() method would accept any additional state parameters - like Order. The factory would, of course, itself be registered in the DI system and would play nicely with it.
public class OrderFormFactory
{
    private readonly ProductsService _prodsSvc;
    private readonly CustomersService _custsSvc;

    public OrderFormFactory(ProductsService prodsSvc, CustomersService custsSvc)
    {
        _prodsSvc = prodsSvc ?? throw new ArgumentNullException(nameof(prodsSvc));
        _custsSvc = custsSvc ?? throw new ArgumentNullException(nameof(custsSvc));
    }

    public OrderForm Build(Order order)
    {
        // TODO: Any additional logic
        return new OrderForm(_prodsSvc, _custsSvc, order);
    }
}
