Access static Java variables from js code in Nashorn engine - rhino

While porting old code from the Rhino engine to Nashorn in Java 8, I ran into a problem: static properties/methods of a Java object cannot be accessed from the running JS script. With Rhino, the same code runs perfectly. I don't know what has changed in the implementation of the new Nashorn engine.
import javax.script.*;

public class StaticVars {
    public static String myname = "John\n";

    public static void main(String[] args) {
        try {
            ScriptEngine engine;
            ScriptEngineManager manager = new ScriptEngineManager();
            engine = System.getProperty("java.version").startsWith("1.8")
                    ? manager.getEngineByName("Nashorn")     // Java 1.8.0_51
                    : manager.getEngineByName("JavaScript"); // Java 1.7
            engine.put("staticvars", new StaticVars());
            engine.eval("print(staticvars.myname);");
            // prints "John" if run with Java 7
            // prints "undefined" if run with Java 8
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

In Nashorn, you can't access a class's static members through instances of that class. There are several ways to get at statics. You can obtain a type object that acts both as a constructor and as a static namespace, much like a type name acts in Java:
var StaticVars = Java.type("StaticVars"); // use your full package name if you have it
print(StaticVars.myname);
Or, pass in a java.lang.Class object and use the .static pseudo-property to access the statics:
engine.put("StaticVarsClass", StaticVars.class);
followed by:
var StaticVars = StaticVarsClass.static;
print(StaticVars.myname);
in the script. In general, .static is the inverse operation to .class:
var BitSet = Java.type("java.util.BitSet");
var bitSetClass = BitSet.class; // produces a java.lang.Class object, just like in Java
print(BitSet === bitSetClass.static); // prints true
var bitSet = new BitSet(); // type object creates a new bitset when used as a constructor
var thisWontWork = new bitSetClass(); // java.lang.Class can't be used as a constructor.
As you can see, we distinguish three concepts:
the runtime class objects, which are instances of java.lang.Class. They aren't special; you can only use the Class API on them (.getSuperclass(), .getName(), etc.)
instances of classes themselves (normal objects that you can access instance members on)
type objects, which are both namespaces for static members of classes they represent, as well as constructors. The closest equivalent in Java to them is the name of the class as used in source code; in JavaScript they are actual objects.
This actually produces the least ambiguity, as everything is in its place, and it maps most closely to Java platform idioms.
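For completeness, a minimal sketch tying this back to the question's StaticVars class (Java 8, using the .static approach described above):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class StaticVars {
    public static String myname = "John\n";

    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        // Pass the Class object instead of an instance
        engine.put("StaticVarsClass", StaticVars.class);
        // The .static pseudo-property yields the type object carrying the statics
        engine.eval("var StaticVars = StaticVarsClass.static;"
                  + "print(StaticVars.myname);"); // prints "John"
    }
}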

This is not the way JS should work. I think this is a design bug in Nashorn. Assume you have a mixed util object passing vars from some Java runtime system to the JS script. This object contains one static method fmtNumber(someString) and one instance method jutil.getFormVar(someString). The users don't need to know that Java is serving this platform. You simply tell them jutil is a "system hook" belonging to the framework foo. As a user of this framework I don't care whether something is static or not. I am a JS developer; I don't know about static. I want to script something real quick. This is how the code looks in Rhino:
var x = jutil.getFormVar("x");
print(jutil.fmtNumber(x));
Now in Nashorn I have to distinguish between them. Even worse, I have to educate my users to distinguish between them and teach them Java terms they might not know, because that is what an abstraction layer is all about: a self-contained system without the need to know the underlying mechanisms. This distinction is way too much cognitive overload, and it ignores every use case other than Java developers scripting for themselves, which they probably won't do because they already know a good language called Java. You are thinking from your implementation as a Java developer, when instead you should think about how to use the power of the Java platform in the background, hiding all the nasty details from JS developers. What would a web developer say if he had to distinguish between static and instance members of the C++ implementation in the browser?
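That said, a framework can hide the distinction by evaluating a small prelude before user scripts run. A sketch, assuming a hypothetical com.example.JUtil class with the static fmtNumber and the instance method getFormVar described above:

// Java side: expose the instance under an internal name (JUtil is hypothetical)
engine.put("jutilImpl", new com.example.JUtil());

// Prelude: present one uniform 'jutil' object to user scripts,
// delegating to the static and the instance member respectively
engine.eval(
      "var JUtilType = Java.type('com.example.JUtil');"
    + "var jutil = {"
    + "  fmtNumber:  function (s) { return JUtilType.fmtNumber(s); }," // static
    + "  getFormVar: function (s) { return jutilImpl.getFormVar(s); }" // instance
    + "};");

// User scripts can now write: jutil.fmtNumber(jutil.getFormVar('x'))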

Related

How to inject an e4view with Guice injection

I am working on an existing Eclipse RCP based on Luna which consists of 99% 3.x API. We want to change this in an ongoing process; so when I was given the task of creating a new view, I wanted to use the new (in Luna, anyways) e4view element for the org.eclipse.ui.views extension point.
My problem is that part of the RCP uses xtext and thus, several components are available by using Guice.
I am now stranded with something like this
public class MyViewPart
{
    @Inject // <- should be injected via Guice (I used @com.google.inject.Inject, otherwise E4 DI would complain)
    ISomeCustomComponent component;

    @PostConstruct // <- should be called and injected via E4 DI
    public void createView(Composite parent)
    {
        // ...
    }
}
To get this injected with Guice, I would normally use an AbstractGuiceAwareExecutableExtensionFactory (as is usually done in Xtext contexts) like this:
<plugin>
   <extension point="org.eclipse.ui.views">
      <e4view
            class="my.app.MyExecutableExtensionFactory:my.app.MyViewPart"
            id="my.app.view"
            name="my view"
            restorable="true">
      </e4view>
   </extension>
</plugin>
But I did not expect this to work, because I thought it would bypass the E4 mechanism (actually, it seems to be the other way round: the e4view class element seems to ignore the extension factory and just uses my.app.MyViewPart and injects it with E4 DI. To be sure, I have set a class loading breakpoint on MyViewPart, which is hit from ContextInjectionFactory.make()).
As I said, I didn't expect both DI frameworks to coexist without conflict, so I think the solution to my problem would be to put those objects which I need injected into the E4 context.
I have googled a bit but I have found multiple approaches, and I don't know which one is the "correct" or "nice" one.
Among the approaches I have found, there are:
providing context functions which delegate to the guice injector
retrieving the objects from guice and configure them as injector bindings
retrieving the objects from guice, obtain a context and put them in the context
(The first two approaches are mentioned in the "Configure Bindings" section of https://wiki.eclipse.org/Eclipse4/RCP/Dependency_Injection)
And of course I could get the objects from Guice in the MyViewPart implementation, but that's not what I want...
[Edit:] In the meantime I have explored the options above a bit more:
Context Functions
I tried to register the context functions as services in the Bundle Activator with this utility method:
private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IContextFunction func = new ContextFunction()
    {
        @Override
        public Object compute(final IEclipseContext context, final String contextKey)
        {
            return guiceInjector.getInstance(clazz);
        }
    };
    ServiceRegistration<IContextFunction> registration =
        context.registerService(IContextFunction.class, func,
            new Hashtable<>(Collections.singletonMap(
                IContextFunction.SERVICE_CONTEXT_KEY, clazz.getName()
            )));
}
and called registerGuiceDelegatingInjection() in the BundleActivator's start() method for each class I needed to be retrieved via Guice.
For some reason, however, this did not work. The service itself was registered as expected (I checked via the OSGi console), but the context function was never called. Instead I got injection errors saying that the objects could not be found during injection. Maybe the context functions cannot be contributed dynamically, but have to be contributed via declarative services, so they are known as soon as the platform starts?
(The answer here is: yes. As the JavaDoc to IContextFunction says, context functions can optionally be registered as OSGi services [...] to seed context instances with initial values - and since the application context already exists when my bundle is started, the dynamically registered service is not seen by the ContextFactory in time.)
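For reference, a declarative services component definition for such a context function might look roughly like this (a sketch: the component and class names are illustrative, and the function class would extend ContextFunction directly instead of being created in the activator):

<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="my.app.guicebridge">
   <implementation class="my.app.GuiceDelegatingContextFunction"/>
   <property name="service.context.key" value="my.app.ISomeCustomComponent"/>
   <service>
      <provide interface="org.eclipse.e4.core.contexts.IContextFunction"/>
   </service>
</scr:component>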
Injector Bindings
I quickly found out that this solution does not work for me, because you can only specify an interface-class to implementation-class mapping in the form
InjectorFactory.getDefault().addBinding(IMyComponent.class).implementedBy(MyComponent.class)
You obviously cannot configure instances or factories this way, so this is not an option, because I need to delegate to Guice and get Guice-injected instances of the target classes...
Putting the objects in the context
This currently works for me, but is not very nice. See answer below.
[Edit 2:] As I have reported, putting the objects in the (application) context works for me. The downside is that having the objects in the application context is too global. If I had two or more bundles which would require injection of object instances for another DSL, I would have to take care (e.g., by using #Named annotations) to not get the wrong instance injected.
What I would like better is a way to extend the Part's context with which my e4view is created and injected directly. But so far I have not found a way to explicitly target that context when putting in my instances ...
Thanks for any further hints...
Try the processor mechanism of E4: you should be able to use a (pre or post) processor (along with the PostContextCreate annotation) to register your POJOs in the (global) IEclipseContext.
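A rough sketch of what such a processor could look like (GuiceBridgeProcessor and the static accessor for the Guice injector are assumptions, and contributing the processor to the application model is omitted here):

import org.eclipse.e4.core.contexts.IEclipseContext;
import org.eclipse.e4.ui.workbench.lifecycle.PostContextCreate;

public class GuiceBridgeProcessor // hypothetical name
{
    @PostContextCreate
    void populateContext(IEclipseContext context)
    {
        // Assumes the bundle exposes its Guice injector, e.g. via its activator
        context.set(ISomeCustomComponent.class,
                Activator.getGuiceInjector().getInstance(ISomeCustomComponent.class));
    }
}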
The solution that worked for me best so far was getting the IEclipseContext and put the required classes there myself during the bundle activator's start() method.
private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IServiceLocator s = PlatformUI.getWorkbench();
    IEclipseContext ctx = (IEclipseContext) s.getService(IEclipseContext.class);
    ctx.set(clazz.getName(), guiceInjector.getInstance(clazz));
}
This works, at least for now. I am not sure how it will work out in the future if more bundles directly put instances into the context; maybe named instances will be needed in the long term. Also, this works for me because the injected objects are singletons, so it does no harm to put single instances into the context.
I would have liked the context function approach better, but I could not get it to work so far.

Why does one use dependency injection?

I'm trying to understand dependency injections (DI), and once again I failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon) and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's quite a good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the type declaration doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you move the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data telling it which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. You could instead put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time (!), that is, dynamically, without recompiling!
To give you a useful example for this: think of software that does not log by default, but when your customer calls and asks for help because of a problem, all you send him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
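A minimal Java sketch of what such a mapping-driven factory could look like (the TypeFactory name mirrors the snippet above; the factory.properties file and its keys are illustrative, and a real container does much more):

import java.io.InputStream;
import java.util.Properties;

public final class TypeFactory {
    private static final Properties MAPPING = new Properties();

    static {
        // e.g. a line in factory.properties: ICanLog=com.example.TcpLogger
        try (InputStream in = TypeFactory.class.getResourceAsStream("/factory.properties")) {
            MAPPING.load(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // usage: ICanLog logger = TypeFactory.create(ICanLog.class);
    public static <T> T create(Class<T> iface) throws Exception {
        String impl = MAPPING.getProperty(iface.getSimpleName());
        return iface.cast(Class.forName(impl).getDeclaredConstructor().newInstance());
    }
}

Swapping the logger implementation is then an edit to factory.properties; no recompilation is needed.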
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces dependencies in your code, but now all your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of various classes asking the service locator for an implementation of a specific interface, you - once again - invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances you need of a specific interface: a DI framework always injects only one instance per parameter, but you can of course call a service locator inside a loop); hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where not constructor parameters but properties are used for defining and resolving dependencies. Think of property injection as optional dependencies, and of constructor injection as mandatory dependencies. But a discussion of this is beyond the scope of this question.
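To make the P.S. concrete, here is a small Java sketch contrasting the two styles (the ICanLog method signature, NoOpLogger and ReportExporter are illustrative assumptions, not from the answer above):

interface ICanLog { void log(String message); } // signature assumed for illustration

class NoOpLogger implements ICanLog {           // hypothetical do-nothing default
    public void log(String message) { /* ignore */ }
}

// Constructor injection: the dependency is mandatory
class Database {
    private final ICanLog logger;               // cannot be constructed without one

    Database(ICanLog logger) { this.logger = logger; }
}

// Property (setter) injection: the dependency is optional
class ReportExporter {
    private ICanLog logger = new NoOpLogger();  // safe default if nothing is injected

    public void setLogger(ICanLog logger) {     // a DI framework may call this setter
        this.logger = logger;
    }
}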
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
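For contrast, here is roughly what the same wiring looks like once a container takes over the Main() role. This sketch uses Java and Guice (which appears elsewhere on this page) rather than the C# above, and the class bodies are simplified:

import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

class B {
    void doSomething() { System.out.println("B at work"); }
}

class A {
    private final B b;

    @Inject
    A(B b) { this.b = b; } // the container supplies B

    void doSomeStuff() { b.doSomething(); }
}

public class Main {
    public static void main(String[] args) {
        // No module needed here: Guice can build concrete classes on its own
        Injector injector = Guice.createInjector();
        A a = injector.getInstance(A.class); // Guice builds B, then injects it into A
        a.doSomeStuff();
    }
}

The mega-function is gone, but note that the knowledge has simply moved into the container and its configuration.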
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside, and take control over their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The scenario that you might "need more than one logger" is never actually met, as far as my experience goes; the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds the offer object for you, like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now into the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution now would be to create an interface for this, one that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other one allows you to fake some time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing, but there are other ways of achieving it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same thing. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
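The same function-instead-of-interface idea carries over to Java 8, where a java.util.function.Supplier can stand in for the one-method interface. A sketch reusing the names from above (the Offer stand-in is simplified, and Formdata is omitted):

import java.time.LocalDateTime;
import java.util.function.Supplier;

class Offer {                                    // minimal stand-in for the Offer above
    LocalDateTime lastUpdated;
}

class OfferWeasel {
    private final Supplier<LocalDateTime> clock; // replaces the IGotTheTime interface

    OfferWeasel(Supplier<LocalDateTime> clock) {
        this.clock = clock;
    }

    Offer create() {
        Offer offer = new Offer();
        offer.lastUpdated = clock.get();         // injected time source
        return offer;
    }
}

// production: new OfferWeasel(LocalDateTime::now)
// in a test:  new OfferWeasel(() -> LocalDateTime.of(2013, 1, 13, 13, 1))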
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the thing to go with.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, say we're a central payment provider working with many payment providers around the world. When a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class because it's not decoupled properly; you can imagine that for every new processor you support, you'll need to add a new if/switch case to every method. This only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you could achieve something very neat and maintainable.
class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException("Unknown processor: " + type);
        }
    }
}

interface PaymentProcessor {
    void authorize();
}
** The code won't compile, I know :)
The main reason to use DI is that you want to put the responsibility for the knowledge of the implementation where the knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time. Many times you create code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, hence the front end has knowledge of which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the knowledge of the interface.
In between design by interface and DI is the pattern of a service locator, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(String pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires strongly typed design and only one implementation for each interface at the same time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding. They should go together. The use of D(b)I as a technical solution without organizational administration of service contracts is not very beneficial from my point of view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time, not at a single point in time. :-)
Quite frankly, I believe people use these dependency injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or other language equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like:
var logger = new Logger(); // sane, simple code
And the appropriate logger will be instantiated because the JVM (or whatever other runtime or .so loader you have) will fetch it from the classpath configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class, at load time, and skip the mocking framework madness.

How to organize DI Framework usage in an application?

EDIT: I forgot to move the kernel into a non-generic parent class here and supply a virtual method to access it. I do realize that the example below, as is, would create a plethora of kernel instances.
I just learned how to do injection this past week and here's how I've got things set up currently:
using Ninject;
using System.Reflection;

namespace Infrastructure
{
    public static class Inject<T>
    {
        static bool b = Bootstrap();
        static IKernel kernel;

        static bool Bootstrap()
        {
            kernel = new StandardKernel();
            kernel.Load(Assembly.GetExecutingAssembly());
            return true;
        }

        public static T New() { return kernel.Get<T>(); }
    }
}
And then I plan to make the various ninject module classes part of the Infrastructure namespace so that this will load them.
I haven't been able to find anything on here or Google that gives examples of how to actually organize the usage of Ninject in your project, but this seems right to me as it allows me to only need the Ninject reference in this assembly. Is this a more or less 'correct' way or is there a better design?
There are a few problems with how you are doing things now.
Let me first start with the obvious C# problem: static class variables in generic classes are shared on a per-T basis. In other words, Inject<IUserRepository> and Inject<IOrderRepository> will each get their own IKernel instance, which is unlikely to be what you really want, since most likely you need a single IKernel for the lifetime of your application. When you don't have a single IKernel for the application, there is no way to register types as singletons, since a singleton is always scoped at the container level, not at the application level. So you had better rewrite the class as non-generic and move the generic type argument to the method:
Inject.New<T>()
The second problem concerns dependency injection: it seems to me you are using the Service Locator anti-pattern, since you are probably explicitly calling Inject.New<T>() from within your application. A DI container should only be referenced in the start-up path of the application and should be able to construct a complete object graph of related objects. This way you can ask the container for a root-level object (for instance, a Controller in the context of MVC), and the rest of the application will be oblivious to the use of any DI technology. When you do this, there is no need to abstract the use of the container away (as you did with your Inject class).
Not all application or UI technologies allow this BTW. I tend to hide my container (just as you are doing) when working with a Web Forms application, because it is impossible to do proper dependency injection on Page classes, IHttpHandler objects, and UserControl classes.

How to instantiate a MEF exported object using Ninject?

My application is using MEF to export some classes from an external assembly. These classes are set up for constructor injection. The issue I am facing is that MEF is attempting to instantiate the classes when I try to access them. Is there a way to have Ninject take care of the instantiation of the class?
IEnumerable<Lazy<IMyInterface>> controllers =
mefContainer.GetExports<IMyInterface>();
// The following line throws an error because MEF is
// trying to instantiate a class that requires 5 parameters
IMyInterface firstClass = controllers.First().Value;
Update:
There are multiple classes that implement IMyInterface and I would like to select the one that has a specific name and then have Ninject create an instance of it. I'm not really sure if I want laziness.
[Export(typeof(IMyInterface))]
public class MyClassOne : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassOne(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}

[Export(typeof(IMyInterface))]
public class MyClassTwo : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassTwo(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
Using MEF, I would like to get either MyClassOne or MyClassTwo and then have Ninject provide an instance of MyRepository and YourRepository (Note, these two are bound in a Ninject module in the main assembly and not the assembly they are in)
You could use the Ninject Load mechanism to get the exported classes into the mix, and then you either:
kernel.GetAll<IMyInterface>()
The creation is lazy (i.e., each implementation of IMyInterface is created on the fly as you iterate over the above), IIRC, but have a look at the tests in the source (which is very clean and readable, you have no excuse :P) to be sure.
If you don't need the laziness, use LINQ's ToArray or ToList to get an IMyInterface[] or a List<IMyInterface>.
Or you can use the low-level Resolve() family of methods (again, have a look in the tests for samples) to get the eligible services [if you wanted to do some filtering or something other than just using an instance - though binding metadata is probably the solution there].
Finally, it would help if you could edit in an explanation of whether you need the laziness per se or are using it to illustrate a point (and have a search for Lazy<T> here and in general wrt both Ninject and Autofac for some samples - can't recall if there are any examples in the source; think not, as it's still on 3.5).
EDIT: In that case, you want a bind that has:
Bind<X>().To<>().In...().Named( "x" );
in the registrations in your modules in the child assembly.
Then when you're resolving in the parent assembly, you use the Kernel.Get<> overload that takes a name parameter to indicate the one you want (no need for laziness, arrays or IEnumerable). The Named mechanism is a specific application of the binding metadata concept in Ninject (just one or two helper extensions implement it in terms of the generalised concept) - there's plenty of room to customise it if a simple name is insufficient.
If you're using MEF to construct the objects, you could use the Kernel.Inject() mechanism to inject properties. The problem is that either MEF or Ninject:
- has to find the types (Ninject: generally via Bind() in modules or via scanning extensions, after which one can do a Resolve to subset the bindings before instantiation - though this isn't something you normally do)
- has to instantiate the types (Ninject: typically via a Kernel.Get(), but if you discovered the types via e.g. MEF, you might use the Kernel.Get(Type) overloads)
- has to inject the types (Ninject: typically via a Kernel.Inject(), or implicitly in the Kernel.Get())
What's not clear to me yet is why you feel you need to mix and mangle the two - ultimately sharing duties during construction and constructor injection is not a core use case for either lib, even if they're both quite composable libraries. Do you have a constraint, or do you have critical benefits on both sides?
You can use ExportFactory to create instances; see the docs here:
http://mef.codeplex.com/wikipage?title=PartCreator
Your case would be slightly different: I would also use metadata and a custom attribute.
[ImportMany(AllowRecomposition = true)]
IEnumerable<ExportFactory<IMyInterface, IMyInterfaceMetaData>> Controllers { get; set; }

public IMyInterface CreateControllerFor(string parameter)
{
    var controller = Controllers
        .Where(v => v.Metadata.ControllerName == parameter)
        .FirstOrDefault()
        .CreateExport()
        .Value;
    return controller;
}
Or use Controllers.First() without the metadata.
Then you can code the Ninject parts around that, or even stick with MEF.
Hope this helps

What is the difference between Early and Late Binding?

What is the difference between early and late binding?
The short answer is that early (or static) binding refers to compile time binding and late (or dynamic) binding refers to runtime binding (for example when you use reflection).
In compiled languages, the difference is stark.
Java:
// Early binding: the compiler resolves the constructor call at compile time.
public Foo createAFoo(Object... args) {
    return new Foo(args);
}
Foo myFoo = createAFoo();

// Late binding: the class to instantiate is only known at run time.
public Object createSomething(Class<?> klass, Object... args) throws Exception {
    return klass.getDeclaredConstructor(Object[].class).newInstance((Object) args);
}
Object myFoo = createSomething(Foo.class);
In the first example, the compiler can do all sorts of neat stuff at compile time. In the second, you just have to hope that whoever uses the method does so responsibly. (Of course, newer JVMs support the Class<? extends Foo> klass structure, which can greatly reduce this risk.)
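Spelled out, the Class<? extends Foo> variant mentioned above might look like this (a sketch assuming Foo has an accessible no-arg constructor):

// The bounded wildcard restores some compile-time safety to the late-bound version
public static Foo createSomething(Class<? extends Foo> klass) throws Exception {
    return klass.getDeclaredConstructor().newInstance(); // result is guaranteed to be a Foo
}

Foo myFoo = createSomething(Foo.class); // also accepts any subclass of Foo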
Another benefit is that IDEs can hotlink to the class definition, since it's declared right there in the method. The call to createSomething(Foo.class) might be very far from the method definition, and if you're looking at the method definition, it might be nice to see the implementation.
The major advantage of late binding is that it makes things like inversion-of-control easier, as well as certain other uses of polymorphism and duck-typing (if your language supports such things).
A similar but more detailed answer, from Herbert Schildt's C++ book:
Early binding refers to events that occur at compile time. In essence, early binding occurs when all information needed to call a function is known at compile time. (Put differently, early binding means that an object and a function call are bound during compilation.) Examples of early binding include normal function calls (including standard library functions), overloaded function calls, and overloaded operators. The main advantage to early binding is efficiency. Because all information necessary to call a function is determined at compile time, these types of function calls are very fast.
The opposite of early binding is late binding. Late binding refers to function calls that are not resolved until run time. Virtual functions are used to achieve late binding. As you know, when access is via a base pointer or reference, the virtual function actually called is determined by the type of object pointed to by the pointer. Because in most cases this cannot be determined at compile time, the object and the function are not linked until run time. The main advantage to late binding is flexibility. Unlike early binding, late binding allows you to create programs that can respond to events occurring while the program executes without having to create a large amount of "contingency code." Keep in mind that because a function call is not resolved until run time, late binding can make for somewhat slower execution times. However, today's fast computers have significantly reduced the execution times related to late binding.
Taken directly from http://word.mvps.org/fAQs/InterDev/EarlyvsLateBinding.htm
There are two ways to use Automation (or OLE Automation) to programmatically control another application.
Late binding uses CreateObject to create an instance of the application object, which you can then control. For example, to create a new instance of Excel using late binding:

Dim oXL As Object
Set oXL = CreateObject("Excel.Application")

On the other hand, to manipulate an existing instance of Excel (if Excel is already open) you would use GetObject (regardless of whether you're using early or late binding):

Dim oXL As Object
Set oXL = GetObject(, "Excel.Application")

To use early binding, you first need to set a reference in your project to the application you want to manipulate. In the VB Editor of any Office application, or in VB itself, you do this by selecting Tools + References, and selecting the application you want from the list (e.g. "Microsoft Excel 8.0 Object Library").
To create a new instance of Excel using early binding:

Dim oXL As Excel.Application
Set oXL = New Excel.Application

In either case, incidentally, you can first try to get an existing instance of Excel, and if that returns an error, you can create a new instance in your error handler.
In interpreted languages, the difference is a little more subtle.
Ruby:
# early binding:
def create_a_foo(*args)
Foo.new(*args)
end
my_foo = create_a_foo
# late binding:
def create_something(klass, *args)
klass.new(*args)
end
my_foo = create_something(Foo)
Because Ruby is (generally) not compiled, there isn't a compiler to do the nifty up-front stuff. The growth of JRuby means that more Ruby is compiled these days, though, making it act more like Java, above.
The issue with IDEs still stands: a platform like Eclipse can look up class definitions if you hard-code them, but cannot if you leave them up to the caller.
Inversion-of-control is not terribly popular in Ruby, probably because of its extreme runtime flexibility, but Rails makes great use of late binding to reduce the amount of configuration necessary to get your application going.
public class Child {
    public void method1() {
        System.out.println("child1");
    }

    public void method2() {
        System.out.println("child2");
    }
}

public class Teenager extends Child {
    public void method3() {
        System.out.println("teenager3");
    }
}

public class Adult extends Teenager {
    @Override
    public void method1() {
        System.out.println("adult1");
        super.method1();
    }
}

// In Java:
public static void main(String[] args) {
    Teenager var = new Adult();
    ((Teenager) var).method1();
}
This will print out
adult1
child1
In early binding, the compiler has access to all of the methods in Child and Teenager, but in late binding (at runtime) the overridden methods are looked up. Hence method1 (from Child, bound early) is replaced at runtime by method1 from Adult (late binding), which in turn calls the implementation inherited from Child via super.method1(), since Teenager does not override method1. Note that if Child did not have a method1, the code in main would not compile.
Compile-time polymorphism, also called overloading, early binding or static binding, occurs when the same method name exists with different parameter lists, i.e. multiple prototypes of the same method with different behavior. Early binding is resolved when the program is compiled. Late binding, by contrast, is resolved at runtime on the actual object; it is also called dynamic binding, overriding or runtime polymorphism.
The easiest example in java:
Early (static or overloading) binding:
public class Duck {
    public static void quack() {
        System.out.println("Quack");
    }
}

public class RubberDuck extends Duck {
    public static void quack() {
        System.out.println("Piiiiiiiiii");
    }
}

public class EarlyTest {
    public static void main(String[] args) {
        Duck duck = new Duck();
        Duck rubberduck = new RubberDuck();

        duck.quack();
        rubberduck.quack(); // early binding - compile time
    }
}
Result is:
Quack
Quack
while for Late (dynamic or overriding) binding:
public class Duck {
    public void quack() {
        System.out.println("Quack");
    }
}

public class RubberDuck extends Duck {
    @Override
    public void quack() {
        System.out.println("Piiiiiiiiii");
    }
}

public class LateTest {
    public static void main(String[] args) {
        Duck duck = new Duck();
        Duck rubberduck = new RubberDuck();

        duck.quack();
        rubberduck.quack(); // late binding - runtime
    }
}
result is:
Quack
Piiiiiiiiii
Early binding happens at compile time, while late binding happens at runtime.
