I've been experimenting with the dynamic component loader in Angular 2. An issue, possibly related to https://github.com/angular/angular/issues/4330, seems to be that once a component is loaded with, say, the loadIntoLocation() function, its injector cannot find the parent component's injector or the things that were injected into it. As that issue suggests, you can pass an array of resolved providers, obtained from the parent's injector via Injector.resolve(), as the last parameter of loadIntoLocation().
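For context, the pattern I'm using looks roughly like this (a sketch against the beta-era API; MyService, DynamicChildComponent and the 'child' anchor are placeholders for my real code):

import {Component, DynamicComponentLoader, ElementRef, Injector} from 'angular2/core';
import {DynamicChildComponent} from './dynamic-child.component';
import {MyService} from './my.service';

@Component({
  selector: 'host-cmp',
  template: `<div #child></div>`,
  providers: [MyService]
})
export class HostComponent {
  constructor(private loader: DynamicComponentLoader,
              private elementRef: ElementRef) {}

  load() {
    // Last argument: the parent's providers, resolved, as the issue suggests.
    this.loader.loadIntoLocation(
        DynamicChildComponent, this.elementRef, 'child',
        Injector.resolve([MyService]));
  }
}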
This seems to work initially, but I have found that the children of the dynamically loaded component have the same problem. The children's injector does not know to look up the injection tree for providers, so a standard injection in a child's constructor like
constructor( myComponent: MyComponent)
does not work. The children (of the dynamically loaded component) are not dynamically loaded themselves but just "normally" instantiated via a template, selector, etc. I am wondering:
Is this (still) a known issue or am I misunderstanding anything?
If it is a known issue, is there any workaround at the child level? I tried a constructor as above, as well as @Host and forwardRef + @Host, but none of them work. Is there another way to manually pass bindings to a component that is not dynamically loaded?
Is there any other possible workaround for this?
The problem seems to be caused by the resolved providers passed to loadIntoLocation(). DI is hierarchical and tries to resolve required types by walking the hierarchy towards the root, but the chain is broken at loadIntoLocation() because providers are passed there instead of a child injector.
The dynamically added component and its children can only resolve providers passed to loadIntoLocation() or listed in the providers list of the dynamically added component itself (or of one of its children, if that child is a (grand-)parent of the component actually requesting the dependency).
When DI walks upwards from within the dynamically inserted tree to resolve a dependency, the iteration stops at the component added by loadIntoLocation(), because that component's injector doesn't have a parent injector (which would be the injector of the host component where the dynamically added component was inserted).
See also https://github.com/angular/angular/issues/5990
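Concretely (with placeholder names), a child of the dynamically added component can only resolve a provider if it was passed to loadIntoLocation() as resolved providers, or if it is listed on the dynamically added component (or a component in between), e.g.:

import {Component} from 'angular2/core';
import {MyService} from './my.service';

@Component({
  selector: 'grand-child',
  template: '...'
})
export class GrandChildComponent {
  constructor(service: MyService) {}  // the upward lookup stops at DynamicChildComponent
}

@Component({
  selector: 'dynamic-child',
  template: `<grand-child></grand-child>`,
  directives: [GrandChildComponent],
  providers: [MyService]  // without this (or resolved providers), the child's injection fails
})
export class DynamicChildComponent {}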
Correction on this: I had conflated another problem in my code with what I thought was the bug described above. The children of my dynamically loaded component were derived from abstract classes. I was trying to inject the abstract class into the children, as opposed to the actual implementation, which, per this
Interface based programming with TypeScript, Angular 2 & SystemJS
I have learned you cannot do. The injection tree does get broken by a dynamic load (which I do think should be corrected), but I take back the part about the children thereafter not being able to traverse their own tree. Thanks for the comments - they helped me sort it out.
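For anyone hitting the same thing, the difference was roughly this (placeholder names):

// before - injecting the abstract base class into the child (does not resolve):
constructor(parent: AbstractParentComponent)

// after - injecting the concrete component type (resolves fine):
constructor(parent: ConcreteParentComponent)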
Related
Say I have many instances of a component that consists of child components, or maybe even a hierarchy of components beneath it. The top-level component may have data that the child components are interested in. There are a couple of ways I've tried to handle this:
Injecting the parent component into the child components.
Pass data through mapped attributes.
The first solution introduces a lot of coupling, and the second solution is verbose and messy. What I'd like to do is create a simple model class that the parent component would instantiate and bind so that child components could inject it, without affecting any injector outside the hierarchy. I tried creating a child injector and binding the model there, but it didn't work (the children don't use the child injector, which makes sense).
I'm pretty certain angular does something like this for injecting elements into components, but I couldn't figure out how after a cursory search. So, is this possible? How would I do it?
You can define a module in a directive or component.
Example:

@NgDirective(selector: '[foo]', module: FooDirective.initModule)
class FooDirective {
  static Module initModule() => new Module()..type(SomeTypeA);
}
see:
https://github.com/angular/angular.dart/pull/779
https://github.com/angular/angular.dart/issues/652
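Applied to the question above, that could look roughly like this (a sketch with made-up names; the exact annotation parameters depend on the angular.dart version, so it mirrors the directive form shown above):

import 'package:angular/angular.dart';  // assuming the usual angular.dart import

// A shared model published through a directive-level module, so anything
// beneath [shared-scope] in the template can simply inject it.
class SharedModel {
  String title = '';
}

@NgDirective(
    selector: '[shared-scope]',
    module: SharedScopeDirective.initModule)
class SharedScopeDirective {
  final SharedModel model;
  SharedScopeDirective(this.model);  // the parent owns/initializes the model
  static Module initModule() => new Module()..type(SharedModel);
}

@NgDirective(selector: '[shared-consumer]')
class SharedConsumerDirective {
  final SharedModel model;
  SharedConsumerDirective(this.model);  // children just ask for it
}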
I have an Injector instance a, and I'd like to create another Injector b which does the same as a, except for two bindings, which get overridden by the Module I provide. Is this possible?
I know about Modules.override, but that does not take an Injector as argument. If it was possible to convert an Injector to a Module, that would solve my problem.
The simplest way of thinking about this would be through child injectors, but that is explicitly disallowed as a design decision:
The reason overriding a binding in a child injector isn't supported is because it can lead a developer towards writing code that can work in either a parent & child injector, but have different behavior in each. This can lead to very surprising scenarios, because of just-in-time (JIT) bindings and the way they interact with parent/child injectors.
At this point, I would probably think about restructuring your application to avoid requiring these complicated bindings, but if you want to go further, you can probably use Injector.getBindings() or Injector.getAllBindings() (note the difference!) and stitch them back into a module using the Elements SPI. After all, Binding<?> extends Element, and Elements.getModule(...) will create a Module from your Elements. I haven't checked that it works, but that's probably your best lead.
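A rough, untested sketch of that idea (the class and method names here are mine, and you may need to filter out Guice's built-in bindings such as Injector, Stage and Logger before rebuilding):

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;
import com.google.inject.spi.Elements;
import com.google.inject.util.Modules;

public final class InjectorCopies {
  /** Builds a new injector that mirrors {@code a}, with {@code overrides} winning. */
  public static Injector copyWithOverrides(Injector a, Module overrides) {
    // Turn a's explicit bindings back into a Module via the Elements SPI...
    Module base = Elements.getModule(a.getBindings().values());
    // ...then let the supplied module override the conflicting bindings.
    return Guice.createInjector(Modules.override(base).with(overrides));
  }
}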
This is more of a conceptual question.
I had to work on a feature that had to create a dynamic h:dataTable, and whenever I created a component, I did something similar to this:
DataTable table = (DataTable) FacesContext.getCurrentInstance().getApplication()
.createComponent(DataTable.COMPONENT_TYPE);
Using the FacesContext to create everything for me.
However, I could just as simply have done this:
DataTable table = new DataTable();
The reason I did it the first way is that all the tutorials and material I read while developing did it that way, but I never got a clear answer as to why.
Is there an actual reason why the first is better than the second?
Application#createComponent() adds an extra abstraction layer, allowing runtime polymorphism and pluggability. The concrete implementation is configurable via a <component> entry in faces-config.xml, which could in turn be provided by a JAR. This allows changing the implementation without rewriting/recompiling the code.
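For example, a component library (or your own override) could remap a component type to its own subclass with an entry along these lines in faces-config.xml (the type and class names here are only placeholders; the real component-type value must match the component's COMPONENT_TYPE constant):

<component>
    <component-type>com.example.DataTable</component-type>
    <component-class>com.example.MyExtendedDataTable</component-class>
</component>

With such an entry in place, createComponent() starts returning the substituted class everywhere, while new DataTable() keeps constructing the original one.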
It's exactly like how the JDBC API works: you don't do new SomeDriver(); you do Class.forName(someDriverClassName), which means the driver doesn't have to be a compile-time dependency and your JDBC code stays portable across many DB vendors without rewriting/recompiling.
However, if the application is for "internal usage" only and not intended to be distributable (and thus all the code is always fully under your control), then runtime polymorphism offers much less of an advantage and may add (very minor) overhead.
See also:
What is the relationship between component family, component type and renderer type?
I have a bit of a dilemma, which to be honest is a fringe case but still poses an issue.
Currently I am using Ninject MVC and bind all my controllers like so:
Kernel.Bind<SomeController>().ToSelf();
This works a treat for 99% of what I have needed to do; however, at the moment I am doing some wacky stuff around dynamic routing and dynamic controllers, which requires me to manually write a method to get the type of a controller from Ninject. Initially I thought it would be easy, but it's not... I was expecting that I could get the controller based on its name, but that didn't work:
Kernel.Get<IController>("SomeController");
That got me thinking that it's probably because it only knows about a binding to SomeController, not IController. So I thought I could just write all my bindings like so:
Kernel.Bind<IController>().To<SomeController>().Named("SomeController");
This way it should be easy to get the type of the controller from its name using the previous code; however, if I were to bind this way, I would have a problem when I come to unbind the controllers (as plugins can be loaded and unloaded at runtime). So the normal:
Kernel.Unbind<SomeController>()
which worked great, will no longer work; I would have to do:
Kernel.Unbind<IController>();
However, I then realised that I need to give it some constraint to tell it which binding for this type I want to unbind, and there seem to be no overloads or DSL available to do this...
So I am trapped between a rock and a hard place, as I need to satisfy the ControllerLookup method, but also need to keep it so I can add and remove bindings easily at runtime.
protected override Type GetControllerType(RequestContext requestContext, string controllerName) {
//... find and return type from ninject
}
Anyone have any ideas?
(Just in case anyone questions why I am doing this: it's because of the way I am loading plugins. Ninject knows about the types and the namespaces, but within the context of creating a controller it doesn't know the namespace, just the controller name, so I do this to respect the isolation of the plugin and the location of the dynamic controller. It is a roundabout way of doing it, but it is what people have done with AutoFac before: Example of similar thing with AutoFac)
In my opinion, the bindings should be created once at application startup and not changed any more after the first resolve. Everything else can lead to strange issues. Unless you have proper isolation using an AppDomain for each plugin, you cannot really unload the plugins anyway. Instead of unloading bindings, you can make them conditional and disable them using some configuration.
If you really want to unload bindings, then I suggest not doing it for single bindings but taking advantage of modules: load all the bindings belonging to one plugin together in one or several modules, and unload those modules instead of the individual bindings.
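A rough sketch of that idea (SomeController and the module name are placeholders; by default a module's name is its type's full name):

using Ninject;
using Ninject.Modules;
using System.Web.Mvc;

// One module per plugin: all of the plugin's bindings live here,
// so the whole plugin can be unloaded in one call.
public class SomePluginModule : NinjectModule
{
    public override void Load()
    {
        Bind<IController>().To<SomeController>().Named("SomeController");
        // ... the rest of the plugin's bindings ...
    }
}

// At plugin load time:
//   kernel.Load(new SomePluginModule());
// At plugin unload time:
//   kernel.Unload(typeof(SomePluginModule).FullName);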
I am putting together a built-in script capability using the excellent Pascal DWScript. I have also added my own Delphi-side class definition (TDemo) to DWScript using:
dwsUnit.ExposeRTTI( TDemo.ClassInfo )
This just works and is a great way of quickly adding properties and methods.
I also wish to add an existing instance in a similar way, so I have created my instance FDemo of type TDemo and then performed:
dwsUnit.ExposeInstanceToUnit( 'Demo', 'TDemo', FDemo );
This looks like a promising routine to call, but I get an AV from an uninitialised unit table. I've also looked in the unit test code of the SVN source to see how this function is used, but to no avail. Can anyone point me at what I should add/change?
ExposeInstanceToUnit has to be used from within the TdwsUnit table initialization; see RTTIExposeTests/ExposeInstancesAfterInitTable for some sample code. It allows directly exposing dynamic instances.
The other approach is to use the Instances collection of a TdwsUnit component: you get design-time support and more control over your instances and their lifetime.
Also keep in mind you have to make sure the instances you expose will behave properly even if the script misbehaves, for instance when the user attempts to manually destroy an instance you exposed that shouldn't be destroyed. By default ExposeRTTI will map the destructors, so you may want to restrict that by specifying eoNoFreeOnCleanup.
Edit: one last approach, recently added, is to use the TdwsRttiConnector, which basically allows exposing and connecting to anything that's reachable through RTTI. That's very lightweight in terms of code to set up, but the downside is you don't get any form of compile-time checks.