Injecting generated classes without writing too much module configuration code - dependency-injection

Here's the situation: I have an abstract class with a constructor that takes a boolean (which controls some caching behavior):
abstract class BaseFoo { protected BaseFoo(boolean cache) {...} }
The implementations are all generated source code (many dozens of them). I want to create bindings for all of them automatically, i.e. without explicit hand-coding for each type being bound. I want the injection sites to be able to specify either caching or non-caching (true/false ctor param). For example I might have two injections like:
@Inject DependsOnSomeFoos(@NonCaching AFoo aFoo, @Caching BFoo bFoo) {...}
(Arguably that's a bad thing to do, since the decision to cache or not might better be in a module. But it seems useful given what I'm working with.)
The question then is: what's the best way to configure bindings to produce a set of generated types in a uniform way, that supports a binding annotation as well as constructor param on the concrete class?
Previously I just had a default constructor on the implementation classes and simply put an @ImplementedBy on each of the generated interfaces. E.g.:
// This is all generated source...
@ImplementedBy(AFooImpl.class)
interface AFoo { ... }

class AFooImpl extends BaseFoo implements AFoo {
    AFooImpl() { super(true); }
}
But, now I want to allow individual injection points to decide if true or false is passed to BaseFoo, instead of it always defaulting to true. I tried to set up an injection listener to (sneakily) change the true/false value post-construction, but I couldn't see how to "listen" for a range of types injected with a certain annotation.
The problem I keep coming back to is that bindings need to be for a specific type, but I don't want to enumerate all my types centrally.
I also considered:
Writing some kind of scanner to discover all the generated classes and add a pair of bindings for each of them, perhaps using Google Reflections.
Creating additional, trivial "non caching" types (e.g. AFoo.NoCache extends AFoo), which would allow me to go back to @ImplementedBy.
Hard wiring each specific type as either caching/non-caching when it's generated.
I'm not feeling great about any of those ideas. Is there a better way?
UPDATE: Thanks for the comment and answer. I think generating a small module alongside each type and writing out a list of the modules to pull in at runtime via getResources is the winner.
That said, after talking with a coworker, we might just dodge the question as I posed it and instead inject a strategy object with a method like boolean shouldCache(Class<? extends BaseFoo> c) into each generated class. The strategy can be implemented on top of the application config and would provide both coarse- and fine-grained control. This gives up the requirement to vary the behavior by injection site; on the plus side, we don't need the extra modules.
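For illustration, a minimal sketch of what that strategy could look like; the names CacheStrategy and ConfigBackedCacheStrategy, and the Config type with its getBoolean lookup, are all hypothetical, not from the original code:

interface CacheStrategy {
    boolean shouldCache(Class<? extends BaseFoo> c);
}

// A config-backed implementation: per-class settings with a coarse default.
// Config and its getBoolean(key, fallback) method are assumed for illustration.
class ConfigBackedCacheStrategy implements CacheStrategy {
    private final Config config;

    @Inject
    ConfigBackedCacheStrategy(Config config) {
        this.config = config;
    }

    @Override
    public boolean shouldCache(Class<? extends BaseFoo> c) {
        boolean coarseDefault = config.getBoolean("cache.default", true);
        return config.getBoolean("cache." + c.getName(), coarseDefault);
    }
}

Each generated class would then call strategy.shouldCache(getClass()) instead of receiving the boolean through its constructor.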

There are two more approaches to look at, beyond the ones you mentioned:
Inject Factory classes instead of your real class; that is, your hand-coded stuff would end up saying:
@Inject
DependsOnSomeFoos(AFoo.Factory aFooFactory, BFoo.Factory bFooFactory) {
    AFoo aFoo = aFooFactory.caching();
    BFoo bFoo = bFooFactory.nonCaching();
    ...
}
and your generated code would say:
// In AFoo.java
interface AFoo {
    @ImplementedBy(AFooImpl.Factory.class)
    interface Factory extends FooFactory<AFoo> {}
    // ...
}
// In AFooImpl.java
class AFooImpl extends BaseFoo implements AFoo {
    AFooImpl(boolean caching, StuffNeededByAFIConstructor otherStuff) {
        super(caching);
        // use otherStuff
    }
    // ...

    // static, so Guice can instantiate it without an enclosing AFooImpl
    static class Factory implements AFoo.Factory {
        @Inject Provider<StuffNeededByAFIConstructor> provider;

        public AFoo caching() {
            return new AFooImpl(true, provider.get());
        }
        // ...
    }
}
Of course this depends on an interface FooFactory:
interface FooFactory<T> {
    T caching();
    T nonCaching();
}
Modify the process that does your code generation to also generate a Guice module that you then use in your application setup. I don't know how your code generation is currently structured, but if you have some way of knowing the full set of classes at generation time, you can either do this directly or append to a file that is later loaded with ClassLoader.getResources, as part of a Guice module that autodiscovers what classes to bind.
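To make that concrete, here is one possible shape of such an autodiscovering module; the resource name META-INF/generated-modules and the one-class-name-per-line convention are assumptions, not part of the original answer:

import com.google.inject.AbstractModule;
import com.google.inject.Module;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;

class GeneratedBindingsModule extends AbstractModule {
    @Override
    protected void configure() {
        try {
            // Each code-generation run appends its module's class name to this file.
            Enumeration<URL> lists =
                getClass().getClassLoader().getResources("META-INF/generated-modules");
            while (lists.hasMoreElements()) {
                URL url = lists.nextElement();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (!line.trim().isEmpty()) {
                            // Instantiate and install the generated module.
                            install((Module) Class.forName(line.trim())
                                    .getDeclaredConstructor().newInstance());
                        }
                    }
                }
            }
        } catch (Exception e) {
            throw new RuntimeException("Failed to load generated modules", e);
        }
    }
}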

Related

CDI and multiple instances

I'm just investigating the possibilities of DI frameworks, and I made a simple example for it. I have a simple service:
public class Service implements ServiceI {
    private Source source;
    private Translator translator;

    @Inject
    public Service(Translator translator, Source source) {
        this.translator = translator;
        this.source = source;
    }
}
I want to have two instances of this service: one initialized with TranslatorA and SourceA, and a second injected with different values.
How can one have two instances with different beans injected inside?
I'm interested in ways how to achieve this in both Guice and Weld CDI.
So far I have created multiple Guice modules and specified the bind-to mappings in each as I like. But I'm not completely sure that is the correct way, and it completely fails in CDI, as there are no modules.
I think that having multiple instances must be a pretty common case, or am I wrong?
The way you would do this with CDI is by setting up producers for the translator and source. It's the only way to control which implementations are used for injection at runtime. The implementation details may vary based on your exact needs, but something like this should get you on the right track:
@Produces
public Translator produceTranslator(@Dependent TranslatorA implA, @Dependent TranslatorB implB) {
    return checkRuntimeCondition() ? implA : implB;
}
And the same for the source. That way, when you inject Service, CDI will call the producer method for each parameter and use a runtime condition to select the implementation. YMMV on the details; you may need to set up additional qualifiers to avoid ambiguity.
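If you need both variants available side by side rather than one chosen at runtime, qualifiers are the usual CDI answer. A sketch, with made-up qualifier and producer names (imports: javax.inject.Qualifier, javax.enterprise.inject.Produces, java.lang.annotation.*):

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface VariantA {}

public class ServiceProducer {
    // Wire one concrete combination under the @VariantA qualifier.
    @Produces @VariantA
    public ServiceI produceServiceA(TranslatorA translator, SourceA source) {
        return new Service(translator, source);
    }
}

// An injection point then selects the variant it wants:
// @Inject @VariantA ServiceI service;

A second qualifier (say @VariantB) with its own producer gives you the second instance with different beans injected.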

DI and runtime parameters - structuremap

I'm increasingly finding myself mixing runtime parameters and implicit construction injection, and it smells bad to me.
Example: I have a base class describing a Filter, and various inherited types for specific filters (tag, category, date, author, etc.):
var filter = StructureMap.ObjectFactory
    .With("caption").EqualTo("Posts filtered by tag:")
    .With("parameters").EqualTo(parameters)
    .With("displayInSummary").EqualTo(true)
    .GetInstance<TagListFilter>();
The reason I do this is that the constructor takes an interface (IArticleConfigurator) for which I want StructureMap to inject a concrete class:
public TagListFilter(string caption, IDictionary<string,string> parameters, bool displayInSummary, IArticleConfigurator configurator)
    : base(caption, parameters, displayInSummary, configurator)
But it just occurred to me that I replaced a simple constructor, albeit one taking a concrete class instead of an interface, with essentially the same thing, but using DI to inject one concrete type. I'm doing this because currently our configs are in an XML file but will be moved to a CMS, so it seemed like a good idea to use an interface.
It seems wrong and not in the spirit of DI.
Should I use a factory to generate my various filters? If so, can I still leverage DI to get a concrete instance of my IArticleConfigurator?
You shouldn't pass parameters explicitly from one dependency to another, or at least you should minimize their number. One big disadvantage of resolving instances with parameters is that you specify parameter names as string literals, which makes your code very fragile to changes in the constructor signature.
One example I might think of (note that I have no clue regarding your domain and the responsibilities of your entities) is to inject a provider or, as you already said, a factory. For example, create something like ITagListFilterConfigurationProvider (change the name as you like; I'm just trying to give the motivation). You might create a more abstract provider like IFilterConfigurationProvider with the three members below, if your filters share the same parameters:
interface ITagListFilterConfigurationProvider
{
    string Caption { get; }
    IDictionary<string,string> GetParameters();
    bool IsDisplayInSummary { get; }
}
Now your constructor will look like:
public TagListFilter(ITagListFilterConfigurationProvider configurationProvider, IArticleConfigurator configurator)
All you need is to implement it as you already did (because you are passing concrete parameters to the constructor) and extract that behaviour into the provider. What is left is to register the concrete provider with StructureMap and resolve the filter without passing any explicit parameters:
var filter = StructureMap.ObjectFactory.GetInstance<TagListFilter>();

Why does one use dependency injection?

I'm trying to understand dependency injection (DI), and once again I have failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon), and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's quite a good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the type inference doesn't change the type any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you put the responsibility to create new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data about which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. You could instead put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time (!), that is, dynamically, without recompiling!
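As a toy illustration of such a mapping-driven factory (the surrounding snippets are C#-flavored; this sketch is Java, and the bindings.properties file name is invented):

import java.io.InputStream;
import java.util.Properties;

public final class TypeFactory {
    private static final Properties MAPPING = new Properties();

    static {
        try (InputStream in = TypeFactory.class.getResourceAsStream("/bindings.properties")) {
            // e.g. a line such as: ICanLog=com.example.TcpLogger
            MAPPING.load(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Look up the configured implementation and instantiate it reflectively.
    public static <T> T create(Class<T> iface) throws Exception {
        String implName = MAPPING.getProperty(iface.getSimpleName());
        return iface.cast(Class.forName(implName).getDeclaredConstructor().newInstance());
    }
}

// Usage: ICanLog logger = TypeFactory.create(ICanLog.class);

Swapping TcpLogger for another implementation is then a config edit, not a recompile.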
To give you a useful example for this: think of software that does not normally log, but when your customer calls and asks for help because he has a problem, all you send him is an updated XML config file; now he has logging enabled, and your support can use the log files to help your customer.
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces dependencies in your code, but now all your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of various classes asking the service locator for an implementation of a specific interface, you once again invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances of a specific interface you need: a DI framework always injects only one instance per parameter, but you can of course call a service locator inside a loop); hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where properties rather than constructor parameters are used for defining and resolving dependencies. Think of property injection as handling optional dependencies, and constructor injection as handling mandatory ones. But a discussion of this is beyond the scope of this question.
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside and take control over their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The "you might need more than one logger" scenario is never actually met, as far as my experience goes, but the actual reason you need DI is whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds you an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now in the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution would be to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the real thing, and the other allows you to fake the time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you have applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing; there are other ways of achieving it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
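For what it's worth, Java ships this exact abstraction as java.time.Clock, so there the seam needs no hand-rolled interface. A sketch of the same idea (this Java OfferWeasel analogue is invented for illustration):

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

class OfferWeasel {
    private final Clock clock;

    OfferWeasel(Clock clock) {
        this.clock = clock;
    }

    Instant createTimestamp() {
        return Instant.now(clock); // reads from the injected clock
    }
}

// Production wiring: new OfferWeasel(Clock.systemUTC())
// Test wiring:       new OfferWeasel(Clock.fixed(
//                        Instant.parse("2013-01-13T13:01:00Z"), ZoneOffset.UTC))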
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class because it's not decoupled properly. For every new processor you support, you'll need to add a new if/switch case to every method, and this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you could achieve something very neat and maintainable.
interface PaymentProcessor {
    void authorize();
}

class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException("Unknown processor: " + type);
        }
    }
}
** Consts and the actual wiring are omitted for brevity, I know :)
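To close the loop, client code then depends only on the interface and receives the processor from outside; a sketch (CheckoutService is an invented name, not from the answer above):

class CheckoutService {
    private final PaymentProcessor processor;

    // The concrete processor is injected; this class never names PayPal
    // or any other implementation.
    CheckoutService(PaymentProcessor processor) {
        this.processor = processor;
    }

    void checkout() {
        processor.authorize();
    }
}

// Composition root:
// PaymentProcessor p = PaymentFactory.create(type);
// new CheckoutService(p).checkout();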
The main reason to use DI is that you want to put the responsibility for knowledge of the implementation where that knowledge is present. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time, often in code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, so the front end knows which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the interface.
In between D(b)I and DI sits the pattern of a service locator, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation of each interface at a time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or factory design pattern.
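A minimal sketch of such a registry-style locator, matching the returnDoing naming above (the Supplier-based registration is an illustrative choice, not from the answer):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

final class ServiceLocator {
    private static final Map<String, Supplier<?>> REGISTRY = new HashMap<>();

    // Register a factory under a key; several implementations of the same
    // interface can coexist under different keys.
    static void register(String key, Supplier<?> factory) {
        REGISTRY.put(key, factory);
    }

    @SuppressWarnings("unchecked")
    static <T> T returnDoing(String key) {
        return (T) REGISTRY.get(key).get();
    }
}

// ServiceLocator.register("doer", Impl_Dosomething::new);
// I_Dosomething x = ServiceLocator.returnDoing("doer");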
D(b)I: Dependency Injection and Design by Interface.
The one-implementation restriction is not a very big practical problem, though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding; they should go together. The use of D(b)I as a technical solution without organizational administration of service contracts is not very beneficial in my view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning, and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which is over time and not at a time. :-)
Quite frankly, I believe people use these Dependency Injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like:
var logger = new Logger(); // sane, simple code
And the appropriate logger will be instantiated, because the JVM (or whatever other runtime or .so loader you have) will fetch the class configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class at load time, and skip the mocking framework madness.

OpenRasta: Uri seems to be irrelevant for handler selection

When registering two handlers for the same resource type but with different URIs, the handler selection algorithm doesn't seem to check the URI when it determines which handler to use.
If you run the program below, you'll notice that only HandlerOne is invoked (twice). It does not matter whether I request "/one" or "/two", the latter of which is supposed to be handled by HandlerTwo.
Am I doing something wrong or is this something to be fixed in OpenRasta? (I'm using 2.0.3.0 btw)
class Program
{
    static void Main(string[] args)
    {
        using (InMemoryHost host = new InMemoryHost(new Configuration()))
        {
            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/one")
            });
            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/two")
            });
        }
    }
}
class Configuration : IConfigurationSource
{
    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/one").HandledBy(typeof(HandlerOne));
            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/two").HandledBy(typeof(HandlerTwo));
        }
    }
}
class HandlerOne
{
    public object Get() { return "returned from HandlerOne.Get"; }
}

class HandlerTwo
{
    public object Get() { return "returned from HandlerTwo.Get"; }
}
Update
I have a feeling that I could accomplish something similar using UriNameHandlerMethodSelector as described at http://trac.caffeine-it.com/openrasta/wiki/Doc/Handlers/MethodSelection, but then I'd have to annotate each handler method and also do AtUri().Named(), which looks like boilerplate to me, and I'd like to avoid that. Isn't AtUri(X).HandledBy(Y) making the connection between X and Y clear?
Eugene,
You should never have multiple registrations like that on the same resource type, and you probably never need to have ResourcesOfType<object> associated with URIs; that'll completely screw with the resolution algorithms used in OpenRasta.
If you're mapping two different things, create two resource classes. Handlers and URIs are only associated by resource class, and if you fail at designing your resources, OpenRasta will not be able to match the two; this is by design.
If you want to persist down that route, and I really don't think you should, then you can register various URIs to have a name, and hint on each of your methods that the name ought to be handled using HttpOperation(ForUriName=blah). That piece of functionality is only there for those very, very rare scenarios where you do need to opt out of the automatic method resolution.
Finally, as OpenRasta is a composable framework, you shouldn't have to go and hack around existing classes; you ought to plug yourself into the framework to ensure you override the components you don't want and replace them with things you code yourself. In this case, you could simply write a contributor that replaces the handler selection with your own model if you don't like the defaults and want an MVC-style selection system. Alternatively, if you want certain methods to be selected rather than others, you can remove the existing operation selectors and replace them (or complement them) with your own. That way you will rely on published APIs to extend OpenRasta, and your code won't be broken in the future. I can't give that guarantee if you fork and hack existing code.
As Seb explained, when you register multiple handlers with the same resource type, OpenRasta treats the handlers as one large concatenated class. It therefore guesses (the best way to describe it) which potential GET (or other HTTP verb) method to execute, whichever it thinks is most appropriate. This isn't going to be acceptable from the developer's perspective and must be resolved.
In my use of OpenRasta, I have needed to register the same resource type with multiple handlers. When retrieving data from a well-normalised relational database, you are bound to get the same type of response from multiple requests. This happens when creating multiple queries (in LINQ) to retrieve data from either side of a one-to-many relation, which of course is important to the whole structure of the database.
Taking advice from Seb, and hoping I've implemented his suggestion correctly, I have taken the database model class and built a derived class from it in a resources namespace for each case where a duplicate resource type would otherwise have been introduced.
ResourceSpace.Has.ResourcesOfType<IList<Client>>()
    .AtUri("/clients").And
    .AtUri("/client/{clientid}").HandledBy<ClientsHandler>().AsJsonDataContract();

ResourceSpace.Has.ResourcesOfType<IList<AgencyClient>>()
    .AtUri("/agencyclients").And
    .AtUri("/agencyclients/{agencyid}").HandledBy<AgencyClientsHandler>().AsJsonDataContract();
Client is my model class, from which I have derived AgencyClient.
namespace ProductName.Resources
{
    public class AgencyClient : Client { }
}
You don't even need to cast the base class received from your LINQ-to-SQL data access layer into your derived class. The LINQ Cast method isn't intended for that kind of thing anyway; although the following code will compile, it is WRONG, and you will receive the runtime exception 'LINQ to Entities only supports casting Entity Data Model primitive types.'
Context.Set<Client>().Cast<AgencyClient>().ToList(); // will receive a runtime error
More conventional casts like (AgencyClient) won't work either, since converting to a subclass isn't easily possible in C# (see the question "Convert base class to derived class").
Using the as operator will again compile and will even run, but will give a null value in the returned lists and therefore won't retrieve the intended data.
Context.Set<Client>().ToList() as IEnumerable<AgencyClient>; // will compile and run but will return null
I still don't understand how OpenRasta handles the differing return class from the handler to the ResourceType but it does, so let's take advantage of it. Perhaps Seb might be able to elaborate?
OpenRasta therefore treats these classes separately and the right handler methods are executed for the URIs.
I patched OpenRasta to make it work. These are the files I touched:
OpenRasta/Configuration/MetaModel/Handlers/HandlerMetaModelHandler.cs
OpenRasta/Handlers/HandlerRepository.cs
OpenRasta/Handlers/IHandlerRepository.cs
OpenRasta/Pipeline/Contributors/HandlerResolverContributor.cs
The main change is that now the handler repository gets the registered URIs in the initializing call to AddResourceHandler, so when GetHandlerTypesFor is called later on during handler selection, it can also check the URI. Interface-wise, I changed this:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey);
}
to this:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IList<UriModel> resourceUris, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey, UriRegistration selectedResource);
}
I'll omit the implementation for brevity.
This change also means that OpenRasta won't waste time on further checking of handlers (their method signatures etc.) that are not relevant to the request at hand.
I'd still like to get other opinions on this issue, if possible. Maybe I just missed something.

How to instantiate a MEF exported object using Ninject?

My application is using MEF to export some classes from an external assembly. These classes are set up for constructor injection. The issue I am facing is that MEF is attempting to instantiate the classes when I try to access them. Is there a way to have Ninject take care of instantiating them instead?
IEnumerable<Lazy<IMyInterface>> controllers =
    mefContainer.GetExports<IMyInterface>();

// The following line throws an error because MEF is
// trying to instantiate a class that requires 5 parameters
IMyInterface firstClass = controllers.First().Value;
Update:
There are multiple classes that implement IMyInterface and I would like to select the one that has a specific name and then have Ninject create an instance of it. I'm not really sure if I want laziness.
[Export(typeof(IMyInterface))]
public class MyClassOne : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassOne(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
[Export(typeof(IMyInterface))]
public class MyClassTwo : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassTwo(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
Using MEF, I would like to get either MyClassOne or MyClassTwo and then have Ninject provide an instance of MyRepository and YourRepository (Note, these two are bound in a Ninject module in the main assembly and not the assembly they are in)
You could use the Ninject Load mechanism to get the exported classes into the mix, and then you either:
kernel.GetAll<IMyInterface>()
The creation is lazy (i.e., each implementation of IMyInterface is created on the fly as you iterate over the above), IIRC, but have a look at the tests in the source (which is very clean and readable; you have no excuse :P) to be sure.
If you don't need the laziness, use LINQ's ToArray or ToList to get an IMyInterface[] or List<IMyInterface>.
Or you can use the low-level Resolve() family of methods (again, have a look in the tests for samples) to get the eligible services [if you wanted to do some filtering or something other than just using an instance, though binding metadata is probably the solution there].
Finally, it would help if you could edit in an explanation of whether you need laziness per se or are using it to illustrate a point (and have a search for Lazy<T> here and in general w.r.t. both Ninject and Autofac for some samples; I can't recall if there are any examples in the source, I think not, as it's still on 3.5).
EDIT: In that case, you want a bind that has:
Bind<X>().To<>().In...().Named( "x" );
in the registrations in your modules in the child assembly.
Then when you're resolving in the parent assembly, you use the Kernel.Get<> overload that takes a name parameter to indicate the one you want (no need for laziness, arrays, or IEnumerable). The Named mechanism is a specific application (just one or two helper extensions implement it in terms of the generalised concept) of the binding metadata concept in Ninject; there's plenty of room to customise it if a simple name is insufficient.
If you're using MEF to construct the objects, you could use the Kernel.Inject() mechanism to inject properties. The problem is that either MEF or Ninject:
- has to find the types (Ninject: generally via Bind() in modules or via scanning extensions, after which one can do a Resolve to subset the bindings before instantiation, though this isn't something you normally do)
- has to instantiate the types (Ninject: typically via a Kernel.Get(), but if you discovered the types via e.g. MEF, you might use the Kernel.Get(Type) overloads)
- has to inject the types (Ninject: typically via a Kernel.Inject(), or implicitly in the Kernel.Get())
What's not clear to me yet is why you feel you need to mix and mangle the two - ultimately sharing duties during construction and constructor injection is not a core use case for either lib, even if they're both quite composable libraries. Do you have a constraint, or do you have critical benefits on both sides?
You can use ExportFactory to create instances; see the docs here:
http://mef.codeplex.com/wikipage?title=PartCreator
Your case would be slightly different. I would use metadata and a custom attribute as well:
[ImportMany(AllowRecomposition = true)]
IEnumerable<ExportFactory<IMyInterface, IMyInterfaceMetaData>> Controllers { get; set; }

public IMyInterface CreateControllerFor(string parameter)
{
    var controller = Controllers
        .Where(v => v.Metadata.ControllerName == parameter)
        .FirstOrDefault()
        .CreateExport()
        .Value;
    return controller;
}
Or use return Controllers.First() without the metadata.
Then you can code the Ninject parts around that, or even stick with MEF.
Hope this helps
