Error when using interface in grails domain class - grails

I am trying to implement an interface-based domain class with Grails and I am running into an error at run time. The error is: Could not determine type for: com.testApp.MyInterface, at table: C.
// in the /src/groovy directory
interface MyInterface {
    int returnSomething()
}

// in the domain class directory
class A implements MyInterface {
    int returnSomething() {
        // a first implementation here
    }
}

class B implements MyInterface {
    int returnSomething() {
        // a second implementation here
    }
}

class C {
    // ...
    MyInterface type
    // ...
}
I think that because I'm not specifying the real implementation of "MyInterface" in domain class C, Grails is getting into trouble when injecting beans at startup. But I would like to keep MyInterface as abstract as possible, because different classes will implement it in various ways. Is there a way to overcome this error? Can I build my model without domain class inheritance (I would like to avoid that as far as possible)?

You need domain classes only if you want to persist data. Without knowing the specifics of your situation, you may want to make classes A, B, and C services rather than domain classes. In my experience, subclasses and interfaces work better in the service realm than in the domain class realm. I would imagine that each returnSomething() implementation would operate over distinct domain classes that do not require interfaces.
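For illustration only, here is a rough sketch of that idea (the service names and the replacement field on C are my own, not from the question): each implementation becomes a service, and C persists plain data instead of holding a MyInterface reference.

// grails-app/services: each implementation becomes a service
class AService implements MyInterface {
    int returnSomething() { return 1; } // first implementation
}

class BService implements MyInterface {
    int returnSomething() { return 2; } // second implementation
}

// grails-app/domain: C no longer holds a MyInterface reference
class C {
    String handlerName; // e.g. a discriminator used to look up the appropriate service at runtime
}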

Related

Inject annotation in base class - Dagger still wants to add injectable constructor

I'm using Dagger.
I have the following classes:
class A {
    @Inject
    MyClass myClass;
}

class B extends A {
    void run() {
        myClass.do();
    }
}
When trying to compile this I'm getting:
No injectable members on B. Do you want to add an injectable constructor?
When moving myClass to B, everything compiles. Any idea what might be the problem?
Dagger can't know all subtypes of A so it doesn't know that it needs to generate adapters for classes like B.
Adding a no-arg constructor with @Inject will force the generation of code that can thus be used to perform injection on instances of B. You can also list B.class in the injects= list of a module to force adapter generation.
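A rough sketch of both options (Dagger 1 style; the module name is made up):

import dagger.Module;
import javax.inject.Inject;

// Option 1: a no-arg @Inject constructor on B forces Dagger to generate an
// injection adapter for B, so member injection also fills in the myClass field inherited from A.
class B extends A {
    @Inject
    B() {}
}

// Option 2: list B in a module's injects= so the adapter is generated anyway.
@Module(injects = B.class)
class MyAppModule {
}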

Ignore mocked object transitive dependencies

When a class implements an interface, all we have to do is mock that interface.
However, there are cases where a class doesn't implement an interface; binding such a class to a mock leads Guice to try to inject the mocked object's dependencies.
To clarify:
class A {
    @Inject B b;
}

class B {
    @Inject C c;
}
bind(a.class).toInstance(mock(B.class));
In this scenario I don't care about B's dependencies, but Guice still tries to inject C into B.
Is there a way to avoid this without defining an interface?
First of all, I strongly recommend against using dependency injection in unit tests. When you're unit testing a single class you should create it and pass its dependencies directly, through a constructor or methods. You won't have these problems then.
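As a minimal sketch of that (the test method name is an assumption): the test constructs A itself and hands it a Mockito mock, so Guice never sees B and B's C field is simply left untouched.

import javax.inject.Inject;
import org.junit.Test;
import static org.mockito.Mockito.mock;

class A {
    private final B b;

    @Inject
    A(B b) {            // constructor injection instead of field injection
        this.b = b;
    }
}

public class ATest {
    @Test
    public void usesMockedB() {
        B mockedB = mock(B.class); // Mockito creates B; its C field is never injected
        A a = new A(mockedB);
        // ... exercise a and verify interactions with mockedB
    }
}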
It's another story when you're writing integration tests though. There are several solutions to your problem.
1. Make sure all your classes receive dependencies only through injectable constructors. This way Guice won't inject anything because the object will be created by Mockito.
2. Use providers (and scoping, if needed). The following is equivalent to your attempt, sans injection into B (I assume that you really meant bind(B.class).toInstance(mock(B.class))):
bind(B.class).toProvider(new Provider<B>() {
    @Override
    public B get() {
        return mock(B.class);
    }
}).in(Singleton.class);
You should tweak the scope to satisfy your needs.
Using Mockito to partially solve this was quite easy.
You will need to use the @Mock and @InjectMocks annotations like this:
class ATest {
    @Mock B b;
    @InjectMocks A a;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }
}
This way Mockito will do the injection instead of Guice; there are a couple of restrictions on successfully injecting the mock.
This works pretty well until your code has a strong dependency on a class.
Let's say inside A I have something like C obj = new C(); and C has injected fields.
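A small sketch of that limitation (the method names are made up): @InjectMocks can only populate A's own fields; it cannot reach into objects that A creates itself.

class A {
    @Inject B b;           // @InjectMocks can supply this field with a mock

    void doWork() {
        C obj = new C();   // hard-wired dependency: C's own injected fields are
        obj.run();         // never populated by Mockito (or Guice) in the test
    }
}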

In dependency injection, what is a dependency?

I've read a lot about this but I am still unclear about a few things. We say that DI means injecting dependencies into dependent components at runtime.
Q1
What is a dependency? Are these the objects created at runtime?
If yes, does that mean we are injecting values into variables by creating an object (created by the framework, i.e. instantiating the bean via an XML file with setter/constructor injection)?
Q2
Do we use DI so that work gets done without our intervening in object creation?
Q1
From Wikipedia, all elements in the DI pattern are objects. The dependent object specifies what it needs using interfaces. The injector object decides what concrete classes (instantiated objects) can satisfy the requirement and provides them to the dependent object.
So, that becomes a yes to the second part.
Q2
Again from Wikipedia:
The primary purpose of the dependency injection pattern is to allow selection among multiple implementations of a given dependency interface at runtime, or via configuration files, instead of at compile time.
As an example, consider a security service that can work with different implementations of an Encoder. The different encoding algorithms could include SHA, MD5, and others. The security service only specifies that it needs an instance of "an encoding algorithm". The runtime DI environment will then look for an object that provides the Encoder interface and inject it into the security service. In line with DI, the security service is also taking advantage of Inversion of Control (IoC); i.e. it does not itself decide what implementation to use, but the DI runtime takes that decision.
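A minimal sketch of that example (interface and class names are illustrative, not taken from any particular library):

import javax.inject.Inject;

interface Encoder {
    String encode(String input);
}

class ShaEncoder implements Encoder {
    public String encode(String input) {
        return "sha(" + input + ")"; // placeholder; a real implementation would hash the input
    }
}

class Md5Encoder implements Encoder {
    public String encode(String input) {
        return "md5(" + input + ")"; // placeholder; a real implementation would hash the input
    }
}

class SecurityService {
    private final Encoder encoder;

    // The DI container decides which Encoder implementation to pass in;
    // SecurityService only declares that it needs "an encoding algorithm".
    @Inject
    SecurityService(Encoder encoder) {
        this.encoder = encoder;
    }

    String sign(String payload) {
        return encoder.encode(payload);
    }
}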
Not an elaborate answer, just to keep it simple.
Ques1:
class Dependent {
    PropertyA propertyA = new PropertyA();
    PropertyB propertyB = new PropertyB();
}
Here Dependent depends on propertyA and propertyB. The relation above is an example of a dependency.
Are these the objects created at runtime? Yes.
If yes...? Yes too.
Ques2: Yes.
Details are included below.
Scenario 1:
class Dependent {
    DBConnection connection = new OracleConnection();
}
The Dependent class is highly coupled, since there is no way to change the connection unless we change the code. So if a customer needs MySQLConnection() we will have to change the code and give them another exe/jar.
Scenario 2:
class Dependent {
    DBConnection connection = ConnectionFactory.createConnection();
}
This is much better, since ConnectionFactory will be able to read some configuration and create the necessary connection.
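For example, a minimal sketch of such a factory (the property name and the set of connection classes are assumptions):

class ConnectionFactory {
    static DBConnection createConnection() {
        // read the desired implementation from configuration instead of hard-coding it
        String type = System.getProperty("db.type", "oracle");
        return "mysql".equals(type) ? new MySQLConnection() : new OracleConnection();
    }
}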
But it still makes the Dependent class difficult to mock. It is hard to create mocks in these scenarios. Then what?
Scenario 3:
class Dependent {
    DBConnection connection;

    void setConnection(DBConnection connection) {
        this.connection = connection;
    }
}

class DependencyInjector {
    void inject() {
        // wire them together: every dependent and its dependency!
        Dependent dependent = identifyDependentSomeSmartWay();
        DBConnection connection = identifyConnectionSomeSmartWay();
        dependent.setConnection(connection);
    }
}
Our DependencyInjector is a smart class that knows all the necessary information! The Dependent class above is clean and simple. It is easy to mock for unit tests, and it is configurable through configuration.
Object creation and coupling are now detached!
To answer your questions in simple words:
For #1:
Dependency injection is about satisfying the needs of one object by giving it the objects it requires.
Let's see an example:
Generally in an enterprise application we use an architecture where services call a DAO layer and the DAO does all the database-related work. So a service needs an object of the DAO to call the DAO methods.
Consider that we have an entity object, Person.
Let's say we have a DAO, PersonDAO.
public interface PersonDAO {
    void addPerson(Person person);
    void updatePerson(Person person);
    void deletePerson(int personId);
}
and a service - PersonService.
public class PersonServiceImpl {
    private PersonDAO personDAO;

    public void addPerson() {
        // some code specific to the service.
        // some code to create the Person obj.
        personDAO.addPerson(person);
    }
}
As you can see, PersonService uses a PersonDAO object to call its methods. PersonService depends on PersonDAO, so PersonDAO is the dependency and PersonService is the dependent object.
Normally, in frameworks like Spring, these dependencies are injected by the framework itself. When the application context is loaded, all these dependency objects are created, put in the container, and used whenever needed. The concept of Inversion of Control (IoC) is very closely related to dependency injection because of the way the dependency object is created.
E.g. you could have created the PersonDAO object in PersonService itself, as
PersonDAO personDAO = new PersonDAOImpl();
But in the case of Spring, you just define a property for PersonDAO in PersonService and provide a setter for it, which Spring uses to set the dependency. Here the creation of the dependency is taken care of by the framework instead of by the class that uses it, hence the name Inversion of Control.
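For instance, a minimal sketch of that setter plus the XML wiring Spring would use (bean names and packages are illustrative):

public class PersonServiceImpl {
    private PersonDAO personDAO;

    // Spring calls this setter to hand over the dependency; the service never
    // instantiates a PersonDAOImpl itself.
    public void setPersonDAO(PersonDAO personDAO) {
        this.personDAO = personDAO;
    }
}

// Corresponding XML wiring (illustrative):
// <bean id="personDAO" class="com.example.PersonDAOImpl"/>
// <bean id="personService" class="com.example.PersonServiceImpl">
//     <property name="personDAO" ref="personDAO"/>
// </bean>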
For #2 : Yes. You are right.

Injecting generated classes without writing too much module configuration code

Here's the situation: I have an abstract class with a constructor that takes a boolean (which controls some caching behavior):
abstract class BaseFoo { protected BaseFoo(boolean cache) {...} }
The implementations are all generated source code (many dozens of them). I want to create bindings for all of them automatically, i.e. without explicit hand-coding for each type being bound. I want the injection sites to be able to specify either caching or non-caching (true/false ctor param). For example I might have two injections like:
@Inject DependsOnSomeFoos(@NonCaching AFoo aFoo, @Caching BFoo bFoo) {...}
(Arguably that's a bad thing to do, since the decision to cache or not might better be in a module. But it seems useful given what I'm working with.)
The question then is: what's the best way to configure bindings to produce a set of generated types in a uniform way, that supports a binding annotation as well as constructor param on the concrete class?
Previously I just had a default constructor on the implementation classes, and simply put an #ImplementedBy on each of the generated interfaces. E.g.:
// This is all generated source...
@ImplementedBy(AFooImpl.class)
interface AFoo { ... }

class AFooImpl extends BaseFoo implements AFoo {
    AFooImpl() { super(true); }
}
But, now I want to allow individual injection points to decide if true or false is passed to BaseFoo, instead of it always defaulting to true. I tried to set up an injection listener to (sneakily) change the true/false value post-construction, but I couldn't see how to "listen" for a range of types injected with a certain annotation.
The problem I keep coming back to is that bindings need to be for a specific type, but I don't want to enumerate all my types centrally.
I also considered:
Writing some kind of scanner to discover all the generated classes and add a pair of bindings for each of them, perhaps using Google Reflections.
Creating additional, trivial "non caching" types (e.g. AFoo.NoCache extends AFoo), which would allow me to go back to @ImplementedBy.
Hard wiring each specific type as either caching/non-caching when it's generated.
I'm not feeling great about any of those ideas. Is there a better way?
UPDATE: Thanks for the comment and answer. I think generating a small module alongside each type and writing out a list of the modules to pull in at runtime via getResources is the winner.
That said, after talking w/ a coworker, we might just dodge the question as I posed it and instead inject a strategy object with a method like boolean shouldCache(Class<? extends BaseFoo> c) into each generated class. The strategy can be implemented on top of the application config and would provide coarse and fine grained control. This gives up on the requirement to vary the behavior by injection site. On the plus side, we don't need the extra modules.
There are two additional approaches to look at (in addition to what you mentioned):
1. Inject Factory classes instead of your real class; that is, your hand-coded stuff would end up saying:
@Inject
DependsOnSomeFoos(AFoo.Factory aFooFactory, BFoo.Factory bFooFactory) {
    AFoo aFoo = aFooFactory.caching();
    BFoo bFoo = bFooFactory.nonCaching();
    ...
}
and your generated code would say:
// In AFoo.java
interface AFoo {
    @ImplementedBy(AFooImpl.Factory.class)
    interface Factory extends FooFactory<AFoo> {}
    // ...
}

// In AFooImpl.java
class AFooImpl extends BaseFoo implements AFoo {
    AFooImpl(boolean caching, StuffNeededByAFIConstructor otherStuff) {
        super(caching);
        // use otherStuff
    }
    // ...

    class Factory implements AFoo.Factory {
        @Inject Provider<StuffNeededByAFIConstructor> provider;

        public AFoo caching() {
            return new AFooImpl(true, provider.get());
        }
        // ...
    }
}
Of course this depends on an interface FooFactory:
interface FooFactory<T> {
    T caching();
    T nonCaching();
}
2. Modify the process that does your code generation to also generate a Guice module that you then use in your application setup. I don't know how your code generation is currently structured, but if you have some way of knowing the full set of classes at code generation time you can either do this directly or append to some file that can then be loaded with ClassLoader.getResources as part of a Guice module that autodiscovers which classes to bind.
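A rough sketch of that getResources idea (the resource name, and the assumption that each generated module has a no-arg constructor, are mine): the code generator appends each generated module's class name to a listing file, and one hand-written module installs everything it finds.

import com.google.inject.AbstractModule;
import com.google.inject.Module;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;

public class GeneratedFooModules extends AbstractModule {
    @Override
    protected void configure() {
        try {
            Enumeration<URL> listings =
                getClass().getClassLoader().getResources("META-INF/generated-foo-modules");
            while (listings.hasMoreElements()) {
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                        listings.nextElement().openStream(), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        if (line.trim().isEmpty()) continue;
                        // each line is the fully qualified name of a generated Module
                        install((Module) Class.forName(line.trim())
                                .getDeclaredConstructor().newInstance());
                    }
                }
            }
        } catch (Exception e) {
            throw new RuntimeException("Could not load generated Guice modules", e);
        }
    }
}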

How to instantiate a MEF exported object using Ninject?

My application is using MEF to export some classes from an external assembly. These classes are set up for constructor injection. The issue I am facing is that MEF is attempting to instantiate the classes when I try to access them. Is there a way to have Ninject take care of the instantiation of the class?
IEnumerable<Lazy<IMyInterface>> controllers =
    mefContainer.GetExports<IMyInterface>();

// The following line throws an error because MEF is
// trying to instantiate a class that requires 5 parameters
IMyInterface firstClass = controllers.First().Value;
Update:
There are multiple classes that implement IMyInterface and I would like to select the one that has a specific name and then have Ninject create an instance of it. I'm not really sure if I want laziness.
[Export(typeof(IMyInterface))]
public class MyClassOne : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassOne(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
[Export(typeof(IMyInterface))]
public class MyClassTwo : IMyInterface {
    private MyRepository one;
    private YourRepository two;

    public MyClassTwo(MyRepository repoOne, YourRepository repoTwo) {
        one = repoOne;
        two = repoTwo;
    }
}
Using MEF, I would like to get either MyClassOne or MyClassTwo and then have Ninject provide an instance of MyRepository and YourRepository (note: these two are bound in a Ninject module in the main assembly, not the assembly they are defined in).
You could use the Ninject Load mechanism to get the exported classes into the mix, and then you either:
kernel.GetAll<IMyInterface>()
The creation is lazy (i.e., each implementation of IMyInterface is created on the fly as you iterate over the above), IIRC, but have a look at the tests in the source (which is very clean and readable, you have no excuse :P) to be sure.
If you don't need the laziness, use LINQ's ToArray or ToList to get an IMyInterface[] or List<IMyInterface>.
Or you can use the low-level Resolve() family of methods (again, have a look in the tests for samples) to get the eligible services [if you wanted to do some filtering or something other than just using an instance - though binding metadata is probably the solution there].
Finally, it would help if you could edit in an explanation of whether you need laziness per se or are just using it to illustrate a point (and have a search for Lazy<T> here and in general with respect to both Ninject and Autofac for some samples - I can't recall if there are any examples in the source; I think not, as it's still on 3.5).
EDIT: In that case, you want a bind that has:
Bind<X>().To<>().In...().Named( "x" );
in the registrations in your modules in the child assembly.
Then when you're resolving in the parent assembly, you use the Kernel.Get<> overload that takes a name parameter to indicate the one you want (no need for laziness, arrays or IEnumerable). The Named mechanism is a specific application of the binding metadata concept in Ninject (just one or two helper extensions implement it in terms of the generalised concept) - there's plenty of room to customise it if something beyond a simple name is insufficient.
If you're using MEF to construct the objects, you could use the Kernel.Inject() mechanism to inject properties. The problem is that either MEF or Ninject:
- has to find the types (Ninject: generally via Bind() in modules or via scanning extensions, after which one can do a Resolve to subset the bindings before instantiation - though this isn't something you normally do)
- has to instantiate the types (Ninject: typically via a Kernel.Get(), but if you discovered the types via e.g. MEF, you might use the Kernel.Get(Type) overloads)
- has to inject the types (Ninject: typically via a Kernel.Inject(), or implicitly in the Kernel.Get())
What's not clear to me yet is why you feel you need to mix and mangle the two - ultimately sharing duties during construction and constructor injection is not a core use case for either lib, even if they're both quite composable libraries. Do you have a constraint, or do you have critical benefits on both sides?
You can use ExportFactory to create instances; see the docs here:
http://mef.codeplex.com/wikipage?title=PartCreator
Your case would be slightly different. I would also use metadata and a custom attribute:
[ImportMany(AllowRecomposition = true)]
IEnumerable<ExportFactory<IMyInterface, IMyInterfaceMetaData>> Controllers { get; set; }

public IMyInterface CreateControllerFor(string parameter)
{
    var controller = Controllers.Where(v => v.Metadata.ControllerName == parameter).FirstOrDefault().CreateExport().Value;
    return controller;
}
Or use return Controllers.First() without the metadata.
Then you can code the Ninject parts around that, or even stick with MEF.
Hope this helps.
