I'm working with Guice and have one design question. My app consists of a few modules:
myapp-persistence (JPA entities, DAOs, other DB-related stuff)
myapp-backend (background daemons that use myapp-persistence)
myapp-rest (a REST app that depends on myapp-persistence)
myapp-persistence must have a singleton Hibernate SessionFactory; that's by Hibernate's design.
No problem, I can solve that with Guice:
class MyAppPersistenceModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[SomeStuff])
    bind(classOf[ClientDao])
    bind(classOf[CustomerDao])
    bind(classOf[SessionFactory]).toProvider(classOf[HibernateSessionFactoryProvider]).asEagerSingleton()
  }

  @Provides
  def provideDatabaseConnectionConfiguration: DatabaseConnectionConfiguration = {
    DatabaseConnectionConfiguration.fromSysEnv
  }
}
The problem is with passing DatabaseConnectionConfiguration to that singleton. The myapp-persistence module doesn't really care how that config is obtained; right now it's taken from system environment variables.
myapp-rest is a Play app and it wants to read the configuration from application.conf and inject it into other components using Guice.
myapp-backend does more or less the same.
Right now I've locked myself in with
@Provides
def provideDatabaseConnectionConfiguration: DatabaseConnectionConfiguration = {
  DatabaseConnectionConfiguration.fromSysEnv
}
And I don't understand how to make it flexible and configurable for myapp-rest and myapp-backend.
UPD
Following the answer, I did it this way:
I defined a trait:
trait DbConfProvider {
  def dbConf: DbConf
}
The singleton factory now depends on that provider:
class HibernateSessionFactoryProvider @Inject()(dbConfProvider: DbConfProvider) extends Provider[SessionFactory] {
}
The myapp-persistence module exposes a public Guice module with all the public persistence DAOs.
myapp-persistence also has a module used only for testing purposes; the myapp-persistence injector loads the module described below:
class MyAppPersistenceDbConfModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[DbConfProvider]).to(classOf[DbConfSysEnvProvider])
  }
}
DbConfSysEnvProvider reads DB connection settings from the system environment (a non-production use case).
The Play app has its own configuration mechanism. I've added my custom modules to the app conf:
# play-specific config
play.modules.enabled += "common.components.MyAppPersistenceDbConfModule"
# public components from myapp-persistence module.
play.modules.enabled += "com.myapp.persistence.connection.PersistenceModule"
And my configuration service:
@Singleton
class ConfigurationService @Inject()(configuration: Configuration) extends DbConfProvider {
  ...
}
I am not an expert on Play-specific setup, but generally this kind of design problem is solved in one of the following ways:
No default. Remove the binding of DatabaseConnectionConfiguration from the upstream module (myapp-persistence), and define it in each downstream module (myapp-backend, myapp-rest) as appropriate.
Default with override. Keep the default binding of DatabaseConnectionConfiguration like you did, implementing the most common configuration strategy there. Override it in downstream modules using the Guice Modules.override(..) API when needed (see the sketch after this list).
Unified configuration. Implement a unified configuration mechanism across the modules that does not depend on the particular frameworks used. (E.g. Bootique, which is built on Guice ... I haven't used it with Play though.)
I personally prefer approach #3, but in the absence of something like Bootique, #2 is a good substitute.
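For approach #2, here is a minimal Java sketch of what the override could look like in a downstream app; the module name RestDbConfModule and the fromAppConf() factory method are assumptions for illustration, not part of your code:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.util.Modules;

// Hypothetical downstream module that replaces the sys-env default binding.
class RestDbConfModule extends AbstractModule {
  @Override
  protected void configure() {
    // fromAppConf() is an assumed factory method that reads application.conf
    bind(DatabaseConnectionConfiguration.class)
        .toInstance(DatabaseConnectionConfiguration.fromAppConf());
  }
}

class RestBootstrap {
  public static void main(String[] args) {
    // Upstream defaults stay in place; only the overridden binding is replaced.
    Injector injector = Guice.createInjector(
        Modules.override(new MyAppPersistenceModule()).with(new RestDbConfModule()));
  }
}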
I am working on a multi-module Android project. In the main module we have a CoreComponent with a CoreModule. CoreModule provides some objects. I want to inject those objects in our feature modules without creating new components.
What is the best way to do that?
Main Module

@Component(modules = CoreModule.class)
CoreComponent

@Module
CoreModule

Feature Module

@Module(includes = CoreModule.class)
FeatureModule
Unlike with Dagger 1, Dagger 2 modules don't have to be complete: CoreModule and FeatureModule can refer to bindings that they do not define. When Dagger 2 aggregates all of your modules to generate an implementation of your component, it will let you know whether you are missing any bindings.
@Component(modules = {CoreModule.class, FeatureModule.class})
public interface CoreComponent {
}

@Module
public class CoreModule {
  @Provides public static A getA() { /* ... */ }
}

@Module /* no includes */
public class FeatureModule {
  /* FeatureModule injects binding A from CoreModule, even without includes= */
  @Provides public static B getB(A a) { /* ... */ }
}
Should you use includes? It depends. If you have FeatureModule include CoreModule, then any time you refer to FeatureModule it will automatically include CoreModule, which can make FeatureModule easier to use, and Dagger will remove any duplicate inclusions of CoreModule. However, if you want to use the bindings in FeatureModule but a MockCoreModule in your integration tests, then you won't be able to let FeatureModule include CoreModule, because it would presumably conflict in bindings with MockCoreModule.
Notably, this assumes that FeatureModule and CoreModule are part of the same overall component, because you've only shown us one component. If you have a separate FeatureComponent, then you'll need to describe in your question the relationship (if any) between CoreComponent and FeatureComponent: component dependency, subcomponent, or unrelated separate components. Each of those cases will be different.
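As a rough sketch of the testing point above (MockCoreModule, TestCoreComponent, and getB() are assumptions for illustration, not from your project), keeping FeatureModule free of includes lets a test component swap the core bindings:

import dagger.Component;

// Hypothetical test component: FeatureModule is reused as-is, while the real
// CoreModule is swapped for a mock module assumed to bind the same types (e.g. A).
@Component(modules = {MockCoreModule.class, FeatureModule.class})
public interface TestCoreComponent {
  B getB(); // assumed provision method, just to show the graph still resolves
}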
I have many endpoints in my app:
/Route1
/Route2
...
/Route99
In a number of these routes there is some common functionality, such as getting specific data from one source such as a local file, or from another resource such as a NoSQL database or an external HTTP endpoint. My problem is that these services need to have a service dependency themselves, and I am not sure that how I have currently done it is the best way to do it in NestJS.
Route1Service - reads a file of data and returns it. It uses FileSystemService to wrap all the error handling, different data types, path checking, etc., of the Node.js fs module. Route1Service then returns this to Route1Controller.
@Injectable()
export class Route1Service {
  private FS_: FileSystemService; // defined here instead of the constructor, as I do not know how to set it in the constructor via NestJS, or if this is even the best way.
  // constructor(private FS_: FileSystemService) { }

  // Since I do not set it in the constructor:
  public DataServiceDI(FsService: FileSystemService): void {
    this.FS_ = FsService;
  }

  public GetData(): string {
    const Data: string = this.FS_.ReadLocalFile('a.txt');
    return Data;
  }
}
Route99Service might do the same thing, but with a different file (b.txt)
@Injectable()
export class Route99Service {
  private FS_: FileSystemService;

  public DataServiceDI(FsService: FileSystemService): void {
    this.FS_ = FsService;
  }

  public GetData(): string {
    const Data: string = this.FS_.ReadLocalFile('b.txt');
    return Data;
  }
}
This is a contrived example to illustrate my issue. Obviously a basic RouteService could be used and the file name passed in, but I am trying to illustrate the dependent service. I do not know how to define the module(s) to use this dependent service, or if I should be doing it this way.
What I have been doing for my definition:
@Module({
  controllers: [Route1Controller],
  providers: [Route1Service, FileSystemService],
})
export class Route1Module {}
The controller then has a constructor with both services:
@Controller('route1')
export class Route1Controller {
  constructor(
    private Route1_: Route1Service,
    private FsSystem_: FileSystemService
  ) { }
}
Now that my controller has the FsSystem service as a separate entity, I need to add a method on my Route1Service, DataServiceDI(), to allow me to pass the FileSystemService as a reference. Then my service can use this service to access the file system.
My question comes down to: is this the best practice for this sort of thing? Ultimately, in my code, these services (FileSystemService, NoSqlService) extend a common service type, so that all my services can have this DataServiceDI() in them (they extend a base service with this definition).
Is this the best approach for longer term maintainability? Is there an easier way to simply inject the proper service into my Route1Service so it is injected by NestJS, and I do not have to do the DI each time?
The current method works for me to be able to simply test the service, since I can easily mock the FileSystemService, NoSqlService, etc., and then inject the mock.
I am a newbie with Guice and am seeking help for the following use case:
I have developed one package, say (PCKG), where the entry class of that package depends on other classes like:
A : Entry point class --> @Inject A(B b) {}
B in turn is dependent on C and D like --> @Inject B(C c, D d) {}
In my binding module I am doing:
bind(BInterface).to(Bimpl);
bind(CInterface).to(CImpl);
...
Note I am not providing binding information for A, as I want its binding to be provided by its consumer class. (This is how the design is, so my request is to keep the discussion on the main problem rather than the design.)
Now my consumer class is doing something like:
AModule extends PrivateModule {
  protected void configure() {
    bind(AInterface.class).annotatedWith(AImpl.class);
  }
}
Also in my consumer package:
.(new PCKGModule(), new AModule())
Q1. Am I doing the bindings correctly in the consumer class? I am confused, because when I do some internal testing as below in my consumer package:
class testModule {
  bind(BInterface).to(Bimpl);
  bind(CInterface).to(CImpl)...
}

class TestApp {
  public static void main(..) {
    Guice.createInstance(new testModule());
    Injector inj = Guice.createInstance(new AModule());
    A obj = inj.getInstance(A.class);
  }
}
It is throwing a Guice creation exception. Please help me get out of this situation.
Also, one of my friends, who is also new to Guice, was suggesting that I need to create B's instance in AModule using the @Provides annotation, but I really didn't get his point.
Your main method should look like this:
class TestApp {
  public static void main(..) {
    Injector injector = Guice.createInjector(new TestModule(), new AModule());
    A obj = injector.getInstance(A.class);
  }
}
Note that the Java convention is for class names to have the first letter capitalised.
I'm pretty sure your implementation of AModule isn't doing what you think it's doing either, but it's hard to be certain based on the information you've provided. Most likely, you mean to do this:
bind(AInterface.class).to(AImpl.class);
There's no need to do anything "special" with A's binding. Guice resolves all the recursion for you. That's part of its "magic".
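For instance, here is a minimal sketch (the interface and implementation names are assumed from your description, and the module name is made up) where a single module binds everything and Guice builds A by resolving B, then C and D, on its own:

import com.google.inject.AbstractModule;

// Hypothetical module combining the bindings from both packages.
public class RecursionDemoModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(AInterface.class).to(AImpl.class);
    bind(BInterface.class).to(BImpl.class);
    bind(CInterface.class).to(CImpl.class);
    bind(DInterface.class).to(DImpl.class);
  }
}

// Usage (e.g. in a main method):
//   Injector injector = Guice.createInjector(new RecursionDemoModule());
//   AInterface a = injector.getInstance(AInterface.class); // B, C and D are created for you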
annotatedWith() is used together with to() or toInstance(), like this:
bind(AInterface.class).annotatedWith(Foo.class).to(AImpl.class);
bind(AInterface.class).annotatedWith(Bar.class).to(ZImpl.class);
Then you can inject different implementations by annotating your injection points, e.g.:
@Inject
MyInjectionPoint(@Foo AInterface getsAImpl, @Bar AInterface getsZImpl) {
  ....
}
It's also worth pointing out that you can potentially save yourself some boilerplate by not bothering with the binding modules (depending on how your code is arranged) and using JIT bindings:
@ImplementedBy(AImpl.class)
public interface AInterface {
  ....
}
These effectively act as "defaults" which are overridden by explicit bindings, if they exist.
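As a rough sketch of that override behaviour (the module name and FakeAImpl are made-up examples, not from your code):

import com.google.inject.AbstractModule;

// Hypothetical test module: its explicit binding wins over the
// @ImplementedBy(AImpl.class) default on AInterface.
public class TestOverrideModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(AInterface.class).to(FakeAImpl.class); // FakeAImpl is an assumed test double
  }
}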
I thought DI was implemented to allow using the same services across the application and changing them as needed. However, this snippet (Angular 2.0.0-beta.0) refuses to work:
# boot.ts
import {ProjectService} from './project.service'
bootstrap(AppComponent, [ProjectService]);

# my.component.ts
export class MyComponent {
  constructor(project: ProjectService) {
  }
}
and with explicit service requirement it works:
# my.component.ts
import {ProjectService} from './project.service';
export class MyComponent {
  constructor(project: ProjectService) {
  }
}
The official docs are somewhat inconsistent, but have the same thing in the plunker example:
# boot.ts
import {HeroesListComponent} from './heroes-list.component';
import {HeroesService} from './heroes.service';
bootstrap(HeroesListComponent, [HeroesService])
# heroes-list.component.ts
import {HeroesService} from './heroes.service';
Is this the intended way of using DI? Why do we have to import the service in every class requiring it, and where are the benefits if we can't just describe the service once at boot?
This isn't really related to dependency injection. You can't use a class in TS that is not imported.
This line references a class and DI derives from the type what instance to inject.
constructor(project: ProjectService) {
If the type isn't specified by a concrete import, DI can't know which of all possible ProjectService classes should be used.
What you can do, for example, is request a type (ProjectService) and get a different implementation (a subclass like MockProjectService or EnhancedProjectService, ...):
bootstrap(HeroesListComponent, [provide(ProjectService, {useClass: MockProjectService})]);
This way DI would inject a MockProjectService for the following constructor:
constructor(project: ProjectService) {
I need to intercept calls to private methods in Grails services. The following aspect IS working for any annotated public methods; however, nothing happens when the annotation is on PRIVATE methods.
import exceptions.DwcpExeption
import org.aspectj.lang.ProceedingJoinPoint
import org.aspectj.lang.annotation.Around
import org.aspectj.lang.annotation.Aspect
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component

@Aspect
@Component
public class LoggerInterceptor {
    private static Logger log = LoggerFactory.getLogger(LoggerInterceptor.class);

    @Around("@annotation(newAnnotation)")
    public Object aroundEvents(ProceedingJoinPoint proceedingJoinPoint, NewAnnotation newAnnotation) {
        log.info newAnnotation.value()
        String logMessage = String.format("%s.%s(%s)",
                proceedingJoinPoint.getTarget().getClass().getName(),
                proceedingJoinPoint.getSignature().getName(),
                Arrays.toString(proceedingJoinPoint.getArgs()));
        log.info "*Entering $logMessage"
        def result
        try {
            result = proceedingJoinPoint.proceed()
        } catch (ex) {
            log.error '', ex
        }
        log.info "*Exiting $logMessage. Result: $result"
        return result
    }
}
Maybe the problem is in the config? I've tried in applicationContext.xml:
<aop:aspectj-autoproxy proxy-target-class="true"/>
and in resources.groovy
aop.config("proxy-target-class": true)
Nevertheless, only public methods are intercepted.
Spring AOP is a proxy-based "AOP lite" approach in comparison to AspectJ. It only works for Spring components and only for public, non-static methods. This is also explained in the Spring AOP documentation as follows:
Due to the proxy-based nature of Spring’s AOP framework, protected methods are by definition not intercepted, neither for JDK proxies (where this isn’t applicable) nor for CGLIB proxies (where this is technically possible but not recommendable for AOP purposes). As a consequence, any given pointcut will be matched against public methods only!
If your interception needs include protected/private methods or even constructors, consider the use of Spring-driven native AspectJ weaving instead of Spring’s proxy-based AOP framework. This constitutes a different mode of AOP usage with different characteristics, so be sure to make yourself familiar with weaving first before making a decision.
Bottom line: Please switch to AspectJ which can be easily integrated into Spring applications via LTW (load-time weaving) as described in Section 9.8, “Using AspectJ with Spring applications”.
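For reference, here is a minimal Java sketch of enabling Spring-driven AspectJ load-time weaving; the class name, agent path, and aop.xml step are assumptions for illustration, and the exact setup in a Grails app may differ:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;

// Minimal sketch: turn on Spring-driven AspectJ load-time weaving.
// The JVM must be started with a weaving agent, e.g.
//   -javaagent:/path/to/spring-instrument.jar
// and the aspect must be listed in META-INF/aop.xml so the weaver picks it up.
@Configuration
@EnableLoadTimeWeaving
public class LtwConfig {
}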
If you don't specify the scope, it defaults to public. Add a pointcut for private methods:
@Around("@annotation(newAnnotation) && execution(private * *(..))")