I have two singleton services bound to the dependency injector (via Jersey):
resourceConfig.register(new AbstractBinder() {
    @Override
    protected void configure() {
        bind(Foo.class)
            .to(Foo.class)
            .in(Singleton.class);
        bind(Bar.class)
            .to(Bar.class)
            .in(Singleton.class);
    }
});
Foo uses Bar, so I can simply do the following:
public class Foo {
    @javax.inject.Inject private Bar bar;
}
If Foo only uses Bar rarely, then I can defer its construction:
public class Foo {
    @javax.inject.Inject private javax.inject.Provider<Bar> bar;
}
I have also read that using Provider is recommended in general, since it avoids eager construction and helps break circular dependencies (though I try to avoid those anyway).
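As a rough sketch (ServiceA and ServiceB are hypothetical names, not from the code above), Provider can break a cycle between two singletons because neither instance is resolved until get() is called:

public class ServiceA {
    // Nothing is constructed at injection time; the Provider is just a handle.
    @javax.inject.Inject
    private javax.inject.Provider<ServiceB> serviceB;

    public void doWork() {
        serviceB.get().help();   // ServiceB is resolved only here
    }
}

public class ServiceB {
    @javax.inject.Inject
    private javax.inject.Provider<ServiceA> serviceA;

    public void help() {
        // ...
    }
}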
But what about making it a proxy instead?
resourceConfig.register(new AbstractBinder() {
    @Override
    protected void configure() {
        bind(Foo.class)
            .to(Foo.class)
            .in(Singleton.class)
            .proxy(true)
            .proxyForSameScope(true);
        bind(Bar.class)
            .to(Bar.class)
            .in(Singleton.class)
            .proxy(true)
            .proxyForSameScope(true);
    }
});
I am new to injection, so I am not sure of the history. Are Provider and proxying effectively the same concept offered by two different frameworks?
A proxy makes the code look nicer and pushes the concern to the creator of the service instead of its consumer (which may or may not be desirable). Is there any disadvantage to this versus Provider?
Note that the only thing being considered here is singleton services.
This is purely an opinion, but I would prefer the proxy myself, for most of the reasons you mentioned. It is better in general to proxy interfaces, because then JDK dynamic proxies can be used, but as long as your Bar class follows the proxying rules you should still be OK.
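For example, if Foo were an interface with a hypothetical FooImpl implementation, the binding could expose the interface as the contract so that a JDK interface proxy can be handed to consumers:

resourceConfig.register(new AbstractBinder() {
    @Override
    protected void configure() {
        // Bind the implementation, but expose it under the Foo interface;
        // consumers are then handed a proxy of the interface rather than
        // a proxy of the concrete class.
        bind(FooImpl.class)
            .to(Foo.class)
            .in(Singleton.class)
            .proxy(true)
            .proxyForSameScope(true);
    }
});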
I'm brushing up on my design patterns knowledge by going through them in Dart, and I'm currently working on the remote proxy pattern. As I understand it, the pattern implies a shared interface between the real object residing on a server machine and the proxy object on a client machine.
I've managed to get all the networking between the client and server working fine, and I've set up a simple RPC API with Dart's HttpServer and HttpClient, but there's one thing that's bugging me. The methods on the proxy object must be asynchronous because of the networking involved, but the real object's methods aren't asynchronous. It would appear that this makes it impossible for them to share an interface, and thus functional consistency between the two classes isn't guaranteed by the type system.
Is there a way to implement some kind of a future version of a certain interface in Dart? I don't mean something that returns Future<SomeInterface>, but something where the methods of SomeInterface are implemented asynchronously with Future return types. What I'm looking for is something like:
abstract class IShared {
  int foo();
}

class Bar implements IShared {
  @override
  int foo() {
    // perform work here
    return 0;
  }
}

class BarProxy implements async IShared {
  // Currently an invalid override
  @override
  Future<int> foo() async {
    // perform async work here
    return 0;
  }
}
I'm aware that Future<IShared> implies something entirely different, but is there anything that could help implement what I want? Maybe I'm being too strict in requiring a shared interface between the real object and the proxy, but that's how it's always implemented in class diagrams.
Or perhaps there's a good pattern that I'm missing that can achieve this.
To be clear, if possible I don't want to make the methods of the shared interface and the non-proxy object async with Future returns.
This seems like a case for FutureOr, which you can use when you want to be able to return either an object directly or the same object wrapped inside a Future:
import 'dart:async';

abstract class IShared {
  FutureOr<int> foo();
}

class Bar implements IShared {
  @override
  int foo() {
    // perform work here
    return 0;
  }
}

class BarProxy implements IShared {
  @override
  Future<int> foo() async {
    // perform async work here
    return 0;
  }
}
I am trying to understand Components in Dagger 2. Here is an example:
@Component(modules = { MyModule.class })
public interface MyComponent {
    void inject(InjectionSite injectionSite);

    Foo foo();
    Bar bar();
}
I understand what the void inject() methods do. But I don't understand what the other Foo foo() getter methods do. What is the purpose of these other methods?
Usage in dependent components
In the context of a hierarchy of dependent components, such as in this example, provision methods such as Foo foo() are for exposing bindings to a dependent component. "Expose" means "make available" or even "publish". Note that the name of the method itself is actually irrelevant. Some programmers choose to name these methods Foo exposeFoo() to make the method name reflect its purpose.
Explanation:
When you write a component in Dagger 2, you group together modules containing @Provides methods. These @Provides methods can be thought of as "bindings" in that they associate an abstraction (e.g., a type) with a concrete way of resolving that type. With that in mind, the Foo foo() methods make the component able to expose its binding for Foo to dependent components.
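For instance, the MyModule referenced above is not shown in the question, but a hypothetical version of it might declare the bindings roughly like this (assuming MyComponent is itself @Singleton-scoped):

@Module
public class MyModule {

    @Provides
    @Singleton
    Foo provideFoo() {
        // The concrete way of resolving the Foo type
        return new Foo();
    }

    @Provides
    @Singleton
    Bar provideBar(Foo foo) {
        // This binding can itself consume the Foo binding
        return new Bar(foo);
    }
}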
Example:
Let's say Foo is an application singleton and we want to use it as a dependency for instances of DependsOnFoo, but inside a component with a narrower scope. If we write a naive @Provides method inside one of the modules of MyDependentComponent, then we will get a new instance of Foo. Instead, we can write this:
@PerFragment
@Component(dependencies = { MyComponent.class },
           modules = { MyDependentModule.class })
public interface MyDependentComponent {
    void inject(MyFragment frag);
}
And the module:
@Module
public class MyDependentModule {

    @Provides
    @PerFragment
    DependsOnFoo dependsOnFoo(Foo foo) {
        return new DependsOnFoo(foo);
    }
}
Assume also that the injection site for MyDependentComponent, MyFragment, contains a DependsOnFoo field:
public class MyFragment extends Fragment {
    @Inject DependsOnFoo dependsOnFoo;
}
Note that MyDependentComponent only knows about the module MyDependentModule. Through that module, it knows it can provide DependsOnFoo using an instance of Foo, but it doesn't know how to provide Foo by itself. This happens despite MyDependentComponent being a dependent component of MyComponent. The Foo foo() method in MyComponent allows the dependent component MyDependentComponent to use MyComponent's binding for Foo to inject DependsOnFoo. Without this Foo foo() method, the compilation will fail.
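As a rough sketch (assuming Dagger's default generated builder method names, and that myFragment is the fragment instance being injected), the two components might be wired together like this:

// Build the parent component first (assumes its modules are default-constructible).
MyComponent myComponent = DaggerMyComponent.create();

// Pass it to the dependent component's generated builder so that
// MyComponent's Foo foo() provision method becomes available.
MyDependentComponent dependentComponent = DaggerMyDependentComponent.builder()
        .myComponent(myComponent)
        .myDependentModule(new MyDependentModule())
        .build();

// DependsOnFoo in MyFragment is now injected using MyComponent's Foo binding.
dependentComponent.inject(myFragment);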
Usage to resolve a binding
Let's say we would like to obtain instances of Foo without having to call inject(this). The Foo foo() method inside the component will allow this much the same way you can call getInstance() with Guice's Injector or Castle Windsor's Resolve. The illustration is as below:
public void fooConsumer() {
    MyComponent component = DaggerMyComponent.builder().build();
    Foo foo = component.foo();
}
Dagger is a way of wiring up graphs of objects and their dependencies. As an alternative to calling constructors directly, you obtain instances by requesting them from Dagger, or by supplying an object that you'd like to have injected with Dagger-created instances.
Let's make a coffee shop that depends on a Provider<Coffee> and a CashRegister. Assume that you have those wired up within a module (maybe to LightRoastCoffee and DefaultCashRegister implementations); a sketch of such a module follows the CoffeeShop class below.
public class CoffeeShop {
    private final Provider<Coffee> coffeeProvider;
    private final CashRegister register;

    @Inject
    public CoffeeShop(Provider<Coffee> coffeeProvider, CashRegister register) {
        this.coffeeProvider = coffeeProvider;
        this.register = register;
    }

    public void serve(Person person) {
        register.takeMoneyFrom(person);
        person.accept(coffeeProvider.get());
    }
}
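The module mentioned above isn't shown here; a minimal sketch (the CoffeeShopModule name is assumed) wiring Coffee to LightRoastCoffee and CashRegister to DefaultCashRegister could look like this:

@Module
public class CoffeeShopModule {

    @Provides
    Coffee provideCoffee() {
        // Unscoped: every Provider<Coffee>.get() call serves a fresh light roast.
        return new LightRoastCoffee();
    }

    @Provides
    CashRegister provideCashRegister() {
        return new DefaultCashRegister();
    }
}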
Now you need to get an instance of that CoffeeShop, but it only has a two-parameter constructor with its dependencies. So how do you do that? Simple: You tell Dagger to make a factory method available on the Component instance it generates.
@Component(modules = {/* ... */})
public interface CoffeeShopComponent {
    CoffeeShop getCoffeeShop();

    void inject(CoffeeService serviceToInject); // to be discussed below
}
When you call getCoffeeShop, Dagger creates the Provider<Coffee> to supply LightRoastCoffee, creates the DefaultCashRegister, supplies them to the CoffeeShop constructor, and returns you the result. Congratulations, you are the proud owner of a fully-wired-up coffee shop.
Now, all of this is an alternative to void injection methods, which take an already-created instance and inject into it:
public class CoffeeService extends SomeFrameworkService {
    @Inject CoffeeShop coffeeShop;

    @Override public void initialize() {
        // Before injection, your coffeeShop field is null.
        DaggerCoffeeShopComponent.create().inject(this);
        // Dagger inspects CoffeeService at compile time, so at runtime it can reach
        // in and set the fields.
    }

    @Override public void alternativeInitialize() {
        // The above is equivalent to this, though:
        coffeeShop = DaggerCoffeeShopComponent.create().getCoffeeShop();
    }
}
So, there you have it: Two different styles, both of which give you access to fully-injected graphs of objects without listing or caring about exactly which dependencies they need. You can prefer one or the other, or prefer factory methods for the top-level and members injection for Android or Service use-cases, or any other sort of mix and match.
(Note: Beyond their use as entry points into your object graph, no-arg getters known as provision methods are also useful for exposing bindings for component dependencies, as David Rawson describes in the other answer.)
Let's say we have a class:
public class WithDependencies
{
    public WithDependencies(IAmDependencyOne first, IAmDependencyTwo second)
    {
        // ...
    }
}
Now the question: how do you create objects of the WithDependencies class in an application?
I know there are many ways.
new WithDependencies(new DependencyOne(), new DependencyTwo());

new WithDependencies(IoC.Resolve<IDependencyOne>(), IoC.Resolve<IDependencyTwo>());
// register IDependencyOne, IDependencyTwo implementations at app start

IoC.Resolve<WithDependencies>();
// register IDependencyOne, IDependencyTwo implementations at app start
// isolate ourselves from the concrete IoC container
MyCustomWithDependenciesFactory.Create();
and so on...
What do you think is the right way to do it?
Edit:
Because I'm not getting answers, or maybe I don't understand them, I'll try to ask again. Let's say that on some event (button, timer, whatever) I need a new WithDependencies object. How do I create it? Assume the IoC container is already configured.
It depends on the context, so it's impossible to provide a single answer. Conceptually you'd be doing something like this from the Composition Root:
var wd = new WithDependencies(new DependencyOne(), new DependencyTwo());
However, even in the absence of a DI Container, the above code isn't always unambiguously the correct answer. In some cases, you might want to share the same dependency among several consumers, like this:
var dep1 = new DependencyOne();
var wd = new WithDependencies(dep1, new DependencyTwo());
var another = new AnotherWithDependencies(dep1, new DependencyThree());
In other cases, you might not want to share dependencies, in which case the first option is more correct.
This is just a small glimpse of an entire dimension of DI concerned with Lifetime Management. Many DI Containers can take care of that for you, which is one excellent argument to prefer a DI Container over Poor Man's DI.
Once you start using a DI Container, you should follow the Register Resolve Release pattern when resolving types, letting Auto-wiring take care of the actual composition:
var wd = container.Resolve<WithDependencies>();
The above example assumes that the container is already correctly configured.
If you need to create a dependency which has its own dependencies, you can either A) do it yourself, or B) ask something else to do it for you. Option A negates the benefits of dependency injection (decoupling, etc.), so I would say option B is a better starting point. Whichever form it takes, whether a service locator (i.e. IoC.Resolve), a static factory, or an instance factory, we have now chosen to use the factory pattern: the point is that we have delegated that responsibility to an external authority.
There are a number of trade-offs required for static accessors. (I went over them in another answer, so I won't repeat them here.) In order to avoid introducing a dependency on the infrastructure or the container, a solid option is to accept a factory for creating WithDependencies when we need an instance somewhere else:
public class NeedsWithDependencies
{
    private readonly IWithDependenciesFactory _withDependenciesFactory;

    public NeedsWithDependencies(IWithDependenciesFactory withDependenciesFactory)
    {
        _withDependenciesFactory = withDependenciesFactory;
    }

    public void Foo()
    {
        var withDependencies = _withDependenciesFactory.Create();
        // ...use the instance...
    }
}
Next, we can create a container-specific implementation of the factory:
public class WithDependenciesFactory : IWithDependenciesFactory
{
    private readonly IContainer _container;

    public WithDependenciesFactory(IContainer container)
    {
        _container = container;
    }

    public WithDependencies Create()
    {
        return _container.Resolve<WithDependencies>();
    }
}
Now NeedsWithDependencies is completely isolated from any knowledge of how WithDependencies gets created; it also exposes all its dependencies in its constructor, instead of hiding dependencies on static accessors, making it easy to reuse and test.
Defining all those factories can get a little cumbersome, though. I like Autofac's factory relationship type, which will detect parameters of the form Func<TDependency> and automatically inject a function which serves the same purpose as the hand-coded factory above:
public class NeedsWithDependencies
{
    private readonly Func<WithDependencies> _withDependenciesFactory;

    public NeedsWithDependencies(Func<WithDependencies> withDependenciesFactory)
    {
        _withDependenciesFactory = withDependenciesFactory;
    }

    public void Foo()
    {
        var withDependencies = _withDependenciesFactory();
        // ...use the instance...
    }
}
It also works great with runtime parameters:
public class NeedsWithDependencies
{
    private readonly Func<int, WithDependencies> _withDependenciesFactory;

    public NeedsWithDependencies(Func<int, WithDependencies> withDependenciesFactory)
    {
        _withDependenciesFactory = withDependenciesFactory;
    }

    public void Foo(int x)
    {
        var withDependencies = _withDependenciesFactory(x);
        // ...use the instance...
    }
}
Sometimes I try to get rid of factories, or at least not depend directly on them, so dependency injection (without factories) is of course useful.
That's why I use Google Guice: it's a small framework that uses Java annotations, and you can quickly change your injections/dependencies. Just take a look at it:
http://code.google.com/p/google-guice/
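For illustration only, a minimal Guice sketch (using hypothetical Java counterparts of the types above, and assuming WithDependencies has an @Inject-annotated constructor) might look like this:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class App {

    // Wiring is declared once in a module.
    static class AppModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(IDependencyOne.class).to(DependencyOne.class);
            bind(IDependencyTwo.class).to(DependencyTwo.class);
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        // Guice auto-wires the @Inject-annotated constructor of WithDependencies.
        WithDependencies wd = injector.getInstance(WithDependencies.class);
    }
}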
What is the preferred way to remove a default pipeline contributor (OpenRasta 2.0.3)?
I haven't found a lot on that on the net, but one way seems to be writing a custom DependencyRegistrar, i.e. deriving from DefaultDependencyRegistrar and then e.g. overriding AddDefaultContributors(). Apart from the fact that I doubt this is the best way to remove just a single pipeline contributor, it seems to need additional per-host (ASP.NET vs. InMemory) work, whereas I would consider messing with pipeline handlers to be a host-agnostic affair.
But even if I went this route, this person seems to have tried it without success: http://groups.google.com/group/openrasta/browse_thread/thread/d72b91e5994f402b
I tried similar things, but so far couldn't make my custom registrar replace the default.
So what's the simplest and best way to remove a default pipeline contributor, preferably in a host-agnostic way? Is there a working example somewhere?
No, you just need to derive from the registrar and use the protected members that are available to imperatively remove the types you don't want auto-registered.
The registrar needs to be registered in your container before you provide it to OpenRasta, otherwise the type has been resolved already.
Answering myself with working code snippets as they might be helpful to others.
So it looks like removing default pipeline contributors cannot be done in a host-agnostic way (although I don't see why OpenRasta could not be modified to allow for easy deletion of handlers in the future). The two classes that need to be written are in fact independent of the host(s) used:
public class MyDependencyRegistrar : DefaultDependencyRegistrar
{
    protected override void AddDefaultContributors()
    {
        base.AddDefaultContributors();
        PipelineContributorTypes.Remove(typeof(HandlerResolverContributor));

        // If we remove the only contributor for the 'well-known'
        // IHandlerSelection stage, like done above, we need to add
        // another one that implements IHandlerSelection, otherwise
        // we'll run into errors (and what's the point of a pipeline
        // without a handler selector anyway?). So let's do that here:
        AddPipelineContributor<MyOwnHandlerResolverContributor>();
    }
}
In order to make that registrar available, we need to create an accessor like the following, which then needs to be set in the various hosts:
public class MyDependencyResolverAccessor : IDependencyResolverAccessor
{
    InternalDependencyResolver resolver;

    public IDependencyResolver Resolver
    {
        get
        {
            if (resolver == null)
            {
                resolver = new InternalDependencyResolver();
                resolver.AddDependency<IDependencyRegistrar, MyDependencyRegistrar>();
            }
            return resolver;
        }
    }
}
For ASP.NET, this seems to work for me:
public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        OpenRastaModule.Host.DependencyResolverAccessor =
            new MyDependencyResolverAccessor();
    }
}
For InMemoryHost, which I use for integration testing and in-process access of my handlers, I haven't found a way around copying the whole InMemoryHost class and modifying it to my needs. In fact, we don't need MyDependencyResolverAccessor in this case, as InMemoryHost already implements IDependencyResolverAccessor. So here's what it could look like; only the last line was actually added to the existing code in InMemoryHost:
public class TwinMemoryHost : IHost, IDependencyResolverAccessor, IDisposable
{
    readonly IConfigurationSource _configuration;
    bool _isDisposed;

    public TwinMemoryHost(IConfigurationSource configuration)
    {
        _configuration = configuration;
        Resolver = new InternalDependencyResolver();
        Resolver.AddDependency<IDependencyRegistrar, MyDependencyRegistrar>();
        ...
I have been doing my first Test-Driven Development project recently and have been learning Ninject and Moq. This is my first attempt at all this. I've found the TDD approach thought-provoking, and Ninject and Moq have been great. The project I am working on has not been a particularly good fit for Ninject, as it is a highly configurable C# program designed to test the use of a web service interface.
I have broken it up into modules and have interfaces all over the shop, but I am still finding that I have to use lots of constructor arguments when getting an implementation of a service from the Ninject kernel. For example, in my Ninject module:
Bind<IDirEnum>().To<DirEnum>();
My DirEnum class:
public class DirEnum : IDirEnum
{
    public DirEnum(string filePath, string fileFilter,
                   bool includeSubDirs)
    {
        ....
In my Configurator class (this is the main entry point), which hooks all the services together:
class Configurator
{
    public void ConfigureServices(string[] args)
    {
        ArgParser argParser = new ArgParser(args);
        IDirEnum dirEnum = kernel.Get<IDirEnum>(
            new ConstructorArgument("filePath", argParser.filePath),
            new ConstructorArgument("fileFilter", argParser.fileFilter),
            new ConstructorArgument("includeSubDirs", argParser.subDirs));
    }
}
filePath, fileFilter and includeSubDirs are command-line options to the program. So far so good. However, being a conscientious kind of guy, I have a test covering this bit of code, and I'd like to use a Moq object. I have created a Ninject module for my tests:
public class TestNinjectModule : NinjectModule
{
    internal IDirEnum mockDirEnum { get; set; }

    public override void Load()
    {
        Bind<IDirEnum>().ToConstant(mockDirEnum);
    }
}
And in my test I use it like this:
[TestMethod]
public void Test()
{
    // Arrange
    TestNinjectModule testModule = new TestNinjectModule();
    Mock<IDirEnum> mockDirEnum = new Mock<IDirEnum>();
    testModule.mockDirEnum = mockDirEnum.Object;

    // Act
    Configurator configurator = new Configurator();
    configurator.ConfigureServices();

    // Assert
    // here lies my problem! How do I test what values were passed to the
    // constructor arguments???
}
So the above shows my problem. How can I test what arguments were passed to the ConstructorArguments of the mock object? My guess is that Ninject is discarding the ConstructorArguments in this case, as the Bind does not require them? Can I test this with a Moq object, or do I need to hand-code a mock that implements IDirEnum and accepts and 'records' the constructor arguments?
N.B. this code is 'example' code, i.e. I have not reproduced my code verbatim, but I think I have expressed enough to convey the issues. If you need more context, please ask!
Thanks for looking. Be gentle, this is my first time ;-)
Jim
There are a few problems with the way you designed your application. First of all, you are calling the Ninject kernel directly from within your code. This is called the Service Locator pattern and it is considered an anti-pattern. It makes testing your application much harder and you are already experiencing this. You are trying to mock the Ninject container in your unit test, which complicates things tremendously.
Next, you are injecting primitive types (string, bool) in the constructor of your DirEnum type. I like how MNrydengren states it in the comments:
take "compile-time" dependencies
through constructor parameters and
"run-time" dependencies through method
parameters
It's hard for me to guess what that class should do, but since you are injecting these variables that change at run-time into the DirEnum constructor, you end up with a hard-to-test application.
There are multiple ways to fix this. Two that come to mind are the use of method injection and the use of a factory. Which one is feasible is up to you.
Using method injection, your Configurator class will look like this:
class Configurator
{
    private readonly IDirEnum dirEnum;

    // Injecting IDirEnum through the constructor
    public Configurator(IDirEnum dirEnum)
    {
        this.dirEnum = dirEnum;
    }

    public void ConfigureServices(string[] args)
    {
        var parser = new ArgParser(args);

        // Inject the arguments into a method
        this.dirEnum.SomeOperation(
            parser.filePath,
            parser.fileFilter,
            parser.subDirs);
    }
}
Using a factory, you would need to define a factory that knows how to create new IDirEnum types:
interface IDirEnumFactory
{
    IDirEnum CreateDirEnum(string filePath, string fileFilter,
        bool includeSubDirs);
}
Your Configurator class can now depend on the IDirEnumFactory interface:
class Configurator
{
    private readonly IDirEnumFactory dirFactory;

    // Injecting the factory through the constructor
    public Configurator(IDirEnumFactory dirFactory)
    {
        this.dirFactory = dirFactory;
    }

    public void ConfigureServices(string[] args)
    {
        var parser = new ArgParser(args);

        // Creating a new IDirEnum using the factory
        var dirEnum = this.dirFactory.CreateDirEnum(
            parser.filePath,
            parser.fileFilter,
            parser.subDirs);
    }
}
See how in both examples the dependencies get injected into the Configurator class. This is called the Dependency Injection pattern, as opposed to the Service Locator pattern, where the Configurator asks for its dependencies by calling into the Ninject kernel.
Now, since your Configurator is completely free from any IoC container whatsoever, you can easily test this class by injecting a mocked version of the dependency it expects.
What is left is to configure the Ninject container at the top of your application (in DI terminology: the composition root). With the method injection example, your container configuration would stay the same; with the factory example, you will need to replace the Bind<IDirEnum>().To<DirEnum>() line with something like the following:
public static void Bootstrap()
{
    kernel.Bind<IDirEnumFactory>().To<DirEnumFactory>();
}
Of course, you will need to create the DirEnumFactory:
class DirEnumFactory : IDirEnumFactory
{
    public IDirEnum CreateDirEnum(string filePath, string fileFilter,
        bool includeSubDirs)
    {
        return new DirEnum(filePath, fileFilter, includeSubDirs);
    }
}
WARNING: Do note that factory abstractions are in most cases not the best design, as explained here.
The last thing you need to do is to create a new Configurator instance. You can simply do this as follows:
public static Configurator CreateConfigurator()
{
    return kernel.Get<Configurator>();
}

public static void Main(string[] args)
{
    Bootstrap();

    var configurator = CreateConfigurator();
    configurator.ConfigureServices(args);
}
Here we call the kernel. Although calling the container directly should be avoided, there will always be at least one place in your application where you call the container, simply because it must wire everything up. However, we try to minimize the number of times the container is called directly, because doing so improves, among other things, the testability of our code.
See how I didn't really answer your question, but showed a way to work around the problem very effectively.
You might still want to test your DI configuration. That's very valid, IMO; I do this in my applications. But for this you often don't need the DI container, and even if you do, it doesn't mean that all your tests should have a dependency on the container. This relationship should only exist for the tests that test the DI configuration itself. Here is a test:
[TestMethod]
public void DependencyConfiguration_IsConfiguredCorrectly()
{
    // Arrange
    Program.Bootstrap();

    // Act
    var configurator = Program.CreateConfigurator();

    // Assert
    Assert.IsNotNull(configurator);
}
This test indirectly depends on Ninject, and it will fail when Ninject is not able to construct a new Configurator instance. When you keep your constructors free of any logic and only use them to store the supplied dependencies in private fields, you can run this test without the risk of calling out to a database, a web service, or anything else.
I hope this helps.