Short story: I want to get the Injectee from within a Supplier that is bound using Jersey's AbstractBinder.bindFactory(Class) method.
Basically, I want to work around JERSEY-3675.
Long story: I was using the org.glassfish.hk2.api.Factory interface to create an instance of my RequestScoped object, and life was good.
I moved my registration into a Feature and then life was not good because of JERSEY-3675.
Long story short, org.glassfish.hk2.utilities.binding.AbstractBinder does not work inside Features. No problem, I thought, I will use org.glassfish.jersey.internal.inject.AbstractBinder.
Slight problem I encountered: Jersey's AbstractBinder.bindFactory() method takes a Supplier, not a Factory. No problem, I thought, I will use Supplier (I like it better, anyway).
Bigger problem I encountered: I had been using org.glassfish.hk2.api.Injectee to get the InstantiationData about who was calling the injection. This does not get injected if I do not use HK2's Factory; the javadoc even says the result is 'indeterminate' if not called from Factory.provide().
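For context, here is a minimal sketch of the kind of setup that was working, assuming the InstantiationData comes from HK2's InstantiationService (MyObject and its constructor are illustrative):

import javax.inject.Inject;

import org.glassfish.hk2.api.Factory;
import org.glassfish.hk2.api.Injectee;
import org.glassfish.hk2.api.InstantiationService;

public class MyObjectFactory implements Factory<MyObject> {

    @Inject
    private InstantiationService instantiationService;

    @Override
    public MyObject provide() {
        // Per the javadoc, this data is only determinate when called
        // from within Factory.provide().
        Injectee parent = instantiationService.getInstantiationData().getParentInjectee();
        return new MyObject(parent.getInjecteeClass());
    }

    @Override
    public void dispose(MyObject instance) {
        // nothing to clean up in this sketch
    }
}

Jersey's bindFactory(Class) wants a Supplier<MyObject> instead, and a plain Supplier offers no equivalent view of who is being injected.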
Even though there is a Jersey Injectee (org.glassfish.jersey.internal.inject.Injectee), it seems to be available only when using Jersey's InjectionResolver. I do not want to use an InjectionResolver because HK2's InjectionResolver must be a Singleton, but I want to have RequestScoped stuff in my injected object.
On second read, though, the documentation for Jersey's InjectionResolver does not say anything about needing to be a Singleton. Can anyone confirm?
I do not want to create my own annotation for this (I have created my own annotations and InjectionResolvers for other cases).
I do not feel confident overriding @Inject with an InjectionResolver. I am not sure what that means, or how I would be able to register multiple of those and have them work together (one for each Feature).
In the meantime, I am using the workaround mentioned in the bug.
I am new to the DI scene, so if something (or all of it) does not make sense, please let me know.
This blog article says that:
While there are sometimes sensible ways to mock out objects without DI
(typically by mocking out class methods, as seen in the OCMock example
above), it’s often flat out not possible. Even when it is possible,
the complexity of the test setup might outweigh the benefits. If
you’re using dependency injection consistently, you’ll find writing
tests using stubs and mocks will be much easier.
but it doesn't explain why. What are the possible scenarios where DI (injecting an id object conforming to a protocol) will serve mocking in Objective-C better than plain OCMockito:
[given([mockArray objectAtIndex:0]) willReturn:@"first"];
[verifyCount(mockArray, times(1)) objectAtIndex:0];
?
I've noticed that it is easier to create a separate class for the test target when the original class does some async stuff.
Let's assume you write a test for a UIViewController which has a LoginSystem dependency, which in turn uses AFNetworking to make a request to the API. LoginSystem takes a block argument as a callback (UIViewController -> LoginSystem -> AFNetworking).
If you make a mock of LoginSystem, you will probably run into problems firing the callback block to test your UIViewController's behaviour on success/failure. When I tried that, I ended up using MKTArgumentCaptor to retrieve the block argument and then had to invoke it inside the test file.
On the other hand, if you create a separate class for LoginSystem (call it LoginSystemStub, extending LoginSystem) you are able to "mock" a behaviour in 3 lines of code, outside the test file. We should also keep our test files clean and readable.
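A minimal sketch of what I mean, assuming LoginSystem exposes a single block-based method (the method name and signature here are illustrative):

@interface LoginSystemStub : LoginSystem
@property (nonatomic) BOOL shouldSucceed;
@end

@implementation LoginSystemStub
- (void)loginWithCompletion:(void (^)(BOOL success, NSError *error))completion
{
    // Fire the callback synchronously so the test never has to capture
    // the block with MKTArgumentCaptor and invoke it by hand.
    if (self.shouldSucceed) {
        completion(YES, nil);
    } else {
        completion(NO, [NSError errorWithDomain:@"test" code:1 userInfo:nil]);
    }
}
@end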
Another case is that verify() doesn't work for checking asynchronous behaviour. It is much easier to call expect(smth2).will.equal(smth).
EDIT:
Pointers to NSError (NSError**) also don't work well with verify(), and it's better to create a stub :D
Imagine you are trying to test a more complex behavior of an object interacting with one of its child objects. To make certain that the parent object is operating correctly, you have to mock all the methods of the child object and even potentially track its changing state.
But if you do that, you just wrote an entirely new object in a confusing and convoluted way. It would have been simpler to write an entirely new object and tell the parent to use that.
With DI you inject your model at runtime; it's not bound in your classes, only in the configuration.
When you want to mock, you just create a mock model and inject that instead of your real data. Aside from writing the mock model, you change your implementation in a single line.
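For instance, a sketch of the idea in Objective-C terms (the protocol and class names are made up for illustration):

// The consumer depends only on an abstraction...
@protocol ItemSource <NSObject>
- (NSString *)itemAtIndex:(NSUInteger)index;
@end

// ...so a test can hand it canned data instead of the real model.
@interface FakeItemSource : NSObject <ItemSource>
@end

@implementation FakeItemSource
- (NSString *)itemAtIndex:(NSUInteger)index
{
    return @"first"; // canned test data
}
@end

// In the test:
// viewController.itemSource = [FakeItemSource new];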
See here for a hands-on example or here for the idea behind it.
Disclaimer: Of course you can mock other stuff than the model, but that's probably the most common use-case.
The answer is: It's not better. It's only better if you need some super custom behavior.
The best thing about it is that you don't have to create an interface/protocol for every class you inject, and you can limit DI to the modules you really need to inject, making your code cleaner and more YAGNI.
It applies to any dynamic language, or any language with reflection. Creating so much clutter just for the sake of unit tests struck me as a bad idea.
I have a bit of a dilemma, which to be honest is a fringe case but still poses an issue.
Currently I am using Ninject MVC and bind all my controllers like so:
Kernel.Bind<SomeController>().ToSelf();
That works a treat for 99% of what I have needed to do; however, at the moment I am doing some wacky stuff around dynamic routing and dynamic controllers, which requires me to manually write a method to get the type of a controller from Ninject. Initially I thought it would be easy, but it's not... I was expecting that I could get the controller based on its name, but that didn't work:
Kernel.Get<IController>("SomeController");
That got me thinking that it's probably because it only knows about a binding to SomeController, not IController. So I thought, I can just write all my bindings like so:
Kernel.Bind<IController>().To<SomeController>().Named("SomeController");
This way it should be easy to get the type of the controller from its name using the previous code; however, if I were to bind this way, I would have a problem when I come to unbind the controllers (as plugins can be loaded and unloaded at runtime). So the normal:
Kernel.Unbind<SomeController>();
which worked great before, will no longer work, and I would have to do:
Kernel.Unbind<IController>();
However, then I realised that I need to give it some constraint to tell it which binding for this type I want to unbind, and there seem to be no overloads or DSL available to do this...
So I am trapped between a rock and a hard place, as I need to satisfy the ControllerLookup method, but also need to keep it so I can add and remove bindings easily at runtime.
protected override Type GetControllerType(RequestContext requestContext, string controllerName) {
    // ... find and return the type from Ninject
}
Anyone have any ideas?
(Just in case anyone questions why I am doing this: it's because of the way I am loading plugins. Ninject knows about the types and the namespaces, but within the context of creating a controller it doesn't know the namespace, just the controller name, so I do this to satisfy the isolation of the plugin and the location of the dynamic controller. It is a roundabout way of doing it, but it is what people have done with AutoFac before; see the example of a similar thing with AutoFac.)
In my opinion the bindings should be created once at application startup and not change anymore after the first resolve; everything else can lead to strange issues. Unless you have proper isolation using an AppDomain for each plugin, you cannot really unload them anyway. Instead of unloading bindings you can make them conditional and disable them using some configuration.
If you really want to unload bindings, then I suggest not doing it for single bindings but taking advantage of modules: load all bindings belonging to one plugin together in one or several modules, and unload those modules instead of the single bindings.
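A sketch of what that might look like (the module and controller names are illustrative):

using Ninject;
using Ninject.Modules;

public class FooPluginModule : NinjectModule
{
    public override void Load()
    {
        // All bindings for this plugin live in one module.
        Bind<IController>().To<FooController>().Named("FooController");
        Bind<IController>().To<FooAdminController>().Named("FooAdminController");
    }
}

public static class PluginLoader
{
    public static void Enable(IKernel kernel)
    {
        kernel.Load(new FooPluginModule());
    }

    public static void Disable(IKernel kernel)
    {
        // Unloading removes every binding the module registered, in one go.
        kernel.Unload(typeof(FooPluginModule).FullName);
    }
}

With the named bindings grouped this way, Kernel.Get<IController>("FooController") still satisfies the controller lookup, and you never have to unbind controllers one by one.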
I have an application using the Entity Framework code first. My setup is that I have a core service which all other services inherit from. The core service contains the following code:
public static DatabaseContext db = new DatabaseContext();
public CoreService()
{
db.Database.Initialize(force: false);
}
Then, another class will inherit from CoreService, and when it needs to query the database it will just run some code such as:
db.Products.Where(blah => blah.IsEnabled);
However, I seem to be getting conflicting stories as to which is best.
Some people advise NOT to do what I'm doing.
Other people say that you should define the context for each class (rather than use a base class to instantiate it).
Others say that for EVERY database call, I should wrap it in a using block. I've never seen this in any of the examples from Microsoft.
Can anyone clarify?
I'm currently at a point where refactoring is possible and quite quick, so I'd like some general advice if possible.
You should use one context per web request. Hold it open for as long as you need it, then get rid of it when you are finished. That's what the using is for.
Do NOT wrap up your context in a Singleton. That is not a good idea.
If you are working with clients like WinForms then I think you would wrap the context around each form but that's not my area.
Also, make sure you know when you are actually going to execute against your data source, so you don't end up enumerating multiple times when you might only need to do so once to work with the results.
Lastly, you have seen this practice from MS: lots of the ADO stuff supports being wrapped in a using, but hardly anyone realises it.
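A minimal sketch of that pattern, built on the DatabaseContext from the question (ProductService and GetEnabledProducts are illustrative):

using System.Collections.Generic;
using System.Linq;

public class ProductService
{
    public IList<Product> GetEnabledProducts()
    {
        // One context per unit of work, disposed as soon as we're done.
        using (var db = new DatabaseContext())
        {
            // ToList() executes the query once, here, while the context
            // is still open, so the results can be worked with later
            // without re-enumerating against the database.
            return db.Products.Where(p => p.IsEnabled).ToList();
        }
    }
}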
I suggest using the design principle "prefer composition over inheritance".
You can keep a reference to the database context in your base class.
Implement a singleton for getting the DataContext and assign the DataContext to this reference.
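A sketch of that suggestion (DataContextProvider is an illustrative name):

using System;

public static class DataContextProvider
{
    // Single lazily-created context handed out to whoever asks.
    private static readonly Lazy<DatabaseContext> Instance =
        new Lazy<DatabaseContext>(() => new DatabaseContext());

    public static DatabaseContext Current
    {
        get { return Instance.Value; }
    }
}

public class CoreService
{
    // Composition: the base class holds a reference instead of
    // newing up (or inheriting) the context itself.
    protected readonly DatabaseContext Db = DataContextProvider.Current;
}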
The conflicts you get are not related to sharing the context between classes but are caused by the static declaration of your context. If you make the context an instance field of your service class, so that every service instance gets its own context, there should be no issues.
The using pattern you mention is not required, but you should make sure that context.Dispose() is called when the service is disposed.
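In other words, something along these lines (a sketch, with the IDisposable wiring standing in for the Dispose() point above):

using System;

public class CoreService : IDisposable
{
    // Instance field: every service instance gets its own context.
    protected readonly DatabaseContext db = new DatabaseContext();

    public void Dispose()
    {
        // Called when the service is disposed, replacing the per-call
        // using blocks mentioned in the question.
        db.Dispose();
    }
}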
I'm currently working on a Rails project, and have found times where it's easiest to do
if object.class == Foo
  ...
elsif object.class == Bar
  ...
else
  ...
end
I started doing this in views where I needed to display different objects in different ways, but have found myself using it in other places now, such as in functions that take objects as arguments. I'm not precisely sure why, but I feel like this is not good practice.
If it's not good practice, why so?
If it's totally fine, when are times that one might want to use this specifically?
Thanks!
Not sure why that works for you at all. When you need to test whether an object is an instance of class Foo, you should use
object.is_a? Foo
But it's not good practice in Ruby anyway. It'd be much better to use polymorphism whenever possible. For example, if somewhere in the code you can have objects of two different classes and you need to display them differently, you can define a display method in both classes. After that you can call object.display, and the object will be displayed using the method defined in the corresponding class.
The advantage of that approach is that when you need to add support for a third class, or a whole bunch of new classes, all you'll need to do is define a display method in each of them. Nothing will change in the places where you actually use this method.
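A minimal sketch of that approach (display and the return values are illustrative):

class Foo
  def display
    "rendering a Foo"
  end
end

class Bar
  def display
    "rendering a Bar"
  end
end

# The caller no longer inspects classes; each object knows how to
# display itself.
[Foo.new, Bar.new].each { |object| puts object.display }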
It's better to express type specific behavior using subtyping.
Let the objects know how they are displayed. Create a method Display() and pass all you need from outside as parameters. Let Foo know how to display a foo and Bar know how to display a bar.
There are many articles on replacing conditionals with polymorphism.
It’s not a good idea for several reasons. One of them is duck typing – once you start explicitly checking for object class in the code, you can no longer simply pass an instance of a different class that conforms to a similar interface as the original object. This makes proxying, mocking and other common design tricks harder. (The point can be also generalized as breaking encapsulation. It can be argued that the object’s class is an implementation detail that you as a consumer should not be interested in. Broken encapsulation ≈ tight coupling ≈ pain.)
Another reason is extensibility. When you have a giant switch over the object type and want to add one more case, you have to alter the switch code. If this code is embedded in a library, for example, the library users can’t simply extend the library’s behaviour without altering the library code. Ideally all behaviour of an object should be a part of the object itself, so that you can add new behaviour just by adding more object types.
If you need to display different objects in a different way, can’t you simply make the drawing code a part of the object?
Please be gentle, I'm a newb to this IoC/MVC thing but I am trying. I understand the value of DI for testing purposes and how IoC resolves dependencies at run-time and have been through several examples that make sense for your standard CRUD operations...
I'm starting a new project and cannot come up with a clean way to accomplish user permissions. My website is mostly secured, with all pages with functionality (except signup, FAQ, about us, etc.) behind a login. I have a custom identity that has several extra properties which control access to data... So...
Using Ninject, I've bound a concrete type to a method (Bind<MyIdentity>().ToMethod(c => MyIdentity.GetIdentity())) so that when I add MyIdentity to a constructor, it is injected based on the results of the method call.
That all works well. Is it appropriate to directly query the request cookies object (via FormsAuthentication) from the GetIdentity() method? In testing the controllers I can pass in an identity, but the GetIdentity() method will be essentially untestable...
Also, in the GetIdentity() method, I will query the database. Should I manually create a concrete instance of a repository?
Or is there a better way all together?
I think you are reasonably on the right track, since you abstracted away database communication and ASP.NET dependencies from your unit tests. Don't worry that you can't test everything. There will always be lines of code in your application that are untestable; GetIdentity is a good example. Somewhere in your application you need to communicate with a framework-specific API, and that code cannot be covered by your unit tests.
There might still be room for improvement though. While an untested GetIdentity isn't a problem, the fact that it is callable by the application is: it just hangs there, waiting for someone to accidentally call it. So why not abstract the creation of identities? For instance, create an abstract factory that knows how to get the right identity for the current context. You can inject this factory instead of injecting the identity itself. This allows you to have an implementation defined near the application's composition root and out of reach of the rest of the application. Besides that, the code communicates more clearly what is happening. Nobody has to ask "which identity do I actually get?", because it will be clear from the method on the factory they call.
Here's an example:
public interface IIdentityProvider
{
// Bit verbose, but veeeery clear;
// pick another name if you like.
MyIdentity GetIdentityForCurrentUser();
}
In your composition root you can have an implementation of this:
private sealed class AspNetIdentityProvider : IIdentityProvider
{
public MyIdentity GetIdentityForCurrentUser()
{
// here the code of the MyIdentity.GetIdentity() method.
}
}
As a trick I sometimes have my test objects implement both the factory and the product, just for convenience during unit testing. For instance:
private sealed class FakeMyIdentity
    : MyIdentity, IIdentityProvider
{
public MyIdentity GetIdentityForCurrentUser()
{
// just returning itself.
return this;
}
}
This way you can just inject a FakeMyIdentity in a constructor that expects an IIdentityProvider. I found out that this doesn’t sacrifice readability of the tests (which is important).
Of course you want to have as little code as possible in the AspNetIdentityProvider, because you can't test it (automatically). Also make sure that your MyIdentity class doesn't have any dependency on framework-specific parts; if it does, you need to abstract those as well.
I hope this makes sense.
There are two things I'd kinda do differently here...
I'd use a custom IPrincipal object with all the properties required for your authentication needs. Then I'd use that in conjunction with custom cookie creation and the AuthenticateRequest event to avoid database calls on every request.
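A sketch of that idea in Global.asax, assuming the extra properties were serialized into the ticket's UserData at login time (MyIdentity.FromUserData and MyPrincipal are illustrative names):

protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    HttpCookie cookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (cookie == null) return;

    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);

    // Rebuild the custom principal from the cookie instead of
    // hitting the database on every request.
    var identity = MyIdentity.FromUserData(ticket.Name, ticket.UserData);
    HttpContext.Current.User = new MyPrincipal(identity);
}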
If my IPrincipal / Identity were required inside another class, I'd pass it as a method parameter rather than have it as a dependency on the class itself.
When going down this route I use custom model binders, so the identities become parameters to my actions rather than magically appearing inside my action methods.
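For example, a sketch of such a model binder (ASP.NET MVC style; MyIdentity is the questioner's type, the rest is illustrative):

public class MyIdentityModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // Pull the identity off the request rather than injecting it
        // into the controller as a constructor dependency.
        return controllerContext.HttpContext.User.Identity as MyIdentity;
    }
}

// Registered once at startup:
// ModelBinders.Binders.Add(typeof(MyIdentity), new MyIdentityModelBinder());

// The action then just declares a parameter:
// public ActionResult Dashboard(MyIdentity identity) { ... }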
NOTE: This is just the way I've been doing things, so take with a grain of salt.
Sorry, this probably throws up more questions than answers. Feel free to ask more questions about my approach.