Ninject kernel reference in win service - windows-services

I have a simple Windows service that executes a few tasks periodically. How should I pass the Ninject kernel to all my task classes?
Is it a good idea to create a static variable on the base task class and initialize it on service start?

Rather than a static variable on the base task class, I would favor injecting the kernel into each class instance. This provides a bit more flexibility should you ever decide that you need more than one kernel (for whatever reason). The static variable in the base class just seems yucky, for lack of a better term.
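As a rough sketch of that idea (ITask, CleanupTask, IRepository and SqlRepository are hypothetical placeholders, not anything from your service), the kernel is handed to each task through its constructor instead of being read from a static field:
// A minimal sketch of per-instance kernel injection with Ninject.
// ITask, CleanupTask, IRepository and SqlRepository are placeholder types.
using Ninject;

public interface IRepository { void Purge(); }
public class SqlRepository : IRepository { public void Purge() { /* ... */ } }

public interface ITask { void Execute(); }

public class CleanupTask : ITask
{
    private readonly IKernel _kernel;

    // Ninject binds IKernel to itself, so the kernel can be injected here.
    public CleanupTask(IKernel kernel)
    {
        _kernel = kernel;
    }

    public void Execute()
    {
        // Resolve whatever this task needs when it runs.
        var repository = _kernel.Get<IRepository>();
        repository.Purge();
    }
}

// On service start:
// var kernel = new StandardKernel();
// kernel.Bind<IRepository>().To<SqlRepository>();
// kernel.Bind<ITask>().To<CleanupTask>();
// foreach (var task in kernel.GetAll<ITask>()) task.Execute();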

Related

Dependency Injection and/vs Global Singleton

I am new to the dependency injection pattern. I love the idea, but struggle to apply it to my case. I have a singleton object, let's call it X, which I need often in many parts of my program, in many different classes, sometimes deep in the call stack. Usually I would implement this as a globally available singleton. How is this implemented within the DI pattern, specifically with the .NET Core DI container? I understand I need to register X with the DI container as a singleton, but how do I then get access to it? DI will instantiate classes whose constructors take a reference to X, and that's great – but I need X deep within the call hierarchy, in my own objects which .NET Core or the DI container know nothing about, in objects that were created using new rather than instantiated by the DI container.
I guess my question is – how is the global singleton pattern aligned with, implemented by, replaced by, or avoided with the DI pattern?
Well, "new is glue" (Link). That means if you have new'ed an instance, it is glued to your implementation. You cannot easily exchange it with a different implementation, for example a mock for testing. Like gluing together Lego bricks.
If you want to use proper dependency injection (using a container/framework or not), you need to structure your program in a way that you don't glue your components together, but instead inject them.
Every class is basically at hierarchy level 1 then. You need an instance of your logger? You inject it. You need an instance of a class that needs a logger? You inject it. You want to test your logging mechanism? Easy, you just inject something that conforms to your logger interface and logs into a list, and at the end of your test you can check the list and see if all the required logs are there. That is something you can automate (in contrast to using your normal logging mechanism and checking the log files by hand).
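As a rough illustration (the ILogger interface and ListLogger name are just assumptions for this sketch, not a specific framework), such a test logger could look like this:
// A list-backed test logger: it records messages in memory so a test
// can assert on them afterwards.
using System.Collections.Generic;

public interface ILogger
{
    void Log(string message);
}

public class ListLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();

    public void Log(string message) => Messages.Add(message);
}

// In a test you inject ListLogger, run the code under test, and then
// assert that Messages contains the entries you expect.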
That means in the end, you don't really have a hierarchy, because every class you have just gets their dependencies injected and it will be the container/framework or your controlling code that determines what that means for the order of instantiation of objects.
As far as design patterns go, allow me an observation: even now, you don't need a singleton. Right now in your program, it would work if you had a plain global variable. But I guess you read that global variables are "bad". And design patterns are "good". And since you need a global variable and singleton delivers a global variable, why use the "bad", when you can use the "good" right? Well, the problem is, even with a singleton, the global variable is bad. It's a drawback of the pattern, a toad you have to swallow for the singleton logic to work. In your case, you don't need the singleton logic, but you like the taste of toads. So you created a singleton. Don't do that with design patterns. Read them very carefully and make sure you use them for the intended purpose, not because you like their side-effects or because it feels good to use a design pattern.
Just an idea, and maybe I need your thoughts on it:
public static class DependencyResolver
{
    public static Func<IServiceProvider> GetServiceProvider;
}
Then in Startup:
public void Configure(IApplicationBuilder app, IServiceProvider serviceProvider)
{
    DependencyResolver.GetServiceProvider = () => { return serviceProvider; };
}
And now in any deep class:
DependencyResolver.GetServiceProvider().GetService<IService>();
Here's a simplified example of how this would work without a singleton.
This example assumes that your project is built in the following way:
the entry point is main
main creates an instance of class GuiCreator, then calls the method createAndRunGUI()
everything else is handled by that method
So your simplified code looks like this:
// main
// ... (boilerplate)
container = new Container();
gui = new GuiCreator(container.getDatabase(), container.getLogger(), container.getOtherDependency());
gui.createAndRunGUI();
// ... (boilerplate)
// GuiCreator
public class GuiCreator {
    private IDatabase db;
    private ILogger log;
    private IOtherDependency other;

    public GuiCreator(IDatabase newdb, ILogger newlog, IOtherDependency newother) {
        db = newdb;
        log = newlog;
        other = newother;
    }

    public void createAndRunGUI() {
        // do stuff
    }
}
The Container class is where you actually define which implementations will be used, while the GuiCreator constructor takes interfaces as arguments. Now let's say the implementation of ILogger you choose has itself a dependency, defined by an interface its constructor takes as an argument. The Container knows this and resolves it accordingly by instantiating the Logger as new LoggerImplementation(getLoggerDependency());. This goes on for the entire dependency chain.
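To make that concrete, here is a minimal hand-rolled Container in the spirit of the example above; SqlDatabase, OtherDependency, ILoggerDependency and FileLogTarget are hypothetical implementation types, while IDatabase, ILogger and IOtherDependency come from the GuiCreator example.
// A hand-rolled container sketch; only this class knows the concrete types.
public class Container
{
    public IDatabase getDatabase() => new SqlDatabase();

    public ILogger getLogger() =>
        // The logger's own dependency is resolved here, not by the caller.
        new LoggerImplementation(getLoggerDependency());

    public IOtherDependency getOtherDependency() => new OtherDependency();

    private ILoggerDependency getLoggerDependency() => new FileLogTarget();
}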
So in essence:
All classes keep instances of interfaces they depend upon as members.
These members are set in the respective constructor.
The entire dependency chain is thus resolved when the first object is instantiated. Note that there might/should be some lazy loading involved here.
The only places where the container's methods are accessed to create instances are in main and inside the container itself:
Any class used in main receives its dependencies from main's container instance.
Any class not used in main, but rather used only as a dependency, is instantiated by the container and receives its dependencies from within there.
Any class used neither in main nor indirectly as a dependency somewhere below the classes used in main will obviously never be instantiated.
Thus, no class actually needs a reference to the container. In fact, no class needs to know there even is a container in your project. All they know is which interfaces they personally need.
The Container can either be provided by some third party library/framework or you can code it yourself. Typically, it will use some configuration file to determine which implementations are actually supposed to be used for the various interfaces. Third party containers will usually perform some sort of code analysis supported by annotations to "autowire" implementations, so if you go with a ready-made tool, make sure you read up on how that part works because it will generally make your life easier down the road.

The benefits and correct usage of a DI Container

I'm having trouble seeing the advantage of an IoC (DI) container like Ninject, Unity or whatever. I understand the concepts as follows:
DI: Injecting a dependency into the class that requires it (preferably via constructor injection). I totally see why less tight coupling is a good thing.
public class MyClass {
    ISomeService svc;

    public MyClass(ISomeService svc) {
        this.svc = svc;
    }

    public void doSomething() {
        svc.doSomething();
    }
}
Service Locator: When a "container" is used directly inside the class that requires a dependency, to resolve that dependency. I do get the point that this creates another dependency and I also see that basically nothing is actually getting injected.
public class MyClass {
    public MyClass() {}

    public void doSomething() {
        ServiceLocator.resolve<ISomeService>().doSomething();
    }
}
Now, what confuses me is the concept of a "DI container". To me, it looks exactly like a service locator which - as far as I've read - should only be used in the entry point / startup method of an application, to register and resolve the dependencies and inject them into the constructors of other classes - and not within a concrete class that needs the dependency (probably for the same reason why service locators are considered "bad").
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
public void main() {
    DIContainer.register<ISomeService>(new SomeService());
    // ...
    var myclass = new MyClass(DIContainer.resolve<ISomeService>());
    myclass.doSomething();
}
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not) - do you really create them all in the init method just because it's considered good practice?
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
DI containers are supposed to help you create an object graph quickly. You just tell it which concrete implementations you want to use for which abstractions (the registration phase), and then it can create any object you want (the resolve phase).
If you create the dependencies and pass them to the constructor (in the application initialization code), then you are actually doing Pure DI.
I would argue that Pure DI is a better approach in many cases. See my article here
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not) - do you really create them all in the init method just because it's considered good practice?
I would say yes. You should create the object graph when your application starts up. This is called the composition root.
If you need to create objects after your application has started then you should use factories (mainly abstract factories). Such factories will themselves be created with the other objects in the composition root.
Your classes shouldn't do much in the constructor, this will make the cost of creating all the dependencies at the composition root low.
However, I would say that it is OK to create some types of objects using the new keyword in special cases, like when the object is a simple Data Transfer Object (DTO).
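As a rough sketch of the abstract factory idea (IReportFactory, ReportFactory, IReport, Report and App are all hypothetical names for this example), the factory itself is composed at startup, and classes that need to create objects later depend on the factory rather than on the container:
// Hypothetical abstract factory wired up at the composition root.
public interface IReportFactory
{
    IReport Create(string title);
}

public class ReportFactory : IReportFactory
{
    private readonly ISomeService _service;   // resolved once at startup

    public ReportFactory(ISomeService service)
    {
        _service = service;
    }

    public IReport Create(string title)
    {
        return new Report(title, _service);   // created on demand, after startup
    }
}

// Composition root (Pure DI):
// var service = new SomeService();
// var app = new App(new ReportFactory(service));   // App calls Create(...) later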

Simplest explanation of how a DI container works?

In simple terms and/or in high-level pseudo-code, how does a DI container work and how is it used?
At its core a DI Container creates objects based on mappings between interfaces and concrete types.
This will allow you to request an abstract type from the container:
IFoo f = container.Resolve<IFoo>();
This requires that you have previously configured the container to map from IFoo to a concrete class that implements IFoo (for example Foo).
This in itself would not be particularly impressive, but DI Containers do more:
They use Auto-Wiring, which means they can automatically figure out that if IFoo maps to Foo and IBar maps to Bar, and Foo has a dependency on IBar, then it will create a Foo instance with a Bar when you request IFoo.
They manage the lifetime of components. You may want a new instance of Foo every time, but in other cases you might want the same instance. You may even want new instances of Foo every time, but the injected Bar should remain the same instance.
Once you start trying to manually manage composition and lifetimes you should start appreciating the services provided by a DI Container :)
Many DI Containers can do much more than the above, but those are the core services. Most containers offer options for configuring via either code or XML.
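As a small illustration of those two core services (using Microsoft.Extensions.DependencyInjection here purely as an example container; IFoo, Foo, IBar and Bar are placeholder types):
// Auto-wiring and lifetime management sketched with the .NET ServiceCollection.
using Microsoft.Extensions.DependencyInjection;

public interface IBar { }
public class Bar : IBar { }

public interface IFoo { }
public class Foo : IFoo
{
    // The container inspects this constructor and injects a Bar automatically.
    public Foo(IBar bar) { }
}

public static class Demo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IBar, Bar>();   // same Bar instance every time
        services.AddTransient<IFoo, Foo>();   // a new Foo per resolution

        using var provider = services.BuildServiceProvider();
        IFoo f = provider.GetRequiredService<IFoo>();
    }
}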
When it comes to proper usage of containers, Krzysztof Kozmic just published a good overview.
"It's nothing more than a fancy hash table of objects."
While the above is a massive understatement, it's an easy way of thinking about them. Given the collection, if you ask for an instance of a class, the DI container will decide whether to give you a cached instance or a new one, and so on.
Their usage makes it easier and cleaner when it comes to wiring up dependencies. Imagine you have the following pseudo classes.
class House(Kitchen, Bedroom)
    // Use kitchen and bedroom.
end

class Kitchen()
    // Code...
end

class Bedroom()
    // Code...
end
Constructing a house is a pain without a DI container: you would need to create an instance of a bedroom, followed by an instance of a kitchen. If those objects had dependencies too, you would need to wire them up as well. In turn, you can spend many lines of code just wiring up objects. Only then could you create a valid house. Using a DI/IoC (Inversion of Control) container, you say you want a house object, and the container will recursively create each of its dependencies and return you a house.
Without DI/IOC Container:
house = new House(new Kitchen(), new Bedroom());
With DI/IOC Container:
house = // some method of getting the house
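For instance, with the hypothetical DIContainer API used earlier in this thread (this is a sketch only, not a specific library), "some method of getting the house" might look roughly like this:
// The container recursively builds the Kitchen and Bedroom before the House.
DIContainer.register<Kitchen>();
DIContainer.register<Bedroom>();
DIContainer.register<House>();

var house = DIContainer.resolve<House>();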
At the end of the day, they make code easier to follow, easier to write, and they shift the responsibility of wiring objects together away from the problem at hand.
You configure a DI container so it knows about your interfaces and types - how each interface maps to a type.
When you call Resolve on it, it looks at the mapping and returns the mapped object you have requested.
Some DI containers use conventions over configuration, where for example, if you define an interface ISomething, it will look for a concrete Something type to instantiate and return.
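As a hedged example of convention-based registration (using Autofac assembly scanning, since Autofac comes up later in this thread; ISomething/Something are placeholders), every type can be registered against the interfaces it implements, so ISomething resolves to Something without an explicit mapping:
// Convention over configuration sketched with Autofac assembly scanning.
using System.Reflection;
using Autofac;

public static class ConventionDemo
{
    public static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .AsImplementedInterfaces();

        var container = builder.Build();
        // var something = container.Resolve<ISomething>();
    }
}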

What is the best strategy for Dependency Injection of User Input?

I've used a fair amount of dependency injection, but I'd like to get input on how to handle information from the user at runtime.
I have a class that connects to a com port. I allow the user to select the com port number. Right now, I have that com port parameter as a constructor argument. The reasoning being that the class cannot function without that information, and it's implementation specific (a mock version of this class wouldn't need a com port).
The alternative is to have a "Start" method that takes in the com port, or have a property that sets the com port. This makes it very compatible with an IoC container, but it doesn't necessarily make sense from the perspective of the class.
It seems like the logical route conflicts with the dependency injection design, but it's because my UI is getting information for a specific type of class.
Other alternatives would include using an IoC container that lets me pass in additional constructor parameters, or just constructing the classes I need at the top level without using dependency injection.
Is there a generally accepted standard pattern for this type of problem?
There are two routes you can take, depending on your needs.
1. Wire the UI directly to your concrete classes
This is the simplest option, but many times perfectly acceptable. While you may have a Domain Model with lots of interfaces and use of DI, the UI constitutes the Composition Root of the object graphs, and you could simply wire up your concrete class here, including your required port number parameter.
The upside is that this approach is simple and easy to understand and implement.
The downside is that you get less flexibility. You will not be able to arbitrarily replace one implementation with another (but then again, you may not need that flexibility).
Even with the UI locked to a concrete implementation, this doesn't mean that the Domain Model itself wouldn't be reusable in other applications.
2. Add an Abstract Factory
The other option is to add another layer of indirection. Instead of having your UI create the class directly, it could use an Abstract Factory to create the instance.
The factory's Create method could take the port number as an input, so this abstraction belongs best in a UI sub-layer.
public abstract class MyFactory
{
    public abstract IMyInterface Create(int portNumber);
}
You could then have your DI container wire up an implementation of this factory that uses the port number and passes it as a constructor argument to your real implementation. Other factory implementations may simply ignore the parameter.
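A possible implementation (SerialPortFactory and SerialPortConnection are made-up names for this sketch) would simply forward the user-supplied port number:
// Hypothetical factory implementation that forwards the port number to the
// real class's constructor; a test double could ignore the parameter.
public class SerialPortFactory : MyFactory
{
    public override IMyInterface Create(int portNumber)
    {
        return new SerialPortConnection(portNumber);
    }
}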
The advantage of this approach is that you don't pollute your API (or your concrete implementations), and you still have the flexibility that programming to interfaces gives you.
The disadvantage is that it adds yet another layer of indirection.
Most IoC containers have some form of Constructor Injection that would allow your IoC container to pass a mocked COM port into your class for unit testing. That seems like the most clean solution.
I would avoid adding a "Start" method, etc. It's much better practice to (when possible) always have your classes in a valid state, and switching to a parameterless constructor with a Start method leaves your class invalid between those calls. Changing the design like that just to enable testing makes your class more awkward to use, when testing should be making it nicer.

Why not pass your IoC container around?

On this AutoFac "Best Practices" page (http://code.google.com/p/autofac/wiki/BestPractices), they say:
Don't Pass the Container Around
Giving components access to the container, or storing it in a public static property, or making functions like Resolve() available on a global 'IoC' class defeats the purpose of using dependency injection. Such designs have more in common with the Service Locator pattern.
If components have a dependency on the container, look at how they're using the container to retrieve services, and add those services to the component's (dependency injected) constructor arguments instead.
So what would be a better way to have one component "dynamically" instantiate another? Their second paragraph doesn't cover the case where the component that "may" need to be created will depend on the state of the system. Or when component A needs to create X number of component B.
To abstract away the instantiation of another component, you can use the Factory pattern:
public interface IComponentBFactory
{
    IComponentB CreateComponentB();
}

public class ComponentA : IComponentA
{
    private IComponentBFactory _componentBFactory;

    public ComponentA(IComponentBFactory componentBFactory)
    {
        _componentBFactory = componentBFactory;
    }

    public void Foo()
    {
        var componentB = _componentBFactory.CreateComponentB();
        // ...
    }
}
Then the implementation can be registered with the IoC container.
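With Autofac (the container the question refers to), that registration might look roughly like this, assuming ComponentBFactory and ComponentB implementations exist:
// Sketch of registering the factory and its consumer with Autofac.
using Autofac;

var builder = new ContainerBuilder();
builder.RegisterType<ComponentBFactory>().As<IComponentBFactory>();
builder.RegisterType<ComponentB>().As<IComponentB>();
builder.RegisterType<ComponentA>().As<IComponentA>();
var container = builder.Build();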
A container is one way of assembling an object graph, but it certainly isn't the only way. It is an implementation detail. Keeping the objects free of this knowledge decouples them from infrastructure concerns. It also keeps them from having to know which version of a dependency to resolve.
Autofac actually has some special functionality for exactly this scenario - the details are on the wiki here: http://code.google.com/p/autofac/wiki/DelegateFactories.
In essence, if A needs to create multiple instances of B, A can take a dependency on Func<B> and Autofac will generate an implementation that returns new Bs out of the container.
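A small sketch of that (the A and B types are placeholders; the Func<B> relationship itself is standard Autofac behaviour):
// Autofac generates the Func<B> automatically once B is registered.
using System;
using Autofac;

public class B { }

public class A
{
    private readonly Func<B> _createB;

    public A(Func<B> createB)
    {
        _createB = createB;
    }

    public void CreateSeveral(int count)
    {
        for (var i = 0; i < count; i++)
        {
            var b = _createB();   // each call pulls a fresh B from the container
            // ... use b ...
        }
    }
}

// var builder = new ContainerBuilder();
// builder.RegisterType<B>();
// builder.RegisterType<A>();
// using var container = builder.Build();
// container.Resolve<A>().CreateSeveral(3);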
The other suggestions above are of course valid - Autofac's approach has a couple of differences:
It avoids the need for a large number of factory interfaces
B (the product of the factory) can still have dependencies injected by the container
Hope this helps!
Nick
An IoC container takes responsibility for determining which version of a dependency a given object should use. This is useful for doing things like creating chains of objects that implement an interface as well as having a dependency on that interface (similar to a chain of command or decorator pattern).
By passing your container, you are putting the onus on the individual object to get the appropriate dependency, so it has to know how to. With typical IoC usage, the object only needs to declare that it has a dependency, not think about selecting between multiple available implementations of that dependency.
Service Locator patterns are more difficult to test and it certainly is more difficult to control dependencies, which may lead to more coupling in your system than you really want.
If you really want something like lazy instantiation, you may still opt for the Service Locator style (it doesn't kill you straight away, and if you stick to the container's interface it is not too hard to test with some mocking framework). Bear in mind, though, that the instantiation of a class that doesn't do much (or anything) in the constructor is immensely cheap.
The containers I have come to know (not Autofac so far) will let you modify which dependencies should be injected into which instance depending on the state of the system, so that even those decisions can be externalized into the configuration of the container.
This can provide you plenty of flexibility without resorting to implementing interaction with the container based on some state you access in the instance consuming dependencies.
