Resources and Examples of using MEF for DI/IoC - dependency-injection

I've searched high and low for samples of using MEF for DI. I know it's not a DI container, but from what I hear (really hear, in podcasts) it can be used as one... yet I can't find any blog posts or samples.
I am using MEF in this project already (to support plugins) and thought it would be nice to leverage it for DI as well.
Maybe I am barking up the wrong tree?

This can be described by an example. For instance, let's say you have a core library that you base all your bespoke applications on. Call it MyCompany.Core. Normally, every application you write has to contain a reference to MyCompany.Core, and then the application has to take care of bootstrapping and calling into MyCompany.Core to start the appropriate services, etc., in the correct order. This doesn't make much sense when you consider that the core itself probably knows better how it's supposed to be started up, etc.
To use MEF for dependency injection, your core would do this:
[Import("/Application", typeof(IBespokeApplication))]
private IBespokeApplication bespokeApplication;
The core itself would contain the application startup code, and might call something like this once it had started up all of its services:
bespokeApplication.Start();
In the bespoke application, you have to export yourself:
[Export("/Application", typeof(IBespokeApplication))]
public class MyApplication : IBespokeApplication
{
public void Start()
{
/* start app */
}
}
Now the bespoke application could contain a direct reference to MyCompany.Core, and could call services directly, or you could even expose the services as Exports and Import them into the application. For instance, in the core:
[Export("/LoggingService", typeof(ILoggingService))]
public class NLogLoggingService : ILoggingService
{
/* ... */
}
Then in the bespoke application:
[Import("/LoggingService", typeof(ILoggingService))]
private ILoggingService loggingService;
...and when you want to use it:
loggingService.LogInformation("My Message");
As far as I can tell from the literature, that's the essence of dependency injection.
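What the snippets above leave implicit is how the Imports actually get satisfied. Here is a minimal sketch of the bootstrap the core might run (CoreHost is a hypothetical stand-in for MyCompany.Core's entry point; the MEF types are standard System.ComponentModel.Composition):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// build a catalog covering both the core and the bespoke application assemblies
var catalog = new AggregateCatalog(
    new AssemblyCatalog(typeof(CoreHost).Assembly),       // exports e.g. NLogLoggingService
    new AssemblyCatalog(typeof(MyApplication).Assembly)); // exports MyApplication
var container = new CompositionContainer(catalog);

// satisfies [Import("/Application", typeof(IBespokeApplication))] on the host
var host = new CoreHost();
container.ComposeParts(host);
host.Run(); // the core starts its services, then calls bespokeApplication.Start()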

Related

Configuring mapping behavior that's non-framework specific

I'm trying to configure my Object Mapper without knowing which mapper I'm using. :/
This might sound a bit strange. The reason is that I'm trying out the Onion Architecture, so my UI cannot know about my Object Mapper, which is located in my Infrastructure. See this solution for an example.
I'm having some trouble figuring out how I should "delegate" the non-default mapping behavior.
Stuff like:
Mapper
    .CreateMap<MyModel, MyDestViewModel>()
    .ForMember(
        dest => dest.SomeDestinationProperty,
        opt => opt.MapFrom(src => src.SomeSourceProperty)
    );
I've set up a class in my MVC project which is called from Global.asax, and this is where I want to configure my mappings.
public static class MapConfig
{
    public static void RegisterMaps()
    {
    }
}
I was thinking I could do something like the following (IMapper is a self-defined interface located in Domain):
public static void RegisterMaps(HttpConfiguration config)
{
    // GetService takes a Type and returns object, so resolve by typeof and cast
    var mapper = (IMapper)config.DependencyResolver.GetService(typeof(IMapper));
    mapper.CreateMap<MyModel, MyViewModel>();
}
Now... how would I go about setting up special behavior like .ForMember? Keep in mind that it cannot be AutoMapper-specific.
I was thinking something along the lines of mapper.CreateMap<MyModel, MyViewModel>(Expression<Func<T>>), where the Func would do some black magic that I cannot figure out right now :( - Am I on the right path, or have I missed something essential?
Onion Architecture isn't about the configuration being implementation-agnostic; it's about the execution.
Just create an IMapper interface for the execution of mappings, but don't worry about the configuration. This applies to your ORM, IoC container, and everything else.
Also, Onion Architecture isn't about project structure; it's about the direction of your dependencies. Just call CreateMap in your UI. You can then define an IMapper interface all the way down in Core, with the other pieces implementing a version that delegates to AutoMapper.
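For example, a minimal sketch of that split (names are illustrative; the adapter assumes the static Mapper API from the AutoMapper version used in the question):

// in Core (or Domain): execution-only abstraction
public interface IMapper
{
    TDestination Map<TDestination>(object source);
}

// in Infrastructure: thin adapter that delegates to AutoMapper
public class AutoMapperAdapter : IMapper
{
    public TDestination Map<TDestination>(object source)
    {
        return AutoMapper.Mapper.Map<TDestination>(source);
    }
}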
You're abstracting away useful functionality that will cost you more time than you initially realize. Why not spend the time choosing a mapper and sticking with it?
Why is it so important that your UI doesn't know about your mapper? Assuming you are using MVC, you are going to be flexing a lot of your chosen mapper's functionality to flatten out your domain models into view models anyway.
It's the same kind of nonsense as when people use generic repository implementations 'just in case' they decide to switch ORM mid-project.
Choose your infrastructure carefully and stick with it.

ServiceManager Advice

I'm simply looking for advice on the best way I should handle this situation.
Right now I've got several files in a folder called Service. The files contain several functions which, of course, do random things. Each of these files needs access to the SM Adapter.
My question is: should I implement the ServiceManagerAwareInterface in each of these classes, OR should I make a new class which implements the ServiceManagerAwareInterface and have my classes extend it?
Both ways work as they should; I'm just not sure which would be more proper.
If you think that your system will always rely on ZF2, both approaches are equivalent.
Now, from an OO design perspective, I personally prefer the approach in which you extend your service and implement the ServiceManagerAwareInterface. I would even use an interface of my own for the dependency, rather than the ServiceLocator, to protect my classes even more. Why?
Extending your classes does not cost you much, and the same goes for making your classes depend on interfaces.
Let's take an example. Imagine you did not use this approach during a ZF1 project, in which you probably resolved your dependencies with Zend_Registry.
Now, let's assume you moved to a ZF2 implementation: how much time do you think you'll spend refactoring your Service layer from something like Zend_Registry::get($serviceX) to $this->getServiceManager()->get($serviceX)?
Now assume you had made the choice of protecting your classes, first by creating your own service locator interface, as simple as:
interface MyOwnServiceLocatorInterface
{
    public function get($service);
}
Under ZF1, you would have created an adapter class using Zend_Registry:
class MyZF1ServiceLocator implements MyOwnServiceLocatorInterface
{
    public function get($service)
    {
        return Zend_Registry::get($service);
    }
}
Your Service classes are not coupled to Zend_Registry, which makes the refactoring much easier.
Now you decide to move to ZF2, so you'll logically use the ServiceManager. You then create this new adapter class:
class MyZF2ServiceLocator implements ServiceManagerAwareInterface, MyOwnServiceLocatorInterface
{
    private $_sm;

    public function get($service)
    {
        return $this->_sm->get($service);
    }

    public function setServiceManager(ServiceManager $serviceManager)
    {
        $this->_sm = $serviceManager;
    }
}
Again, your Service classes are not coupled to the ZF2 ServiceManager.
Now, what would the configuration/registration of your Service layer on the ServiceManager look like? You'll use your module's getServiceConfig() method for that:
// Module.php
public function getServiceConfig()
{
    return array(
        'factories' => array(
            'My\ServiceA' => function ($sm) {
                return new My\ServiceA($sm->get('My\Service\Name\Space\MyZF2ServiceLocator'));
            },
            // some other config
        ),
    );
}
As you can see, no refactoring is needed within your Service classes, since we protected them by relying on an interface and using adapters. And because we used a closure factory, we don't even need to extend our Service classes or implement the ServiceLocatorAwareInterface.
Before concluding, I should note that my example does not treat the case in which classes are constructed via factories; however, you can check one of my previous answers that addresses the factory topic as well as the importance of loose coupling among an application's layers.
You can add initializers to do that; they reduce the repetitive injection involved in getting a service that needs the db adapter. Or you can set up abstract_factories, which reduces repetitive SM registration. I just posted an SM cheat sheet here; hope it's helpful :)
https://samsonasik.wordpress.com/2013/01/02/zend-framework-2-cheat-sheet-service-manager/

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and having read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new parts and some select refactored old parts of the code could have dependencies injected through the constructor, with the rest of the system using the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service locator call at some point into code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any external place). Each separate type of scheduled job is implemented as a class inheriting from a common base class. When a job is started, it gets fed information about which targets it should write its log messages to and the configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whichever class needs them, but if the job implementation grows beyond, say, 10-20 classes, that doesn't seem very handy.
Logging is the larger problem. The subsystems the job calls probably also need to write to the log, and usually in examples this is done by just requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details/implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (and thus there potentially being multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase
{
    // through constructor injection
    private readonly ILoggerFactory _loggerFactory;
    private readonly JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);
        // if there were no legacy parts in the call chain, I would register log as an
        // instance in a child container and Resolve the next part of the call chain,
        // and everyone requesting ILog would get the correct logging targets
        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic
{
    public void DoStuff(JobConfig configurationFromDatabase, ILog log)
    {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass
{
    public void DoOldStuff()
    {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass
{
    public void DoMoreOldStuff()
    {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint
{
    public void DoNewStuff()
    {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating the old classes through DI is a non-starter, since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically controls the lifetime of the object implicitly, and the implementations assume this (in the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
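As a sketch of that stepping stone applied to something like OldClass above (ILog is the question's abstraction; NullLog and the Write method are hypothetical stand-ins):

class OldClass
{
    private ILog _log = new NullLog(); // harmless no-op default for un-refactored call sites

    // setter injection: refactored callers overwrite the default with the
    // job-scoped logger created from the ILoggerFactory
    public ILog Log
    {
        get { return _log; }
        set { _log = value; }
    }

    public void DoOldStuff()
    {
        _log.Write("moar stuff"); // logs to whatever targets the caller injected
    }
}

// refactored caller:
var old = new OldClass { Log = log };
old.DoOldStuff();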

How to organize DI Framework usage in an application?

EDIT: I forgot to move the kernel into a non-generic parent class here and supply a virtual method to access it. I do realize that the example below, as is, would create a plethora of kernel instances.
I just learned how to do injection this past week and here's how I've got things set up currently:
using Ninject;
using System.Reflection;

namespace Infrastructure
{
    public static class Inject<T>
    {
        static bool b = Bootstrap();
        static IKernel kernel;

        static bool Bootstrap()
        {
            kernel = new StandardKernel();
            kernel.Load(Assembly.GetExecutingAssembly());
            return true;
        }

        public static T New() { return kernel.Get<T>(); }
    }
}
And then I plan to make the various Ninject module classes part of the Infrastructure namespace so that this will load them.
I haven't been able to find anything on here or Google that gives examples of how to actually organize the usage of Ninject in your project, but this seems right to me, as it allows me to need the Ninject reference only in this assembly. Is this more or less 'correct', or is there a better design?
There are a few problems with how you are doing things now.
Let me first start with the obvious C# problem: static class variables in generic classes are shared on a per-T basis. In other words, Inject<IUserRepository> and Inject<IOrderRepository> will each get their own IKernel instance, which is unlikely to be what you really want, since you most likely need a single IKernel for the lifetime of your application. When you don't have a single IKernel for the application, there is no way to register types as singletons, since a singleton is always scoped at the container level, not at the application level. So you'd better rewrite the class as non-generic and move the generic type argument to the method:
Inject.New<T>()
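A minimal sketch of that rewrite (same Ninject calls as the original, but with one kernel shared by every resolution):

using Ninject;
using System.Reflection;

namespace Infrastructure
{
    public static class Inject
    {
        private static readonly IKernel Kernel = CreateKernel();

        private static IKernel CreateKernel()
        {
            var kernel = new StandardKernel();
            kernel.Load(Assembly.GetExecutingAssembly()); // still loads all modules in this assembly
            return kernel;
        }

        public static T New<T>() { return Kernel.Get<T>(); }
    }
}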
The second problem concerns dependency injection. It seems to me you are trying to use the Service Locator anti-pattern, since you are probably calling Inject.New<T> explicitly from within your application. A DI container should only be referenced in the start-up path of the application and should be able to construct a complete object graph of related objects. This way you can ask the container to get a root-level object for you (for instance a Controller in the context of MVC), and the rest of the application will be oblivious to the use of any DI technology. When you do this, there is no need to abstract the use of the container away (as you did with your Inject class).
Not all application or UI technologies allow this BTW. I tend to hide my container (just as you are doing) when working with a Web Forms application, because it is impossible to do proper dependency injection on Page classes, IHttpHandler objects, and UserControl classes.

Which dependencies should I inject?

When using dependency injection which dependencies do you inject?
I have previously injected all dependencies, but have found that when doing TDD there are typically two types of dependency:
Those which are genuine external dependencies which may change e.g. ProductRepository
Those which exist purely for testability e.g. Part of the behaviour of the class that has been extracted and injected just for testability
One approach is to inject ALL dependencies like this
public ClassWithExternalDependency(IExternalDependency external,
    IExtractedForTestabilityDependency internalDependency) // 'internal' is a reserved keyword, so renamed
{
    // assign dependencies ...
}
but I've found this can cause dependency bloat in the DI registry.
Another approach is to hide the "testability dependency" like this
public ClassWithExternalDependency(IExternalDependency external)
    : this(external, new ConcreteClassOfInternalDependency())
{ }

internal ClassWithExternalDependency(IExternalDependency external,
    IExtractedForTestabilityDependency internalDependency)
{
    // assign dependencies ...
}
This is more effort but seems to make a lot more sense. The downside is that not all objects are configured in the DI framework, thereby breaking a "best practice" that I've heard of.
Which approach would you advocate and why?
I believe you're better off injecting all of your dependencies. If it starts to get a little unwieldy, that's probably an indication that you need to simplify things a bit or move the dependencies into another object. Feeling the "pain" of your design as you go can be really enlightening.
As for dependency bloat in the registry, you might consider using some sort of convention-based binding technique rather than registering each dependency by hand. Some IoC containers have convention-based type-scanning bindings built into them. For example, here's part of a module I use in a Caliburn WPF application that uses Ninject:
public class AppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IShellPresenter>().To<ShellPresenter>().InSingletonScope();
        BindAllResults();
        BindAllPresenters();
    }

    /// <summary>
    /// Automatically bind all presenters that haven't already been manually bound
    /// </summary>
    public void BindAllPresenters()
    {
        Type[] types = Assembly.GetExecutingAssembly().GetTypes();
        IEnumerable<Type> presenterImplementors =
            from t in types
            where !t.IsInterface
                  && t.Name.EndsWith("Presenter")
            select t;

        // Run is an IEnumerable extension that invokes the action for each element
        presenterImplementors.Run(
            implementationType =>
            {
                if (!Kernel.GetBindings(implementationType).Any())
                    Bind(implementationType).ToSelf();
            });
    }
}
Even though I have dozens of results and presenters running around, I don't have to register them explicitly.
I certainly wouldn't inject all dependencies, because where do you stop? Do you want to inject your string dependencies? I only invert the dependencies that I need for unit testing. I want to stub my database (see this example, for instance). I want to stub the sending of e-mail messages. I want to stub the system clock. I want to stub writing to the file system.
The thing about inverting as many dependencies as you can, even those you don't need for testing, is that it makes unit testing a lot harder, and the more you stub out, the less you really test how the system actually behaves. That makes your tests much less reliable. It also complicates your DI configuration in the application root.
I would wire all my non-external dependencies by hand and "register" only the external dependencies. When I say non-external, I mean the objects which belong to my component and which were extracted out to interfaces just for the sake of single responsibility/testability; I would never have any other implementations of such interfaces. External dependencies are things like DB connections, web services, and interfaces which don't belong to my component. I would register those as interfaces because their implementations can be switched to stubs for integration testing. Having a small number of components registered in a DI container makes the DI code easier to read and free of bloat.
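As a rough sketch of that split (all types here are hypothetical, for illustration):

// Composition root: internals are wired by hand; only the true external
// dependency sits behind an interface.
IProductRepository repository = new SqlProductRepository("connection string"); // external: stubbed in integration testing
var service = new ProductService(
    repository,               // registered/injected as an interface
    new PriceCalculator());   // internal helper: concrete, never swapped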
