Composing a workflow using IoC - dependency-injection

I am building a WPF application that uses an IoC container for dependency injection (MEF in my case). The application contains several detailed processes that we are modeling as WF workflows. However, some (not all) of the activities rely on services and other components that are managed by the IoC container. I see a few possible ways to accomplish this, but none of them seems to follow best practices. They are:
1. Use a service locator in the constructor or Execute method of each activity to locate and set the dependencies. Personally, I don't like service locators, as I believe they violate one of the tenets of DI: code shouldn't know where or how a dependency is created. It also makes the activities less testable (or at least adds a couple of steps to the testing process). I've seen some examples on StackOverflow and CodePlex that use a WF Services extension that basically works the same way. I'm not using WF Services, so that isn't an option.
2. Export each of the activities and have the workflow import them. This would ensure that the container has satisfied all of the dependencies before we need them, but it means we aren't building the workflow in XAML.
3. Export the workflow and have it import the dependencies needed by the activities. I would then have to set the dependencies as parameters for the activities to consume. Not only will this result in a lot of overhead code in the workflow, but the workflow now requires knowledge of the dependencies of all of the activities. If an activity is changed, added, or removed, I have to change the workflow to accommodate any changes to the dependencies.
4. Take the same approach as #3, except instead of exporting the workflow, have a controlling class that is exported, imports all of the dependencies, and sets them as input parameters for the workflow itself. Each activity would pull the dependencies it needs. This has all of the same problems as #3, with more code to maintain.
So, my question is, what approach should I take? (I.e. what approach have you taken?)
I am also assuming the above list is not comprehensive and hope someone will suggest a better option, if one exists.
Thx!

Approach two seems the most suitable. You could use an activity declaration in XAML that is later used to import the real activity.
EDIT:
<wf:Workflow.Activities>
    <activities:PassThrough UserId="mstewart" />
</wf:Workflow.Activities>
and then you could have something along these lines:
using System.ComponentModel.Composition;   // MEF's [Export]
using Microsoft.Practices.ServiceLocation; // Common Service Locator

// Metadata type declared in XAML; it knows how to import the real activity.
interface IActivityInfo
{
    IActivity ImportActivity();
}
// Non-generic base so that IActivityInfo can return any activity.
interface IActivity
{
    IActivityInfo Info { get; }
}
interface IActivity<TActivityInfo> : IActivity where TActivityInfo : IActivityInfo
{
}
// Declared in the XAML above; resolves its activity from the container.
class PassThrough : IActivityInfo
{
    public IActivity ImportActivity()
    {
        return ServiceLocator.Current.GetInstance<IActivity<PassThrough>>();
    }
}
[Export(typeof(IActivity<PassThrough>))]
class PassThroughActivity : IActivity<PassThrough>
{
    public IActivityInfo Info
    {
        get { return new PassThrough(); }
    }
}
This approach lets you cleanly separate the XAML design process from the underlying activity implementations.
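To illustrate how the host side might consume this, here's a minimal sketch (workflowDefinition.Activities and the surrounding composition are my assumptions, not part of the approach above):
// Hypothetical host-side code: each declared activity info imports its
// container-managed implementation just before the workflow runs.
foreach (IActivityInfo info in workflowDefinition.Activities)
{
    IActivity activity = info.ImportActivity();
    // ... hand the resolved activity to the workflow runtime ...
}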

Related

Why doesn't Simple Injector have an IContainer abstraction like Unity?

I have used Unity for my last project and was generally pleased. But benchmarks have me thinking I may go with Simple Injector for my next project.
However, Simple Injector does not seem to have an interface for its Container class. This means that anytime I want to use the container in a method, I cannot mock the container for unit testing.
I am confused how a tool that functions based on interfaces would not itself provide an interface for the container. I know that the classic methods of dependency injection do not need the container anywhere other than at startup. (The rest uses constructor injection.) But I have found that when the rubber hits the road that cannot always be true. Sometimes you just need the container in order to do a "resolve" in the code.
If I go with Simple Injector, that code seems to get harder to unit test.
Am I right? Or am I missing something?
Simple Injector does not contain an IContainer abstraction, because:
1. It would be useless for Simple Injector to define it, because depending on IContainer instead of Container would still couple your code to Simple Injector; that causes vendor lock-in, which Simple Injector tries to prevent.
2. Any code you write, apart from the application's Composition Root, should not depend on the container, nor on an abstraction over the container. Both are implementations of the Service Locator anti-pattern.
3. You should NOT use a DI library when unit testing. When unit testing, you should manually inject all fake or mock objects into the class under test. Using a container only complicates things. Perhaps you are using a container because manually creating those classes is too cumbersome for you. This might indicate problems with your code (you might be violating the Single Responsibility Principle) or your tests (you might be missing a factory method to create the class under test).
4. You might use the container for your integration tests, but you shouldn't have that many integration tests in the first place. The focus should be on unit tests, and this should be easy when applying the dependency injection pattern. On top of that, there are better ways of hiding the container from your integration tests than depending on a very wide library-defined interface.
5. It is trivial to define such an interface (plus an adapter) yourself (see the sketch after this list), which justifies not having it in the library. It is your job as application developer to define the right abstractions for your application, as stated by the Dependency Inversion Principle. Libraries and frameworks that try to do this for you fail most of the time to provide an abstraction that works for everyone.
6. The library itself does not use that abstraction, and according to the Framework Design Guidelines a library should not define such an abstraction in that case. As stated in the previous point, Simple Injector would get the abstraction wrong anyway.
7. Last but not least, the Simple Injector container actually implements System.IServiceProvider, which is defined in mscorlib.dll and can be used for retrieving service objects.
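As a minimal sketch of point 5 (the IServiceFactory name and the adapter are illustrative assumptions, not an API that Simple Injector ships):
using SimpleInjector;

// An abstraction the application itself owns; nothing outside the
// Composition Root ever sees Simple Injector directly.
public interface IServiceFactory
{
    T GetService<T>() where T : class;
}

// A thin adapter over Simple Injector's Container, wired up inside the
// Composition Root only.
public sealed class SimpleInjectorServiceFactory : IServiceFactory
{
    private readonly Container container;

    public SimpleInjectorServiceFactory(Container container)
    {
        this.container = container;
    }

    public T GetService<T>() where T : class
    {
        return this.container.GetInstance<T>();
    }
}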
I think the answer given here is entirely founded upon accepting that ServiceLocator is an anti-pattern, which in turn I don't believe is globally accepted as true. See Windows Workflow Foundation's Extensions support.
The anti-pattern link (and its two updates) may also be weak... the latest update claims a violation of encapsulation ("relieving you of the burden of having to understand every implementation detail of every piece of code in your code base") while at the same time claiming that up-front knowledge of dependencies is somehow different from discovering them via unit tests. Either way, you're going to need to know what to give it.
All in all, if you want to follow the Locator pattern, either leverage its IServiceProvider, or simplify your container population (to a singleton) and create a static wrapper for it.

Inversion of control - XML or Fluent API?

I've just started using Inversion of Control containers and I'm having a difficult time understanding when to use the fluent API or XML when configuring and registering components.
Are there any best practices around when you should prefer one over the other? Or is this simply developer preference? Would it be considered bad practice to mix them in a simple application?
Thanks!
The sweet spot for me is a combination of the two. XML for pulling together large units of functionality and possibly configuring them at deployment time; fluent for setting up the individual components within those units. See http://code.google.com/p/autofac/wiki/StructuringWithModules for an Autofac example. Other containers often have similar capabilities.
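For instance, Autofac's "large units" are modules; a minimal sketch (the module and component names are illustrative):
using Autofac;

// A module groups the fluent registrations for one unit of functionality;
// deployment-time XML can then choose which modules to load and how to
// configure them.
public class MessagingModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
        builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
    }
}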
Container configuration in code (what you call "the fluent API") is more maintainable, because the code is compiled and therefore the compiler will find many errors for you. However, it requires you to recompile if you want to make changes.
Container configuration in XML is the opposite: the compiler cannot help you find errors, but you can make changes without recompiling.
If you are just starting out with dependency injection, I would stick to configuration in code to keep things simple.
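For contrast, here is roughly what the XML side can look like (Unity 2.x schema, shown from memory purely as an illustration; element names vary by container):
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <container>
    <!-- Swappable at deployment time without recompiling, but a typo in
         either type name only surfaces at runtime. -->
    <register type="MyApp.ILogger, MyApp" mapTo="MyApp.ConsoleLogger, MyApp" />
  </container>
</unity>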
I noticed that the other answers don't touch on a very important problem that fluent configuration brings to your DI implementation: it hard-wires concrete implementations into your code base, creating tightly coupled dependencies, which ironically is what IoC containers are trying to solve. You will find yourself contaminating all your applications with component registration code, and that doesn't sound good.
Why doesn't it sound good? Because when you have a distributed system with many applications that use different components you basically have two options to register your components:
1. Add DI code to all your applications.
Something like this...
using Autofac;

// Concrete types are named directly in application code.
var builder = new ContainerBuilder();
builder.RegisterType<ConsoleLogger>().As<ILogger>();
builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
//many more...
and of course, ALL the applications in the system will need this sort of logic with different components.
Advantage
You're using Fluent Configuration because it is pretty...that's about it :)
Disadvantage
You are contaminating all your applications with this fluent component registration, including CONCRETE types
You will need to recompile your application(s) if you need to swap out a dependency/component
If you need to migrate to a different IoC container tool, the same applies
2. Wrap your DI code inside a wrapper component to make your applications IoC-container-agnostic.
Something like this...
using Autofac;

public sealed class ComponentFactory
{
    private static IContainer _container;

    // Called once at application startup.
    public static void Init()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ConsoleLogger>().As<ILogger>();
        builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
        builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
        //many more...
        _container = builder.Build();
    }

    // Applications resolve through the wrapper and never reference Autofac directly.
    public static T Resolve<T>()
    {
        return _container.Resolve<T>();
    }
}
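Consumers then resolve everything through the wrapper, along these lines:
ComponentFactory.Init(); // once, at application startup
var logger = ComponentFactory.Resolve<ILogger>();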
With this approach your applications won't effectively be aware of what concrete classes they're using at all.
You don't need to re-compile and redeploy all applications if you need to swap components.
Furthermore, if you need to migrate to a different IoC Container, the changes will be done in one place...the wrapper class.
The huge disadvantage of this approach is that you will have to deploy all components to every application's bin folder, even components (or binary assemblies) an application might not need. So, your lightweight applications will become fat, heavy clients.
XML (or JSON) Configuration
With XML configuration, none of the above happens:
you deploy what your applications need...on a per-application basis
you register the components your applications need...on a per-app basis
your applications are light and have no idea what concrete implementations they're using. They become very scalable
there's more but hopefully you're getting the idea
Now, I'm not an advocate of XML configuration; I hate it. I'm a coder and prefer the fluent way over XML (or JSON), but in my experience XML/JSON configuration proves to be more scalable and maintainable.
Now, shoot me!!!

Code re-use between Grails project - keeping it DRY

The Grails framework has a lot of constructs/features that allow for adhering to the DRY principle ("don't repeat yourself") within a project. That is, within a specific project you're seldom required to repeat identical blocks of settings or code. So far so good.
However, the more I've worked with Grails, the more I've observed that I repeat code not within the same project but between projects. That is, project A has controllers, GSPs, and images that overlap with project B. This is a maintenance nightmare, since bug fixes in project A must also be applied in project B, etc.
I'd like to take DRY to the next level by not duplicating code between my projects.
My question: How do you tackle this problem (violated inter-project DRY) in your own internal Grails projects?
Please be very specific/concrete. If possible try to include specific code examples on how you solve it in practice.
Writing a custom plugin is the best way. You don't need to release it to the public repository, as you can use a private repository somewhere within your own network.
I haven't had enough duplicated code yet to pull out a plugin (most of the code repeated in my projects seem to be covered by the various public plugins), but a plugin can be as simple as a few common domain classes or services.
I agree with Lee. Using common/shared plugins is probably the best way to go. At one place that I worked we had quite a few internal plugins for this very reason.
The most common pattern is to put your common domain objects into their own plugin. This works really well for domain classes or services. We didn't end up refactoring the controllers, views, and static resources into a plugin, but the same principle should apply.
Long story short: Reuse of Grails artifacts = use a plugin.
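For the Grails versions current when this was written (1.x/2.x), the mechanics are roughly as follows (plugin name hypothetical):
grails create-plugin shared-core        # scaffold the plugin project
# move the shared domain classes, services, GSPs, etc. into it, then:
grails package-plugin                   # produces grails-shared-core-0.1.zip
grails install-plugin ../shared-core/grails-shared-core-0.1.zip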
To add to Lee and Colin's points, which are both valid, I think thinking in terms of multiple plugins can yield other benefits.
For example, you can split your application functionality into multiple pieces and have different people work on them. Or it can yield results during deployment if, say, you need two layers of access to an app, user-level and admin: if your domain model is in a separate plugin, as Colin suggested, you can easily build two applications and deploy them separately.
For my app, I have several plugins specific to my project - domain classes plugin, one that is a bunch of code for importing data (which I can run easily against my site), some other plugins for graphing and customization of scaffolding. It takes a bit more thinking, but I expect this factoring will yield dividends in the future as we bring on more people to the team.

Different ways to inject dependencies in ASP.NET MVC Controllers?

In most samples I have seen on the web, DI in MVC Controllers is done like this
private readonly IProductRepository _rep;

public ProductController(IProductRepository rep)
{
    this._rep = rep;
}
A custom ControllerFactory is used and it utilizes the DI framework of choice and the repository is injected.
Why is the above considered better than
private readonly IProductRepository _rep;

public ProductController()
{
    this._rep = ObjectFactory.GetInstance<IProductRepository>();
}
This will get the same results but doesn't require a custom controller factory.
As far as testing is concerned the Test App can have a separate BootStrapper. That way when the controllers are being tested they can get the fake repositories and when they are used for real they will get the real ones.
Constructor injection (the first approach) is better than the service locator pattern (the second approach) for several reasons.
First, service locator hides dependencies. In your second example, looking at the public interface alone, there's no way to know that ProductControllers need repositories.
What's more, I've got to echo OdeToCode. I think
// 'mockery' is an NMock Mockery instance created in the test fixture
IProductRepository repository = mockery.NewMock<IProductRepository>();
IProductController controller = new ProductController(repository);
is clearer than
ObjectFactory.SetFactory(typeof(IProductRepository), new MockRepositoryFactory());
IProductController controller = new ProductController();
Especially if the ObjectFactory is configured in a test fixture's SetUp method.
Finally, the service locator pattern is demonstrably sub-optimal in at least one particular case: when you're writing code that will be consumed by people writing applications outside of your control. I wager that people generally prefer constructor injection (or one of the other DI methods) because it's applicable for every scenario. Why not use the method that covers all cases?
(Martin Fowler offers a much more thorough analysis in "Inversion of Control Containers and the Dependency Injection Pattern", particularly the section "Service Locator vs Dependency Injection").
The primary drawback to the second constructor is now your IoC container has to be properly configured for each test. This setup can become a real burden as the code base grows and the test scenarios become more varied. The tests are generally easier to read and maintain when you explicitly pass in a test double.
Another concern is coupling a huge number of classes to a specific DI/IoC framework. There are ways to abstract it away, of course, but you still have code littered throughout your classes to retrieve dependencies. Since all the good frameworks can figure out what dependencies you need by looking at the constructor, it’s a lot of wasted effort and duplicated code.
When you use the second approach, disadvantages are:
Huge and unreadable test setup/context methods are needed
The container is coupled to the controller
You will need to write a lot more code
Why do you want to use an IoC container anyway when you don't want dependency injection?

When do you use dependency injection?

I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class that implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses where I've found an Inversion of Control pattern valuable:
A plugin architecture. By exposing the right entry points you can define the contract for the service that must be provided (see the sketch after this list).
Workflow-like architecture, where you can dynamically connect several components, wiring the output of one component to the input of another.
Per-client applications. Say you have various clients who each pay for a set of "features" of your product. With dependency injection you can easily ship the core components plus whichever "added" components provide just the features each client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application, including RTL or LTR user interfaces.
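As a tiny illustration of the plugin-architecture item, the contract is just an interface the host owns (names hypothetical):
// The host defines the entry point / contract...
public interface IExportPlugin
{
    string Name { get; }
    void Export(string path);
}
// ...plugins implement it in their own assemblies, and the container
// (or MEF-style discovery) wires the chosen implementations in at runtime.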
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class and break that static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
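A minimal sketch of that refactoring (all names hypothetical): the static call is extracted behind an interface and injected, so each part can be tested in isolation:
// Before, ReportService called a static EmailGateway.Send(...) that could
// not be faked. After extracting the dependency:
public interface IEmailGateway
{
    void Send(string body);
}

public class ReportService
{
    private readonly IEmailGateway _email;

    public ReportService(IEmailGateway email)
    {
        _email = email;
    }

    public void Send(string reportBody)
    {
        _email.Send(reportBody);
    }
}

// In a unit test, a hand-rolled fake stands in for the real gateway:
public class FakeEmailGateway : IEmailGateway
{
    public string LastBody;
    public void Send(string body) { LastBody = body; }
}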
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The time it would take to test your logic in code would be almost prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
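A sketch of that setter/property style (names hypothetical):
public class OrderProcessor
{
    // Property (setter) injection: the container, or a test, assigns these
    // after construction instead of passing everything through the constructor.
    public ILogger Logger { get; set; }
    public IOrderRepository Repository { get; set; }
}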
I do it only when it helps with separation of concerns.
Like maybe cross-project I would provide an interface for implementers in one of my library project and the implementing project would inject whatever specific implementation they want in.
But that's about it... in all other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world... every decision boils down to a judgment call - forgot where I read that.
I think it's more of an experience / flight-time call.
Basically if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's deps.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway but if there's a doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot.
As for how do you decide what to inject? So far the following rule of thumb has served me well: If the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in class, otherwise you probably want to have a dependency through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (e.g. a logger) or b) find the class otherwise difficult to mock, whether because of the number of constructor parameters or because of a significant amount of logic in the constructor.
