Using StructureMap, is one of these project organizations better than another?

I'm starting to work with StructureMap on a windows application project. In working on learning the basics, I found 2 ways to arrange my solution that accomplish the same goal, and I'm wondering if anyone can comment on if one of these 2 seems like a better option, and why.
The goal here was to use IoC so that I could use 2 services without taking dependencies on them. So I created interfaces in my Business Layer, and then implemented those interfaces in my Infrastructure project and wrapped the actual services at that point.
In my first attempt at this, I created a project DependencyResolver, which has code to initialize StructureMap using the fluent interface (when someone wants IServiceA, give them an instance of ServiceX). Because the initialization of DependencyResolver needed to be kicked off from my app, I have a reference from the app to DependencyResolver like this:
So then I found that I could remove my reference to DependencyResolver, and rely on the StructureMap scanner and naming conventions to get that reference at runtime, so then my setup looks like this:
So then, I took the naming conventions further, down into the services I was using, and was able to do away with the DependencyResolver altogether. At this point, I am totally relying on the StructureMap scanner and naming conventions to get things set up correctly:
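The conventions-only setup is essentially the standard StructureMap scanning configuration, roughly like this (a sketch, not my exact code):

var container = new Container(x =>
{
    x.Scan(scan =>
    {
        // Look at every assembly in the application's base directory...
        scan.AssembliesFromApplicationBaseDirectory();
        // ...and map IServiceAbc => ServiceAbc purely by naming convention.
        scan.WithDefaultConventions();
    });
});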
So. Here I am, not quite sure how I should feel about these 3 options. Option 1 seems good, except I'm left with the UI indirectly referencing all the things that it shouldn't be referencing (directly) because of the use of StructureMap. However, I'm not sure if this really matters.
Option 2 removes the need for a reference from the app to DependencyResolver, and relies on naming conventions to access classes in that project, and I still have a high level of control over all the remaining setup (but I have now taken a dependency on StructureMap directly from my application).
Option 3 seems the easiest (just name everything a certain way, and scan your assemblies), but that also seems more error prone, and brittle. Especially if I wanted to do something a little more complex than IServiceAbc => ServiceAbc.
So, can anyone who has a lot more experience with this stuff than I do give me some advice?
Should I be avoiding the indirect references from my App to my services, and if so, what are the real benefits of doing that?
Am I right that trying to do everything with naming conventions is only wise on simple projects?
Is there a standard pattern for doing what I'm trying to do here?
Sorry for the long post..

Encapsulate all usage of StructureMap in a Composition Root and use Constructor Injection throughout the rest of your code base.
You can implement the Composition Root in a separate assembly if you'd like, but I usually prefer placing it directly in the executable itself, and then implementing all of the application logic in separate libraries.
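A minimal sketch of that layout for a Windows app (Program, MainForm, IServiceA and ServiceX are placeholder names echoing the question, and the StructureMap calls shown are just the common ones, not something prescribed by this answer):

using StructureMap;

public interface IServiceA { }
public class ServiceX : IServiceA { }

// The executable is the Composition Root: the only project that references StructureMap.
public class Program
{
    public static void Main()
    {
        var container = new Container(x =>
        {
            x.For<IServiceA>().Use<ServiceX>();
        });

        // Resolve the top-level object once; everything beneath it receives
        // its dependencies through constructor injection.
        var form = container.GetInstance<MainForm>();
        form.Run();
    }
}

// Lives in a library project and only knows about the interfaces it needs.
public class MainForm
{
    private readonly IServiceA _serviceA;

    public MainForm(IServiceA serviceA)
    {
        _serviceA = serviceA;
    }

    public void Run() { /* application logic here */ }
}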

I have used the top design on projects and it works extremely well.
Dependency resolvers are more or less factories that return interface instances; isn't StructureMap just one way to implement this? In that case I would make requests for any item via the dependency resolver in one central place. Then it is also possible to remove StructureMap and drop in another service locator (Unity, Castle Windsor, etc.) without changing anything else about your app.
Dependencies should not be resolved from two places as in your second option, and not only via the UI project as in the third option (what happens if you swap out your UI project and put a different one in then?).
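For example, that central resolver might be nothing more than a thin wrapper (a hedged sketch; ResolverBootstrap is a made-up name, and IServiceA/ServiceX are placeholders borrowed from the question):

using StructureMap;

public interface IServiceA { }
public class ServiceX : IServiceA { }

public static class ResolverBootstrap
{
    private static IContainer _container;

    // The single place in the application that knows which container is in use.
    public static void Initialize()
    {
        _container = new Container(x => x.For<IServiceA>().Use<ServiceX>());
    }

    public static T Resolve<T>()
    {
        return _container.GetInstance<T>();
    }
}

Swapping StructureMap for Unity or Castle Windsor then only means rewriting this one class.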

Related

How to implement Dependency Injection with MVC5 and MEF2 (Convention-Based) in an n-tier application?

I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is for separation of concerns, loose-coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable?
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose-coupling and DI. I have also seen others that require the Data Layer dll being put in the presentation layer bin folder and configuring other classes to look there - again hampering the idea of loose-coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this actually, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI-container, i.e. configure which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly, let's call it Commons.dll)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
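As a concrete, hypothetical illustration of that split (Customer and ICustomerRepository are invented for the sketch, not taken from the question):

using System;

// In DataAccess.Contracts.dll - referenced by Presentation, Services and DataAccess.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

// In DataAccess.dll - references only DataAccess.Contracts.dll.
public class CustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // The Entity Framework query would live here.
        throw new NotImplementedException();
    }
}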
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute based configuration and convention based registration. While the first is a pain for obvious reasons, I am a fan of the second. I think it is more readable and easier to debug than convention based config, but that is a matter of taste.
Of course, the container kind of bundles all the dependencies which you have spared in your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling. It is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute or convention based configs, you generally work with the autodiscovery mechanisms you mention: 'Search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
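Since the question mentions MEF2, a convention-based registration might look roughly like this (a sketch only; IRepository, MefBootstrap and the "." scan directory are placeholder choices, and the RegistrationBuilder/DirectoryCatalog pattern shown is the commonly documented one, not anything specific to this answer):

using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Registration;

// Placeholder contract; in the real solution this lives in a .Contracts assembly.
public interface IRepository { }

public static class MefBootstrap
{
    public static CompositionContainer Build()
    {
        var conventions = new RegistrationBuilder();

        // "Every concrete type implementing IRepository is exported as IRepository."
        conventions.ForTypesDerivedFrom<IRepository>()
                   .Export<IRepository>();

        // Scan the bin directory, so a swapped-in DataAccess.dll is picked up automatically.
        var catalog = new DirectoryCatalog(".", conventions);
        return new CompositionContainer(catalog);
    }
}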
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say, you've chosen convention based config of your DI container. What you can do now is to open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy and paste it to your original application's program folder, replacing the old DataAccess.dll. Done, you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. It really is a tradeoff, using IoC and DI. I highly recommend being pragmatic in your design decisions. Don't interface everything, it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and just works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
// Map every class in the DAL repository namespace to its matching interface (IFoo => Foo).
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,
    WithName.Default);
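With that registration in place, a controller can simply declare what it needs and the Unity MVC bootstrapper supplies it (ProductController and IProductRepository are made-up names for illustration, not from my actual project):

using System.Collections.Generic;
using System.Web.Mvc;

// Placeholder contract; in the real solution this lives in the Services layer.
public interface IProductRepository
{
    IEnumerable<string> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    // Unity sees the single constructor and injects the implementation
    // mapped by the RegisterTypes call above.
    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}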

Structuring an MVC Application with Entity Framework and building using TDD

Background
I am about to start the process of creating a new application with MVC 5 and EF6 and building it out with TDD. This is my first MVC application, so I have decided to use it as a bit of a learning platform to better understand a whole range of patterns and methodologies that I have been exposed to but have only used in passing up until this point.
I started with this in my head:
EF - Model
Repositories
Services
UI (controllers, views)
Removing the Repositories
I shifted this thinking to remove one layer, the repositories, simply because as my understanding has grown I can see that EF (specifically IDbSet) already implements a repository pattern of sorts and the context itself is a unit of work, so wrapping it in a further abstraction seems pointless for this application, at that level anyway.
EF will be abstracted at the Service Layer
Removing the repos doesn't mean EF will be directly exposed to the controllers: in most cases I will use the services to expose certain methods and business logic to the controllers. I won't exclusively exclude EF, though, as I can use it outside of the services to do things like build specific queries that could be used at both the service level and the controller level. The service layer will simply be a simpler way of mapping specifics from the controllers to EF and the data concerns.
This is where it gets a bit ropey for me
Service Layer
My services feel a little bit like repositories in the way they map certain functions (getById etc.), and I am not sure if that is just naturally the way they are, or if my understanding of them is way off and there is more information that I can't find to better my knowledge.
TDD & EF
I have read a ton of stuff about EF and how you can go about unit testing it, and also how you shouldn't bother, because the leakiness of IQueryable and the differences between LINQ-to-Entities and LINQ-to-Objects mean you won't always get the results you intend. All of this has simply confused the hell out of me, to the point where I have an empty test file and my head is completely blank because I am now overthinking the process.
Update on TDD: the TDD tag was included because I thought maybe someone would have an idea of how they would approach something like this without a repository, since that would be an abstraction for abstraction's sake. Would they skip unit testing against it and use other tests, like integration or end-to-end tests, to cover the queryable behaviour? From my limited understanding that wouldn't be TDD, as the tests would not be driving my design in this instance.
Finally, To The Point
Is the:
EF
Service
UI
architecture a good way to go, initially at least?
Are there any good examples of a well defined service layer out there so I can learn, and are they in the main a way to map certain business operations that have data connotations to some form of persistence mechanism (in this case an ORM and EF) without having the persistence requirements of, say, a repository?
With the TDD stuff, is it OK to forgo unit tests for service methods that are basically just calling EF and returning data, and just opt for slower integration tests (probably in a separate project so they are not part of the main test flow and can be run on a more ad-hoc basis)?
Having one of those weeks and my head feels like it is about to explode.
Lol I've had one of those weeks myself for sure. ;)
I've had the same kind of internal discussions over how to structure MVC projects, and my conclusion is find what's most comfortable to you.
What I usually do is create the following projects:
Core/Domain - here I have my entities/domain model, and anything else that may be shared among layers: interfaces, for example, configuration, settings, and so on.
Data/EF - here I have all my EF-dependent code: DataContext and Mappings (EntityTypeConfiguration). Ideally I could create another version of this using, say, NHibernate and MySQL, and the rest of the solution would stay the same.
Service - this depends on Core and Data. I agree that in the beginning it will look like a simple facade to your Data, but as soon as you start adding features, you'll find this is the place to add your "ServiceModels". I'm not saying ViewModel, as that is quite web-UI related. What I mean by "ServiceModel" is creating a simpler version of your domain objects; a real example is hiding your CreatedOn and CreatedBy properties. Also, whenever one of your controller's actions grows beyond the quite simplistic, you should refactor and move that logic to the service and return to the controller only what it really needs.
Web/UI - this will be your web app. It will depend on Core and Service.
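A rough illustration of that "ServiceModel" idea (Order, OrderModel and OrderService are hypothetical names, and a DataContext exposing an Orders set is assumed; none of this is prescribed, it just shows the shape):

using System;

// Domain entity in Core - includes audit fields the UI shouldn't need to see.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public DateTime CreatedOn { get; set; }
    public string CreatedBy { get; set; }
}

// "ServiceModel" returned by the Service layer - just what the caller needs.
public class OrderModel
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderService
{
    private readonly DataContext _context;   // the EF context from the Data/EF project

    public OrderService(DataContext context)
    {
        _context = context;
    }

    public OrderModel GetById(int id)
    {
        var order = _context.Orders.Find(id);   // assumes an Orders DbSet on the context
        return new OrderModel { Id = order.Id, Total = order.Total };
    }
}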
You didn't mention dependency injection but you have to definitely look at it.
For testing, you can test your Data using a SqlCompact provider that re-creates the database for each test instead of using a full SqlExpress. This means your DataContext should accept a connectionString parameter. ;)
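That can be as simple as a constructor overload on the context (a sketch assuming EF6's DbContext, reusing the Order entity from the sketch above):

using System.Data.Entity;

public class DataContext : DbContext
{
    // Production code passes the normal connection string (or its name);
    // tests pass a SQL Compact connection string pointing at a throwaway file.
    public DataContext(string nameOrConnectionString)
        : base(nameOrConnectionString)
    {
    }

    public DbSet<Order> Orders { get; set; }
}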
I've learned a lot seeing big projects source code, like http://www.nopcommerce.com. You could also have a look at http://sharparchitecture.net/ although I bet you already saw that.
Be prepared to have some nightmares with complex object graphs in EntityFramework. ;)
My final advice is: find something specific to do and dive in. Too much abstraction will keep you from starting, and starting is key to practice and understanding.

Avoiding assembly references when using IoC in multi layer app

Quick question so I start in the right direction. I have a multi project solution with an MVC presentation layer. Currently this layer only knows about an IServices class library. Now if I want to use an IoC it seems like I will have to start adding references to all of the other projects in my solution in the MVC application so that I can configure the IoC.
Is this right or should each layer have its own IoC?
Thanks,
James
Someone must load the .dll files, and someone must be able to wire up the IoC.
Typically, the dll loading happens automatically and the IoC wiring happens in a nearly-hardcoded fashion.
You could load libraries dynamically: you can write code that tries to load each dll in a given folder and invoke some kind of GetLibraryDescriptor method. That method tells you that the library provides an implementation for, say, ISomeInterface. Now you can ask the dll to instantiate an object of the class it provides. You'd probably have to bridge this instantiation to the IoC. I believe that such a design is better suited to a service locator.
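A rough sketch of that kind of dynamic loading (IPlugin and PluginLoader are invented for illustration; the GetLibraryDescriptor idea is approximated here by scanning for implementations directly, and in practice you would hand the discovered types to your container instead of instantiating them yourself):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Placeholder contract shared between the host and the plugin dlls.
public interface IPlugin { void Start(); }

public static class PluginLoader
{
    public static void LoadAll(string folder)
    {
        foreach (var file in Directory.GetFiles(folder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);

            // Find every concrete type in the dll that implements the contract...
            var pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (var type in pluginTypes)
            {
                // ...and instantiate it (or register the type with your IoC container).
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Start();
            }
        }
    }
}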
All this makes sense for shrink-wrap software, but I don't see many benefits for web software.
The only reason not to reference other libraries that I see is to make sure nobody directly uses or accesses the code declared in there - a tedious task, and a sometimes impossible goal that moves encapsulation to the wrong level. If your classes are well-encapsulated, it shouldn't be necessary.

Do Dependency Injection frameworks lead to poor/lazy design?

I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.
What I got from this is
A) DI frameworks are awesome, but
B) It means we don't have to worry about how our system is designed in terms of dependencies.
For me, I'm used to thinking hard about how to loosely-couple my system but at the same time have some sort of control over dependencies.
I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.
Or is that just the point and this is the future and I should just go with it?
Thoughts?
One basic OO principle is that you want your code to depend on interfaces and not implementations; DI is how we do that. Historically, here is how it evolved:
People initially created classes they depended upon by "new'ing" them:
IMyClass myClass = new MyClass();
Then we wanted to remove instantiation so there were static methods to create them:
IMyClass myClass = MyClass.Create();
Then we no longer depended on the lifecycle of the class, but still depended on it for instantiation, so then we used the factory:
IMyClass myClass = MyClassFactory.Create();
This moved the direct dependency from the consuming code to the factory, but we still had the dependency on MyClass indirectly, so we used the service locator pattern like this:
IMyClass myClass = (IMyClass)Context.Find("MyClass");
That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can with dependency injection. If you use property injection you would simply put a property setter for the interface you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and your class.
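For instance, with property injection the consuming class only exposes a setter for the interface, and the container fills it in at runtime (a hedged sketch; IMyClass/MyConsumer are placeholder names, and the wiring shown in the comment is what the container's configuration would do for you):

public interface IMyClass { void DoWork(); }

public class MyConsumer
{
    // The only thing this class knows is the interface; the container
    // (or a test) assigns the concrete instance through the setter.
    public IMyClass MyClass { get; set; }

    public void Run()
    {
        MyClass.DoWork();
    }
}

// What the container effectively does behind the scenes, based on external configuration:
// var consumer = new MyConsumer { MyClass = new MyClass() };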
I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
You still have to think about what interfaces you need and getting them right is not always a trivial matter.
I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.
I think that ideally, if you already have a loosely coupled system, using a container will only move the place where you take the dependencies out of your code, making them softer, and let your system depend on the container building your object graph.
In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was, so in this way it may help you create a better design.
Well, I'm a newbie at this subject, so maybe I'm not entirely right.
Cheers.
I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies so that someone who's creating it will know what to create with it for it to operate correctly.
If dependency injection makes you lazy, maybe it's because it forces someone else to deal with dependencies? That's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with dependencies because it declares them up front. And they can be managed because they are explicit.
Edit: Added the last sentence above.
Dependency injection can be a bit difficult to get used to - instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.
It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.
This not only makes loose coupling easier, it makes it nearly unavoidable, which makes it much simpler to do things like run your objects in a mock environment - the IoC container takes the place of the run environment.
I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.
If they follow the spirit and write tests, they will further refine their components. Both exercises force you to write better, more testable components, which fits very well with how IoC containers work.
You still have to worry. My team use Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.
Without Castle Windsor, you write code, and if you haven't sorted your dependencies out - bang, the compiler will complain. With Castle Windsor you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is, your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor looks up concrete classes to service requests for an interface by using reflection. If a dependency can't be found, you get an error at runtime.
I think Castle Windsor does check that the dependencies exist when it's initialized, so that it can throw an error pretty quickly. But it's still annoying, when using a strongly typed language, that this fuss can't be sorted out at compile time.
So... anyway. Dependencies still seriously matter. You'll all most certainly pay more attention to them using DI than before.
We wrote a custom DI framework; though it took some time to get right, it was well worth the effort. We have divided the whole system into layers, and the dependency injection in each layer is bound by rules. E.g. in the Log layer, CUD and BO interfaces cannot be injected.
We are still contemplating the rules, and some of them change every week while the others remain the same.

When do you use dependency injection?

I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses that I've seen useful for an Inversion of Control pattern:
A plugin architecture. So by making the right entry points you can define the contract for the service that must be provided.
Workflow-like architecture. Where you can connect several components dynamically connecting the output of a component to the input of another one.
Per-client application. Let's say you have various clients who pay for a set of "features" of your project. By using dependency injection you can easily provide just the core components plus the "added" components that provide only the features the client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class and break that static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
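A small, hypothetical example of what that extraction buys you (IClock, ReportGenerator and the hand-rolled FakeClock are made up for illustration, not from this answer):

using System;

public interface IClock { DateTime Now { get; } }

// The class under test depends only on the interface, injected by the container at runtime.
public class ReportGenerator
{
    private readonly IClock _clock;

    public ReportGenerator(IClock clock) { _clock = clock; }

    public string Title()
    {
        return "Report for " + _clock.Now.ToShortDateString();
    }
}

// In a unit test, the static dependency on DateTime.Now is gone:
public class FakeClock : IClock
{
    public DateTime Now { get { return new DateTime(2015, 1, 1); } }
}
// var title = new ReportGenerator(new FakeClock()).Title();  // deterministic, no container needed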
"Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code"
DI should be used to isolate your code from external resources (databases, webservices, xml files, plugin architecture). The amount of time it would take to test your logic in code would almost be prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
Like maybe cross-project: I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all the other cases it'd just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's deps.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway but if there's a doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot.
As for how you decide what to inject, so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want to have a dependency through the constructor.
As for whether you should create an interface vs just making your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (i.e. a logger), or b) find the class otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor.
