I've been following Daniel Cazzulino's series about building a DI container using TDD. In part five of the series, he adds support for container hierarchies without commenting on what makes this feature useful. I've seen mention of support for hierarchies in many of the DI frameworks, but I'm having trouble understanding when they'd be used, and why. Can someone offer some insight?
I left a comment on kzu's blog asking the same question. It's a shame he didn't clarify the use-case for such a feature before coding it.
The only thing I could think of is if you wanted to have different types resolved from your container in different parts of your app. For example, if you had an order-entry system with two separate sections, and each section was identical except that they needed to present a different product list, you could create a child container for each section, and "override" the registration of your product repository in each. Whenever a section tried to resolve a product repository (or anything that depended on one) it would get the instance you set up in the child container rather than the parent. Sort of like overriding a virtual method.
This might be way off base, but it's the best I could come up with.
Here's a sample that uses child containers in a scenario similar to the one Matt describes: it uses them to select between different database configurations. The key here is that most of the configuration is shared between the child containers; that shared part belongs in the parent container.
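A minimal sketch of that arrangement, in Unity terms (the configuration and processor types here are made up for illustration):

// Shared registrations live in the parent container.
var parent = new UnityContainer();
parent.RegisterType<IOrderProcessor, OrderProcessor>();

// Each child container supplies its own database configuration.
var testing = parent.CreateChildContainer();
testing.RegisterInstance<IDatabaseSettings>(new DatabaseSettings("Server=TestDb;..."));

var production = parent.CreateChildContainer();
production.RegisterInstance<IDatabaseSettings>(new DatabaseSettings("Server=ProdDb;..."));

// Resolving from a child uses that child's IDatabaseSettings, while everything
// else falls back to the shared registrations in the parent.
var processor = testing.Resolve<IOrderProcessor>();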
There is good reason for having child containers if dependency injection is fully embraced by the project. Let's imagine an application that processes messages from two different, but similar, systems. Most of the processing is alike, but there are variations to support compatibility with those systems. Our aim is to reuse the code we can, while writing different code as requirements differ.
In OO programming, we wire together a series of classes that will collaborate to meet the system requirements. The DI container takes this responsibility. When a message arrives from a system, we want to build a set of collaborating classes suitable for processing a message from that particular system.
We have a top-level container which has items that do not vary between the two systems. Then, we have child containers that do vary between systems. When a message arrives, we ask the appropriate child DI container for a messageProcessor. Based on the configuration of that container (falling back to the parent container as necessary), the DI framework can return a messageProcessor (an object backed by the appropriate collaborators) for the system in question.
Please leave a comment if this is not a clear answer. Also, you can search for "robot legs problem". Each leg is identical but one needs a left foot and the other needs a right foot. We could have a child DI container for each leg.
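For instance, here is a sketch of the robot legs setup in Unity terms (the types are made up):

// Both legs are built from the same Leg class, but each child container
// supplies a different IFoot.
var parent = new UnityContainer();

var leftSide = parent.CreateChildContainer();
leftSide.RegisterType<IFoot, LeftFoot>();

var rightSide = parent.CreateChildContainer();
rightSide.RegisterType<IFoot, RightFoot>();

// Leg's constructor takes an IFoot, so each child builds a leg with its own foot.
var leftLeg = leftSide.Resolve<Leg>();
var rightLeg = rightSide.Resolve<Leg>();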
The best example that I'm aware of for nested containers is a windowing system. Purely for separation of concerns, it's very nice to have each tab/window use its own container, independent of the other tabs/windows, with all window containers inheriting global dependencies from a parent container.
This is especially needed if you can have duplicate tabs/windows, since in many cases you want distinct instances of various classes for each duplicate tab/window.
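A sketch of the idea in Unity terms (the tab-state type is hypothetical):

// Each new tab gets its own child container; a "singleton" registered in the
// child is really one instance per tab, not one per application.
var tab1 = appContainer.CreateChildContainer();
tab1.RegisterType<ITabState, TabState>(new ContainerControlledLifetimeManager());

var tab2 = appContainer.CreateChildContainer();
tab2.RegisterType<ITabState, TabState>(new ContainerControlledLifetimeManager());

// tab1 and tab2 now hold distinct TabState instances, while global
// dependencies still resolve from the shared parent appContainer.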
I'm in the process of designing a web service hosted with Google App Engine, comprising three parts: a client website (or more than one), a simple CMS I designed to edit and view the content of that website, and lastly a server component to communicate between these two services and the database. I am new to Docker and currently doing research to figure out how exactly to set up my containers, along with the structure of my project.
I would like each of these to be a separate service, and therefore put them in different containers. From my research it seems perfectly possible to put them in separate containers and still have them communicate, but is this the optimal solution? There is also the fact that in the future I might want to scale up, so that my back end can supply multiple different front ends, all managed from the same CMS.
tl;dr:
How should I best structure my web service with Docker, given that my back end may supply more than one front end, all managed from the same CMS?
Any suggestions for tools or design patterns that make my life easier are welcome!
Personally, I don't like to think about application design in terms of containers. Containers serve the deployment process; that is their main goal.
If you keep your logic in separate components/services you'll be able to combine them within containers in many different ways.
Once you have criteria for what suits your product requirements (performance, price, security, etc.), you can configure your Docker images in the way you prefer.
So my advice is: focus on the design of your application first. Start from the set of services you have, provide a Dockerfile for each one, and then see what you will have to change.
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are out of date and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose coupling and DI. I have also seen others that require the Data Layer DLL to be put in the Presentation Layer's bin folder, with other classes configured to look there, again hampering the idea of loose coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this, actually, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a while now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly, let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second. I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
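For reference, attribute-based wiring in MEF looks roughly like this (the repository and service types are placeholders):

using System.ComponentModel.Composition;

public interface ICustomerRepository { }

// The implementation declares which contract it provides...
[Export(typeof(ICustomerRepository))]
public class CustomerRepository : ICustomerRepository { }

// ...and the consumer declares what it needs; the container wires the two up.
public class CustomerService
{
    [Import]
    public ICustomerRepository Repository { get; set; }
}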
Of course, the container kind of bundles all the dependencies which you have avoided in your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies DataAccess.dll, .... Still, this doesn't undermine the idea of decoupling. It is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
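And for the convention-based flavour in MEF2, a sketch using RegistrationBuilder (the 'ends with Repository' convention is just an assumption for illustration):

using System;
using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Registration;

// Convention: every type whose name ends in "Repository" exports its interfaces.
var conventions = new RegistrationBuilder();
conventions.ForTypesMatching(t => t.Name.EndsWith("Repository"))
           .ExportInterfaces();

// Scan all assemblies in the bin directory, applying the conventions above.
var catalog = new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory, conventions);
var container = new CompositionContainer(catalog);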
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say you've chosen convention-based config of your DI container. What you can do now is open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy it into your original application's program folder, replacing the old DataAccess.dll. Done; you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. It really is a tradeoff, using IoC and DI. I highly recommend being pragmatic in your design decisions. Don't interface everything; it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC container. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at the time of writing) has discovery by convention baked in and works a treat out of the box; it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS-supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one statement to map my entire DAL (they even create a stub for you so you know where to insert it):
// Register every class in the DAL repository namespace against its matching
// interface (e.g. FooRepository -> IFooRepository) as the default registration.
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,
    WithName.Default);
Question
What is the proper way to share an EF DbContext across multiple repositories in an MVC web app? Is it prudent/necessary to do so, what are the pitfalls of doing or not doing this?
Background
Assume:
App.MvcSite (Several dozen controllers, multiple areas, etc)
App.Services (service layer, many services)
App.Data (many repositories, etc)
... etc excluded for simplicity (EF Code First latest)
Research To Date
I seem to find at least two/three schools of thought on SO and the interwebs.
1. Share/scope your DbContext to the request, so that a single request has a single DbContext shared by all repositories.
2. Share/scope your DbContext at the service layer: the service maintains a single DbContext and passes it to each repository as needed.
3. Do not share a DbContext; since they are cheap, let each repo have its own.
In a small website this is a non-issue which is why most MS and community examples simply don't even address this.
In my experience thus far I have not used finite repositories. I have always had services use a DbContext and directly change it so I didn't need to worry about this. I'm told there is a great benefit to finite repositories from a unit testing perspective... we'll see if it makes the rest of this worthwhile.
My Thoughts
(1) Share/scope your DbContext to the Request
This is interesting as it smartly avoids the pitfall of a singleton context, which some developers think is the answer until they find DbContext doesn't work that way. But it seems to have a downside in that it assumes all repositories, services, etc. are going to be in coordination across an entire request... this is often not the case, right? What if changes are saved by one repo before another completes its work? (outer(inner(inner)))
(2) Share/scope your DbContext at the service layer
This makes more sense to me because each service should be coordinating a specific unit of work (lower case intentional). So if multiple services were used in one request it would be proper (if not required) that each had its own context to the database.
(3) Do not share a DbContext, since they are cheap
This is the way I've always done it... well actually I almost always only had one DbContext per request because only one service was being called. Sometimes it might be two because two services were called by a controller who was coordinating the work. But given my current application, with many finite repositories, each repository having its own context would mean a given request might have 3-10 instances of DbContext. I assume (perhaps incorrectly) that this is problematic.
Repeating the question:
What is the proper way to share an EF DbContext across multiple repositories in an MVC web app? Is it prudent/necessary to do so, what are the pitfalls of doing or not doing this?
DbContexts are cheap, but distributed transactions are not.
Objects attached to one context can't be used in another context (if you have object relations).
The easiest way to share a context is to start using an inversion of control container: http://www.codeproject.com/Articles/386164/Get-injected-into-the-world-of-inverted-dependenci
I would go for a combination of the first two options. Regarding your take on the first option: don't let repositories save any changes (that's not the recommended way of doing things, as Martin Fowler himself says); that is why he introduced the Unit of Work pattern.
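A minimal sketch of that combination (all type names hypothetical): repositories share one context but never save, and the unit of work commits once.

using System;

public class UnitOfWork : IDisposable
{
    private readonly AppDbContext _context = new AppDbContext();
    private OrderRepository _orders;
    private CustomerRepository _customers;

    // Repositories are handed the shared context; none of them call SaveChanges.
    public OrderRepository Orders
    {
        get { return _orders ?? (_orders = new OrderRepository(_context)); }
    }
    public CustomerRepository Customers
    {
        get { return _customers ?? (_customers = new CustomerRepository(_context)); }
    }

    // The one and only place where changes are persisted.
    public void Commit() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}

Scoped per request by your IoC container, this gives you a single context per request without letting any individual repository commit half-finished work.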
I'm building an MVC3 app, trying to use IoC and constructor injection. My database has (so far) about 50 tables. I am using EF4 (w/ POCO T4 template) for my DAC code. I am using the repository pattern, and each table has its own repository. My service classes in my service layer are injected w/ these repositories.
Problem: My service classes are growing in the number of repositories they need. In some cases, I am approaching 10 repositories, and it's starting to smell.
Is there a common approach for designing repositories and service classes such that the services don't require so many repositories?
Here are my thoughts, I'm just not sure which one is right:
1) This is a sign I should consider combining/grouping my repositories into related sections of tables, reducing the number or dependent repositories per service class. The problem with this approach, though, is that it will bloat and complicate my repositories, and will keep me from being able to use a common interface for all repositories (standard methods for data retrieval/update).
2) This is a sign I should consider breaking my services into groups based on my repositories (tables). Problem with this is that some of my service methods share common implementation, and breaking these across classes may complicate my dependencies.
3) This is a sign that I don't know what I'm doing, and have something fundamentally wrong that I'm not even able to see.
UPDATE: For an idea of how I'm implementing EF4 and repositories, check out this sample app on codeplex (I used version 1). However, looking at some of the comments there (and here), looks like I need to do a bit more reading to make sure this is the route I want to take -- sounds like it may not be.
Chandermani is right that some of your tables might not be core domain classes. This means you would never search for that data except in terms of a single type of parent entity. In those cases you can reference them as "complex types" rather than full-blown entities, and EF will still take care of you.
I am using the repository pattern, and each table has its own repository
I hope you're not writing these yourself from scratch.
EF 4.1 already implements the Repository pattern (DbSet) and the Unit of Work pattern (DbContext). The older versions do too, though the DbContext template can easily be tweaked to provide a clean, mockable implementation by changing those properties to an IDbSet.
I've seen several tutorial articles where people still write their own, though. It is strange to me, because they usually don't provide a justification, other than the fact that they are "implementing the Repository Pattern".
Writing wrappers around these repositories for access methods like FindById makes access slightly easier, but as you've seen it's a big amount of effort for potentially little payback. Personally, unless I find that there is interesting domain logic or complex queries to be encapsulated, I don't even bother, and just use LINQ directly against the IDbSet.
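In other words, something like this (entity and context names made up), injecting the context and querying it directly:

using System.Linq;

public class OrderService
{
    private readonly AppDbContext _db;   // the DbContext is already the unit of work

    public OrderService(AppDbContext db) { _db = db; }

    public Order FindById(int id)
    {
        // The DbSet is already the repository; no wrapper needed.
        return _db.Orders.SingleOrDefault(o => o.Id == id);
    }
}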
My service classes in my service layer are injected w/ these repositories.
Even if you choose to use custom query wrappers, you might choose to simply inject the DbContext and let the service code instantiate the wrappers it needs. You'd still be able to mock your data access layer; you just wouldn't be able to mock up the wrapper code. I'd still recommend injecting the less generic wrappers, though, because a complex implementation is exactly the type of thing you'd like to be able to factor out in maintenance, or replace with mocks.
If you look at the DDD Aggregate Root pattern and try to see your data from that perspective, you will realize that many of the tables do not have an independent existence at all. Their data is only valid in the context of their parent, and most operations on them require you to fetch the parent as well. If you can group such tables and find the parent entity/repository, all the child repositories can be removed. The complexity of associating parent and child, which until now you were handling in your business layer (assuming you were retrieving parent and child via independent repositories), is then shifted to the DAL.
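To illustrate with hypothetical Order/OrderLine types: the child table gets no repository of its own and is only ever reached through its aggregate root.

using System.Data.Entity;
using System.Linq;

public class OrderRepository
{
    private readonly AppDbContext _db;

    public OrderRepository(AppDbContext db) { _db = db; }

    public Order FindById(int id)
    {
        // OrderLines travel with their parent; there is no OrderLineRepository.
        return _db.Orders
                  .Include(o => o.Lines)
                  .SingleOrDefault(o => o.Id == id);
    }
}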
Refactoring the service interface is also a viable option: any common functionality can be moved into a base class and/or can itself be defined as a service which is consumed by all your existing services (Is-A vs Has-A).
@Chandermani has a good point about aggregate roots. Repositories should not necessarily have a 1:1 mapping to tables.
Getting large numbers of dependencies injected in is a good sign your services are doing too much. Follow the Single Responsibility Principle, and refactor them into more manageable pieces.
Are your services writing to all of the repositories? I find that my services line up pretty closely with my repositories, in that they provide the business logic around the CRUD operations that the repositories expose.
I'm relatively familiar with the concepts of DI/IoC containers, having worked on projects previously where their use was already in place. However, for this new project, there is no existing framework and I'm having to pick one.
Long story short, there are some scenarios where we'll be configuring several implementations for a given interface. Glancing around the web, it seems like using any of the mainstream frameworks to selectively bind to one of the implementations is quite simple.
There are, however, contexts where we'll need to run ALL the configured implementations. I've scoured all the IoC-tagged posts here and I'm trying to pore through the documentation of the major frameworks (so far looking at Unity, Ninject, and Windsor), but docs are often sparse and I haven't the time to inspect the source for all the packages.
So, are there any mainstream IOC containers that will allow me to bind to all the configured concrete types for one of my services?
One thing that caught me the first time I was trying to resolve all implementations of a registered type was that un-named (default) type registrations will not be returned when you call ResolveAll(). Only named instances are returned.
So:
IUnityContainer container = new UnityContainer();
// The default (un-named) registration below is skipped by ResolveAll().
container.RegisterType<IMyInterface, MyFirstClass>();
// Only the named registrations are returned.
container.RegisterType<IMyInterface, MySecondClass>("Two");
container.RegisterType<IMyInterface, MyThirdClass>("Three");
var instances = container.ResolveAll<IMyInterface>();
// ResolveAll returns IEnumerable<IMyInterface>, hence Count() (LINQ), not Count.
Assert.AreEqual(2, instances.Count(), "MyFirstClass doesn't get constructed");
So I somehow missed this on my first pass looking through Unity... but I'll answer my own question.
Unity has precisely what I wanted.
http://msdn.microsoft.com/en-us/library/cc440943.aspx
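Worth noting: Unity will also inject all the named registrations when a constructor asks for an array of the interface. A sketch (type names as in the snippet above, the composite class is made up):

public class CompositeHandler
{
    private readonly IMyInterface[] _handlers;

    // Unity resolves every *named* IMyInterface registration into this array.
    public CompositeHandler(IMyInterface[] handlers)
    {
        _handlers = handlers;
    }
}

// Elsewhere, after the registrations shown above:
var composite = container.Resolve<CompositeHandler>();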
Also, for anyone else doing the IOC hunt and dance like me, this link proved to be invaluable.
http://blog.ashmind.com/index.php/2008/09/08/comparing-net-di-ioc-frameworks-part-2/