Background
I am about to start creating a new application with MVC 5 and EF6, building it out with TDD. This is my first MVC application, so I have decided to use it as a learning platform to better understand a whole range of patterns and methodologies that I have been exposed to but have only used in passing up until this point.
I started with this in my head:
EF - Model
Repositories
Services
UI (controllers, views)
Removing the Repositories
I shifted this thinking to remove one layer, the repositories, simply because as my understanding has grown I can see that EF (specifically IDbSet) implements a repository pattern of sorts, and the context itself is a unit of work. Wrapping it in a further abstraction, for this application at least, seems pointless at that level.
EF will be abstracted at the Service Layer
Removing the repositories doesn't mean EF will be directly exposed to the controllers. In most cases I will use the services to expose certain methods and business logic to the controllers, but I won't exclude EF entirely, as I can use it outside of services to do things like building specific queries that could be used at both a service level and a controller level. The service layer will simply be a simpler way of mapping specifics from the controller to EF and data concerns.
This is where it gets a bit ropey for me
Service Layer
My services feel a little like repositories in the way they map certain functions (GetById, etc.), and I am not sure whether that is simply the way they naturally are, or whether my understanding of them is way off and there is more information I can't find to improve my knowledge.
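To illustrate, here is roughly the shape my services are taking (a sketch only; the entity and context names are made up):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // Hypothetical entity and context, just to give the sketch something to compile against.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool Discontinued { get; set; }
    }

    public class AppContext : DbContext
    {
        public IDbSet<Product> Products { get; set; }
    }

    // The service: it mostly forwards to EF, which is why it feels like a repository.
    public class ProductService
    {
        private readonly AppContext _context;

        public ProductService(AppContext context)
        {
            _context = context;
        }

        public Product GetById(int id)
        {
            return _context.Products.Find(id);
        }

        public IList<Product> GetDiscontinued()
        {
            // The only "business logic" so far is the filter itself.
            return _context.Products.Where(p => p.Discontinued).ToList();
        }
    }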
TDD & EF
I have read a ton of material about EF and how you can go about unit testing with it, and how you shouldn't bother because the leakiness of IQueryable, and the differences between Linq-to-Entities and Linq-to-Objects, mean you won't always get the results you intend. All of this has simply confused the hell out of me, to the point where I have an empty test file and my head is completely blank because I am now overthinking the process.
Update on TDD: the TDD tag was included because I thought someone might have an idea of how to approach something like this without a repository, since a repository here would be an abstraction for abstraction's sake. Would they simply not unit test against it, and instead cover the queryable behavior with other tests, such as integration or end-to-end tests? From my limited understanding, though, that wouldn't be TDD, as the tests would not be driving my design in this instance.
Finally, To The Point
Is the:
EF
Service
UI
architecture a good way to go, initially at least?
Are there any good examples of a well-defined service layer out there that I can learn from? And are services, in the main, a way to map certain business operations that have data connotations to some form of persistence mechanism (in this case an ORM, EF) without the persistence requirements of, say, a repository?
With the TDD stuff, is it OK to forgo unit tests for service methods that are basically just calling EF and returning data, and opt instead for slower integration tests (probably in a separate project so they are not part of the main test flow and can be run on a more ad-hoc basis)?
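For what it's worth, the kind of integration test I have in mind would look something like this (a sketch using NUnit and the made-up types from my earlier snippet; it assumes the context has a constructor that takes a connection string name pointing at a throwaway test database):

    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class ProductServiceIntegrationTests
    {
        private AppContext _context;

        [SetUp]
        public void SetUp()
        {
            // "TestDb" is a made-up connection string name for a disposable database.
            _context = new AppContext("TestDb");
            _context.Database.Delete();
            _context.Database.Create();
        }

        [TearDown]
        public void TearDown()
        {
            _context.Dispose();
        }

        [Test]
        public void GetDiscontinued_ReturnsOnlyDiscontinuedProducts()
        {
            _context.Products.Add(new Product { Name = "Old", Discontinued = true });
            _context.Products.Add(new Product { Name = "New", Discontinued = false });
            _context.SaveChanges();

            var service = new ProductService(_context);

            Assert.That(service.GetDiscontinued().Single().Name, Is.EqualTo("Old"));
        }
    }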
Having one of those weeks and my head feels like it is about to explode.
Lol I've had one of those weeks myself for sure. ;)
I've had the same kind of internal discussions over how to structure MVC projects, and my conclusion is to find what's most comfortable for you.
What I usually do is create the following projects:
Core/Domain - here I have my entities/domain model, and anything else that may be shared among layers: interfaces, configuration, settings, and so on.
Data/EF - here I have all my EF-dependent code: DataContext and mappings (EntityTypeConfiguration). Ideally I could create another version of this using, say, NHibernate and MySQL, and the rest of the solution would stay the same.
Service - this depends on Core and Data. I agree that in the beginning it will look like a simple facade over your Data, but as soon as you start adding features you'll find this is the place to add your "service models". I'm not saying ViewModel, as that is quite web-UI related; what I mean by "ServiceModel" is a simpler version of your domain objects. Real example: hiding your CreatedOn and CreatedBy properties. Also, whenever one of your controller's actions grows beyond the quite simplistic, you should refactor and move that logic into the service, returning to the controller only what it really needs.
Web/UI - this will be your web app. It depends on Core and Service.
You didn't mention dependency injection, but you should definitely look into it.
For testing, you can test your Data layer using a SQL Compact provider that re-creates the database for each test, instead of using a full SQL Express instance. This means your DataContext should accept a connectionString parameter. ;)
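A minimal sketch of what that constructor looks like (the SQL Compact connection string below is only an example):

    using System.Data.Entity;

    public class DataContext : DbContext
    {
        // Accepting the connection string lets tests point the context at SQL Compact.
        public DataContext(string nameOrConnectionString)
            : base(nameOrConnectionString)
        {
        }
    }

    // In a test fixture (hypothetical file path):
    // var context = new DataContext("Data Source=|DataDirectory|Test.sdf");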
I've learned a lot seeing big projects source code, like http://www.nopcommerce.com. You could also have a look at http://sharparchitecture.net/ although I bet you already saw that.
Be prepared to have some nightmares with complex object graphs in EntityFramework. ;)
My final advice is: find something specific to do and dive in. Too much abstraction will keep you from starting, and starting is key to practice and understanding.
Related
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations, but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose-coupling and DI. I have also seen others that require the Data Layer dll being put in the presentation layer bin folder and configuring other classes to look there - again hampering the idea of loose-coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this at all, actually, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less), but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies, and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly; let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you can see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram: you will see that there are no arrows pointing to any assembly which doesn't have .Contracts in its name.
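To make it concrete, a tiny sketch of the split (types and namespaces are illustrative):

    // In DataAccess.Contracts.dll: interfaces (and here, for brevity, the model) only.
    namespace MyNamespace.DataAccess.Contracts
    {
        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IUserRepository
        {
            User GetById(int id);
        }
    }

    // In DataAccess.dll: the concrete implementation. No other assembly
    // references this one directly; the DI container wires it up at runtime.
    namespace MyNamespace.DataAccess
    {
        using MyNamespace.DataAccess.Contracts;

        public class EfUserRepository : IUserRepository
        {
            public User GetById(int id)
            {
                // EF-specific data access would live here.
                throw new System.NotImplementedException();
            }
        }
    }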
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the former is a pain for obvious reasons, I am a fan of the second: I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
Of course, the container in a sense bundles all the dependencies that you have kept out of your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling. It simply means that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally work with the auto-discovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
What do you gain? Suppose you've deployed your application and decide to swap the DataAccess layer, and say you've chosen convention-based config for your DI container. What you can do now is open a new project in VS, reference the existing DataAccess.Contracts.dll, and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll, and copy it into your original application's program folder, replacing the old DataAccess.dll. Done: you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. Using IoC and DI really is a tradeoff. I highly recommend being pragmatic in your design decisions: don't interface everything, it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well, I've spent a couple more days on this (which makes around a week in total) and made little further progress. I am fairly sure I had the container set up correctly, with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate: I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and just works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
container.RegisterTypes(
    // Scan loaded assemblies for my DAL repository classes ("xxx" is my root namespace)
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface, // map each class to its matching interface (Foo -> IFoo)
    WithName.Default);                  // use the default (unnamed) registration
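With that registration in place, consumers just declare their dependencies. For example (the repository interface and its GetAll method are from my own code, shown here purely for illustration):

    using System.Web.Mvc;

    public class UsersController : Controller
    {
        private readonly IUserRepository _users;

        // Unity resolves IUserRepository via the convention mapping above.
        public UsersController(IUserRepository users)
        {
            _users = users;
        }

        public ActionResult Index()
        {
            return View(_users.GetAll());
        }
    }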
I am currently building an ASP.NET MVC application, which has been broken down into multiple modules (as well as a generic class library).
I have implemented a Unit Of Work pattern for my first module. This unit of work class contains a number of different repositories.
However, I was wondering whether or not it is a good idea to have a separate Unit of Work class for each module?
Well, EF itself supplies you with Unit of Work and Repository pattern implementations. Usually they are not exactly what you want, and it seems nice to add some methods to the native EF repositories, but in most cases it isn't worth the trouble.
Implementing your own repository based on EF is not a good idea if your project is simple. It adds a lot of work but not much value.
Implementing a Unit of Work based on EF is a completely different story. The only reason I can see to do it is "to have different UoWs for different parts of the solution". Avoid it otherwise, really.
We tried to add both of these approaches, ignoring the prebuilt ones, in our project. It was completely reasonable, because we were designing a modular solution and we didn't even know how many modules we would have at the end. We expected to add new modules to the system while it was already running and heavily loaded. And I can say that it took a lot of time to develop such an application. Realizing that you need access to one more entity from some module leads to changes in several places - the first evidence of an inefficient design.
So, KISS and YAGNI are against it. If you are tangled up in the question "should I add this stuff to my project?" - just don't. You need a good reason to implement these parts yourself, not just some "nice design" bias, because it adds lots of complexity. Even if you think you will need it some day, wait until that day. If you try to estimate which miscalculation would be more disastrous, I am pretty sure it is much easier to add something new to your project than to remove something already existing.
Please see this and this
A unit of work is really just a way of keeping track of a set of entities that have been loaded into memory. Once loaded, we can work with the entities in the normal way: changing state, adding new entities and removing other entities. When we are ready to save our changes we ask the unit of work to commit, and it takes care of “flushing” the pending changes to the underlying database.
Is it a good idea to have a separate Unit Of Work class for each module?
My first thought is: how would a unit of work for one module differ from that of another? If they would differ, they probably shouldn't, because the domain should be persistence-ignorant and the data layer should be business-logic-ignorant.
Take, for instance, the UoW that comes with Entity Framework itself: the context. When you create a context, do stuff, call SaveChanges() and dispose of it, it acts as a UoW. You can use one context class for your whole application. You're not going to program any business logic in your context class, so there is no reason to have a context class per module unless each module uses really distinct parts of the database (which is hardly ever true). The same will hold for a UoW you create yourself.
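In code, that lifecycle is just this (a sketch; the context and Order entity are made-up names):

    public void ShipOrder(int orderId)
    {
        // One unit of work: create the context, track changes, commit, dispose.
        using (var context = new StoreContext())
        {
            var order = context.Orders.Find(orderId); // loaded and tracked
            order.Shipped = true;                     // change state in memory
            context.Orders.Add(new Order());          // add a new entity
            context.SaveChanges();                    // flush all pending changes at once
        }
    }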
It's a bit beyond the scope of your question, but you could ask yourself whether you need your own UoW and repository classes as EF offers basic implementations of both (context and DbSets).
I'm building an MVC3 app, trying to use IoC and constructor injection. My database has (so far) about 50 tables. I am using EF4 (with the POCO T4 template) for my DAC code. I am using the repository pattern, and each table has its own repository. My service classes in my service layer are injected with these repositories.
Problem: My service classes are growing in the number of repositories they need. In some cases, I am approaching 10 repositories, and it's starting to smell.
Is there a common approach for designing repositories and service classes such that the services don't require so many repositories?
Here are my thoughts, I'm just not sure which one is right:
1) This is a sign I should consider combining/grouping my repositories into related sections of tables, reducing the number of dependent repositories per service class. The problem with this approach, though, is that it will bloat and complicate my repositories, and will keep me from being able to use a common interface for all repositories (standard methods for data retrieval/update).
2) This is a sign I should consider breaking my services into groups based on my repositories (tables). The problem with this is that some of my service methods share common implementation, and breaking these across classes may complicate my dependencies.
3) This is a sign that I don't know what I'm doing, and have something fundamentally wrong that I'm not even able to see.
UPDATE: For an idea of how I'm implementing EF4 and repositories, check out this sample app on codeplex (I used version 1). However, looking at some of the comments there (and here), looks like I need to do a bit more reading to make sure this is the route I want to take -- sounds like it may not be.
Chandermani is right that some of your tables might not be core domain classes. This means you would never search for that data except in terms of a single type of parent entity. In those cases you can reference them as "complex types" rather than full-blown entities, and EF will still take care of you.
I am using the repository pattern, and each table has its own repository
I hope you're not writing these yourself from scratch.
EF 4.1 already implements the Repository pattern (DbSet) and the Unit of Work pattern (DbContext). The older versions do too, though the DbContext template can easily be tweaked to provide a clean, mockable implementation by changing those properties to IDbSet.
I've seen several tutorial articles where people still write their own, though. It is strange to me, because they usually don't provide a justification, other than the fact that they are "implementing the Repository Pattern".
Writing wrappers around these with access methods like FindById makes them slightly easier to use, but as you've seen it is a big amount of effort for potentially little payback. Personally, unless I find that there is interesting domain logic or complex queries to be encapsulated, I don't even bother, and just use Linq directly against the IDbSet.
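For example, the tweak amounts to something like this (a sketch; the entity and context names are made up):

    using System.Data.Entity;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Exposing IDbSet instead of DbSet lets a fake context swap in
    // in-memory sets for unit tests.
    public interface IMyContext
    {
        IDbSet<Customer> Customers { get; }
        int SaveChanges();
    }

    public class MyContext : DbContext, IMyContext
    {
        public IDbSet<Customer> Customers { get; set; }
    }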
My service classes in my service layer are injected w/ these repositories.
Even if you choose to use custom query wrappers, you might choose to simply inject the DbContext and let the service code instantiate the wrappers it needs. You'd still be able to mock your data access layer; you just wouldn't be able to mock the wrapper code. I'd still recommend injecting the less generic ones, though, because complex implementation is exactly the type of thing you'd like to be able to factor out during maintenance or replace with mocks.
If you look at the DDD Aggregate Root pattern and try to see your data in this perspective, you will realize that many of the tables do not have an independent existence at all. Their data is only valid in the context of their parent, and most operations on them require you to fetch the parent as well. If you can group such tables and find the parent entity/repository, all the other child repositories can be removed. The complexity of associating parent and child, which until now you would have handled in your business layer (assuming you are retrieving parent and child using independent repositories), now shifts to the DAL.
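A sketch of the idea (hypothetical types): OrderLine has no life of its own, so only the root gets a repository:

    using System.Collections.Generic;

    public class Order
    {
        public int Id { get; set; }
        public ICollection<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public int Id { get; set; }
        public decimal Amount { get; set; }
    }

    // One repository for the aggregate root; there is no IOrderLineRepository,
    // because lines are only ever reached through their parent Order.
    public interface IOrderRepository
    {
        Order GetById(int id); // returns the order together with its lines
    }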
Refactoring the service interface is also a viable option: any common functionality can be moved into a base class and/or can itself be defined as a service which is consumed by all your existing services (Is-A vs Has-A).
@Chandermani has a good point about aggregate roots. Repositories should not necessarily have a 1:1 mapping to tables.
Getting large numbers of dependencies injected in is a good sign your services are doing too much. Follow the Single Responsibility Principle, and refactor them into more manageable pieces.
Are your services writing to all of the repositories? I find that my services line up pretty closely with repositories, in that they provide the business logic around the CRUD operations that the repositories expose.
Well, I'm not sure if that's exactly the right title, but basically I have been having a lot of problems using repositories in MVC applications in such a way that you can substitute one set of repositories, implementing a different data storage technology, for another.
For example, suppose I want to use Entity Framework for my application. However, I also want to have a set of test data implemented in hard-coded Lists. I would like to have a set of interfaces (IUserRepository, IProductRepository, etc. -- let's not talk about a more generic IRepository<T> for now) that both approaches can instantiate. Then, using (say) a Dependency Injection tool such as Ninject or Castle Windsor, I can switch back and forth between the entity framework provider (accessing the actual database) and the test provider (accessing the lists).
In a nutshell, here's the problem:
-- If you are going to use Entity Framework, you want your repositories returning IQueryable<SomeType>.
-- If you are going to use hard-coded lists, you do NOT want your repositories returning IQueryable, because it adds hugely to the overhead; plus, Linq to Entities is significantly different from Linq to Objects, causing many headaches in the code that is common to both providers.
In other words, I have found that the best approach isolates all the EF-dependent code within the repositories, so that the repositories themselves return IEnumerable or IList or some such -- then both EF and some other technology can use the same repositories. Thus, all the IQueryable's would be contained WITHIN the EF repositories. That way, you can use Linq to Entities with the EF repositories, and Linq to Objects with the Test repositories.
Yet this approach puts an enormous amount of the business logic into the repositories, and results in much duplicated code -- the logic has to be duplicated in each of the repositories, even if the implementations are somewhat different.
The whole idea of the repositories as this layer that is very thin and just connects to the database is then lost -- the repositories are "repositories" of business logic as well as of data store connectivity. You can't just have Find, Save, Update, etc.
I've been unable to resolve this discrepancy between needing to isolate provider-dependent code, and having business logic in a centralized location.
Any ideas? If anyone could point me to an example of an implementation that addresses this concern, I would be most appreciative. (I've read a lot, but can't find anything that specifically talks about these issues.)
UPDATE:
I guess I'm starting to feel that it's probably not possible to have repositories that can be swapped out for different providers -- that if you are going to use Entity Framework, for example, you just have to commit your whole application to Entity Framework. Unit tests? I'm struggling with that. My practice to this point has been to set up a separate repository with hard-coded data and use that for unit testing, as well as to test the application itself before the database is set up. I think I will have to look to a different solution, perhaps some mocking tool.
But then that raises the question of why use repositories, and especially why use repository interfaces. I'm working on this. I think determining the best practice is going to take a bit of research.
What can I say? Welcome to the club...
What you have found is a problem encountered by many developers who followed the "repository boom" with EFv4. Yes, it is a problem, and the problem is really complex. I have discussed it several times:
ASP.NET MVC 3 and Entity Framework code first architecture
Organizationally, where should I put common queries when using Entity framework
A separate topic is why to use repositories at all:
Generic repository, what is the point
Basically, your proposed way is a solution, but do you really want it? In my opinion the result is not a repository but a Data Access Object (DAO) exposing plenty of access methods. The repository definition by Martin Fowler is:
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes. Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers.
I believe exposing IQueryable fulfils this 100 times better than creating a public interface similar to the repositories from the stored-procedures era - one access method per stored procedure (fixed query).
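In other words, instead of one fixed method per query, the interface exposes the set itself (a sketch):

    using System.Linq;

    public interface IRepository<T> where T : class
    {
        // Clients compose query specifications declaratively, as Fowler describes.
        IQueryable<T> Query();
        void Add(T entity);
        void Remove(T entity);
    }

    // Usage (hypothetical Order type):
    // var overdue = repository.Query().Where(o => o.DueDate < DateTime.Today).ToList();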
The problem can be summarized by the law of leaky abstractions: IQueryable is an abstraction of the database query, but the features provided by IQueryable depend on the provider. Different provider = different feature set.
What is the conclusion? Do you want such an architecture because of testing? In that case, start using integration tests as proposed in the first two linked answers, because in my opinion that is the least painful way. If you go with your proposed approach, you should still use integration tests to verify that your repositories hide all the EF-related logic and queries.
My controllers turn the request over to the appropriate service. The service then makes calls to various repositories. Repositories use Linq to Sql entities purely for data access and then map and return Domain Objects. The service then decides what the controller will present, replacing the DOs with Presentation Objects, which are returned to the controller to display in the view.
So I have: Services - Repositories - Domain Objects - Presentation Objects.
I am asking because it seems like I have a lot of objects, some not doing anything but passing data. Is this a reasonable scenario, or am I not following the proper MVC pattern?
Yes, you've got the right idea. It can be a lot of classes and interfaces (not even counting the unit tests and mock/test classes), but if you have a decent-sized application you're liable to have many anyway. But to start out, it's a lot of work for not much initial gain.
I have seen projects skip some of the service implementations for basic services that just pass through to the repository without any value added by the service. They go straight to the repository from the controller and don't seem to lose much.
There are other ways to ease the burden of some classes by using tools where possible. For example, projects like AutoMapper can help streamline your domain object to view model mappings.
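For example, with AutoMapper's classic static API (ProductViewModel is a made-up type), two lines can replace a hand-written mapping class:

    using AutoMapper;

    // One-time configuration, e.g. in Application_Start
    // (this is the old static API from the MVC3/4 era):
    Mapper.CreateMap<Product, ProductViewModel>();

    // Later, in a service or controller:
    var viewModel = Mapper.Map<ProductViewModel>(product);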
If your application is big enough, your pattern makes sense. Otherwise, I smell overengineering...
Ask yourself: what would happen if this layer didn't exist? The answer will tell you whether it's true or not.
I had a very similar scenario. Initially my project had UI, Controllers, a Service Layer and Repositories. My unit tests covered both the service layer and the repositories (filters), and in some cases the unit tests were doing the same thing (as the service layer was sometimes a pass-through to the repositories).
Due to a large refactor I cut out the service layer, which gave me a lot of flexibility, with the Controllers dealing directly with the Repositories and applying Filters to get exactly what I want.
One problem I ran into was that you cannot serialize Linq2Sql objects, and therefore I sometimes had to translate these objects into presentation objects.