I'm interested in any frameworks/patterns I could leverage to build an object-to-object mapping tool. My goal is to use EF to hydrate the object, and I would like to use a custom class that defines the transformation from ObjectA to ObjectB (similar to AutoMapper or a BizTalk map, but in .NET). I would think this is a common scenario, and I'd like to leverage any best practices/frameworks anyone has used in the past. Any help is greatly appreciated!
You should have a look at this tool: Automapper Verbatim. It's not the same as the AutoMapper you'd find on Codeplex. Unlike the one on Codeplex, Automapper Verbatim generates code rather than using reflection, so it's much faster.
Updates/Bug fixes seem to be posted on a regular basis.
We use this tool on several projects and it saves us a lot of time.
One piece of advice though: do not hesitate to split your mappings across several .map files (for performance and readability reasons); putting them in the same directory will let you re-use mappings from one file in another. Splitting .map files will also help you avoid merge issues (if you work in a team). Be sure to always get the latest version of the .map file you'll be working on, otherwise merging will become a nightmare :-)
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling and testability. I'm happy to take advice on improvements if any of this seems unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles, questions and answers I've read over the last few days, nothing actually addresses this in a way that fits my structure. Almost all are outdated and/or are simple console-application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defeating the point of loose coupling and DI. I have also seen others that require the Data Layer DLL to be put in the Presentation Layer's bin folder and other classes configured to look there - again hampering the idea of loose coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this at all, and would rather have the object injected into the model, but there may be reasons for this, as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for which interfaces.
There are countless ways of dealing with this, so what I give you is my personal best practice. I've been working this way for quite a while now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly; let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
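To make the dependency directions concrete, here is a minimal sketch (the repository and service names are invented for illustration; each namespace would live in its own project):

    // DataAccess.Contracts.dll - interfaces only, no implementation details
    namespace DataAccess.Contracts
    {
        public interface ICustomerRepository
        {
            string GetCustomerName(int id);
        }
    }

    // DataAccess.dll - references DataAccess.Contracts.dll and implements it
    namespace DataAccess
    {
        public class EfCustomerRepository : Contracts.ICustomerRepository
        {
            public string GetCustomerName(int id)
            {
                return "stub";   // the actual EF query would go here
            }
        }
    }

    // Services.dll - references only DataAccess.Contracts.dll; the concrete
    // repository is supplied by the DI container at runtime
    namespace Services
    {
        public class CustomerService
        {
            private readonly DataAccess.Contracts.ICustomerRepository _repository;

            public CustomerService(DataAccess.Contracts.ICustomerRepository repository)
            {
                _repository = repository;
            }
        }
    }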
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second: I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
Of course, the container kind of bundles all the dependencies which you have avoided in your application architecture. To make clear what I mean, consider an XML config for your case: it would contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling; it is simply clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
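As a minimal illustration of the attribute-based style (the service here is invented; in a real multi-assembly setup a DirectoryCatalog pointed at the bin folder would replace the AssemblyCatalog):

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IGreetingService { string Greet(string name); }

    // The Export attribute ties the implementation to its contract;
    // this class would normally live in the implementation assembly.
    [Export(typeof(IGreetingService))]
    public class GreetingService : IGreetingService
    {
        public string Greet(string name) { return "Hello, " + name; }
    }

    class Program
    {
        static void Main()
        {
            // Composition root: build a catalog of parts and resolve
            // the contract without referencing the concrete type.
            using (var catalog = new AssemblyCatalog(typeof(Program).Assembly))
            using (var container = new CompositionContainer(catalog))
            {
                Console.WriteLine(container.GetExportedValue<IGreetingService>().Greet("world"));
            }
        }
    }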
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say, you've chosen convention based config of your DI container. What you can do now is to open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy and paste it to your original application's program folder, replacing the old DataAccess.dll. Done, you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. Using IoC and DI really is a trade-off. I highly recommend being pragmatic in your design decisions: don't interface everything, it just gets messy. Decide for yourself where DI and IoC really make sense and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC container. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and works a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS-supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
    // Register every class in the DAL repository namespace against its
    // matching interface (IFoo -> Foo) as the default, unnamed mapping.
    container.RegisterTypes(
        AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
        WithMappings.FromMatchingInterface,
        WithName.Default);
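Consumption is then plain constructor injection (IOrderRepository stands in for whatever interface lives in the mapped namespace; the Unity bootstrapper for MVC wires the dependency resolver up for you):

    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        private readonly IOrderRepository _orders;

        // Unity supplies the implementation registered by the convention
        // above, so MVC no longer needs a parameterless constructor.
        public OrdersController(IOrderRepository orders)
        {
            _orders = orders;
        }
    }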
I'm building an MVC4 app. I've used EF5 model-first and kept it pretty simple. This isn't going to be a huge application; there will only ever be 4 or 5 people on it at once, and all users will be authenticated before being able to access any part of the application. It's very simply a place-order / dispatcher-sees-order / dispatcher-completes-order sort of application.
Basically my question is: do I need to be worrying about repositories and ViewModels if the size and scope of my application is so small? Any view that is strongly typed to a domain entity is using all of the properties within that entity. I'm using TryUpdateModel in my controllers and have read some things saying this can cause a lot of problems, but not a lot of information on exactly what those problems can be. I don't want to use an incredibly complicated pattern for a very simple app.
Hopefully I've given enough detail, if anyone wants to see my code just ask, I'm really at a roadblock here though, and could really use some advice from the community. Thanks so much!
ViewModels: Yes
I see only downsides to passing EF entities directly to a view:
You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment (see the sketch after this list)
It becomes very easy to accidentally lazy load extra data from your view, resulting in select N+1 problems
In my personal opinion, a model should closely resemble the information displayed on the view, and in most cases (except for basic CRUD stuff) a view contains information from more than one entity
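As a rough illustration of the over-posting point (the entity and its properties are invented for the example):

    // Domain entity persisted by EF: exposes a field users must not set.
    public class Order
    {
        public int Id { get; set; }
        public string Notes { get; set; }
        public bool IsPaid { get; set; }   // must never come from user input
    }

    // View model: only the fields the view actually edits, so default
    // model binding cannot silently overwrite IsPaid.
    public class EditOrderViewModel
    {
        public int Id { get; set; }
        public string Notes { get; set; }
    }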
Repositories: No
The Entity Framework DbContext already is an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories.
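To see why, consider what the context already gives you (a minimal sketch; ShopContext and Order are invented names, assuming a ShopContext : DbContext with a DbSet<Order> property):

    public void MarkDispatched(int orderId)
    {
        using (var db = new ShopContext())
        {
            // DbSet<T> already behaves like a repository...
            Order order = db.Orders.Find(orderId);
            order.Notes = "dispatched";

            // ...and the context is the unit of work: every tracked
            // change is flushed to the database in one commit here.
            db.SaveChanges();
        }
    }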
In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Repository pattern and ViewModels are both powerful, and not very difficult or time-consuming to implement. I would suggest using them even for small projects.
Yes, you still want to use a repository and view models. Both of these tools allow you to place code in one place instead of all over the place, and will save you time. More than likely, they will save you copy-paste errors too.
Moreover, having these tools in place will make it easier to expand the system in the future, instead of having to pore through code with poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers / code sections. View models and a repository are not heavily intrusive to implement. It is not like you are going to implement a controller factory or dependency injection.
I am currently building an ASP.NET MVC application, which has been broken down into multiple modules (as well as a generic class library).
I have implemented a Unit Of Work pattern for my first module. This unit of work class contains a number of different repositories.
However, I was wondering whether or not it is a good idea to have a separate Unit of Work class for each module?
Well, EF supplies you with Unit of Work and Repository pattern implementations itself. Usually they are not exactly what you want, and it may seem nice to add some methods to those native EF repositories, but in most cases it isn't worth the trouble.
Implementing your own Repository based on EF is not a good idea if your project is simple. It adds a lot of work but not much value.
Implementing a Unit of Work based on EF is a completely different story. The only reason I can see to do it is "to have different UoWs for different parts of the solution". Avoid it otherwise, really.
We tried to add both of these approaches, ignoring the prebuilt ones, in our project. It was completely reasonable because we were designing a modular solution and we didn't even know how many modules we would have in the end. We expected to add new modules to the system while it was already running and under heavy load. And I can say that it took a lot of time to develop such an application. Realizing that one module needs access to one more entity, and having that lead to changes in several places, is the first evidence of inefficient design.
So, KISS and YAGNI are against it. If you are tangled up in the question "should I add this stuff to my project" - just don't. You need a good reason to implement these parts yourself, not just some "nice design" bias, because it adds lots of complexity. Even if you think you will need it some day - wait until that day. If you try to estimate which miscalculation would be more disastrous, I am pretty sure it is much easier to add something new to your project than to remove something that already exists.
Please see this and this
A unit of work is really just a way of keeping track of a set of entities that have been loaded into memory. Once loaded, we can work with the entities in the normal way: changing state, adding new entities and removing other entities. When we are ready to save our changes, we ask the unit of work to commit, and it takes care of “flushing” the pending changes to the underlying database.
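In interface form, the pattern boils down to something like this (a sketch of the classic shape, not a prescription):

    using System;

    public interface IUnitOfWork : IDisposable
    {
        void RegisterNew(object entity);       // track an added entity
        void RegisterDirty(object entity);     // track a modified entity
        void RegisterDeleted(object entity);   // track a removed entity
        void Commit();                         // flush all pending changes to the database
    }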
Is it a good idea to have a separate Unit Of Work class for each module?
My first thought is: how would a unit of work for one module differ from that of another? If they do, they probably shouldn't, because the domain should be persistence ignorant and the data layer should be business logic ignorant.
Take for instance the UoW that comes with Entity Framework itself: the context. (When you create a context, do stuff, call SaveChanges() and dispose of it, it acts as a UoW.) You can probably use one context class for your whole application. You're not going to program any business logic in your context class. So there is no reason to have a context class per module, unless each module uses really distinct parts of the database (which is hardly ever true). The same holds for a UoW you create yourself.
It's a bit beyond the scope of your question, but you could ask yourself whether you need your own UoW and repository classes as EF offers basic implementations of both (context and DbSets).
Note: This is a follow-up question for this previous question of mine.
Inspired by this blog post, I'm trying to construct a fluent way to test my EF4 Code-Only mappings. However, I'm stuck almost instantly...
To be able to implement this, I also need to implement the CheckProperty method, and I'm quite unsure how to save the parameters in the PersistenceSpecification class and how to use them in VerifyTheMappings.
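For context, the skeleton I'm aiming at is roughly the following (modelled on Fluent NHibernate's API shape; the way the checks are stored is exactly the part I'm guessing at):

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;

    public class PersistenceSpecification<T> where T : class, new()
    {
        private class PropertyCheck
        {
            public Expression<Func<T, object>> Property;
            public object Value;
        }

        // One option: remember each property/value pair so that
        // VerifyTheMappings can replay them later.
        private readonly List<PropertyCheck> _checks = new List<PropertyCheck>();

        public PersistenceSpecification<T> CheckProperty(
            Expression<Func<T, object>> property, object value)
        {
            _checks.Add(new PropertyCheck { Property = property, Value = value });
            return this;   // enables fluent chaining
        }

        public void VerifyTheMappings()
        {
            // Create a T, assign each stored value via its expression,
            // persist and reload it through the context, then assert
            // that the round-tripped values are equal.
        }
    }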
Also, I'd like to write tests for this class, but I'm not at all sure on how to accomplish that. What do I test? And how?
Any help is appreciated.
Update: I've taken a look at the implementation in Fluent NHibernate's source code, and it seems like it would be quite easy to just take the source and adapt it to Entity Framework. However, I can't find anything about modifying and using parts of the source in the BSD license. Would copy-pasting their code into my project, and changing whatever I want to suit my needs, be legal for non-commercial private or open-source projects? Would it be for commercial projects?
I was going to suggest looking at how FluentNH does this, until I got to your update. Anyway, you're already investigating that approach.
As to the portion of your question regarding the BSD license, I'd say the relevant part of the license is this: Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: [conditions follow].
From my reading of that line, you can modify (which would include the removal of any code not relevant to your use cases) the code however you wish, and redistribute it as long as you meet the author's conditions.
Since there are no qualifications on how you may use or redistribute the code or binaries, then you are free to do that however you wish, for any and all applications.
Here and here are descriptions of the license in layman's terms.
I always write a simple set of integration tests for each entity. The tests persist, select, update and delete the entity. I think there is no better or easier way to test your mappings and other features of the model (like cascade deletes).
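One such round-trip test might look roughly like this (MSTest syntax against a DbContext-style API; the context and entity names are invented):

    [TestMethod]
    public void Customer_mapping_round_trips()
    {
        int id;
        using (var db = new ShopContext())
        {
            var customer = new Customer { Name = "Alice" };
            db.Customers.Add(customer);
            db.SaveChanges();   // INSERT exercises the mapping
            id = customer.Id;
        }

        // A fresh context guarantees we hit the database, not the cache.
        using (var db = new ShopContext())
        {
            var loaded = db.Customers.Find(id);
            Assert.IsNotNull(loaded);
            Assert.AreEqual("Alice", loaded.Name);

            db.Customers.Remove(loaded);
            db.SaveChanges();   // DELETE exercises cascades
        }
    }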
So I have a layered ASP.NET MVC proof-of-concept application with good separation between presentation concerns, business logic, and infrastructure concerns. Right now it is operating off of a fake repository (i.e. LINQ queries against static IQueryable objects). I would like to create a functional SQL repository now.
That said, I don't want to simply tie it into a database that has a 1-1 mapping between tables and entities. That wouldn't meet the business need I am hoping to solve (partial integration with existing database - no hope for convention over configuration).
Do you have suggestions for which ORM / mapping tools I should consider and/or avoid?
Do you have suggestions for articles/books I could look at to help me approach this topic?
Would it be better to simply use parameterized queries in this scenario?
Entity Framework in version 4 would definitely allow you to:
have a mapping between the physical database schema and your conceptual schema, e.g. having an entity mapped to several tables, or several tables joined together forming a single business entity
grab data from views (instead of tables directly)
use stored procedures (where needed and appropriate) for INSERT, UPDATE, DELETE on every entity (see the sketch after this list)
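For instance, once a stored procedure has been mapped as a function import in the EDMX designer, calling it is just a method call on the context (all names here are hypothetical; this is a fragment, not a full program):

    using (var context = new ShopEntities())
    {
        // GetOrdersForCustomer is a hypothetical function import mapped to
        // a stored procedure; EF materializes the rows into Order entities.
        List<Order> orders = context.GetOrdersForCustomer(customerId).ToList();
    }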
NHibernate sounds like a good fit for what you are looking for. You will be able to have your repositories issue queries either in HQL or through the API; either way you can get to your database and shape the data to fit the way your repository is being used. It will always be hard to make a square peg fit into a round hole, though. SO has lots of nice support when you get into using NHibernate. Good luck.
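A repository method using HQL might look something like this sketch (session management and entity names are illustrative; assume an ISessionFactory field is available):

    public IList<Order> GetOrdersForCustomer(int customerId)
    {
        using (ISession session = _sessionFactory.OpenSession())
        {
            // HQL queries the object model, not the tables, so the mapping
            // decides how this translates to SQL.
            return session
                .CreateQuery("from Order o where o.Customer.Id = :customerId")
                .SetParameter("customerId", customerId)
                .List<Order>();
        }
    }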
As you mentioned in the question, the choice of an ORM is very debatable. Different people will have different project needs. I am not exactly sure what will take priority for you. Here is what I have tried myself.
NHibernate seems to be the most commonly used ORM in .NET projects. I feel it suffers from a typical open-source problem: it offers so many features, but the documentation really sucks. If you have lots of time at your disposal, you can give it a shot.
Another option is to go for something like Entity Framework. It's very easy to set up and get up and running. With version 4.0 and the CTP there is provision for code-first as well as fluent mapping and configuration. Since you have said you want to keep the domain model separate, EF4 will help you because it has the notion of a conceptual model, which is an abstraction over the mapping layer.
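To give a flavour of the fluent mapping style (written against the DbContext API shape that grew out of that CTP; the entity and table names are invented):

    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public string Notes { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Map the conceptual Order entity onto a legacy table whose
            // name doesn't follow convention.
            modelBuilder.Entity<Order>()
                .HasKey(o => o.Id)
                .ToTable("tbl_legacy_orders");
        }
    }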
You can refer to the links below for blogs I have written based on my experience:
http://nileshgule.blogspot.com/2010/08/entity-framework-hello-world.html
http://nileshgule.blogspot.com/2010/09/nhibernate-code-first-approach-with.html
http://nileshgule.blogspot.com/2010/09/entity-framework-first-query-using.html
http://nileshgule.blogspot.com/2010/09/entity-framework-learning-series.html