I was recently listening to an episode of the Herding Code podcast where Julie Lerman was the guest. They were discussing the changes made in EF4 and got onto the subject of self-tracking entities. Julie said that self-tracking entities are not perfect, that they were made for WCF, and that they wouldn't work with ASP.NET. This is my main question. Of course, they didn't give any more details than that. Works for WCF but not for ASP.NET? What exactly does that mean? What happens when you send self-tracking entities to the presentation layer and the presentation layer is ASP.NET? She also said they weren't very interoperable, and that if your presentation layer was on .NET Framework 3.5 you could get it to work, but it would take a little extra work.
She goes on to say that the POCO template is the perfect solution and that you can manage your "change type" notifications with classes designed from this template. Is there some kind of mechanism to track these notifications, or is it something you're expected to handle manually?
So she made it sound like she really liked the POCO template, but that the self-tracking entities solution was limited.
I know this has been sort of answered in various posts, but since I'm using VS2012 with MVC 4, I wonder whether there is a more up-to-date method or new ways of doing things.
I have a large enterprise application consisting of 22 projects. It has a large, complex business object/logic project and multiple presentation layers. I am working on a new presentation layer using MVC 4, and I have never used MVC at this scale before.
Here are my questions:
How do people handle the model in this scenario? All the Microsoft examples are so simple.
I have seen posts about auto-mappers recommending that devs use simple models and extract from the BO layer, but some of these tools, such as AutoMapper, seem to have gone idle. Is there a library in MVC that does that now?
I'm just trying to figure out best practices before I get started; it seems I usually figure them out after the fact.
I break my MVC apps into several different projects.
AppName.Configuration: to handle any configuration of the app (i.e. pulling in web.config/app settings, etc)
AppName.Data: this is the data layer where all DB access is performed (no business logic). The DBML/EDMX lives here, my repository class(es) live here as well.
AppName.Models: this is where all of my ViewModels are defined for MVC, as well as other model objects needed throughout the application.
AppName.Services: This is my business layer; everything must pass through here to get to the data layer or the presentation layer. ViewModels are constructed from the database objects, data validation happens here, etc.
AppName.Web: this would be the MVC application.
AppName.Data.Test: Unit tests for Data app
AppName.Services.Test: Unit tests for the services
AppName.Web.Test: Unit tests for the MVC controllers
AppName.Web.UI.Test: Unit tests for the web user interfaces (using WATIN)
I don't use any auto-mappers, as I clearly define a specific ViewModel for each view of the application. If it isn't needed for the view, it doesn't go in there.
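To give a rough idea of what that looks like in practice, here is a minimal sketch of a view-specific ViewModel and the service method that builds it; the type names (UserSummaryViewModel, IUserRepository, the User properties) are all hypothetical, not from a real project:

```csharp
using System.Linq;

// AppName.Models - a ViewModel shaped for exactly one view, nothing more
public class UserSummaryViewModel
{
    public int Id { get; set; }
    public string FullName { get; set; }
    public int OpenOrderCount { get; set; }
}

// AppName.Services - the business layer builds the ViewModel from data-layer objects
public class UserService
{
    private readonly IUserRepository _users;   // repository living in AppName.Data (assumed)

    public UserService(IUserRepository users)
    {
        _users = users;
    }

    public UserSummaryViewModel GetUserSummary(int id)
    {
        var user = _users.GetById(id);         // persistence object from the data layer

        // only the fields this particular view needs are copied across
        return new UserSummaryViewModel
        {
            Id = user.Id,
            FullName = user.FirstName + " " + user.LastName,
            OpenOrderCount = user.Orders.Count(o => !o.IsClosed)
        };
    }
}
```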
Most MVC examples are so basic that they show everything in the web app (data, models, business logic in the controllers, etc.).
I hope this helps.
I am developing a medium sized ASP.NET project using ASP.NET MVC and Entity Framework. I have developed a 3-tier system by setting up 3 Visual Studio projects and referencing them accordingly:
Presentation -- This is my MVC project and contains all the views and controllers. I deleted the model folder completely from this project as I am moving it to the BO project (see below)
Business Objects (BO) -- This project contains the "meat" of the application, and is where the real heart of the application is located. Here, objects are defined that represent things I'm trying to model in code (User, Facility, Appointment, etc.).
Data Access (DA) -- This project is all Entity Framework so far.
The "problem" that I am having is that I am doing a lot of manual one-to-one mapping in the BO. For example, when a User.load() is called, I load the user from EF, then map a number of parameters (firstname, lastname, username, active, etc.) from the EF result to parameters on the object.
I see this as both good and bad. Good: it decouples EF from the project, so if I ever need to use another data store I'm not tied to EF. Bad: it takes a little more time, because I have to set up each property and carefully handle it on Add(), Update(), etc. by implementing my own change tracking.
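Roughly, the kind of mapping I mean looks like this (a simplified sketch; "ClinicEntities" and the property names are just placeholders for whatever the real context and entity expose):

```csharp
using System.Linq;

// BO project - User.Load() pulls the EF entity and copies each field by hand
public partial class User
{
    public void Load(int userId)
    {
        // "ClinicEntities" stands in for the actual EF context in the DA project
        using (var context = new ClinicEntities())
        {
            var efUser = context.Users.First(u => u.Id == userId);

            // one-to-one manual mapping from the EF result to the business object
            FirstName = efUser.FirstName;
            LastName  = efUser.LastName;
            Username  = efUser.Username;
            Active    = efUser.Active;
        }
    }
}
```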
What do you think of this approach?
it disconnects EF from the project
Which is indeed good.
I am doing a lot of manual one-to-one mapping in the BO
I suggest you take a look at AutoMapper.
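For example, with AutoMapper's classic static API (newer versions use a MapperConfiguration instance instead, but the idea is identical), the hand-written property copying collapses to something like this; UserEntity, User, context and userId are placeholder names:

```csharp
// configure once, at application startup
Mapper.CreateMap<UserEntity, User>();          // EF entity -> business object

// then, wherever a load happens, one call replaces the manual copying
var efUser = context.Users.First(u => u.Id == userId);
User bo = Mapper.Map<User>(efUser);            // FirstName, LastName, Username, Active, ...
```

Matching property names are mapped by convention, so only the exceptions need explicit configuration.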
I find the book ASP.NET MVC in Action from Manning quite good. The second edition, recently released, also includes a small chapter on AutoMapper. It's not in the free sample chapters, but you might want to check out the source code (or buy the book, of course).
If you are using .NET 4.0, then you should definitely consider creating and using POCO entities instead of EntityObject. This not only gives you persistence ignorance (which you mentioned), but also means you won't need any mapper in between, since you work with POCOs (Plain Old CLR Objects) in all layers, including your data access.
If you haven't worked with EF and POCOs before, then this would be a good start:
http://blogs.msdn.com/b/adonet/archive/2009/05/21/poco-in-the-entity-framework-part-1-the-experience.aspx
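For illustration, a POCO entity in this approach is just a plain class with no EF base type or attributes, so the same type can travel through every layer; the names below are made up:

```csharp
using System;

public class Patient
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// No EntityObject base class, no EF attributes, no EF reference at all
public class Appointment
{
    public int Id { get; set; }
    public DateTime ScheduledOn { get; set; }
    public string Notes { get; set; }

    // navigation property; virtual lets EF create lazy-loading proxies if you enable them
    public virtual Patient Patient { get; set; }
}
```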
If you are using .NET 3.5 SP1 and CANNOT upgrade to 4.0, then, as XIII correctly mentioned, AutoMapper will automate your mapping process, or you can write your own auto-mapper, which is nothing more than a bit of simple reflection code.
Hi, I've been given the task of creating an n-tier website using Entity Framework 4 and am coming up against some brick walls, more than likely due to gaps in my knowledge.
My plan so far was to have these layers:
Website (the application layer)
Name.Framework (what I'm calling the BLL)
Name.Data (the DAL)
Name.Entities (contains POCO classes and other structs/classes used in the website/BLL)
Name.Common (utility classes)
I've tried to use the repository pattern, but am struggling to make things work the way I thought they would. Below are a few examples of where I'm getting stuck.
If I want to use .Include(), should this be in my repository, or is it the responsibility of the business layer? (I have no idea how this would work in the BLL.)
Same question for .OrderBy(). As I understood it, this would need to be in the repository, or at least passed into the repo in some way?!
Should I be using the BLL to pass the context into the repository/data layer? At the moment, when I get an entity from the data layer, any navigation properties that weren't referenced in the repo just come back with an 'ObjectContext disposed' error. Should the business layer still hold the context so that this wouldn't happen?
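For reference, the rough shape of the repository I've been attempting looks something like this (heavily simplified; I'm passing Include paths in as strings so the caller decides what to eager-load while the repository decides how):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> where T : class
{
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate, params string[] includePaths);
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly ObjectSet<T> _set;

    // the context is created and owned outside the repository - question 3 is really
    // about who should own it and keep it alive while navigation properties are used
    public Repository(ObjectContext context)
    {
        _set = context.CreateObjectSet<T>();
    }

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate, params string[] includePaths)
    {
        ObjectQuery<T> query = _set;
        foreach (var path in includePaths)
            query = query.Include(path);       // e.g. "Orders.OrderLines"

        // ToList() runs the query while the context is still alive, so eagerly
        // loaded navigation properties remain usable after the context is disposed
        return query.Where(predicate).ToList();
    }
}
```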
Or, to summarize: HELP!!!
I need to have this in some sort of order by tomorrow (eek!), as the project leader wants to know whether we are going to continue with Entity Framework or move to NHibernate, since in-house we have more knowledge of it.
Thanks for any help or suggestions
Matt
Looking for something similar myself, I found this. I haven't looked into it too much yet, but it looks promising.
I'm currently working on a web hobby project with EF4 Code-Only, where I have the following structure ([name] being the name of my project):
[name].Web - An ASP.NET MVC 2 project
[name].Web.Models - Custom view models, along with AutoMapper mappings from my entity objects
[name].Models - My POCO classes, and interfaces for repositories
[name].DataAccess - Some interfaces related to data access, for example IUnitOfWork
[name].DataAccess.EF - All Entity Framework related classes and interfaces
I also have a test project for each of the above, plus a couple of projects with helpers and extensions for the tests.
It might be relevant to mention that part of the purpose of this hobby project is for me to learn how to use EF4 with some design patterns of my own choice (the ones that concern EF in this project are the Repository Pattern and the Unit of Work pattern). Another partial purpose is to build up a code base that I can re-use in later projects, and this has affected the division between projects in my application - for example, if I wasn't concerned with re-use, I'd probably have all data access related classes in one project instead of two.
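In case a concrete picture helps, the interfaces in [name].Models and [name].DataAccess boil down to something like the sketch below; "Member" and the member names are only illustrative, not my actual domain:

```csharp
using System;
using System.Collections.Generic;

// [name].Models - a POCO and its repository contract; only POCOs go in and out
public class Member
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IMemberRepository
{
    Member GetById(int id);
    IEnumerable<Member> GetAll();
    void Add(Member member);
    void Remove(Member member);
}

// [name].DataAccess - the unit of work groups repositories and owns the commit;
// the EF implementation in [name].DataAccess.EF wraps the context's SaveChanges()
public interface IUnitOfWork : IDisposable
{
    IMemberRepository Members { get; }
    void Commit();
}
```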
I've implemented a basic EF, POCO, Repository, Unit of Work architecture, largely following the article here:
http://devtalk.dk/CommentView,guid,b5d9cad2-e155-423b-b66f-7ec287c5cb06.aspx
I found it to be very helpful in these endeavors. I don't know if it helps you, but others might be interested in the link.
In my first ASP.NET MVC applications, the model was a simple O/R mapping between a table and the classes, managed by the Entity Framework.
Now I would like to add some meat to this skeleton and introduce business methods for the generated classes. What is the recommended approach to this in ASP.NET MVC (with Entity Framework)? My favorite would be a solution which can also be used in a service layer, with no ASP.NET MVC references, so that the same domain logic could also be reused in a desktop client.
Technically, I think it should be possible to extend the generated classes in a way which preserves the additional business logic even if the O/R classes need to be refreshed. (This is more a question related to the Entity Framework however.)
Edit: Many thanks for the contributions, and for the information about the next version of Entity Framework (4.0). Building two sets of classes, one auto-generated to represent the data in the persistence layer and one for the actual business logic, sounds interesting.
Within ASP.NET MVC, the model is the least clearly defined part. In my opinion, it's basically the rest of your application (i.e. anything not related to the view or the controller). The O/R mapping part of your application should probably be outside the "model" as well, as this is more of a data layer. The model should really deal in business objects and create views of your data to pass to the view.
There are lots of differing opinions on this, but I think it's best not to think of ASP.NET MVC as a traditional MVC architecture.
If you are using EF v1.0 right now, the Entity Framework is very intrusive in your application, which means that you cannot create POCOs very easily. The way you can extend your model is by using partial classes, so when you refresh your model, the partial class you wrote will still be valid. The Entity Framework team realizes this is a problem and has improved it in the next version (EF v4.0).
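For example, if the designer generates a Customer entity, your business methods go into a second file that the designer never touches, so a model refresh does not wipe them out; Customer and the rule below are purely illustrative:

```csharp
// Customer.Business.cs - your own file; the EF designer owns the other half of this
// partial class (the generated properties) and regenerates it on every model refresh
public partial class Customer
{
    public bool IsEligibleForDiscount()
    {
        // Orders is a navigation property defined in the designer-generated half
        return Orders.Count > 10;
    }
}
```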
NHibernate is much friendlier and allows you to extend your business logic easily.
I really think this blog post by Jeremy D. Miller is very good at pointing out the problem.
Abstract your business layer out into another project, then pass an instance of it to your MVC controller using something like StructureMap. You can then call this business layer from your controller to retrieve your business entities (the model) and pass them on to the UI. This will allow you to reuse your business layer in your desktop application.
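A minimal sketch of that wiring is below; the container registration is omitted, and IProductService, Product and the controller are placeholder names rather than a real API:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Business layer project - no MVC references, so it can also back a desktop client
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductService
{
    IList<Product> GetFeaturedProducts();
}

// MVC project - the controller only orchestrates; the business layer does the work
public class ProductsController : Controller
{
    private readonly IProductService _products;

    // StructureMap (or any IoC container) supplies the implementation at runtime
    public ProductsController(IProductService products)
    {
        _products = products;
    }

    public ActionResult Featured()
    {
        return View(_products.GetFeaturedProducts());
    }
}
```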
Not only meat but also some clothes and a bit of style could be added to this project to make it look chic. It depends on the time you have for the project. If you have time, I would suggest you take a look at TDD and the frameworks that can be used with it, such as Castle, NUnit, Moq, etc.
As you mentioned, a service layer is a must for any project, but with these kinds of frameworks you can design a more robust architecture.
I've been reading a few things about ASP.NET MVC, SOLID, and so on, and I am trying to figure out a simple "recipe" for small-to-medium ASP.NET MVC apps that puts these concepts together. The issue I am most concerned with is ending up with controllers that are too complex and behave like code-behind files in WebForms, with all sorts of business logic in them.
I am considering the following architecture, for a small data-driven app:
Controllers: only handle requests, call an appropriate service and return the action result to the View;
Models: POCOs that handle all the business logic, authorization, etc. They depend on repositories and are totally ignorant of the persistence infrastructure.
Repositories: implement IRepository<T>, use dependency injection, and are where my DB code will reside; they only receive and return POCOs.
I am considering having services between the controllers and the models, but if they would just forward method calls I am not sure how useful they would be.
Finally, there should be unit tests covering the model code, and unit and integration tests covering the repository code (following the "red-green" practice, if possible). A small sketch of the shape I have in mind follows.
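To make the recipe concrete, here is roughly what I mean; Order, IRepository<T> and the controller are placeholder names, and the DI container setup is left out:

```csharp
using System;
using System.Web.Mvc;

// Repositories: persistence code hidden behind IRepository<T>, POCOs in and out
public interface IRepository<T> where T : class
{
    T GetById(int id);
    void Add(T entity);
}

// Model: a POCO that owns the business rule, ignorant of the persistence infrastructure
public class Order
{
    public int Id { get; set; }
    public bool IsPaid { get; private set; }

    public void MarkPaid()
    {
        if (IsPaid) throw new InvalidOperationException("Order is already paid.");
        IsPaid = true;
    }
}

// Controller: handles the request, delegates, returns the action result - no business rules here
public class OrdersController : Controller
{
    private readonly IRepository<Order> _orders;   // supplied by the DI container

    public OrdersController(IRepository<Order> orders)
    {
        _orders = orders;
    }

    [HttpPost]
    public ActionResult Pay(int id)
    {
        var order = _orders.GetById(id);
        order.MarkPaid();                          // business logic stays in the model
        return RedirectToAction("Details", new { id });
    }
}
```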
Thoughts?
Ian Cooper had a good post on exactly this recently:
The Fat Controller
Simple recipe: (view) a presentation layer using ASP.NET, (controller) code-behinds or an AJAX services layer, (model) an application services layer, a business model layer, and a persistence/data access layer.
Of course you can slice and dice numerous ways to deal with complexities in order to build a clearly readable and understandable application.
For a recent discourse on the subject, which I have found to be very good, check out this newly published book: Microsoft .NET: Architecting Applications for the Enterprise.
These walkthroughs are quite helpful:
MVC Framework and Application Structure
Walkthrough: Creating a Basic MVC Project with Unit Tests in Visual Studio
Also see: aspnet-mvc-structuring-controllers
Rob Conery has the best answer IMO.
Check out his MVC Storefront Application, which comes with complete source code and video tutorials.