I'm starting a new .NET MVC project with Entity Framework and I am struggling with some problems.
In my model I have about 150 entities (generated from the database). Is it a good idea to have only one DbContext? If not, how should I divide my entities?
If I have one DbContext and I create a class variable that instantiates a database context object (in a Controller), what happens with this DbContext? Does it allocate separate space in memory for each of my entities? In my case, with 150 entities, that would not be very efficient. Am I wrong?
I will be using my DbContext in many Controllers. Is it a good idea to create a MainController (where I create the new DbContext) that the rest of the Controllers inherit from? That would allow all of them to access the same context.
What is the best practice for disposing my DbContext? I've read that it is good practice to use dependency injection, but that way I will have to inject the context into every one of my controllers. Which dependency injection approach is the most popular and widely used now?
I really need your advice. It will give me more insight into this piece of development.
It is fine to have one DbContext. If you have many, you just need to ensure all the entities you need exist in the same context. For example, if you retrieve a Person from the database and their related Address, then both the Person and the Address have to exist in the same DbContext.
I've not tried using multiple DbContext instances, but one thing to look out for is that if you include the same table in multiple contexts, you could end up with duplicate class names or conflicts. For example, if you include Person in two contexts, then each context will attempt to create a class named Person.
When you create a DbContext it will only create objects for the data you retrieve from the database. So if you request one row from a Person table, then only one Person object will be created. If you request 100 rows, then 100 instances will be created.
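To illustrate, a minimal sketch (AppDbContext and Person are placeholder names):

using (var ctx = new AppDbContext())
{
    // Nothing is materialized yet; this is just a pending query.
    IQueryable<Person> query = ctx.People.Where(p => p.LastName == "Smith");

    // Only now are rows fetched; one Person object is created per row
    // returned, not one object per entity type in the model.
    List<Person> smiths = query.ToList();
}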
There are really two options. One, create a new instance in each action, do your work, then save it. Or, create a DbContext in your constructor and reuse that throughout the class.
This depends on what you choose in point #3. If you pass it into the constructor, then implement IDisposable on the controller and release the context there. If you create a new one in each action, then ensure it gets disposed with a using statement.
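A rough sketch of both options (AppDbContext is a placeholder name):

// Option 1: a new context per action, disposed by the using block.
public ActionResult Edit(int id)
{
    using (var db = new AppDbContext())
    {
        var person = db.People.Find(id);
        // ... apply changes ...
        db.SaveChanges();
        return View(person);
    }
}

// Option 2: one context per controller instance, released on dispose.
public class PeopleController : Controller
{
    private readonly AppDbContext db = new AppDbContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}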
For dependency injection there are a number of options, tutorials, etc, on how to do this in ASP.NET MVC. I personally use Autofac, and related MVC extensions, and pass a new instance of the DbContext into each controller.
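For example, with Autofac's MVC integration the wiring in Application_Start looks roughly like this (AppDbContext is again a placeholder):

var builder = new ContainerBuilder();
builder.RegisterControllers(typeof(MvcApplication).Assembly);

// One DbContext per HTTP request; Autofac disposes it when the request ends.
builder.RegisterType<AppDbContext>().AsSelf().InstancePerRequest();

DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));

Each controller then simply declares a constructor parameter of type AppDbContext and the container supplies it.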
150 entities is not a huge DbContext, but it is above the size at which EF starts to exhibit performance issues when initializing the first DbContext. If you can logically separate your entities into areas of responsibility (called bounded contexts), then you might consider using more than one DbContext. Also, does your app need to use all those entities? If not, you may be able to simplify things. Note that you need at least EF6 to make this work effectively; previous versions of Entity Framework had issues with multiple contexts.
You also have to be careful when using multiple contexts. Many people get into trouble because they get an entity from one context, but then call save changes on a different one, and then don't understand why their changes are not saved. Or, they try to add an entity retrieved from one to another, which you can't do. Multiple contexts make things more complicated, so make sure you want to take on that complexity before you split it up.
Don't worry about the amount of memory your DbContext uses, so long as you are properly disposing of it. The amount of memory will be minimal unless you actually load objects from all of those tables.
I consider a common base controller to be a code smell. It's usually completely unnecessary, and it usually ends up becoming a dumping ground for every piece of code you think you want to share, which violates the Single Responsibility Principle. On top of that, you shouldn't be doing data access in your controllers anyway. You should have a service layer or business layer of some sort that calls into a data access layer. Properly segregating your concerns is a key part of designing a good MVC application.
Yes, Dependency Injection is a good practice. I'm not sure what you mean by "have to inject into all my controllers". The whole point of dependency injection is to inject your dependencies, so the concept of "having to" makes it seem like you're trying to avoid the very thing you're trying to do.
Dependency Injection is a principle. There are many ways to achieve this principle, and which way you use depends entirely on your own preferences and requirements. We can't tell you what's "best" other than to make sure you're following the principle, and not a specific technology.
Regarding the DbContext question:
I would go with multiple DbContexts (bounded contexts).
One problem with a single big DbContext is the loading and initialization time, which increases as the number of entities in your context grows.
Your project most likely consists of modules, and this is where you can divide your big DbContext into small contexts that cover exactly what each individual module needs from the database. For example, say your project has two modules, membership and billing (or financial), that both touch the customer/person entity. When you deal with a person in the membership module you need all of their personal details but not the full details of their invoices, and when you deal with the person in the billing module you need all of their invoice details but not their full personal information. Here you can create two DbContexts, one per module, each with a Person entity containing only what that module needs.
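A minimal sketch of the idea (all type names are illustrative):

// Membership module: full personal details, no invoices.
public class MembershipContext : DbContext
{
    public DbSet<Person> People { get; set; }
    public DbSet<Address> Addresses { get; set; }
}

// Billing module: a slimmer Person mapped to the same table, plus invoices.
public class BillingContext : DbContext
{
    public DbSet<BillingPerson> People { get; set; }
    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Both Person classes map to the same table; each context only
        // knows about the columns its module actually uses.
        modelBuilder.Entity<BillingPerson>().ToTable("Person");
    }
}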
Julie Lerman has a good article about DbContext with Entity Framework that you can start with to get more details about what I am trying to describe here:
https://msdn.microsoft.com/en-us/magazine/jj883952.aspx
Hope this helps
I'd like to create a good app in ASP.NET MVC 5 using the EF 6 Code First concept. I want it to be well-designed, i.e. having, generally speaking, Presentation, Logic and Data layers separated. I want it to be testable :)
Here's my idea and some issues related to creating the application.
Presentation layer: it's my whole MVC - view models (not models), views, controllers.
I believe that validation should be done somewhere else (in my opinion it's a part of the business logic), but it's quite convenient to use attributes from the DataAnnotations namespace in ViewModels and check validation in the controller.
Logic layer: Services - classes with their interfaces that handle the business logic.
I put functions there like AddNewPerson(PersonViewModel person) or SendMessageToPerson(...).
They will use the DB context to do their work (there's a chance that not all of them will rely on the context). There's a direct connection between service and DB - I mean the service class has a reference to the context.
Where should I do the mapping between ViewModel and Model? I've heard that the service is a bad place for it - so maybe in controllers. I've heard that the service should do the work related to the DB exclusively.
Is that right? Is my picture of the service layer good?
Data layer: I've read about the Repository and UoW patterns a lot. There are some articles which suggest that EF6 already implements these two things. I don't want to create extra code if there's no need for such behavior. The question is: am I right to assume that I don't need them?
Here's my flow:
View <-> Controllers (using ViewModels) <-> Services (using Models) <-> DB.
I'm gonna use DI in my project.
What do you think about my project structure?
There is no reason to use a Unit of Work pattern with Entity Framework if you have no need to create a generic data access mechanism. You would only do this if you were:
using a data access technology that did not natively support a Unit of Work pattern (EF does);
wanted to be able to swap out data providers sometime in the future... however, this is not as easy as it might seem, as it's very hard NOT to introduce dependencies on specific data technologies even when using a Unit of Work (maybe even BECAUSE you are); or
You need to have a way of unifying disparate data sources into an atomic transaction.
If none of those are the case, you most likely don't need a custom Unit of Work. A Repository, on the other hand, can be useful... but with EF6 many of the benefits of a Repository are already available, since EF6 provides mocking interfaces for testing. Regardless, stay away from a generic repository unless it's simply an implementation detail of your concrete repositories. Exposing generic repositories to your other layers is a huge abstraction leak...
I always use a Repository/Service/Façade pattern, though, to create a separation between my data and business layers (and UI and business, for that matter). It provides a convenient way to mock without having to mock your data access itself, and it decouples your logic from the specifics introduced by the LINQ layer used by EF (LINQ is relatively generic, but there are things that are specific to EF); a façade/repository/service interface decouples that.
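As a rough sketch of what I mean (all names here are placeholders):

// The business layer depends only on this interface, never on EF or DbContext.
public interface IOrderRepository
{
    Order GetById(int id);
    IList<Order> GetOpenOrdersFor(int customerId);
    void Save(Order order);
}

// All EF-specific details (query shaping, SaveChanges) live in here.
public class EfOrderRepository : IOrderRepository
{
    private readonly AppDbContext db;

    public EfOrderRepository(AppDbContext db)
    {
        this.db = db;
    }

    public Order GetById(int id)
    {
        return db.Orders.Find(id);
    }

    public IList<Order> GetOpenOrdersFor(int customerId)
    {
        return db.Orders
                 .Where(o => o.CustomerId == customerId && !o.IsClosed)
                 .ToList();
    }

    public void Save(Order order)
    {
        if (order.Id == 0) db.Orders.Add(order); // new order
        db.SaveChanges();
    }
}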
In general, you're on the right path... However, let me point out that using Data Attributes on your view models is a good thing. This centralizes your validation on your model, rather than making you put validation logic all over the place.
You're correct that you need validation in your business logic as well, but your mistake is the assumption that you should only have it in the business logic. You need validation at all layers of your application. In particular, your UI validation may have different requirements than your business logic validation.
For instance, you may implement creating a new account as a multi-step wizard in your UI; this would require different validation than your business layer, because each step has only a subset of the validation of the total object. Or you might require that your mobile interface has different validation requirements from your web site (one might use a captcha, while the other might use a touch-based human validation, for instance).
Either way, it's important to keep in mind that validation is important at the client, the server, and the various layers in between...
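To make that concrete, a small sketch (the types and the EmailExists rule are made up for illustration):

// UI-layer validation: DataAnnotations on the view model.
public class RegisterViewModel
{
    [Required, EmailAddress]
    public string Email { get; set; }
}

// Business-layer validation: a rule the UI attributes cannot express.
public void Register(string email)
{
    if (accountRepository.EmailExists(email))
        throw new InvalidOperationException("An account with this e-mail already exists.");

    // ... create the account ...
}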
Ok, let’s clarify a few things...
The notion of a ViewModel (or the actual wording "ViewModel") is something introduced by Martin Fowler. In fact, a ViewModel is nothing more than a simple class.
In reality, your Views are strongly typed to classes. Period. To avoid confusion, the wording ViewModel came up to help people understand that
“this class, will be used by your View”
hence why we call them ViewModel.
In addition, although many books, articles and examples use the word ViewModel, let's not forget that it's nothing more than just a Model.
In fact, did you ever notice why there is a Models folder inside an MVC application and not a ViewModels folder?
Also, ever noticed how at the top of a View you have an @model directive and not an @viewmodel directive?
That's because everything could be a model.
By the way, for clarity, you are more than welcome to delete (or rename) the Models folder and create a new one called ViewModels if that helps.
Regardless of what you do, you’ll ultimately use @model and not @viewmodel at the top of your pages.
Another similar example would be DTO classes. DTO classes are nothing more than regular classes but they are suffixed with DTO to help people (programmers) differentiate between all the other classes (including View Models).
In a recent project I’ve worked on, that notion wasn’t fully grasped by the team, so instead of having their Views strongly typed to Models, they had their Views strongly typed to DTO classes. In theory and in practice everything was working, but they soon found out that they had properties such as IsVisible inside their DTOs when, in fact, these kinds of properties belong in your ViewModel classes, since they are used for UI logic.
So far, I haven’t answered your question but I do have a similar post regarding a quick architecture. You can read the post here
Another thing I’d like to point out is that if, and only if, your Service Layer plans on servicing other things such as a WinForms application, a mobile web site, etc., then your Service Layer should not be receiving ViewModels.
Your Service Layer should not have the notion of what is a ViewModel. It should only accept, receive, send, etc... POCO classes.
This means that from your Controller, inside your ActionResult, once the ModelState is valid, you need to transform your ViewModel into a POCO which, in turn, will be sent to the method inside your Service Layer.
In other words, I’d use/install the AutoMapper NuGet package and create some extension methods that convert a ViewModel into a POCO and vice versa (a POCO into a ViewModel).
This way, your AddNewPerson() method would receive a Person object for its parameter instead of receiving a PersonViewModel parameter.
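A minimal sketch of those extension methods, assuming AutoMapper's classic static API and hypothetical Person/PersonViewModel types:

// One-time configuration, e.g. in Application_Start.
Mapper.CreateMap<PersonViewModel, Person>();
Mapper.CreateMap<Person, PersonViewModel>();

public static class PersonMappingExtensions
{
    public static Person ToPoco(this PersonViewModel viewModel)
    {
        return Mapper.Map<Person>(viewModel);
    }

    public static PersonViewModel ToViewModel(this Person person)
    {
        return Mapper.Map<PersonViewModel>(person);
    }
}

// In the controller action:
if (ModelState.IsValid)
{
    personService.AddNewPerson(model.ToPoco());
}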
Remember, this is only valid if and only if your Service Layer plans on servicing other things...
If that's not the case, then feel free to have your Service Layer receive, send, add, etc...ViewModels instead of POCOs. This is up to you and your team.
Remember, there are many ways to skin a cat.
Hope this helps.
I am using ASP.NET MVC5 and Entity Framework in my web application. Complex business logic is expected, so achieving separation in the code based on individual business concerns is required. I am using the Code First with existing database approach. I have created 3 ADO.NET Entity Data Models in the design wizard, so each DbContext has its own model. My issue arose when I created the 3rd DbContext, which shares one table with one of the models I created initially. The error is "MetadataException was unhandled by user code". I believe it is something to do with metadata, but I am not sure how to approach this problem.
What I am trying to achieve: if one webpage (one business function) only needs two tables, why load the whole model into memory? Plus, decoupling will improve maintainability and the flexibility to extend the application without disturbing existing code.
The key to using Bounded Contexts is the use of the Ignore method on the model builder:
modelBuilder.Ignore<MyUnNecessaryEntity>();
and/or
change the database initializer on a mini context to none:
Database.SetInitializer<MyContext>(null); // no initializer
I like the idea that ONLY one context is responsible for keeping a set of tables consistent.
The other contexts can access those tables using the same POCO definitions. They can use a subset of the POCOs, and the context is reduced and/or has NO initializer.
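Putting those two pieces together, a mini context might look roughly like this (type names are illustrative):

public class MembershipMiniContext : DbContext
{
    static MembershipMiniContext()
    {
        // This context never creates or migrates the database;
        // the one "full" context owns the schema.
        Database.SetInitializer<MembershipMiniContext>(null);
    }

    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Keep entities reachable from Person, but irrelevant to this
        // module, out of the model.
        modelBuilder.Ignore<Invoice>();
    }
}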
There is a good article worth a read from Julie Lerman on MSDN on the topic of Bounded Contexts.
I've been reading up on DDD a little bit, and I am confused how this would fit in when using an ORM like NHibernate.
Right now I have a .NET MVC application with fairly "fat" controllers, and I'm trying to figure out how best to fix that. Moving this business logic into the model layer would be the best way to do this, but I am unsure how one would do that.
My application is set up so that NHibernate's session is managed by an HttpModule (gets session/transaction out of my way), which is used by repositories that return the entity objects (think S#arp Architecture... it turns out I really duplicated a lot of their functionality in this). These repositories are used by DataServices, which right now are just wrappers around the repositories (a one-to-one mapping between them, e.g. UserDataService takes a UserRepository, or actually a Repository<User>). These DataServices right now only ensure that the data annotations decorating the entity classes are checked when saving/updating.
In this way, my entities are really just data objects and do not contain any real logic. While I could put some things in the entity classes (e.g. an "Approve" method), when that action needs to do something like sending an e-mail, or touching other non-related objects, or, for instance, checking to see if there are any users that have the same e-mail before approving, etc., then the entity would need access to other repositories, etc. Injecting these with an IoC wouldn't work with NHibernate, so I'm assuming you'd have to use a factory pattern to get them. I don't see how you would mock those in tests, though.
So the next most logical way to do it, I would think, would be to essentially have a service per controller, and extract all of the work being done in the controller currently into methods in each service. I would think that this is breaking with the DDD idea though, as the logic is now no longer contained in the actual model objects.
The other way of looking at it, I guess, is that each of those services forms a single model with the data object that it works against (separation of data storage fields and the logic that operates on them), but I just wanted to see what others are doing to solve the "fat controller" issue with DDD while using an ORM like NHibernate that works by returning populated data objects, and the repository model.
Updated
I guess my problem is how I'm looking at this: NHibernate seems to put business objects (entities) at the bottom of the stack, which repositories then act on. The repositories are used by services which may use multiple repositories and other services (email, file access) to do things. I.e: App > Services > Repositories > Business Objects
The pure DDD approach I'm reading about seems to reflect an Active Record bias, where the CRUD functions exist in the business objects (i.e. I call User.Delete directly instead of Repository.Delete from a service), and the actual business object handles the logic of things that need to be done in this instance (like emailing the user, deleting files belonging to the user, etc.). I.e. App > (Services) > Business Objects > Repositories.
With NHibernate, it seems I would be better off using the first approach given the way NHibernate functions, and I am looking for confirmation of my logic. Or, if I'm just confused, some clarification on how this layered approach is supposed to work. My understanding is that if I have an "Approve" method that updates the User model, persists it, and, let's say, emails a few people, then this method should go on the User entity object, but to allow for proper IoC so I can inject the messagingService, I need to do this in my service layer instead of on the User object.
From a "multiple UI" point of view this makes sense, as the logic to do things is taken out of my UI layer (MVC), and put into these services... but I'm essentially just factoring the logic out to another class instead of doing it directly in the controller, and if I am not ever going to have any other UI's involved, then I've just traded a "fat controller" for a "fat service", since the service is essentially going to encapsulate a method per controller action to do it's work.
DDD does not have an Active Record slant. Delete is not a method that should be on an Entity (like User) in DDD.
NHibernate does support a DDD approach very well, because of how completely divorced it remains from your entity classes.
"when that action needs to do something like sending an e-mail, or touching other non-related objects"
One piece of the puzzle it seems you are missing is Domain Events. A domain entity shouldn't send an email directly. It should raise an event in the Domain that some significant event has happened. Implement a class whose purpose is to send the email when the event occurs, and register it to listen for the Domain Event.
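A minimal sketch of the idea (DomainEvents.Raise and IHandles<T> are illustrative names, not a particular library's API):

public class UserApprovedEvent
{
    public User User { get; private set; }
    public UserApprovedEvent(User user) { User = user; }
}

public class User
{
    public bool IsApproved { get; private set; }

    public void Approve()
    {
        IsApproved = true;
        // The entity announces what happened; it knows nothing about e-mail.
        DomainEvents.Raise(new UserApprovedEvent(this));
    }
}

// Registered at startup; the only class that knows e-mail is involved.
public class SendApprovalEmail : IHandles<UserApprovedEvent>
{
    private readonly IMessagingService messaging;

    public SendApprovalEmail(IMessagingService messaging)
    {
        this.messaging = messaging;
    }

    public void Handle(UserApprovedEvent e)
    {
        messaging.SendApprovalNotice(e.User);
    }
}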
"or, for instance, checking to see if there are any users that have the same e-mail before approving"
This should probably be checked before submitting the call to "approve," rather than in the function that does the approving. Push the decision up a level in calling code.
"So the next most logical way to do it, I would think, would be to essentially have a service per controller"
This can work, if it's done with the understanding that the service is an entry point for the client. The service's purpose here is to take in parameters in a DTO from the front end/client and translate that into method calls against an entity in order to perform the desired functionality.
The only limitations NHibernate imposes on classes are that all methods/properties must be virtual and a class must have a default constructor (which can be internal or protected). Otherwise, it does not interfere with object structure and can map to pretty complex models.
The short answer to your question is yes; in fact, I find NHibernate enhances DDD - you can focus on developing (and altering) your domain model with a code-first approach, then easily retrofit persistence later using NHibernate.
As you build out your domain model following DDD, I would expect that much of the business logic that's found its way into your MVC controllers should probably reside in your domain objects. In my first attempt at using ASP.NET MVC I quickly found myself in the same position as yourself - fat controllers and an anemic domain model.
To avoid this, I'm now following the approach of keeping a rich domain model that implements the business logic and using MVC's model as essentially simple data objects used by my views. This simplifies my controllers - they interact with my domain model and provide simple data objects (from the MVC model) to the views.
Updated
The pure DDD approach I'm reading about seems to reflect an Active Record bias...
To me the Active Record pattern means entities are aware of their persistence mechanism and an entity maps directly to a database table record. This is one way of using NHibernate, e.g. see Castle ActiveRecord; however, I find this pollutes domain entities with knowledge of their persistence mechanism. Instead, I'll typically have a repository per aggregate root in my domain model, each of which implements an abstract repository. The abstract repository provides basic CRUD methods such as:
public IList<TEntity> GetAll()
public TEntity GetById(int id)
public void SaveOrUpdate(TEntity entity)
public void Delete(TEntity entity)
...which my concrete repositories can supplement or extend.
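A rough sketch of that abstract repository, assuming an injected NHibernate ISession (Query<T> comes from NHibernate.Linq):

public abstract class RepositoryBase<TEntity> where TEntity : class
{
    protected readonly ISession Session;

    protected RepositoryBase(ISession session)
    {
        Session = session;
    }

    public IList<TEntity> GetAll()
    {
        return Session.Query<TEntity>().ToList();
    }

    public TEntity GetById(int id)
    {
        return Session.Get<TEntity>(id);
    }

    public void SaveOrUpdate(TEntity entity)
    {
        Session.SaveOrUpdate(entity);
    }

    public void Delete(TEntity entity)
    {
        Session.Delete(entity);
    }
}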
See this post on the NHibernate FAQ, which I've based a lot of my stuff on. Also remember that NHibernate (depending on how you set up your mappings) will allow you to de-persist a complete object graph, i.e. your aggregate root plus all the objects hanging off it, and once you've finished working with it, can cascade saves through your entire object graph; this certainly isn't Active Record.
"...since the service is essentially going to encapsulate a method per controller action to do its work..."
I still think you should consider what functionality that you currently have in your controllers would, more logically, be implemented within your domain objects. E.g. in your approval example, I think it would be sensible for an entity to expose an Approve method which does whatever it needs to do within the entity and, if, as in your example, it needs to send emails, delegates this to a service. Services should be reserved for cross-cutting concerns. Then, once you've finished working with your domain objects, pass them back to your repository to persist the changes.
A couple of books I've found useful on these topics are:
Domain-Driven Design by Eric Evans
Applying Domain-Driven Design and Patterns by Jimmy Nilsson
I'm trying to get started with the repository pattern and ASP.NET MVC, and I can't help but believe I'm missing something. Forgive me if this is a stupid question, but it seems to me like an implementation violates DRY exponentially. For example, in my (admittedly novice) understanding, in order to implement this I would have to:
Create my database model (Currently using Linq to Sql)
Create an IRepository for each concept (table or group of related tables)
Create an implementation for each IRepository
Do we return L2S objects or some sort of DTO?
Create ViewModels which are either containers for or copies of the data
Use some method of DI (Windsor or Unity?) on the controllers
While I realize scalability and portability come at an expense, it just feels like I'm missing something?
I tried to implement the Repository Pattern with LINQ 2 SQL and it doesn't work very well, mainly because L2S doesn't use POCOs and you have to map to DTOs all the time, as you mention. Although you could use something like AutoMapper, L2S just isn't a very good fit for the Repository Pattern.
If you're going to use the Repository Pattern (and I would recommend it), try a different data access technology such as NHibernate or Entity Framework 4.0's POCO support.
Also, you wouldn't create a Repository for each and every table; you create a Repository per Domain Aggregate, and use the Repository to access the Aggregate's Root entity only. For instance, if you have an e-commerce app with Order and OrderItem entities, an Order has one-or-many OrderItems. These two entities are part of a single Aggregate, and the Order entity is the Aggregate Root. You'd only create an OrderRepository in this case, NOT an OrderItemRepository as well. If you want to add new OrderItems, you do so by getting a reference to the Order entity, adding the new OrderItem to the Order's Items collection, then saving the Order using your OrderRepository, as sketched below. This technique comes from Domain Driven Design, and it's a very powerful paradigm to use if you have a complex Domain Model and business rules in your application. But it can be overkill in simple applications, so you have to ask yourself whether the complexity of your Domain Model warrants this approach.
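In code, that "through the root only" rule looks something like this (types are illustrative):

// OrderItem is never loaded or saved on its own;
// it is always reached through its aggregate root.
var order = orderRepository.GetById(orderId);
order.Items.Add(new OrderItem { Product = product, Quantity = 2 });
orderRepository.Save(order); // the ORM cascades the new OrderItem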
In terms of adhering to DRY, I normally create a base Repository class that has common methods for Save, Delete, FetchById, that sort of thing. As long as my Repository classes implement this base class (OrderRepository, ProductRepository, etc.) they get these methods for free and the code stays DRY. This was easy to do in NHibernate because of POCO support, but impossible to do in LINQ 2 SQL.
Don't worry too much about sending your Domain Models directly to the view; most dedicated ViewModels look almost identical to the Domain Model anyway, so what's the point? Although I do tend to avoid using the DM for posting data back to the server because of under-/over-posting security concerns.
If you follow this POCO approach (and ditch LINQ 2 SQL, honestly!!), you end up with only one class (your POCO entity) instead of 3 (L2S class, DTO and ViewModel).
It is possible to implement the Repository Pattern badly, so tread carefully; read a few tutorials, blog posts, books, etc. (I recommend Steven Sanderson's book; especially look at the Pre-Requisites chapter.) But once mastered, it becomes a very powerful way to organise the complexity of hydrating Model objects to and from a data store. And if you use Repository interfaces (IOrderRepository etc.) and have them injected via an IoC Container, you also gain the benefits of maintainability and unit testability.
Do you understand why you're doing these things, or are you just following along with a blog article or other source?
Don't implement the Repository pattern because it's the new hotness. Implement it because you understand how this separation of concerns helps your project and the overall quality of your code.
From the question marks in your question it sounds like you need to do some more reading before you implement. You're probably missing a meaningful understanding of the overall architectural approach. Please don't take this the wrong way; I'm not trying to be negative.
Side Rant:
Obviously something is missing from the newest repository-hotness picture, because confusion about implementation details like single vs. many/grouped repositories and "DTO or not to DTO" is just so ambiguous and subjective. This is a "nickel question" that pops up again and again.
This has been brought up before; at first glance, certain aspects of properly separating concerns do seem to violate DRY.
As you've mentioned MVC, have you read Steve Sanderson's Pro ASP.NET MVC 2 Framework book? It spends a great deal of time explaining why using the repository pattern is a good idea.
You might find that, for the projects you're working on, it isn't appropriate; that's okay. Don't use it, and see if you come across problems that it could have addressed. You don't need to be a developer for long to realise how crucial it is to keep the different parts of your application as loosely coupled as possible.
We're developing a pretty large application with MVC 2 RC2 and we've received some feedback on the way we're using the Entity Framework's Lazy Loading.
We're just getting the entities in the controller and sending them as models to the views, which causes the view code to query the database for the navigation properties we use in it. We have read about this and it appears this is not a good design, but we were wondering why.
Can you help us understand this design problem?
Thanks!
The main issue here is coupling. The idea behind a model, which is the "M" in "MVC", is that it has no external dependencies. It is the "core" of your application. The dependency tree of a well-designed app architecture should look something like this:
     +---------------------------------------> Views
     |                                            |
     |                                            |
     |                                            v
Controllers ----+-> Model Transformer -----> View Model
     |           \                                |
     |            \                               |
     v             \                              v
Data Access <---- Persistence --------> Domain Model
     |             /
     |            /
     v           /
   Mapper ------+
Now I realize it's not exactly convincing to just say "here's an architecture, this is what you should use", so let me explain what's happening here:
1. Controller receives a request.
2. Controller calls out to some kind of persistence layer (i.e. repository).
3. Persistence layer retrieves data, then uses a mapper to map to a domain model.
4. Controller uses a transformer to change the domain model into a view model.
5. Controller selects the necessary View and applies the View Model to it.
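A minimal sketch of that flow in a controller (IPersonRepository and IPersonTransformer are placeholder names):

public class PeopleController : Controller
{
    private readonly IPersonRepository repository;    // abstract persistence layer
    private readonly IPersonTransformer transformer;  // domain model -> view model

    public PeopleController(IPersonRepository repository, IPersonTransformer transformer)
    {
        this.repository = repository;
        this.transformer = transformer;
    }

    public ActionResult Details(int id)
    {
        Person person = repository.GetById(id);                     // steps 2-3
        PersonViewModel viewModel = transformer.Transform(person);  // step 4
        return View(viewModel);                                     // step 5
    }
}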
So, why is this good?
The domain model has no dependencies. This is a very good thing, it means that it's easy to perform validation, write tests, etc. It means that you can change anything else anywhere in your architecture and it will never break the model. It means that you can reuse the model across projects.
The persistence layer returns instances of the domain model. This means that it can be modeled as a totally abstract, platform-agnostic interface. A component that needs to use the persistence layer (such as the controller) does not take on any additional dependencies. This is ideal for Dependency Injection of the persistence layer and, again, testability. The combination of persistence, data access, and mapper can live in its own assembly. In larger projects you might even be able to further decouple the mapper and have it operate on a generic record set.
The Controller only has two downstream dependencies - the domain model and the persistence layer. The model should rarely change, as that is your business model, and since the persistence layer is abstract, the controller should almost never need to be changed (except to add new actions).
The Views depend on a separate UI model. This insulates them from changes in the domain model. It means that if your business logic changes, you do not need to change every single view in your project. It allows the views to be "dumb", as views should be - they are not much more than placeholders for view data. It also means that it should be simple to recreate the view using a different type of UI, i.e. a smart client app, or switch to a different view engine (Spark, NHaml, etc.)
Now, when using O/R Mappers such as Linq to SQL or Entity Framework, it is very tempting to treat the classes they generate as your domain model. It certainly looks like a domain model, but it is not. Why?
The entity classes are tied to your relational model, which over time can and will diverge significantly from your domain model;
The entity classes are dumb. It is difficult to support any complex validation scenarios or integrate any business rules. This is referred to as an anemic domain model.
The entity classes have hidden dependencies. Although they may appear to be ordinary POCOs, they may in fact have hidden references to the database (i.e. lazy loading of associations). This can end up causing database-related issues to bubble up to the view logic, where you are least able to properly analyze what's going on and debug.
But most importantly of all, the "domain model" is no longer independent. It cannot live outside whatever assembly has the data access logic. Well, it sort of can, there are ways to go about this if you really work at it, but it's not the way most people do it, and even if you pull this off, you'll find that the actual design of the domain model is constrained to your relational model and specifically to how EF behaves. The bottom line is that if you decide to change your persistence model, you will break the domain model, and your domain model is the basis for just about everything else in your app.
Entity Framework classes are not a domain model. They are part of a data-relational model and happen to have the same or similar names that you might give to classes in a domain model. But they are worlds apart in terms of dependency management. Using classes generated from an ORM tool as your domain model can only result in an extremely brittle architecture/design; every change you make to almost any part of the application will have a slew of predictable and unpredictable cascade effects.
There are a lot of people who seem to think that you don't need a cohesive, independent domain model. Usually the excuse is that (a) it's a small project, and/or (b) their domain model doesn't really have any behaviour. But small projects become large, and business rules become (far) more complicated, and an anemic or nonexistent domain model isn't something you can simply refactor away.
That is in fact the most insidious trait of the entities-as-model design; it seems to work fine, for a while. You won't find out how much of a mistake this is until a year or two down the road when you're drowning in defect reports and change requests while desperately trying to put together a real domain model piecemeal.
One potential issue with this design is that the view might iterate through the (lazily loaded) model objects more than once and cause unnecessary overhead. For instance, if a web page displays the same data in two different forms in a couple of different locations on the page, it will loop through the query twice and cause two round-trips to the database (look at the tags under the question and in the sidebar on this page and assume they came from a single query). The view could deal with this problem by caching the results once and looping twice over the cached data, but this is not something a view should deal with. It should present the data given to it without worrying about such things.
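The usual fix is to materialize the query once in the controller and hand the view an in-memory list (a sketch; GetPopularTags is a made-up repository method):

// One round-trip; the view can enumerate the list as often as it likes.
List<Tag> tags = repository.GetPopularTags().ToList();
return View(new QuestionViewModel { Tags = tags });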
The problem is that your UI is tied more directly to your entity than necessary.
If your entity is encapsulated by a ViewModel, then your UI can not only contain the entity (the data it wishes to eventually save), but can also add more fields and more data that can be used by the controller to make decisions and by the view to control the display. Passing the same data around outside of a ViewModel would require you to use action method parameters and ViewData constructs, which does not scale, especially for complex ViewModels.
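For example (a hypothetical sketch), the ViewModel can carry the entity's data plus UI-only state that has no place on the entity:

public class PersonEditViewModel
{
    // Data destined for the entity.
    public string Name { get; set; }
    public string Email { get; set; }

    // UI-only concerns the view and controller use, but the entity never sees.
    public bool CanEditEmail { get; set; }
    public IList<SelectListItem> Countries { get; set; }
}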
The view is stupid and ignorant. It should not, and doesn't want to, know anything. It's very shallow and focuses only on display. This is a good thing, as the view then does what it does best.
By doing it your way, you leak what should be data concerns into the view, and furthermore you limit your view to receiving data only from your entity as the strongly typed model.
Furthermore, you let your dependency on EF go all the way up to the view, so it penetrates your whole app, where you should try to be as loosely coupled to that dependency as you can.