I am trying to implement AutoMapper to map a ViewModel to an Entity where one of the properties of the Entity is itself an Entity.
I want my converter to use NHibernate's ISession.Load<> method to load this.
So the question is: what is the best way of injecting ISession into my ITypeConverter implementation? Also, keep in mind that the ISession that gets injected will eventually be disposed of, so would I need to inject a new ISession every time a mapping needs to happen?
We do this in our systems, and have things like Guid->Entity type converters. We scope our ISessions per HttpContext, though, so a new ISession is not injected per ITypeConverter; AutoMapper does instantiate a new ITypeConverter instance every time it's needed.
Two entities coming from different ISession instances will lead to trouble, though. Just make sure you share a single ISession instance per HttpContext, and you'll be set.
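To make that concrete, here is a rough sketch of what such a converter could look like. The SessionProvider helper is a placeholder for however you resolve your HttpContext-scoped session, and the single-argument Convert signature shown here varies between AutoMapper versions:
public class GuidToEntityConverter<TEntity> : ITypeConverter<Guid, TEntity>
    where TEntity : class
{
    public TEntity Convert(Guid source)
    {
        // Resolve the current (per-HttpContext) ISession instead of holding one,
        // since AutoMapper creates a new converter instance for every map.
        ISession session = SessionProvider.GetCurrentSession();
        return source == Guid.Empty ? null : session.Load<TEntity>(source);
    }
}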
I don't know about NHibernate, sorry, and no one seems to want to answer this question, so...
The way I would tackle this is to write my own custom model binder. It can then be responsible for mapping my ViewModel to my Entity.
You will also have access to the HttpRequest object so you can get all your text fields out and map them to your entity.
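A rough sketch of that idea (the Contact entity and the form field names here are just placeholders):
public class ContactModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // Pull the posted fields straight off the request and map them onto the entity.
        HttpRequestBase request = controllerContext.HttpContext.Request;
        return new Contact
        {
            Name = request.Form["Name"],
            Email = request.Form["Email"]
        };
    }
}
You would then register it in Global.asax with ModelBinders.Binders.Add(typeof(Contact), new ContactModelBinder());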
I hope this helps even though it's not specific to your question.
So I'm pretty sure I'm on the right track, I just need some help implementing this the way I want to. To make this a read-only entity, I know I need to subclass the EFContextProvider and override a method. I know I can accomplish this by overriding SaveChanges, but I'm wondering if I should override the Dictionary property instead, leaving the read-only entity out of that Dictionary. Will this affect anything? If so, what will it affect?
I assume you have client logic that prevents it from attempting to save changes to a "read-only" entity. That leaves guarding the server, which should reject any request that attempts to save a "read-only" entity, right?
If so, I would put logic in the EFContextProvider's BeforeSaveEntity method that tests whether the entity is savable and throws if it is not. I often make my savable entities implement ISaveable (my interface) and throw an exception when I see an entity that does not.
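A minimal sketch of that guard, assuming an ISaveable marker interface of your own and a MyDbContext placeholder:
public class GuardedContextProvider : EFContextProvider<MyDbContext>
{
    protected override bool BeforeSaveEntity(EntityInfo entityInfo)
    {
        // Reject anything that has not opted in to being saved.
        if (!(entityInfo.Entity is ISaveable))
        {
            throw new InvalidOperationException(
                entityInfo.Entity.GetType().Name + " is read-only and cannot be saved.");
        }
        return base.BeforeSaveEntity(entityInfo);
    }
}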
I'm learning ASP.NET MVC and I have some questions that the tutorials I've read so far haven't explored in a way that covers me. I've tried searching, but I didn't see any questions asking this. Still, please forgive me if I have missed an existing one.
If I have a single ASP.NET MVC application that has a number of models (some of which related and some unrelated with each other), how many DbContext subclasses should I create, if I want to use one connection string and one database globally for my application?
One context for every model?
One context for every group of related models?
One context for all the models?
If the answer is one of the first two, is there anything I should keep in mind to make sure that only one database is created for the whole application? I ask because, when debugging locally in Visual Studio, it looks like it's creating as many databases as there are contexts. That's why I find myself using the third option, but I'd like to know whether that's correct practice or whether I'm making some kind of mistake that will come back and bite me later.
@jrummell is only partially correct. Entity Framework will create one database per DbContext type, if you leave it to its own devices. Using the concept of "bounded contexts" that @NeilThompson mentioned from Julie Lerman, all you're doing is essentially telling each context to use the same database. Julie's method uses a generic pattern so that each DbContext that implements it ends up on the same database, but you could do it manually for each one, which would look like:
public class MyContext : DbContext
{
    public MyContext()
        : base("name=DatabaseConnectionStringNameHere")
    {
        // Disable initialization so this context can never create or migrate the database.
        Database.SetInitializer<MyContext>(null);
    }
}
In other words, Julie's method just sets up a base class that each of your contexts can inherit from that handles this piece automatically.
This does two things: 1) it tells your context to use a specific database (i.e., the same as every other context) and 2) it tells your context to disable database initialization. This last part is important because these contexts are now essentially treated as database-first. In other words, you now have no context that can actually cause a database to be created, or to signal that a migration needs to occur. As a result, you actually need another "master" context that will have every single entity in your application in it. You don't have to use this context for anything other than creating migrations and updating your database, though. For your code, you can use your more specialized contexts.
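To illustrate, a hand-written version of that split might look like the following; the context names and entity names are just examples:
// A specialized context used by application code; it can never create or migrate the database.
public class ShopContext : DbContext
{
    public ShopContext() : base("name=DatabaseConnectionStringNameHere")
    {
        Database.SetInitializer<ShopContext>(null);
    }

    public DbSet<Product> Products { get; set; }
}

// The "master" context used only for migrations; it declares every entity in the application.
public class MasterContext : DbContext
{
    public MasterContext() : base("name=DatabaseConnectionStringNameHere") { }

    public DbSet<Product> Products { get; set; }
    public DbSet<SystemUser> SystemUsers { get; set; }
    // ...and so on for every other entity
}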
The other thing to keep in mind with specialized contexts is that each instantiation of each context represents a unique state, even if they share entities. For example, a Cat entity from one context is not the same thing as a Cat entity from a second context, even if they share the same primary key. You will get an error if you retrieved the Cat from the first context, updated it, and then tried to save it via the second context. That example is a bit contrived, since you're not likely to have the same entity explicitly in two different contexts, but when you get into foreign key relationships and such it's far more common to run into this problem. Even if you don't explicitly declare a DbSet for a related entity, if an entity in the context depends on it, EF will implicitly create a DbSet for it. All this is to say that if you use specialized contexts, you need to ensure that they are truly specialized and that there is zero crossover at any level of related items.
I use what Julie Lerman calls the Bounded Context.
The SystemUsers code might have nothing to do with Products - so I might have a System DbContext and a Shop DbContext (for example).
Life is easier with a single context in a small app, but for larger application it helps to break the contexts up.
Typically, you should have one DbContext per database. But if you have separate, unrelated groups of models, it would make sense to have separate DbContext implementations.
"it looks to me like it's creating as many databases as there are contexts."
That's correct, Entity Framework will create one database per DbContext type.
I am trying to return JsonResult using MVC controller standard Json(object) method. My object of type Model1 is built by Fluent NHibernate.
Model1 has a property of type Model2. In debug mode I see that the environment creates a proxy descendant class of Model2 called Castle.Proxies.Model2Proxy. This is used internally by Fluent NHibernate, I believe, to satisfy my mappings. And at run time, the actual model1.Model2 is of type Castle.Proxies.Model2Proxy.
The problem is that when my Model1 is serialized, Model2 is serialized too. And the serializer seems to try to serialize all the properties of this object, including those generated by Castle and not needed by me. I would be OK with it if it did not cause an exception. Namely, somewhere inside this object a circular reference is present, and the exception is caused by it. Here is the exception text:
System.InvalidOperationException: A circular reference was detected while serializing an object of type 'System.Reflection.RuntimeModule'
I double-checked my domain and found no circular references there, so I am blaming Castle. Am I correct? Is Castle really to blame for that? If so, what are my options? How do I tell the serializer to ignore Castle's properties? Particularly, how do I tell it to serialize the defined type, not the actual one?
I tend to cover my domain models with ViewModels to fight this issue, which is a recommended approach, but I would really love to know another cure, if it exists.
In general, it's not good practice to serialize your model entities.
This is because you want to have complete control over what you serialize and send to the client.
When you serialize your model entities, you might be serializing the whole object graph associated with them, which you don't necessarily need or want.
(For example, if you want the user to view just a Model1 entity, you might also be sending along a Model2 entity, along with its Model3 collection, etc.)
The standard way to deal with this is to use some sort of DTO, adapted to display precisely what you want to display. For example:
public class Model1DTO
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Model2Name { get; set; }
    // whatever other properties you need to display
}
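Populating it could then be as simple as the following (a hand-rolled projection; AutoMapper's flattening conventions could also map Model2.Name to Model2Name for you, assuming those property names):
var dto = new Model1DTO
{
    Id = model1.Id,
    Name = model1.Name,
    Model2Name = model1.Model2.Name
};
return Json(dto);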
I'm trying to clean up my action methods in an ASP.NET MVC project by making use of view models. Currently, my view models contain entities that might have relationships with other entities. For example, ContactViewModel class might have a Contact, which might have an Address, both of which are separate entities. To query for a list of Contact objects, I might do something like the following.
IList<Contact> contacts;
using (IContactRepository repository = new ContactRepository())
{
    contacts = repository.Fetch().ToList();
}

EditContactViewModel vm = new EditContactViewModel(contacts);
return View(vm);
This method brings on a few problems. For example, the repository is queried within a using statement. By the time the view renders, the context has gone out of scope, making it impossible for the view to query the Address associated with the Contact. I could enable eager loading, but I'd rather not. Furthermore, I don't like that the entity model has bled over into my view (I feel like it's a bad idea for my View to have knowledge of the relationship between Contact and Address, but feel free to disagree with me).
I have considered creating a flattened class that contains properties from both the Contact and Address entities. I could then project the Contact and Address entities into my new, flattened object. One of my concerns with this approach is that my action methods may get a little busy, and I don't think AutoMapper is able to map two or more objects into a single type.
What technique is/are preferred for overcoming my concerns?
AutoMapper will work for your case. What you have is an object graph (a thing that has some more things), which AutoMapper handles fine.
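As a sketch of that, using AutoMapper's older static API (newer versions use a MapperConfiguration instead), and assuming the flattened view model property is named AddressCity:
// Contact.Address.City is flattened onto ContactViewModel.AddressCity by naming convention.
Mapper.CreateMap<Contact, ContactViewModel>();

var viewModels = contacts.Select(c => Mapper.Map<Contact, ContactViewModel>(c)).ToList();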
Taking these concerns in order...
First, if you are worried about the using statement and the repository (I don't know if it is LINQ-to-SQL or LINQ-to-Entities, but it doesn't matter), what I would recommend is implementing IDisposable on your Controller, and then storing the repository in a field on the model, on the controller, or somewhere else where you have access to it in the view (if you need it; if the model needs it while the object is "alive", then you just need to keep it around for the life of the controller).
Then, when the request is complete, the Dispose method on your controller is called and you can dispose of the repository there.
Personally, I have a method on my base controller class which looks like this:
private readonly List<IDisposable> _disposables = new List<IDisposable>();

protected T AddDisposable<T>(T disposable) where T : class, IDisposable
{
    // Error checking.
    if (disposable == null) throw new ArgumentNullException("disposable");

    // Track the instance so the controller can dispose of it when it is itself disposed.
    _disposables.Add(disposable);
    return disposable;
}
Basically, it lets you register your IDisposable instances; then, in the IDisposable implementation of the controller, you iterate through the list, disposing of everything.
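The disposing side of that, assuming the _disposables list from the method above, might look like:
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Dispose everything registered through AddDisposable.
        foreach (IDisposable d in _disposables)
        {
            d.Dispose();
        }
        _disposables.Clear();
    }
    base.Dispose(disposing);
}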
Regarding the exposure of the address on the entity model, I don't see this as a bleed issue, personally. The address is part of the composition of the contact (IMO), so it would be wrong to not have it there.
However, I don't disagree if you don't want it there because you want to focus on one type in one controller at a time, etc, etc.
To that end, you would want to create Data Transfer Objects which basically map between the type you expose in the view model and your entity model.
Reviewing Conery's storefront, I don't understand why he used LINQ's auto-generated classes (i.e. the Order class) and then has another Order class defined that is not a partial class. When using the repository pattern, should one manually create the classes and disregard the DataContext altogether?
If you don't decouple your front end from the LINQ classes using an intermediary class, you can't control when the data context gets garbage collected. Typically with data context instances you want to be rid of them as soon as you're done using them. Here's how you might want to do this with the LINQ to SQL context:
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.FirstOrDefault(t => t.ID == 1);
    return thing;
}
... the MyDataContext instance is gone
With the "using" block, you're disposing of the instance of MYDataContext at the last "}". However, if you do this you'll get an error then trying to use "thing" because the data context instance is gone. If you don't dispose of the data context, it's left hanging around until it's eventually garbage collected.
If you introduce an intermediary class to decouple the linq to sql code from the calling app you can still get rid of your data context instance and return the same data (just in a different object):
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.FirstOrDefault(t => t.ID == 1);
    SomeThingElse otherThing = ConvertSomethingToSomethingElse(thing);
    return otherThing;
}
... the MyDataContext instance is gone
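The conversion method referenced above is just a plain projection into a POCO; something like this (the property names are assumptions):
private static SomeThingElse ConvertSomethingToSomethingElse(SomeThing thing)
{
    // Copy only the data the caller needs into an object with no data-context ties.
    return new SomeThingElse
    {
        ID = thing.ID,
        Name = thing.Name
    };
}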
Hope that helps.
Rob answered this question in one of his shows.
He uses POCO classes to keep his business logic decoupled from the data-access classes. For example, when he changes from LINQ-to-SQL to NHibernate, all he will need to do is change his "mappings" in his filters, and he will not have to make any changes in his business logic.
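In other words, the business logic only sees a plain POCO and a repository interface, along the lines of this (illustrative) shape:
public class Order                 // plain POCO, no data-access dependencies
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
}

// The LINQ-to-SQL implementation maps its generated classes to Order;
// an NHibernate implementation could be swapped in without touching callers.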
He said in one of his recent videos that he doesn't like the way LINQ to SQL does mapping. I agree, though I think it is complete overkill.
I'd say you're not breaking any major design patterns as long as you're sticking to the repository pattern itself. I think having two sets of classes is a matter of choice; albeit a bad one, it's still a choice.