I am writing an MVC3 application, using Ninject DI and the repository pattern. Ninject is set up so that the repositories have a per-request lifetime.
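The per-request binding looks roughly like this (a sketch, since the actual Ninject module isn't shown here; kernel refers to the Ninject kernel):
// One repository instance per HTTP request (InRequestScope comes from Ninject.Web.Common)
kernel.Bind<ITestRepository>().To<TestRepository>().InRequestScope();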
I am storing the context object in the HTTP request (via HttpContext.Current.Items), using the following code:
public static MessengerEntities GetContext()
{
if (!HttpContext.Current.Items.Contains("_db_context"))
{
HttpContext.Current.Items.Add("_db_context", new MessengerEntities());
}
return (MessengerEntities)HttpContext.Current.Items["_db_context"];
}
Then each repository calls this procedure to get either an existing or a new context object, e.g.:
public class TestRepository : ITestRepository
{
private MessengerEntities context = ContextHelper.GetContext();
#region ITestRepository Members
private string _testProperty = "blah";
public string testProperty
{
get
{
_testProperty = context.UserLogins.Where(n => n.inactive == null || !n.inactive.Value).ToList().Count.ToString();
return _testProperty;
}
set
{
_testProperty = value;
}
}
#endregion
}
(Later on, I plan to use a generic IRepository pattern, but for now I am just using this test repository.)
My question is: when the Request object is disposed of, will it also dispose of the context object in the Items collection? In other words, will it call Dispose on each object that may be stored in that collection?
I know there are a lot of discussions about this issue here, but they all seem to involve scenarios that are not quite the same as mine, so it's kind of hard to divine the answer.
Related
I am developing a web application in ASP.NET MVC5.
Like all basic web applications it also has a login page where a user can authenticate himself. Once authenticated I want to store a couple of user-related items in the Session so I don't have to query the database every time to reconstruct the authenticated user.
After having read Mark Seemann's book about Dependency Injection I want to loosely couple all my layers and make sure that everything can easily be replaced.
At the moment my SessionProvider is by default using the Session object, but maybe in the future this could change to another type of storage mechanism.
The approach I have taken uses the Ambient Context pattern, which he explains with the TimeProvider example, but I am wondering whether this is the right approach for this functionality and whether it is thread-safe (also for unit testing).
Is my solution proper, or how would you implement such a mechanism? This has been on my mind for days now, so any help defining the best solution is appreciated.
Thanks!
public abstract class SessionProvider
{
private static SessionProvider _current;
static SessionProvider()
{
_current = new DefaultSessionProvider();
}
public static SessionProvider Current
{
get { return _current; }
set
{
if (value == null)
{
throw new ArgumentNullException();
}
_current = value;
}
}
public abstract string UserName { get; set; }
}
My local default:
public class DefaultSessionProvider : SessionProvider
{
public override string UserName
{
get { return (string) HttpContext.Current.Session["username"]; }
set { HttpContext.Current.Session["username"] = value; }
}
}
So I have access in my entire solution to my SessionProvider, whether this is a real session object or a database-driven storage mechanism...
SessionProvider.Current.UserName = "myUserName";
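For unit tests, the idea would be to swap in a fake provider before exercising the code under test; a minimal sketch (the fake class name is illustrative, not part of my current code):
// Hypothetical stub that avoids touching HttpContext in tests
public class FakeSessionProvider : SessionProvider
{
    public override string UserName { get; set; }
}

// In a test's arrange step:
SessionProvider.Current = new FakeSessionProvider { UserName = "testUser" };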
Once authenticated I want to store a couple of user-related items in the Session so I don't have to query the database every time to reconstruct the authenticated user.
Well, it looks like you're working on some sort of caching mechanism. It doesn't really matter whether it's in the Session, a Redis cache, or any other type of cache; it is key-value storage. I would create a cache interface, something like this:
interface ICache
{
object this[string key] {get; set;}
}
And create concrete classes. SessionCache in your case:
public class SessionCache : ICache
{
    private readonly IHttpSessionState _session;

    public SessionCache(IHttpSessionState session)
    {
        _session = session;
    }

    // ICache implementation: read and write values by key in the underlying session
    public object this[string key]
    {
        get { return _session[key]; }
        set { _session[key] = value; }
    }
}
So you'll narrow the problem down to dependency-injecting the Session object into a concrete class (SessionCache). With Ninject you can do something like:
.WithConstructorArgument("session", ninjectContext => HttpContext.Current.Session);
And after that you can finally make your controllers dependent on ICache.
In your unit tests project you can create another ICache concrete class, something like a DummyCache with an in-memory store, so you can test your controllers without depending on the Session object.
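For illustration, a minimal in-memory implementation along those lines might look like this (the class name and backing dictionary are assumptions, not taken from the answer):
using System.Collections.Generic;

// Hypothetical in-memory ICache for unit tests
public class DummyCache : ICache
{
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();

    public object this[string key]
    {
        get
        {
            object value;
            return _store.TryGetValue(key, out value) ? value : null;
        }
        set { _store[key] = value; }
    }
}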
I am creating an application with ASP.NET MVC and Entity Framework Code First. I am using the repository and unit of work patterns, influenced by the following link.
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I have a question about the implementation of Unit of Work. In that link, the unit of work is implemented by writing the entities directly into the class itself, like this:
public class UnitOfWork : IDisposable
{
private SchoolContext context = new SchoolContext();
private GenericRepository<Department> departmentRepository;
public GenericRepository<Department> DepartmentRepository
{
get
{
if (this.departmentRepository == null)
{
this.departmentRepository = new GenericRepository<Department>(context);
}
return departmentRepository;
}
}
}
Do you think that implementation is good enough? Every time I add or remove entities I need to change my Unit of Work class. I believe the Unit of Work should not depend on the entities, because in my application we are going to add and remove entities frequently based on client feedback.
I may sound stupid but let me know your views on that.
The Unit of Work pattern is already implemented in Entity Framework.
The DbContext is your Unit of Work.
Each IDbSet is a Repository.
using (var context = new SchoolContext()) // instantiate our Unit of Work
{
var department = context.Departments.Find(id);
}
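To make the "unit of work" aspect explicit, a typical usage might look like this (a sketch; the Name property is an assumption, not taken from the tutorial's model):
using (var context = new SchoolContext())          // begin the unit of work
{
    var department = context.Departments.Find(id); // repository-style access via the DbSet
    department.Name = "Updated name";              // hypothetical property change
    context.SaveChanges();                         // commit the unit of work
}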
There are a few flavors of the Unit of Work pattern. The one you are describing is a "show everything" approach; there is a "hide everything" approach as well. In the hide approach the unit of work exposes the DbContext.SaveChanges() method and nothing else, which sounds like what you want.
// The context abstraction the unit of work (and repositories) depend on.
public interface IContext
{
    IDbSet<T> Set<T>() where T : class;
    int SaveChanges();
}

public class YourContext : DbContext, IContext
{
    // DbContext.Set<T>() returns DbSet<T>, so satisfy the IDbSet<T> signature explicitly.
    IDbSet<T> IContext.Set<T>() { return Set<T>(); }
}

public interface IUnitOfWork
{
    void Commit();
}

public class UnitOfWork : IUnitOfWork
{
    // IOC should always inject the same instance of this, register it accordingly
    private readonly IContext _context;

    public UnitOfWork(IContext context)
    {
        _context = context;
    }

    public void Commit()
    {
        // Try/catch the validation exception if you want to return validation errors this
        // way; if you're confident you've already validated, keep void here or return the
        // int from SaveChanges. Handle disposing properly (not covered here). You may also
        // save multiple "contexts" in one transaction, or run context processors that act
        // on items in the context.
        _context.SaveChanges();
    }
}
This leaves the issue of how you get your repositories into the classes that need them if you are not taking them from the UnitOfWork. This is best handled by an IOC framework. Again, there are a couple of options here. One is to register the context as a single instance per request and have it injected into both the UnitOfWork and your custom Repository class.
public interface IRepository<T> where T : class
{
    IQueryable<T> Records();
    // other methods go here
}

public class Repository<T> : IRepository<T> where T : class
{
    // Same instance of the context that is injected into the unit of work; that is why
    // everything saves when you Commit(). This can get tricky once you start adding
    // Add, Update and similar methods, but EF has the support needed.
    private readonly IContext _context;

    public Repository(IContext context)
    {
        _context = context;
    }

    public IQueryable<T> Records()
    {
        return _context.Set<T>();
    }
}
public class SomeService : ISomeService
{
    private readonly IRepository<MyObject> _myObjectRepository;

    public SomeService(IRepository<MyObject> myObjectRepository)
    {
        _myObjectRepository = myObjectRepository;
    }
}
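One way the per-request registration might look with Ninject (a sketch; the binding setup itself is not part of the original answer):
// One IContext per web request, shared by the unit of work and every repository,
// so that Commit() saves everything touched during the request.
kernel.Bind<IContext>().To<YourContext>().InRequestScope();
kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InRequestScope();
kernel.Bind(typeof(IRepository<>)).To(typeof(Repository<>)).InRequestScope();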
Personally, I consider the IDbSet a sufficient abstraction, so I no longer create repositories. In order to inject the IDbSets from the context, though, you need to register them as instances that you extract from the context in your IOC setup. This can be complex, and depending on your skills you could find yourself in the situation where you have to register each IDbSet, which I know you are trying to avoid.
What's nice about using the IDbSet is that you have access to simple methods like Add and can avoid some of the more complex parts of working with Entity and DbEntity in a generic sense.
public class SomeService : ISomeService
{
    // Requires specialized IOC configuration because the instance has to be pulled from
    // the request-scoped context. I personally don't know how to do this with a single
    // registration, so it has the same drawback as adding each new repository to the
    // unit of work: each new entity added to the context needs its own IOC registration.
    private readonly IDbSet<MyObject> _myObjectSet;

    public SomeService(IDbSet<MyObject> myObjectSet)
    {
        _myObjectSet = myObjectSet;
    }
}
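For completeness, the per-entity registration alluded to above could look roughly like this with Ninject (a sketch; one such binding is needed for every entity type):
// Pull the IDbSet out of the request-scoped context; repeat per entity type.
kernel.Bind<IDbSet<MyObject>>()
      .ToMethod(ctx => ctx.Kernel.Get<IContext>().Set<MyObject>())
      .InRequestScope();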
Try passing the SchoolContext to the GenericRepository:
public class GenericRepository<T> where T : class
{
    private readonly SchoolContext _context;

    public GenericRepository(SchoolContext context)
    {
        _context = context;
    }

    public T Get(int id)
    {
        return _context.Set<T>().Find(id);
    }
}
And use:
using(var context = new SchoolContext())
{
var departmentRepository = new GenericRepository<Department>(context);
var department = departmentRepository.Get(1);
}
I have this really basic code in an MVC controller action. It maps an Operation model class to a very basic OperationVM view-model class.
public class OperationVM: Operation
{
public CategoryVM CategoryVM { get; set; }
}
I need to load the complete list of categories in order to create a CategoryVM instance.
Here's how I (try to) create a List<OperationVM> to show in the view.
public class OperationsController : Controller {
private SomeContext context = new SomeContext ();
public ViewResult Index()
{
var ops = context.Operations.Include("blah...").ToList();
Mapper.CreateMap<Operation, OperationVM>()
.ForMember(
dest => dest.CategoryVM,
opt => opt.MapFrom(
src => CreateCatVM(src.Category, context.Categories)
// trouble here ----------------^^^^^^^
)
);
var opVMs = ops.Select(op => Mapper.Map<Operation, OperationVM>(op))
.ToList();
return View(opVMs);
}
}
Everything works great the first time I hit the page. The problem is that the mapper object is static, so when calling Mapper.CreateMap(), the instance of the current DbContext is captured in the closure given to CreateMap().
The second time I hit the page, the static map is already in place, still using the reference to the initial, now disposed, DbContext.
The exact error is:
The operation cannot be completed because the DbContext has been disposed.
The question is: How can I make AutoMapper always use the current context instead of the initial one?
Is there a way to use an "instance" of automapper instead of the static Mapper class?
If this is possible, is it recommended to re-create the mapping every time? I'm worried about reflection slow-downs.
I read a bit about custom resolvers, but I get a similar problem - How do I get the custom resolver to use the current context?
It is possible, but the setup is a bit complicated. I use this in my projects with the help of Ninject for dependency injection.
AutoMapper has the concept of TypeConverters. Converters provide a way to implement complex operations required to convert certain types in a separate class. If converting Category to CategoryVM requires a database lookup, you can implement that logic in a custom TypeConverter class similar to this:
using System;
using AutoMapper;
public class CategoryToCategoryVMConverter :
TypeConverter<Category, CategoryVM>
{
public CategoryToCategoryVMConverter(DbContext context)
{
this.Context = context;
}
private DbContext Context { get; set; }
protected override CategoryVM ConvertCore(Category source)
{
// use this.Context to lookup whatever you need
return CreateCatVM(source, this.Context.Categories);
}
}
You then configure AutoMapper to use your converter:
Mapper.CreateMap<Category, CategoryVM>().ConvertUsing<CategoryToCategoryVMConverter>();
Here comes the tricky part. AutoMapper will need to create a new instance of our converter every time you map values, and it will need to provide a DbContext instance to the constructor. In my projects I use Ninject for dependency injection, and it is configured to use the same instance of DbContext while processing a request. This way the same instance of DbContext is injected both into your controller and into your AutoMapper converter. The trivial Ninject configuration would look like this:
Bind<DbContext>().To<SomeContext>().InRequestScope();
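For the converter itself to be resolved through Ninject, AutoMapper also has to be told to construct services via the kernel, along these lines (a sketch; kernel is assumed to be your Ninject kernel, and this uses the classic static Mapper API):
Mapper.Initialize(cfg =>
{
    // Let AutoMapper ask Ninject for converter instances (and their DbContext).
    cfg.ConstructServicesUsing(type => kernel.Get(type));
    cfg.CreateMap<Category, CategoryVM>().ConvertUsing<CategoryToCategoryVMConverter>();
});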
You can of course use some sort of factory pattern to get an instance of DbContext instead of injecting it into constructors.
Let me know if you have any questions.
I've found a workaround that's not completely hacky.
Basically, I tell AutoMapper to ignore the tricky field and I update it myself.
The updated controller looks like this:
public class OperationsController : Controller {
private SomeContext context = new SomeContext ();
public ViewResult Index()
{
var ops = context.Operations.Include("blah...").ToList();
Mapper.CreateMap<Operation, OperationVM>()
.ForMember(dest => dest.CategoryVM, opt => opt.Ignore());
var opVMs = ops.Select(
op => {
var opVM = Mapper.Map<Operation, OperationVM>(op);
opVM.CategoryVM = CreateCatVM(op.Category, context.Categories);
return opVM;
})
.ToList();
return View(opVMs);
}
}
Still curious how this could be done from within AutoMapper...
The answer from #LeffeBrune is perfect. However, I want to have the same behavior, but I don't want to map every property myself. Basically I just wanted to override the "ConstructUsing".
Here is what I came up with.
public static class AutoMapperExtension
{
public static void ConstructUsingService<TSource, TDestination>(this IMappingExpression<TSource, TDestination> mappingExpression, Type typeConverterType)
{
mappingExpression.ConstructUsing((ResolutionContext ctx) =>
{
var constructor = (IConstructorWithService<TSource, TDestination>)ctx.Options.ServiceCtor.Invoke(typeConverterType);
return constructor.Construct((TSource)ctx.SourceValue);
});
}
}
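// The IConstructorWithService contract referenced above is not shown in the original
// answer; a minimal sketch of its assumed shape:
public interface IConstructorWithService<TSource, TDestination>
{
    TDestination Construct(TSource source);
}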
public class CategoryToCategoryVMConstructor : IConstructorWithService<Category, CategoryVM>
{
private DbContext dbContext;
public CategoryToCategoryVMConstructor(DbContext dbContext)
{
this.dbContext = dbContext;
}
public CategoryVM Construct(Category category)
{
// Some commands here
if (category.Id > 0)
{
var vmCategory = dbContext.Categories.FirstOrDefault(m => m.Id == category.Id);
if (vmCategory == null)
{
throw new NotAllowedException();
}
return vmCategory;
}
return new CategoryVM();
}
}
// Initialization
Mapper.Initialize(cfg =>
{
cfg.ConstructServicesUsing(type => nInjectKernelForInstance.Get(type));
cfg.CreateMap<Category, CategoryVM>().ConstructUsingService(typeof(CategoryToCategoryVMConstructor));
});
I've run into some problems in an application where the .NET process is running out of memory. One change I made in the application was adding a lot of LINQ to SQL classes. I'm wondering if there is an issue with how I'm creating my DataContext.
One option is to create the DataContext only when I need it. Obviously, if I were changing data, I would create a variable to hold the DataContext, because I would need the same DataContext across multiple statements.
Technique 1:
public class SchoolRepository
{
DataBaseDataContext GetCtx()
{
return new DataBaseDataContext();
}
public List<School> GetSchools()
{
return GetCtx().Schools.ToList();
}
}
Here is another way I could create the DataContext. In this case I have a class field which holds a reference to a DataContext.
Technique 2:
public class SchoolRepository
{
private DataBaseDataContext _ctx = null;
DataBaseDataContext ctx
{
get { return _ctx = (_ctx ?? new DataBaseDataContext()); }
}
public List<School> GetSchools()
{
return ctx.Schools.ToList();
}
}
I have been using the second way (with a class field), and I'm wondering if that could be causing the context to stick around longer than the first way, since it would live as long as an instance of my class does.
Perhaps I'm grasping at straws here, but I'm wondering if one way is "safer" than the other.
I'm busy doing some work on an existing web app and am concerned about the thread safety of the ObjectContext being used in a BaseRepository class. The code that is causing my spidey sense to tingle is:
// within base repository
private SiteDataContext context;
public SiteDataContext Context
{
get
{
if (context == null)
context = new SiteDataContext();
return context;
}
}
// inherited repository
public class InheritedRepository1 : BaseRepository
{
public SomeEntity Get()
{
return Context.SomeEntity.First();
}
}
public class InheritedRepository2 : BaseRepository
{
public SomeOtherEntity Get()
{
return Context.SomeOtherEntity.First();
}
}
My understanding is:
1. The ObjectContext is not thread-safe and may be shared across threads in this instance.
2. A single ObjectContext should be used across an HTTP request; multiple ObjectContexts are being created from various repositories to render a page.
3. The ObjectContext does not seem to be closed or disposed of at any point in the HTTP request. This could be a problem if transactions are being used and they are committed from threads that did not begin them.
I would appreciate any feedback on these three points, as my experience is primarily with NHibernate.
You could implement the Repository and Unit of Work patterns.
Considering that IIS uses a thread pool to manage requests, my solution is to create one and only one [ThreadStatic] DataContext for each request, and clear it when the request ends.
public class DataContextManager
{
[ThreadStatic]
private static MyDataContext dataContext = null;
public static MyDataContext GetContext()
{
if (dataContext == null)
{
dataContext = new MyDataContext();
}
return dataContext;
}
public static void Clear()
{
dataContext = null;
}
}
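A sketch of the cleanup wiring, assuming a Global.asax.cs (not shown in the original answer); note that with async handlers or thread agility, EndRequest is not guaranteed to run on the thread that created the context, which is a known caveat of the [ThreadStatic] approach:
using System;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Drop the thread-static reference so the next request served by this thread starts fresh.
        DataContextManager.Clear();
    }
}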