I need to cache some data using System.Web.Caching.Cache. Not sure if it matters, but the data does not come from a database; it comes from a plethora of custom objects.
ASP.NET MVC is fairly new to me, and I'm wondering where it makes sense for this caching to occur:
Model or Controller?
At some level it makes sense to cache at the Model level, but I don't necessarily know the implications of doing so (if any). If caching were done at the Controller level, would that affect all requests, or just the current HttpContext?
So... where should application data caching be done, and what's a good way of actually doing it?
Update
Thanks for the great answers! I'm still trying to figure out where caching makes the most sense in different scenarios. If one is caching the entire page, then keeping it in the view makes sense, but where do you draw the line when it's not the entire page?
I think it ultimately depends on what you are caching. If you want to cache the result of rendered pages, that is tightly coupled to the HTTP nature of the request and would suggest an ActionFilter-level caching mechanism.
If, on the other hand, you want to cache the data that drives the pages themselves, then you should consider model-level caching. In this case, the controller doesn't care when the data was generated; it just performs the logic operations on the data and prepares it for viewing. Another argument for model-level caching is if you have other dependencies on the model data that are not attached to your HTTP context.
For example, I have a web app where most of my Model is abstracted into a completely different project. This is because there will be a second web app that uses this same backing, AND there's a chance we might have a non-web-based app using the same data as well. Much of my data comes from web services, which can be performance killers, so I have model-level caching that the controllers and views know absolutely nothing about.
I don't know the answer to your question, but Jeff Atwood talks about how the SO team did caching using the MVC framework for stackoverflow.com on a recent Hanselminutes show that might help you out:
http://www.hanselminutes.com/default.aspx?showID=152
Quick Answer
I would start with CONTROLLER caching using the OutputCache attribute, and later add Model caching if required. It's quicker to implement and has instant results.
Detailed Answer (because I like the sound of my voice)
Here's an example.
[OutputCache(Duration = 60, VaryByParam = "None")]
public ActionResult CacheDemo() {
    return View();
}
This means that if a user hits the site (subject to the cache requirements defined in the attribute), there's less work to be done. If there's only Model caching, then even though the logic (and most likely the DB hit) is cached, the web server still has to render the page. Why do that when the rendered result will always be the same?
So start with OutputCaching, then move onto Model caching as you performance test your site.
Output caching is also a lot simpler to start out with. You don't have to worry about web farm distributed caching problems (if you are part of a farm) or about the caching provider for the model.
Advanced Caching Techniques
You can also apply donut caching -> cache only part of the UI page :) Check it out!
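For illustration, here's a minimal sketch of the related donut-hole technique in MVC 3+, where only a fragment of the page is cached by output-caching a child action (the action and view names here are made up):

[ChildActionOnly]
[OutputCache(Duration = 3600)] // child action output caching supports Duration
public ActionResult NewsWidget()
{
    // Rendered from a parent view via Html.Action("NewsWidget");
    // only this fragment is cached, the rest of the page stays dynamic.
    return PartialView("_NewsWidget");
}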
I would choose caching at the model level.
(In general, the advice seems to be to minimize business logic at the controller level
and move as much as possible into model classes.)
How about doing it like this:
I have some entries in the model represented by the class Entry
and a source of entries (from a database, or 'a plethora of custom objects').
In the model I make an interface for retrieving entries:
public interface IEntryHandler
{
    IEnumerable<Entry> GetEntries();
}
In the model I have an actual implementation of IEntryHandler
where the entries are read from cache and written to cache.
public class EntryHandler : IEntryHandler
{
    private const string CacheKey = "AllEntries";

    public IEnumerable<Entry> GetEntries()
    {
        // Check if the entries are in the cache
        // (HttpRuntime.Cache is the System.Web.Caching.Cache instance):
        var entries = HttpRuntime.Cache[CacheKey] as List<Entry>;
        if (entries == null)
        {
            // There were no entries in the cache, so we read them from the
            // source (the database or the 'plethora of custom objects'):
            entries = LoadEntriesFromSource(); // placeholder for your data access

            // Save the retrieved entries to cache for later use:
            HttpRuntime.Cache.Insert(CacheKey, entries);
        }
        return entries;
    }
}
The controller would then call the IEntryHandler:
public class HomeController : Controller
{
    private readonly IEntryHandler _entryHandler;

    // The default constructor, using cache and database/custom objects
    public HomeController()
        : this(new EntryHandler())
    {
    }

    // This constructor allows us to unit test the controller
    // by writing a test class that implements IEntryHandler
    // but does not affect cache or entries in the database/custom objects
    public HomeController(IEntryHandler entryHandler)
    {
        _entryHandler = entryHandler;
    }

    // This controller action returns a list of entries to the view:
    public ActionResult Index()
    {
        return View(_entryHandler.GetEntries());
    }
}
This way it is possible to unit test the controller without touching real cache/database/custom objects.
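To make that concrete, here is a rough sketch of such a test (the fake class and the NUnit assertions are illustrative, not part of the original answer):

// A stub implementation that never touches cache or database:
public class FakeEntryHandler : IEntryHandler
{
    public IEnumerable<Entry> GetEntries()
    {
        return new List<Entry> { new Entry(), new Entry() };
    }
}

[Test]
public void Index_ReturnsEntriesFromHandler()
{
    var controller = new HomeController(new FakeEntryHandler());
    var result = (ViewResult)controller.Index();
    var model = (IEnumerable<Entry>)result.ViewData.Model;
    Assert.AreEqual(2, model.Count()); // Count() needs using System.Linq
}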
I think the caching should somehow be related to the model. I don't think the controller should care about the data itself; the controller's responsibility is to map the data - regardless of where it comes from - to the views.
Also, try to think about why you need to cache: do you want to save processing, data transmission, or something else? That will help you figure out where exactly you need your caching layer.
It all depends on how expensive the operation is. If you have complicated queries, then it might make sense to cache the data at the controller level so that the query is not executed again (until the cache expires).
Keep in mind that caching is a very complicated topic. There are many different places that you can store your cache:
Akamai / CDN caching
Browser caching
In-Memory application caching
.NET's Cache object
Page directive
Distributed cache (memcached)
Related
We are optimising a site and have read about the issue of the initial view lookup taking a long time; subsequent lookups of the views are then much faster. MiniProfiler shows that a lot of the time is spent in the initial view lookup (I know I can use a ~ path to reduce this) and in whatever else is done at this stage.
Where is the caching done? How long are view lookups etc cached? Can I see what is cached? Can we do anything to cause it to pre-load so there isn't a delay?
We have many views that are often not visited for hours and I don't want sudden peaks and troughs in performance.
We are using Azure and have a number of web role instances. Can I assume that each web role has its own cache of the view lookup? Can we centralise the caching so that it only occurs once per application?
Also, I read that MVC 4 is faster at finding views. Does anyone have any figures?
The default cache duration is 15 minutes, and the entries are stored in HttpContext.Cache; this is all managed by the System.Web.Mvc.DefaultViewLocationCache class. Since this uses standard ASP.NET caching, you could use a custom cache provider that gets its cache from WAZ AppFabric Cache or the new caching preview (there is one on NuGet: http://nuget.org/packages/Glav.CacheAdapter). Using a shared cache will make sure that only one instance needs to do the work of resolving the view. Or you could go and build your own cache provider.
Running your application in release mode, clearing unneeded view engines, writing the exact path instead of simply calling View, ... are all ways to speed up the view lookup process. Read more about it here:
http://samsaffron.com/archive/2011/08/16/Oh+view+where+are+thou+finding+views+in+ASPNET+MVC3+
http://blogs.msdn.com/b/marcinon/archive/2011/08/16/optimizing-mvc-view-lookup-performance.aspx
You can pre-load the view locations by adding a key for each view to the cache. You should format it as follows (where this is the current VirtualPathProviderViewEngine):
string.Format(CultureInfo.InvariantCulture, ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:", this.GetType().AssemblyQualifiedName, prefix, name, controllerName, areaName);
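As a rough sketch, pre-warming one entry could look like this (the controller/view names are placeholders, and the "View" prefix is an assumption based on the decompiled VirtualPathProviderViewEngine code):

var engine = ViewEngines.Engines.OfType<RazorViewEngine>().Single();
string key = string.Format(CultureInfo.InvariantCulture,
    ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:",
    engine.GetType().AssemblyQualifiedName,
    "View",   // prefix: "View" or "Partial"
    "Index",  // view name
    "Home",   // controller name
    "");      // area name
engine.ViewLocationCache.InsertViewLocation(
    new HttpContextWrapper(HttpContext.Current), key, "~/Views/Home/Index.cshtml");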
I don't have any figures if MVC4 is faster, but it looks like the DefaultViewLocationCache code is the same as for MVC3.
To increase my cache time to 24 hours, I used the following in Global.asax:
var viewEngine = new RazorViewEngine
{
    ViewLocationCache = new DefaultViewLocationCache(TimeSpan.FromHours(24))
};

// Only allow the Razor view engine, to improve performance
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(viewEngine);
The article ASP.NET MVC Performance Issues with Render Partial was also interesting.
I will look at writing my own ViewLocationCache to take advantage of shared Azure caching.
I am experiencing some bizarre problems with NHibernate within my MVC web application.
There is not one consistent error; I keep getting loads of random ones:
Transaction not successfully started
New request is not allowed to start because it should come with valid transaction descriptor
Unexpected row count: -1; expected: 1
To give a little context to the setup: I am using Ninject to inject the sessions and other NHibernate-related objects. Currently I am using RequestScope, though I have tried SingletonScope. I have a large and complicated data model, which is read out as a whole but persisted back in separate parts, as these can all be edited and saved individually.
An example would be a Customer object, which contains an address object, a contact object, a friends object, previous orders objects, etc.
So the whole object is read out, then mapped to the UI domain models and displayed in different partials within the page. Each partial can be updated individually via Ajax, so you may update one section or you could update them all together. It mainly seems to give me problems when I try to persist them all together (so 2-4 simultaneous Ajax requests to persist chunks of the model).
Now, I have integration tests that just test the persistence and retrieval of entities, both as a whole and individually, and they all pass fine. In the web app, however, the code just seems to keep throwing random exceptions, and originally it refused to persist outside of the NHibernate cache. I found a way round this by wrapping most units of work within transactions, which got the data persisting but started adding new errors to the mix.
Originally I was thinking of just scrapping NHibernate from the project; although I really want its persistence/caching layer, it just didn't seem to be flexible enough for my domain. That seems odd, as I have used it before without much problem, although it doesn't like 1-1 mappings.
So, has anyone else had flaky transaction/NHibernate issues like this within an ASP.NET MVC app? I know this may be a bit vague, as the errors don't point to one thing and it doesn't always error, so it's like stabbing in the dark, but I am out of ideas, so any help would be great!
-- Update --
I cannot post all relevant code as the project is huge, but the transaction bit looks like:
using (var transaction = sessionManager.Session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Do unit of work
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
Some of the main problems I have had on this project have stemmed from:
There are some 1-1 relationships with composite keys, but logically it makes sense
The NHibernate domain entities go through a mapping layer to become the UI domain entities, and vice versa when saving. The problem here is that, with the 1-1 mappings, when persisting the example Address I have to make a surrogate Customer object with the correct Id and then merge.
There is a LOT of Ajax that deals with chunks of the overall model (I talk as if there is one single model, but there are quite a few top-level models, just one that is most important)
Some notes that may help. I use Windsor, but I imagine the concepts are the same. It sounds like there may be a combination of things going on.
The SessionFactory should be created as a singleton and the session should be per web request. Something like:
Bind<ISessionFactory>()
    .ToProvider<SessionFactoryBuilder>()
    .InSingletonScope();

Bind<ISession>()
    .ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope();
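The SessionFactoryBuilder provider referenced above isn't shown in the answer; a minimal sketch might look like this (the configuration details are assumed):

public class SessionFactoryBuilder : Provider<ISessionFactory>
{
    protected override ISessionFactory CreateInstance(IContext context)
    {
        // Reads hibernate.cfg.xml / web.config; building the factory is
        // expensive, hence the singleton scope above.
        var configuration = new NHibernate.Cfg.Configuration().Configure();
        return configuration.BuildSessionFactory();
    }
}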
Be careful of keeping transactions open for too long; keep them as short-lived as possible to avoid deadlocks.
Check your queries are running as expected by using a tool like NHProf. Often people load up too much of the graph, which impacts performance and can create deadlocks.
Check your mappings for things like not.lazyload(), see if you actually need the additional data in the queries, and keep the results returned to a minimum. Check your queries' execution plans and ensure adequate indexes are in place.
I have had issues with MVC 3 action filters being cached, which meant transactions were not always started but would attempt to be closed, causing issues. I moved all my transaction commits into ActionResults in the controllers to keep transactions as short as possible and close to the action (see the sketch after this list).
Check your cascades in your mappings and keep the updates to a minimum.
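For illustration, keeping the transaction scoped tightly inside the action might look like this (all the names here are assumed, not from the original answer):

public ActionResult SaveAddress(AddressModel model)
{
    // The transaction is opened and committed within the action itself,
    // so it lives only as long as this one unit of work:
    using (var transaction = session.BeginTransaction())
    {
        var customer = session.Get<Customer>(model.CustomerId);
        customer.Address.Street = model.Street; // hypothetical mapping step
        transaction.Commit();
    }
    return RedirectToAction("Index");
}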
Let's start with this basic scenario:
I have a bunch of tables that are essentially rarely-changed enums (e.g. GeoLocations, Category, etc.). I want to load these into my EF ObjectContext so that I can assign them to entities that reference them as FKs. These objects are also used to populate all sorts of dropdown controls. Pretty standard scenarios so far.
Since a new controller is created for each page request in MVC, a new entity context is created, and these "enum" objects are loaded repeatedly. I thought about using a static context object across all instances of controllers (or repository objects).
But will this require too much locking and therefore actually worsen performance?
Alternatively, I'm thinking of using a static context only for read-only tables. But since entities that reference them must be in the same context anyway, this isn't any different from the above.
I also don't want to get into the business of attaching/detaching these enum objects, since I believe that once I attach a static enum object to an entity, I can't attach it again to another entity.
Please help; I'm quite new to EF + MVC, so I am wondering what the best approach is.
Personally, I never have any static Context stuff. For me, when I call the database (CRUD), I use that context for that single transaction/unit of work.
So in this case, what you're suggesting is that you wish to retrieve some data from the database .. and this data is .. more or less .. read-only and doesn't change / is static.
Lookup data is a great example of this.
So your Categories never change. Your GeoLocations never change, also.
I would not worry about this concept at the database/persistence level, but at the application level. So just forget that this data is static/read-only etc. and just get it. Then, when you're in your application (i.e. an ASP.NET MVC controller method or in the Global.asax code), THEN you should cache this ... at the UI layer.
If you're doing a nice n-tiered MVC app, which contains
UI layer
Services / Business Logic Layer
Persistence / Database data layer
Then I would cache this in the middle tier .. which is called by the UI layer (ie. the MVC Controller Action .. eg. public ActionResult Index())
I think it's important to know how to separate your concerns .. and the database stuff should be just that -> CRUD'ish stuff and some unique stored procs when required. Don't worry about caching data, etc. Keep this layer as light and as simple as possible.
Then your middle tier (if it exists) or your top tier should worry about what to do with this data -> in this case, cache it because it's very static.
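A minimal sketch of such a middle-tier lookup service, assuming an ObjectContext named MyEntities and using the ASP.NET cache (all the names here are illustrative):

public class LookupService
{
    public IList<Category> GetCategories()
    {
        // HttpRuntime.Cache works outside of a request context too:
        var categories = HttpRuntime.Cache["AllCategories"] as IList<Category>;
        if (categories == null)
        {
            using (var db = new MyEntities()) // hypothetical ObjectContext
            {
                categories = db.Categories.ToList(); // materialize before caching
            }
            HttpRuntime.Cache.Insert("AllCategories", categories);
        }
        return categories;
    }
}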
I've implemented something similar using LINQ to SQL by retrieving these 'lookup tables' as lists on app startup and storing them in ASP.NET's caching mechanism. By using the ASP.NET cache, I don't have to worry about threading/locking, etc. I'm not sure why you'd need to attach them to a context; something like that could easily be retrieved if necessary via the table's PK id.
I believe this is as much a question of what to cache as how. When you are dealing with EF, you can quickly run into problems when you try to persist EF objects across different contexts and attempt to detach/attach those objects. If you are using your own POCO objects with custom T4 templates then this isn't an issue, but if you are using vanilla EF then you will want to create POCO objects for your cache.
For most simple lookup items (i.e. a numeric primary key and a string text description), you can use a Dictionary. If you have multiple fields you need to pass back and forth with the UI, then you can build a more complete object model. Since these will be POCO objects, they can be persisted pretty much anywhere and any way you like. I recommend keeping the caching logic outside of your MVC application so that you can easily mock the caching activity for testing. If you have multiple lists you need to cache, you can put them all in one container class that looks something like this:
public class MyCacheContainer
{
    public Dictionary<int, string> GeoLocations { get; set; }
    public List<Category> Categories { get; set; }
}
The next question is whether you really need these objects in your entity model at all. Chances are all you really need are the primary keys (i.e. you create a dropdown list using the keys and values from the dictionary and just post the ID). Therefore you could potentially handle all of the lookups of the textual descriptions in the construction of your view models. That could look something like this:
MyEntityObject item = Context.MyEntityObjects.FirstOrDefault(i => i.Id == id);
MyCacheContainer cache = CacheFactory.GetCache();
MyViewModel model = new MyViewModel { Item = item, GeoLocationDescription = cache.GeoLocations[item.GeoLocationId] };
If you absolutely must have those objects in your context (i.e. if there are referential entities that tie 2 or more other tables together), you can pass that cache container into your data access layer so it can do the proper lookups.
As for assigning "valid" entities: in .NET 4 you can just set the foreign key properties and don't have to actually attach an object (technically you can do this in 3.5, but it requires magic strings to set the keys). If you are using 3.5, you might just try something like this:
myItem.Category = Context.Categories.FirstOrDefault(c => c.id == id);
While this isn't the most elegant solution and does require an extra round trip to the DB to get a category you don't really need, it works. Doing a single-record lookup based on a primary key should not really be that big of a hit, especially if the table is small, like the type of lookup data you are talking about.
If you are stuck with 3.5, don't want to make that extra round trip, and want to go the magic string route, just make sure you use some type of static resource and/or code generator for your magic strings so you don't fat-finger them. There are many examples here that show how to assign a new EntityKey to a reference without going to the DB, so I won't go into that in this question.
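For completeness, the .NET 4 foreign-key approach mentioned above could look like this (the CategoryId property name is assumed, and the association must be mapped as an FK association):

// Set the scalar FK instead of attaching a Category entity - no DB round trip:
myItem.CategoryId = id;
Context.SaveChanges();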
Using ASP.NET MVC, I've implemented an autocomplete textbox using the approach very similar to the implementation by Ben Scheirman as shown here: http://flux88.com/blog/jquery-auto-complete-text-box-with-asp-net-mvc/
What I haven't been able to figure out is whether it's a good idea to cache the data for the autocomplete textbox, so there won't be a round trip to the database on every keystroke.
If caching is preferred, can you point me in the direction of how to implement caching for this purpose?
You have a couple of things to ask yourself:
Is the data I'm pulling back dynamic?
If not, how often do I expect this call to occur?
If the answers are 1) not really, and 2) frequently, then you should cache it.
I don't know how your data access is set up, but I simply throw my data into cache objects like so:
public IQueryable<Category> FindAllCategories()
{
    // Check the cache first:
    var allCats = HttpContext.Current.Cache["AllCategories"] as List<Category>;
    if (allCats == null)
    {
        // Materialize with ToList() so we cache the results,
        // not a deferred query tied to this DataContext:
        allCats = (from c in db.Categories
                   orderby c.Name
                   select c).ToList();

        // Cache with a 30-minute sliding expiration:
        HttpContext.Current.Cache.Add("AllCategories", allCats, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(30),
            System.Web.Caching.CacheItemPriority.Default, null);
    }
    return allCats.AsQueryable();
}
This is an example of one of my repository queries, based on LINQ to SQL. It first checks the cache; if the entry exists in the cache, it returns it. If not, it goes to the database and then caches the result with a sliding expiration.
You sure can cache your result, using the attribute like:
[OutputCache(Duration=60, VaryByParam="searchTerm")]
ASP.NET will handle the rest.
I think caching in this case would require more work than simply storing every request. You'd want to focus more on the terms being searched than on individual keys. You'd have to keep track of which terms are more popular and cache combinations of characters that make up those terms. I don't think simply caching every single request is going to get you any performance boost; you're just going to have stale data in your cache.
Well, how will caching in ASP.NET prevent server round trips? You'll still have server round trips; at best you will not have to look up the database if you cache. If you want to prevent server round trips, then you need to cache on the client side.
While it's quite easily possible with JavaScript (you need to store your data in a variable and check that variable for relevant data before looking up the server again), I don't know of a ready-made tool that does this for you.
I do recommend you consider caching to prevent round trips. In fact, I have half a mind to implement JavaScript caching in one of my own websites after reading this.
At the moment, I've got a quite badly fashioned view model.
Classes look like this =>
public class AccountActionsForm
{
public Reader Reader { get; set; }
//something...
}
The problem is that the Reader type comes from the domain model (a violation of SRP).
Basically, I'm looking for design tips (i.e. is it a good idea to split the view model into inputs/outputs?) on how to make my view model friction-less and developer friendly (i.e. mapping should work automatically, using a controller base class).
I'm aware of the AutoMapper framework and I'm likely going to use it.
So, once more - what are common gotchas when trying to create a proper view model? How should it be structured? How is mapping done when multiple domain objects are needed as input?
I'm confused about cases when a view needs data from more than one aggregate root. I'm creating an app which has entities like Library, Reader, BibliographicRecord, etc.
In my case - at the domain level, it makes no sense to group all those three types into a LibraryReaderThatHasOrderedSomeBooks or whatnot, but a view that should display a list of ordered books for a specific reader in a specific library needs them all.
So - it seems fine to create a view OrderedBooksList with an OrderedBooksListModel view model underneath that holds LibraryOutput, ReaderOutput and BibliographicRecordOutput view models. Or even better - an OrderedBooksListModel view model that leverages the flattening technique and has props like ReaderFirstName, LibraryName, etc.
But that leads to mapping problems, because there is more than one input. It's not a 1:1 relation anymore, where I kick in one aggregate root only. Does that mean my domain model is kind of wrong?
And what about view model fields that live purely in the UI layer (i.e. an enum that indicates the checked tab)?
Is this what everyone does in such cases?
FooBarViewData fbvd = new FooBarViewData();
fbvd.Foo = new Foo(){ A = "aaa"};
fbvd.Bar = new Bar(){ B = "bbb"};
return View(fbvd);
I'm not willing to do this=>
var fbvd = new FooBarViewData();
fbvd.FooOutput = _mapper.Map<Foo,FooOutput>(new Foo(){ A = "aaa"});
fbvd.BarOutput = _mapper.Map<Bar,BarOutput>(new Bar(){ B = "bbb"});
return View(fbvd);
Seems like a lot of writing. :)
Reading this at the moment. And this.
Ok. I thought about this issue a lot, and yeah - adding another abstraction layer seems like a solution =>
So - in my mind this already works, now it's time for some toying.
ty Jimmy
It's tough to define all these, but here goes. We like to separate what the View sees from what the Controller builds. The View sees a flattened, brain-dead, DTO-like object. We call this a View Model.
On the Controller side, we build up a rich graph of what's needed to build the View Model. This could be just a single aggregate root, or it could be a composition of several aggregate roots. All of these together combine into what we call the Presentation Model. Sometimes the Presentation Model is just our Persistence (Domain) Model, but sometimes it's a new object altogether. However, what we've found in practice is that if we need to build a composite Presentation Model, it tends to become a magnet for related behavior.
In your example, I'd create a ViewFooBarModel and a ViewFooBarViewModel (or ViewFooBarModelDto). I can then talk about ViewFooBarModel in my controller, and then rely on mapping to flatten out what I need from this intermediate model with AutoMapper.
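As a rough sketch of that flattening step using AutoMapper's classic static API (the type and property names are assumed):

// One-time configuration, e.g. in Application_Start; AutoMapper flattens
// nested members like Foo.A into FooA on the ViewModel by naming convention:
Mapper.CreateMap<ViewFooBarModel, ViewFooBarViewModel>();

// In the controller action:
var presentationModel = BuildViewFooBarModel(); // hypothetical composition step
var viewModel = Mapper.Map<ViewFooBarModel, ViewFooBarViewModel>(presentationModel);
return View(viewModel);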
Here's one item that dawned on us after we had been struggling with alternatives for a long time: rendering data is different from receiving data.
We use ViewModels to render data, but it quickly turned out that when it came to receiving data through form posts and the like, we couldn't really make our ViewModels fit the concept of ModelBinding. The main reason is that the round trip to the browser often involves loss of data.
As an example, even though we use ViewModels, they are based on data from real Domain Objects, but they may not expose all data from a Domain Object. This means that we may not be able to immediately reconstruct an underlying Domain Object from the data posted by the browser.
Instead, we need to use mappers and repositories to retrieve full Domain Objects from the posted data.
Before we realized this, we struggled a lot with trying to implement custom ModelBinders that could reconstruct a full Domain Object or ViewModel from the posted data, but now we have separate PostModels that model how we receive data.
We use abstract mappers and services to map a PostModel to a Domain Object - and then perhaps back to a ViewModel, if necessary.
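A minimal sketch of that receive-side flow (all of the type and member names here are assumed, for illustration only):

public ActionResult Edit(CustomerPostModel post)
{
    // The posted data is incomplete, so reload the full Domain Object first:
    var customer = customerRepository.GetById(post.Id);

    // Apply only the fields the form actually posted:
    customer.Name = post.Name;
    customer.Email = post.Email;

    customerRepository.Save(customer);
    return RedirectToAction("Details", new { id = post.Id });
}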
While it may not make sense to group unrelated Entities (or rather their Repositories) into a Domain Object or Service, it may make a lot of sense to group them in the Presentation layer.
Just as we build custom ViewModels that represents Domain data in a way particularly suited to a specific application, we also use custom Presentation layer services that combine things as needed. These services are a lot more ad-hoc because they only exist to support a given view.
Often, we will hide this service behind an interface so that the concrete implementation is free to use whichever unrelated injected Domain objects it needs to compose the desired result.