I have a public static/singleton class with an IsDataModified() method that reflects changes in the database, files, type of user, API, etc. It executes immediately and just returns a bool.
The frequency of modification of the output data varies wildly, from a minute to months, so I won't use sliding expiration; instead, let the duration be MAX or infinite.
But what I'm looking for is:

- A request comes in from the browser.
- An MVC filter checks whether the cache is missing or IsDataModified() returns true.
- If so, update the cache and return the fresh result.
- Otherwise, return the existing cached result.
I tried extending OutputCache and setting the duration to a very large number, but once the page is cached the filters are no longer triggered.
Basically, I do not want the specified duration to be the deciding factor for when the cache expires; rather, IsDataModified() should be the deciding factor.
One approach I can think of is to create a simple filter and use the output cache (or a similar object) from code, but I could not find a way to get a cached ViewResult out of OutputCacheAttribute.
Is this possible? Please suggest.
So I have implemented a solution built on top of Redis (memcached is a lot messier). I use an open-source Redis output cache provider which basically creates a key corresponding to the URL of the page. Whenever the underlying data changes for one of the pages, I remove the values from Redis where the key matches some pattern. (My data has a sort of hierarchy, so when a piece of data on the parent is updated, I delete the cache for more items.)
Using a similar approach of deleting the cached page when the data is updated would probably work for you too. On a side note, I am thinking of changing my process so that a background service regenerates the page when data is updated and replaces the cache, so that the first users don't get a slow response after the page is removed from the cache.
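For illustration, a minimal sketch of the pattern-based eviction with StackExchange.Redis (the key prefix and URL layout here are assumptions; the real output cache provider's key format will differ):

using System.Linq;
using StackExchange.Redis;

public static class PageCacheInvalidator
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    // Remove every cached page whose key matches the pattern,
    // e.g. "*/parents/42*" to evict a parent page and its children.
    public static void Evict(string pattern)
    {
        foreach (var endpoint in Redis.GetEndPoints())
        {
            IServer server = Redis.GetServer(endpoint);
            // Keys() SCANs on modern servers, so it won't block Redis
            RedisKey[] keys = server.Keys(pattern: pattern).ToArray();
            if (keys.Length > 0)
                Redis.GetDatabase().KeyDelete(keys);
        }
    }
}

Whenever a parent record changes, a call like Evict("*/parents/42*") then forces all the affected pages to be rebuilt on their next request.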
I have an endpoint where I can request multiple pieces of data (such as https://example.com/things?ids=1,2,3) that I'm querying using Siesta. I'm trying to figure out the proper behavior of my persistent entity cache if I only have some of the things cached. So, if I have thing 1 and thing 2, but not thing 3 cached, I'd like to return a partial hit and have Siesta also query my server with the original URL. However, my understanding is that if EntityCache.readEntity returns anything, then Siesta assumes that the query was fully fulfilled, and does not continue on to make the network request.
Is there a good way for me to implement a partial hit, or do I need to return nil from readEntity and wait for a response from the server?
Have your cache return the partial content with an Entity.timestamp in the distant past. (It’s fine to use zero.) This will cause the 1,2 partial content to appear immediately on launch and when offline, but loadIfNeeded() will consider that content stale and thus still trigger the request for 1,2,3.
I am caching lookup data in my MVC application. I have the following code:
// GET: Audit Grants
public JsonResult GetAuditGrants(int auditID)
{
    // Key the entry by auditID so different audits don't share one cache slot
    string cacheKey = "AuditGrants_" + auditID;
    var types = (IEnumerable<Grant>)System.Web.HttpContext.Current.Cache[cacheKey];
    if (types == null)
    {
        AuditDAL ad = new AuditDAL();
        types = ad.GetAuditIssueGrants(auditID);
        System.Web.HttpContext.Current.Cache[cacheKey] = types;
    }
    return this.Json(types.ToList());
}
If no expiration is set, when does the data in the cache expire by default? Also, is it recommended to store the expiration settings in web.config, for consistency across the lookup data in my app?
To answer your first question, we can consult MSDN. According to its documentation, adding an object using the Item property (or indexer) is equivalent to calling the Insert method, whose documentation states:
The object added to the cache using this overload of the Insert method
is inserted with no file or cache dependencies, a priority of Default,
a sliding expiration value of NoSlidingExpiration, and an absolute
expiration value of NoAbsoluteExpiration.
Your second question is really pretty application-specific. The best practice is to profile your application. If your application is experiencing a ton of cache misses and your cache stays small, then you might want to extend the sliding expiration window by using one of the Add or Insert overloads that give you that control. In that case, storing your selected parameters in the app settings seems like a good idea.
One thing to remember about this cache, however: it is per-app domain. If you have multiple web frontends, or even an IIS server configured to launch more than one worker process for your app, then you may not be getting the most out of your caching strategy. In that case, you might need to use something that can offer persistence to multiple instances of your app. We use Redis, but there are many other options.
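For example, a hedged sketch of that suggestion — the appSettings key name is made up here — using one of the Insert overloads with a sliding window:

using System;
using System.Configuration;
using System.Web;
using System.Web.Caching;

public static class LookupCache
{
    // "LookupCacheSlidingMinutes" is an illustrative appSettings key
    private static readonly TimeSpan SlidingWindow = TimeSpan.FromMinutes(
        double.Parse(ConfigurationManager.AppSettings["LookupCacheSlidingMinutes"] ?? "20"));

    public static T GetOrAdd<T>(string key, Func<T> load) where T : class
    {
        var cached = HttpContext.Current.Cache[key] as T;
        if (cached != null)
            return cached;

        T value = load();
        HttpContext.Current.Cache.Insert(
            key, value,
            null,                         // no dependencies
            Cache.NoAbsoluteExpiration,   // expire on inactivity only
            SlidingWindow);
        return value;
    }
}

The question's action could then call something like: var types = LookupCache.GetOrAdd("AuditGrants_" + auditID, () => ad.GetAuditIssueGrants(auditID));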
We are optimising a site and have read about the issue of the initial view lookup taking a long time. Subsequent lookups of the views are then much faster. Mini-profiler shows that a lot of the time is in the initial find view (I know I can use a ~ path to reduce this) and whatever else is done at this stage.
Where is the caching done? How long are view lookups etc cached? Can I see what is cached? Can we do anything to cause it to pre-load so there isn't a delay?
We have many views that are often not visited for hours and I don't want sudden peaks and troughs in performance.
We are using Azure and have a number of web role instances. Can I assume that each web role has its own cache of the view lookup? Can we centralise the caching so that it only occurs once per application?
Also, I read that MVC4 is faster at finding views. Does anyone have any figures?
The default cache duration is 15 minutes, and entries are stored in HttpContext.Cache; this is all managed by the System.Web.Mvc.DefaultViewLocationCache class. Since this uses standard ASP.NET caching, you could use a custom cache provider that gets its cache from WAZ AppFabric Cache or the new caching preview (there is one on NuGet: http://nuget.org/packages/Glav.CacheAdapter). Using a shared cache will make sure that only one instance needs to do the work of resolving the view. Or you could go and build your own cache provider.
Running your application in release mode, clearing unneeded view engines, writing the exact path instead of simply calling View, ... are all ways to speed up the view lookup process. Read more about it here:
http://samsaffron.com/archive/2011/08/16/Oh+view+where+are+thou+finding+views+in+ASPNET+MVC3+
http://blogs.msdn.com/b/marcinon/archive/2011/08/16/optimizing-mvc-view-lookup-performance.aspx
You can pre-load the view locations by adding a key for each view to the cache. You should format it as follows (where this is the current VirtualPathProviderViewEngine):
string.Format(CultureInfo.InvariantCulture, ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:", this.GetType().AssemblyQualifiedName, prefix, name, controllerName, areaName);
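As a rough sketch of that warm-up (hedged: the "View" prefix and the cached value being the resolved virtual path are assumptions based on reading the MVC source, so verify against your version):

using System.Globalization;
using System.Linq;
using System.Web;
using System.Web.Mvc;

// Assumption: DefaultViewLocationCache stores the resolved virtual path
// under the key format above, with prefix "View" for full views
// ("Partial"/"Master" for the others).
public static void WarmViewLocation(string controllerName, string viewName, string virtualPath)
{
    var engine = ViewEngines.Engines.OfType<RazorViewEngine>().First();
    string key = string.Format(CultureInfo.InvariantCulture,
        ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:",
        engine.GetType().AssemblyQualifiedName,
        "View", viewName, controllerName, ""); // empty area name
    HttpRuntime.Cache.Insert(key, virtualPath);
}

// e.g. in Application_Start:
// WarmViewLocation("Home", "Index", "~/Views/Home/Index.cshtml");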
I don't have any figures if MVC4 is faster, but it looks like the DefaultViewLocationCache code is the same as for MVC3.
To increase my cache time to 24 hours I used the following in Global.asax:
var viewEngine = new RazorViewEngine
{
    ViewLocationCache = new DefaultViewLocationCache(TimeSpan.FromHours(24))
};

// Only allow the Razor view engine, to improve performance
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(viewEngine);
The article ASP.NET MVC Performance Issues with Render Partial was also interesting.
I will look at writing my own ViewLocationCache to take advantage of shared Azure caching.
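A skeleton of that idea, with an in-process ConcurrentDictionary standing in for the shared Azure/Redis cache client (swapping out the store is the only Azure-specific part):

using System.Collections.Concurrent;
using System.Web;
using System.Web.Mvc;

// Sketch: a view-location cache that could be backed by a shared store
public class SharedViewLocationCache : IViewLocationCache
{
    // Replace with your distributed cache client (Redis, AppFabric, ...)
    private static readonly ConcurrentDictionary<string, string> Store =
        new ConcurrentDictionary<string, string>();

    public string GetViewLocation(HttpContextBase httpContext, string key)
    {
        string virtualPath;
        return Store.TryGetValue(key, out virtualPath) ? virtualPath : null;
    }

    public void InsertViewLocation(HttpContextBase httpContext, string key, string virtualPath)
    {
        Store[key] = virtualPath;
    }
}

It plugs in exactly like the DefaultViewLocationCache above: new RazorViewEngine { ViewLocationCache = new SharedViewLocationCache() }.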
Using ASP.NET MVC, I've implemented an autocomplete textbox using the approach very similar to the implementation by Ben Scheirman as shown here: http://flux88.com/blog/jquery-auto-complete-text-box-with-asp-net-mvc/
What I haven't been able to figure out is whether it's a good idea to cache the data for the autocomplete textbox, so there won't be a round trip to the database on every keystroke.
If caching is preferred, can you point me in the direction of how to implement caching for this purpose?
You have a couple of things to ask yourself:

1. Is the data I'm pulling back dynamic?
2. If not, how often do I expect this call to occur?

If the answers are 1) not really and 2) frequently, you should cache it.
I don't know how your data access is set up, but I simply throw my data into cache objects like so:
public IQueryable<Category> FindAllCategories()
{
    // Return the cached list if we have one
    var cached = (IEnumerable<Category>)HttpContext.Current.Cache["AllCategories"];
    if (cached != null)
        return cached.AsQueryable();

    // Materialize the query now, so we cache the results rather than
    // a deferred LINQ to SQL query that would re-hit the database
    List<Category> allCats = (from c in db.Categories
                              orderby c.Name
                              select c).ToList();

    // Cache with a 30-minute sliding expiration
    HttpContext.Current.Cache.Add("AllCategories", allCats, null,
        System.Web.Caching.Cache.NoAbsoluteExpiration,
        new TimeSpan(0, 30, 0),
        System.Web.Caching.CacheItemPriority.Default, null);

    return allCats.AsQueryable();
}
This is an example of one of my repository queries, based on LINQ to SQL. It first checks the cache; if the entry exists there, it returns it. If not, it goes to the database, then caches the result with a sliding expiration.
You sure can cache your result, using the attribute like:

[OutputCache(Duration = 60, VaryByParam = "searchTerm")]

ASP.NET will handle the rest.
I think caching in this case would require more work than simply storing every request. You'd want to focus more on the terms being searched than individual keys. You'd have to keep track of what terms are more popular and cache combinations of characters that make up those terms. I don't think simply caching every single request is going to get you any performance boost. You're just going to have stale data in your cache.
Well, how will caching in ASP.NET prevent server round trips? You'll still have server round trips; at best you will not have to query the database if you cache. If you want to prevent server round trips, you need to cache on the client side.
While that's quite easily possible with JavaScript (you need to store your data in a variable and check that variable for relevant data before asking the server again), I don't know of a ready-made tool that does this for you.
I do recommend you consider caching to prevent round trips. In fact, I have half a mind to implement JavaScript caching in one of my own websites after reading this.
I am using the Redirect After Post pattern in my ASP.NET MVC application. I have the following scenario:
1. The user goes to /controller/index, where he is asked to fill in a form.
2. The form values are POSTed to /controller/calculate.
3. The Calculate action performs a calculation based on the input and instantiates a complex object containing the results of the operation. This object is stored in TempData, and the user is redirected to /controller/result.
4. /controller/result retrieves the results from TempData and renders them to the user.
The problem with this approach is that if the user hits F5 while viewing the results at /controller/result, the page can no longer be rendered, as TempData has expired and the result object is no longer available.
This behavior is not desired by the users. One possible solution would be, instead of redirecting after the POST, to just render the results view. Now if the user hits F5 he gets a browser dialog asking if he wants to repost the form. This also was not desired.
One possible solution I thought of was to serialize the result object and pass it in the URL before redirecting, but AFAIK there are limits on the length of a GET request, and if the object gets pretty big I might hit that limit (especially if it is base64 encoded).
Another possibility would be to use the Session object instead of TempData to persist the results. But before implementing this solution I would like to know if there's a better way of doing it.
UPDATE:
Investigating the issue further, I found that if I re-put the result object into TempData inside the /controller/result action, it actually works:
public ActionResult Result()
{
    var result = TempData["result"];
    TempData["result"] = result;
    return View(result);
}
But this feels kind of dirty. Could there be any side effects with this approach (such as when switching to an out-of-process session provider, as I currently use InProc)?
Store it in the Session with some unique key and pass the key as part of the url. Then as long as the session is alive they can use the back/forward button to their heart's content and still have the URL respond properly. Alternatively, you could use the ASP cache, but I'd normally reserve that for objects that are shared among users. Of course, if you used the parameters to the calculation as the key and you found the result in the cache, you could simply re-use it.
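A minimal sketch of that flow, where CalculationInput, CalculationResult, DoCalculation and the controller name are illustrative placeholders:

using System;
using System.Web.Mvc;

public class CalculationController : Controller
{
    [HttpPost]
    public ActionResult Calculate(CalculationInput input)
    {
        CalculationResult result = DoCalculation(input); // your existing logic
        string key = Guid.NewGuid().ToString("N");
        Session["result:" + key] = result;

        // The key travels in the URL, so F5 on the result page is harmless
        return RedirectToAction("Result", new { id = key });
    }

    public ActionResult Result(string id)
    {
        var result = Session["result:" + id] as CalculationResult;
        if (result == null)
            return RedirectToAction("Index"); // session expired or bad key
        return View(result);
    }
}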
I think Redirect After Post makes much more sense when the resulting URL is meaningful.
In your case that would mean that all the data required for the calculation is in the URL of /controller/result.
/controller/calculate would then not do the calculation; /controller/result would.
If you can get this done, things get pretty easy: you hash the values required for the calculation and use the hash as the key for the cache. If the user refreshes, he only hits the cache.
If you can't have a meaningful URL, you could post to /controller/index. If the user hits F5 the calculation would start again, but a cache keyed by the hash would help here as well.
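Roughly, for the meaningful-URL variant (a hedged sketch; CalculationResult, DoCalculation and the parameters are placeholders):

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;
using System.Web.Mvc;

public class CalcResultController : Controller
{
    // Stable cache key derived from the calculation inputs
    private static string KeyFor(params object[] inputs)
    {
        string raw = string.Join("|", inputs);
        using (var sha = SHA256.Create())
        {
            return "calc:" + Convert.ToBase64String(
                sha.ComputeHash(Encoding.UTF8.GetBytes(raw)));
        }
    }

    // All inputs are in the URL, so a refresh re-runs this GET action
    // and finds the earlier result in the cache instead of recomputing.
    public ActionResult Result(decimal amount, decimal rate)
    {
        string key = KeyFor(amount, rate);
        var result = HttpRuntime.Cache[key] as CalculationResult;
        if (result == null)
        {
            result = DoCalculation(amount, rate); // placeholder for the real work
            HttpRuntime.Cache.Insert(key, result);
        }
        return View(result);
    }
}

Posting to /controller/calculate would then just validate and redirect to this URL with the inputs as route values.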
TempData is generally considered useful for passing messages back to the user, not for storing working entities (a user refresh will nuke the contents of TempData).
I don't know of a more appropriate place than the session to store this kind of information. I think the general idea is to keep the session as small as possible, though. Personally, I usually write some wrappers to add and remove specific objects to and from the session, cleaning them up manually where possible.
Alternatively, you can store the results in a database and purge stale items on a regular basis.
I might adopt an idea similar to what a lot of banks use on their online banking sites: one-time keys to verify all POSTs. You can integrate it into an HTML helper for forms and into your service layer (for example) for verification.
Let's say you only want any instance of a form to be posted once. Add a GUID to the form. If the form posts back successfully and the data is committed, you invalidate the GUID and redirect to the GET action. If, say, the form was not valid, then when the page posts back you need a new (valid) GUID waiting in the form for the next post attempt.
GUIDs are generated as required and added to a table in your DB. As they are invalidated (by POSTS, whether successful or not) they are flagged in the table. You may want to trim the table at 100 rows.. or 1000, depending on how heavy your app will be and how many rendered but not yet posted forms you may have at any one time.
I haven't really fine-tuned this design, but I think it might work. It won't be as smelly as TempData, and you can still adhere to the PRG pattern.
Remember, with PRG you don't want to send the new data to the GET action in a temp variable of some sort. You want to query it back from the data store, where it is now committed.
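A rough in-memory sketch of the token part (a hypothetical helper; the real version would be backed by the DB table described above):

using System;
using System.Collections.Concurrent;

// Stand-in for the GUID table in the DB
public static class FormTokenStore
{
    private static readonly ConcurrentDictionary<Guid, bool> Tokens =
        new ConcurrentDictionary<Guid, bool>();

    // Called when rendering the form: issue a fresh token for a hidden field
    public static Guid Issue()
    {
        var token = Guid.NewGuid();
        Tokens[token] = true; // true = not yet used
        return token;
    }

    // Called first in the POST action: atomically marks the token used.
    // Returns false for unknown or already-used (replayed) tokens.
    public static bool TryConsume(Guid token)
    {
        return Tokens.TryUpdate(token, false, true);
    }
}

The GET action renders Issue() into a hidden field; the POST action calls TryConsume(token) and, on failure, redirects back to the GET action instead of committing again.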
As Michael stated, TempData has a single purpose: store an object for one trip, and one trip only. As I understand it, TempData essentially uses the same Session object you might use yourself, but it automatically removes the object from the session on the next trip.
Stick with Session, IMHO, rather than pushing back into TempData.