I am caching lookup data in my MVC application. I have the following code:
// GET: Category Types
public JsonResult GetAuditGrants(int auditID)
{
    AuditDAL ad = new AuditDAL();
    if (System.Web.HttpContext.Current.Cache["AuditGrants"] == null)
    {
        System.Web.HttpContext.Current.Cache["AuditGrants"] = ad.GetAuditIssueGrants(auditID);
    }
    var types = (IEnumerable<Grant>)System.Web.HttpContext.Current.Cache["AuditGrants"];
    return this.Json(types.ToList());
}
If expiration is not set, when does the data expire from the cache by default? Is setting an expiration recommended, and should the value be stored in the web.config for consistency across the lookup data in my app?
To answer your first question, we can consult MSDN. According to its documentation, adding an object using the Item property (or indexer) is equivalent to calling the Insert method, whose documentation states:
The object added to the cache using this overload of the Insert method
is inserted with no file or cache dependencies, a priority of Default,
a sliding expiration value of NoSlidingExpiration, and an absolute
expiration value of NoAbsoluteExpiration.
Your second question is really pretty application-specific. The best practice is to profile your application. If your application is experiencing a ton of cache misses and your cache stays small, then you might want to extend the sliding expiration window by using one of the Add or Insert overloads that give you that control. In that case, storing your selected parameters in the app settings seems like a good idea.
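As a sketch of that suggestion: insert the item with an explicit sliding window read from appSettings. The setting name AuditGrantsSlidingMinutes and the GrantCache helper below are illustrative, not part of the original code.

```csharp
using System;
using System.Configuration;
using System.Web;
using System.Web.Caching;

public static class GrantCache
{
    // Stores a value with a sliding expiration taken from appSettings.
    // "AuditGrantsSlidingMinutes" is a hypothetical key; adjust to your config.
    public static void Store(string key, object value)
    {
        int minutes;
        if (!int.TryParse(ConfigurationManager.AppSettings["AuditGrantsSlidingMinutes"], out minutes))
            minutes = 20; // fall back to 20 minutes if the setting is missing

        HttpRuntime.Cache.Insert(
            key,
            value,
            null,                          // no cache dependencies
            Cache.NoAbsoluteExpiration,    // no absolute expiry
            TimeSpan.FromMinutes(minutes), // sliding window
            CacheItemPriority.Default,
            null);                         // no removal callback
    }
}
```

Keeping the window in config rather than hard-coding it means all lookup data in the app can share one consistently tuned value.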
One thing to remember about this cache, however: it is per app domain. If you have multiple web front ends, or even an IIS server configured to launch more than one worker process for your app, then you may not be getting the most out of your caching strategy. In that case, you might need something that can offer persistence across multiple instances of your app. We use Redis, but there are many other options.
I have two fields in SAP Fiori App: Template_ID and Offer_ID.
I want to choose value in Offer_ID depending on Template_ID field value.
To solve this problem I've tried these steps:
When the user clicks on the Template_ID field, the back end runs the method:
CL_CUAN_CAMPAIGN_DPC->contentset_get_entityset().
This method has a returning parameter et_result. In et_result I have the necessary field temp_id.
For saving temp_id value I created a global attribute in class ZCL_CUAN_CLASS.
ZCL_CUAN_CLASS=>GV_CONTENT = VALUE #( et_result[ 1 ]-temp_ID OPTIONAL ).
I'll use this global attribute as an input parameter for my second method:
CL_CUAN_CAMPAIGN_DPC->GET_OFFER_BY_TEMPLATE().
This method returns the internal table with the offer_id values that belong to my chosen temp_id.
But when the user clicks on the Offer_ID field on the Web UI, in debugging I see that my global attribute is blank.
Maybe it's because of the session or something else, but it's blank.
OData is a stateless protocol, meaning the server responds to your query, then forgets you were ever there. By definition, this does not allow you to carry main-memory content from one request to the next.
User interfaces on the other hand usually require state. It can be gained through one of the following options:
Stateful user interface
As Haojie points out, one solution is to store the data that was selected in the user interface and submit it as a filter criterion back to the server with the next request. Having a stateful user interface is the standard solution for stateless server apps.
Stateful persistence
Another option is to store the data permanently in the server's database, in ABAP preferably in a business object. This object has a unique identifier, probably a GUID, that you can reference in your requests to identify the process you are working on.
Draft persistence
If not all of the information is available in one step (such as in a multi-step wizard), if the data should not become "active" right away, or if you want to be able to switch devices while working on a multi-step process, drafts are an option. Drafts are regular business objects, with the one specialty that they remain inert until the user triggers a final activation step.
Soft state
For performance optimizations, you can have a look at SAP Gateway's soft state mode, which allows you to buffer some data to be able to respond to related requests more quickly. This is generally discouraged though, as it contradicts the stateless paradigm of OData.
Stateful protocol
In some cases, stateless protocols like OData are not the right way to go. For example, banking apps still prefer to keep state on the server, to avoid users remaining logged in indefinitely and thus becoming vulnerable to attacks like CSRF. If this is the case for you, you should have a look at ABAP WebDynpro for your user interface. Generally, stateful server protocols are considered inferior because they bind lots of server resources for long times and thus cannot handle larger user numbers.
When the user clicks on the OfferId field, it will start a NEW session, and of course what you stored in GV_CONTENT in class ZCL_CUAN_CLASS is lost.
What you should do is send the Template_ID to the back end as a filter with the second request, so that in your CL_CUAN_CAMPAIGN_DPC->GET_OFFER_BY_TEMPLATE() method you can restrict the result by Template_ID.
Alternatively, use SET/GET parameters.
I have a public static/singleton class with an IsDataModified() method, which reflects changes in the database, files, type of user, API, etc. It executes immediately and just returns a bool.
The frequency of modification of the output data varies extremely, from a minute to months, so I won't use sliding expiration; instead I let the duration be MAX or infinite.
But what I'm looking for is:
1) Request by browser
2) MVC filter checks whether the cache is missing or IsDataModified() returns true
3) If so, update the cache and return it
4) Else return the existing cache
I tried extending OutputCache and setting the duration to a very large number, but once the page is cached the filters are no longer triggered.
Basically I do not want the duration specified to be the deciding factor as to when cache will expire, rather IsDataModified() should be the deciding factor.
One approach, I think, is to create a simple filter and use the output cache (or a similar object) from code, but I could not find a way to get OutputCacheAttribute to hand me a cached ViewResult.
Is this possible? Please suggest.
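One hedged sketch of the "expire on data change, not on time" idea: keep a very long OutputCache duration, and have the code path that modifies the data evict the cached page explicitly via HttpResponse.RemoveOutputCacheItem. The controller, route path, and the place you call Invalidate() from are assumptions here; the point is that invalidation, not the duration, decides when the cache dies.

```csharp
using System.Web;
using System.Web.Mvc;

public class ReportsController : Controller
{
    // Cache "forever" (one year); expiry is driven by explicit
    // invalidation below, not by the duration elapsing.
    [OutputCache(Duration = 31536000, VaryByParam = "none")]
    public ActionResult Report()
    {
        return View();
    }
}

public static class ReportCacheInvalidator
{
    // Call this from wherever the data is modified (e.g. right after the
    // DB write), instead of polling IsDataModified() on every request.
    public static void Invalidate()
    {
        // Evicts the cached output for this route path on this server.
        HttpResponse.RemoveOutputCacheItem("/Reports/Report");
    }
}
```

Note that RemoveOutputCacheItem only touches the in-memory output cache of the server it runs on; in a web farm you would need a shared output-cache provider and to invalidate there instead.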
So I have implemented a solution built on top of Redis (memcache is a lot messier). I use an open-source Redis output cache provider which basically creates a key corresponding to the URL of the page. Whenever the underlying data is changed for one of the pages, I remove the values from Redis where the key matches some pattern. (My data has a sort of hierarchy, so I delete the cache for more items when the updated data belongs to the parent.)
Using a similar approach of deleting the cached page when the data is updated would probably also work for you. On a side note, I am thinking of trying to change my process so that I have a background service that creates the page when data is updated and replaces the cache so that the first users don't have a slow response after the page is first removed from the cache.
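For illustration, a sketch of that pattern-based eviction with the StackExchange.Redis client. The key prefix ("outputcache:") and URL layout are assumptions; check what your output-cache provider actually uses.

```csharp
using StackExchange.Redis;

public static class PageCacheInvalidation
{
    // Builds the key pattern matching all cached pages under a parent item.
    // The "outputcache:/items/{id}/*" shape is hypothetical.
    public static string BuildPageKeyPattern(int parentId)
    {
        return string.Format("outputcache:/items/{0}/*", parentId);
    }

    // Scans the keyspace for matching keys and deletes them.
    public static void InvalidatePagesFor(ConnectionMultiplexer redis, int parentId)
    {
        IDatabase db = redis.GetDatabase();
        foreach (var endpoint in redis.GetEndPoints())
        {
            IServer server = redis.GetServer(endpoint);
            foreach (RedisKey key in server.Keys(pattern: BuildPageKeyPattern(parentId)))
            {
                db.KeyDelete(key);
            }
        }
    }
}
```

Because the keys mirror the URL hierarchy, invalidating a parent naturally sweeps away all of its child pages in one pass.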
I have a user details area that is split out on to about 6 different pages (details, contact, preferences, etc). I need to have access to all of the user details after login.
I was going to add the user details to a Session like this after login so that I could access them all from the different pages without having to call the database on each page;
Session["name"] = name;
Session["bla"] = bla;
However, I've googled, and some people talk about saving the session to a database, which sounds like I may as well not use it. The users may not access every page or require all of this information, so it could be seen as a bit needless to add it all to the session.
Is there a recommended practice for storing user information like this? I also have an ID that needs to be shown on every page. Perhaps something like this is better for a session and the more detailed info pages to keep with their own database calls?
EDIT: I am using Umbraco 7.2.8 and am getting the member details from the MemberService. I am worried that it hits the database each time though. My code to get the Member details and also the custom member properties (currently in each controller) is;
// Get the details of the user currently logged in
var profileModel = Members.GetCurrentMemberProfileModel();
// Get the custom properties for the member
var member = memberService.GetByUsername(profileModel.UserName);
model.Firstname = profileModel.Name;
model.Email = profileModel.Email;
model.specialID = member.Properties["specialID"].Value.ToString();
Any pointers would be great!
If you use the built-in Umbraco Member service (link 1, link 2) to manage your users, you'll have a relatively simple way to get the currently logged-in member. It's also easy to manage the member profiles with custom data fields and so on. No need to think too much about sessions and such.
Edit: take a look here - specifically the GetCurrentPersistedMember() method - Umbraco is using caching to save the current member
There are three popular ways to store the data in memory
1) Caching
2) Session
3) Static Classes
Out of the above three I will always prefer caching: as numerous articles suggest, and I agree, sessions are comparatively slower than caching and static classes.
But whatever you use, make sure that you initialize them in a single place, so that every developer knows all the session and cache entries used across the application. This helps with code reusability, reduces duplicate initialization of the same value, and improves maintainability.
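As a sketch of that "single place" idea, one common pattern is a small static helper that owns all the cache keys and a read-through accessor, so no controller invents its own key strings or expirations. All names here are illustrative.

```csharp
using System;
using System.Web;
using System.Web.Caching;

// One class owns every cache key and the read-through logic.
public static class AppCache
{
    public const string AllCategoriesKey = "AllCategories";
    public const string CurrentMemberKeyPrefix = "Member_";

    // Returns the cached value for the key, loading and caching it on a miss.
    public static T GetOrAdd<T>(string key, Func<T> loader, TimeSpan slidingExpiration)
        where T : class
    {
        var cached = HttpRuntime.Cache[key] as T;
        if (cached != null)
            return cached;

        T value = loader();
        HttpRuntime.Cache.Insert(key, value, null,
            Cache.NoAbsoluteExpiration, slidingExpiration);
        return value;
    }
}
```

A controller would then call, for example, `AppCache.GetOrAdd(AppCache.AllCategoriesKey, LoadCategories, TimeSpan.FromMinutes(30))`, and every developer can see the full key inventory in one file.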
Using ASP.NET MVC, I've implemented an autocomplete textbox using the approach very similar to the implementation by Ben Scheirman as shown here: http://flux88.com/blog/jquery-auto-complete-text-box-with-asp-net-mvc/
What I haven't been able to figure out is if it's a good idea to cache the data for the autocomplete textbox, so there won't be a roundtrip to the database on every keystroke?
If caching is preferred, can you point me toward how to implement caching for this purpose?
You have a couple things to ask yourself:
Is the data I'm pulling back dynamic?
If not, how often do I expect this call to occur?
If the answers are 1) not really, and 2) frequently, then you should cache it.
I don't know how your data access is setup, but I simply throw my data into cache objects like so:
public IQueryable<Category> FindAllCategories()
{
    // Return the cached list if we have one.
    var cached = (List<Category>)HttpContext.Current.Cache["AllCategories"];
    if (cached != null)
        return cached.AsQueryable();

    // Materialize the query before caching: caching the raw IQueryable
    // would cache the query itself, not its results, and still hit the
    // database on every enumeration.
    List<Category> allCats = (from c in db.Categories
                              orderby c.Name
                              select c).ToList();

    // Cache with a 30-minute sliding expiration.
    HttpContext.Current.Cache.Add("AllCategories", allCats, null,
        System.Web.Caching.Cache.NoAbsoluteExpiration,
        new TimeSpan(0, 30, 0),
        System.Web.Caching.CacheItemPriority.Default, null);

    return allCats.AsQueryable();
}
This is an example of one of my repository queries, based on LINQ to SQL. It first checks the cache; if the entry exists there, it returns it. If not, it goes to the database, then caches the results with a sliding expiration.
You sure can cache your result, using the attribute like:
[OutputCache(Duration=60, VaryByParam="searchTerm")]
ASP.NET will handle the rest.
I think caching in this case would require more work than simply storing every request. You'd want to focus more on the terms being searched than individual keys. You'd have to keep track of what terms are more popular and cache combinations of characters that make up those terms. I don't think simply caching every single request is going to get you any performance boost. You're just going to have stale data in your cache.
Well, how will caching in ASP.NET prevent server round trips? You'll still have server round trips; at best you will not have to look up the database if you cache. If you want to prevent server round trips, then you need to cache on the client side.
While it's quite easily possible with JavaScript (you store your data in a variable and check that variable for relevant data before querying the server again), I don't know of a ready-made tool that does this for you.
I do recommend you consider caching to prevent round trips. In fact, I have half a mind to implement JavaScript caching in one of my own websites after reading this.
I'm needing to cache some data using System.Web.Caching.Cache. Not sure if it matters, but the data does not come from a database, but a plethora of custom objects.
ASP.NET MVC is fairly new to me and I'm wondering where it makes sense for this caching to occur.
Model or Controller?
At some level this makes sense to cache at the Model level but I don't necessarily know the implications of doing this (if any). If caching were to be done at the Controller level, will that affect all requests, or just for the current HttpContext?
So... where should application data caching be done, and what's a good way of actually doing it?
Update
Thanks for the great answers! I'm still trying to work out where it makes the most sense to cache in different scenarios. If one is caching the entire page, then keeping it in the view makes sense, but where do you draw the line when it's not the entire page?
I think it ultimately depends on what you are caching. If you want to cache the result of rendered pages, that is tightly coupled to the HTTP nature of the request, and would suggest an ActionFilter-level caching mechanism.
If, on the other hand, you want to cache the data that drives the pages themselves, then you should consider model level caching. In this case, the controller doesn't care when the data was generated, it just performs the logic operations on the data and prepares it for viewing. Another argument for model level caching is if you have other dependencies on the model data that are not attached to your Http context.
For example, I have a web-app were most of my Model is abstracted into a completely different project. This is because there will be a second web-app that uses this same backing, AND there's a chance we might have a non-web based app using the same data as well. Much of my data comes from web-services, which can be performance killers, so I have model level caching that the controllers and views know absolutely nothing about.
I don't know the answer to your question, but Jeff Atwood talks about how the SO team did caching using the MVC framework for stackoverflow.com on a recent Hanselminutes show that might help you out:
http://www.hanselminutes.com/default.aspx?showID=152
Quick Answer
I would start with CONTROLLER caching, use the OutputCache attribute, and later add Model caching if required. It's quicker to implement and has instant results.
Detailed Answer ('cause I like the sound of my voice)
Here's an example.
[OutputCache(Duration=60, VaryByParam="None")]
public ActionResult CacheDemo() {
return View();
}
This means that if a user hits the site (for the cache requirements defined in the attribute), there's less work to get done. If there's only Model caching, then even though the logic (and most likely the DB hit) are cached, the web server still has to render the page. Why do that when the render result will always be the same?
So start with OutputCaching, then move onto Model caching as you performance test your site.
Output caching is also a lot simpler to start out with. You don't have to worry about distributed caching problems across a web farm (if you are part of one) or a caching provider for the model.
Advanced Caching Techniques
You can also apply donut caching -> cache only part of the UI page :) Check it out!
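A minimal sketch of donut-hole caching with MVC child actions (the controller and action names are illustrative): only the fragment is cached, while the surrounding page renders fresh on every request.

```csharp
using System.Web.Mvc;

public class WidgetsController : Controller
{
    // Donut-hole caching: only this fragment is cached (here for 10 minutes),
    // while the parent page that embeds it is rendered fresh each time.
    [ChildActionOnly]
    [OutputCache(Duration = 600)]
    public ActionResult LatestNews()
    {
        return PartialView();
    }
}

// In the (uncached) parent view, render the cached fragment with:
// @Html.Action("LatestNews", "Widgets")
```

The inverse, donut caching (cache the page, punch a hole for the dynamic part), is not built into MVC itself; libraries such as MvcDonutCaching cover that case.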
I would choose caching at the model level.
(In general, the advice seems to be to minimize business logic at the controller level
and move as much as possible into model classes.)
How about doing it like this:
I have some entries in the model represented by the class Entry
and a source of entries (from a database, or 'a plethora of custom objects').
In the model I make an interface for retrieving entries:
public interface IEntryHandler
{
IEnumerable<Entry> GetEntries();
}
In the model I have an actual implementation of IEntryHandler
where the entries are read from cache and written to cache.
public class EntryHandler : IEntryHandler
{
    private const string CacheKey = "Entries";

    public IEnumerable<Entry> GetEntries()
    {
        // Check if the entries are in the cache:
        var entries = HttpRuntime.Cache[CacheKey] as List<Entry>;
        if (entries == null)
        {
            // There were no entries in the cache, so we read them from the
            // source (the database or the 'plethora of custom objects'):
            entries = LoadEntriesFromSource();
            // Save the retrieved entries to cache for later use:
            HttpRuntime.Cache.Insert(CacheKey, entries, null,
                Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(30));
        }
        return entries;
    }

    private List<Entry> LoadEntriesFromSource()
    {
        // Replace with the actual database / custom-object lookup.
        throw new NotImplementedException();
    }
}
The controller would then call the IEntryHandler:
public class HomeController : Controller
{
private IEntryHandler _entryHandler;
// The default constructor, using cache and database/custom objects
public HomeController()
: this(new EntryHandler())
{
}
// This constructor allows us to unit test the controller
// by writing a test class that implements IEntryHandler
// but does not affect cache or entries in the database/custom objects
public HomeController(IEntryHandler entryHandler)
{
_entryHandler = entryHandler;
}
// This controller action returns a list of entries to the view:
public ActionResult Index()
{
return View(_entryHandler.GetEntries());
}
}
This way it is possible to unit test the controller without touching real cache/database/custom objects.
I think the caching should somehow be related to the model. The controller shouldn't care about the data itself; its responsibility is to map the data, regardless of where it comes from, to the views.
Also think about why you need to cache: do you want to save processing, data transmission, or something else? That will help you decide exactly where your caching layer should sit.
It all depends on how expensive the operation is. If you have complicated queries, then it might make sense to cache the data at the controller level so that the query is not executed again (until the cache expires).
Keep in mind that caching is a very complicated topic. There are many different places that you can store your cache:
Akamai / CDN caching
Browser caching
In-Memory application caching
.NET's Cache object
Page directive
Distributed cache (memcached)