We have an MVC project that displays data from a trace logging mechanism that collects data from several applications, all of which belong to one master application suite. The trace logging tool is a service that collects exceptions and other logging information and places it in a database for later consumption. This MVC project is part of that consumption.
As I'm sure you can tell, there is a lot of data returned via Entity Framework / LINQ. Right now the developer is pulling all the data back and holding it in a session variable (I think he said it's a good 3-5 MB of data). Only 512 traces are sent back to the view/browser. The user then has the ability to filter by anything they type via an AJAX call. The developer is using the old Session["name"] object to hold the data and is filtering it with LINQ on the server so that he is not hitting the tracing service every time a filter is selected or typed.
It works locally, but not remotely. I'm thinking there is an issue on IIS, but I haven't looked into that yet.
I was wondering if Sessions are the best approach for large data like this, or if there is a better recommendation than Sessions. I know MVC is stateless and I try to keep it clean of anything but TempData as best I can, but I'm unsure how to tackle this otherwise.
It could be better to use HttpRuntime.Cache to store the data, as it offers flexibility in how items are expired, especially if the traces are global to the application, e.g.
private List<string> GetApplicationSuiteTraces()
{
    // The cache indexer returns object, so cast back to the stored type
    var applicationSuiteTraces = HttpRuntime.Cache["ApplicationSuiteTraces"] as List<string>;
    if (applicationSuiteTraces == null)
    {
        applicationSuiteTraces = Service.GetTraces();
        // Absolute expiration of 600 seconds, no sliding expiration
        HttpRuntime.Cache.Add("ApplicationSuiteTraces", applicationSuiteTraces, null,
            DateTime.Now.AddSeconds(600), Cache.NoSlidingExpiration,
            CacheItemPriority.High, null);
    }
    return applicationSuiteTraces;
}
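The AJAX filter action can then query the cached list with LINQ instead of going back to the tracing service on every keystroke. A minimal sketch, assuming a hypothetical TraceController, a simple substring match, and the 512-row cap described above:

public class TraceController : Controller
{
    // Hypothetical AJAX endpoint: filters the cached traces on the server
    // and returns only the first 512 matches to the browser.
    [HttpGet]
    public ActionResult Filter(string term)
    {
        IEnumerable<string> traces = GetApplicationSuiteTraces();

        if (!string.IsNullOrEmpty(term))
        {
            traces = traces.Where(t =>
                t.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0);
        }

        return Json(traces.Take(512), JsonRequestBehavior.AllowGet);
    }
}

Because the cache is application-wide rather than per-user, every user filtering the same traces shares one copy of the data instead of each session holding its own 3-5 MB.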
I am using Redis for ASP.NET MVC output cache. Some of my views take a fair bit of processing. Currently I have an overnight process that generates the data the views need and puts it in the Redis cache so the views render much quicker; however, that data only serves the initial render of the view, after which the view itself is cached by the output cache config.
It would be MUCH better if I could just render the view and put that directly into the cache from the overnight console program. How would I do this? I gather I would need to insert into Redis with the same key that ASP.NET MVC would generate, and call whatever internal render method ASP.NET MVC uses?
I don't need instructions for inserting into Redis; rather, what is the render method I need to call, and how are the key names constructed for the ASP.NET MVC OutputCache?
I am using ASP.NET MVC 5; however, bonus kudos if you can also answer for Core to future-proof the answer!
Please, no suggestions of generating static files; that's not what I want. Thanks.
How are the key names constructed for ASP.NET MVC OutputCache?
This part is easy to answer if you consult the source code for OutputCacheAttribute. The keys depend on the settings (e.g. the keys will contain more data if you have set VaryByParam). You can determine the keys by checking how the attribute populates uniqueIdBuilder for your use case. Notice that the key components are concatenated and then hashed (since they could get very long) and then base64-encoded. Here is the relevant code:
internal string GetChildActionUniqueId(ActionExecutingContext filterContext)
{
    StringBuilder uniqueIdBuilder = new StringBuilder();

    // Start with a prefix, presuming that we share the cache with other users
    uniqueIdBuilder.Append(CacheKeyPrefix);

    // Unique ID of the action description
    uniqueIdBuilder.Append(filterContext.ActionDescriptor.UniqueId);

    // Unique ID from the VaryByCustom settings, if any
    uniqueIdBuilder.Append(DescriptorUtil.CreateUniqueId(VaryByCustom));
    if (!String.IsNullOrEmpty(VaryByCustom))
    {
        string varyByCustomResult = filterContext.HttpContext.ApplicationInstance.GetVaryByCustomString(HttpContext.Current, VaryByCustom);
        uniqueIdBuilder.Append(varyByCustomResult);
    }

    // Unique ID from the VaryByParam settings, if any
    uniqueIdBuilder.Append(GetUniqueIdFromActionParameters(filterContext, SplitVaryByParam(VaryByParam)));

    // The key is typically too long to be useful, so we use a cryptographic hash
    // as the actual key (better randomization and key distribution, so small vary
    // values will generate dramatically different keys).
    using (SHA256Cng sha = new SHA256Cng())
    {
        return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(uniqueIdBuilder.ToString())));
    }
}
You'll notice later that this uniqueId is used as the key into the internal cache:
ChildActionCacheInternal.Add(uniqueId, capturedText, DateTimeOffset.UtcNow.AddSeconds(Duration));
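If you want to reproduce a key outside the web process, note that the final step is just concatenate, SHA-256, base64. A minimal sketch of that step follows; the component values (cache key prefix, action descriptor unique ID, vary-by parts) are assumptions you would have to extract for your own actions, and SHA256.Create() stands in for the SHA256Cng used in the MVC source (both produce identical hashes):

using System;
using System.Security.Cryptography;
using System.Text;

static class OutputCacheKeySketch
{
    // Mirrors the concatenate -> SHA-256 -> base64 step from GetChildActionUniqueId.
    // The caller supplies the same components the attribute would append.
    public static string Compute(params string[] components)
    {
        string combined = string.Concat(components);
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(combined));
            return Convert.ToBase64String(hash);
        }
    }
}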
What is the render method I need to call?
Short answer: ExecuteResult.
Long answer: Holy crap, you are asking a lot here. Essentially you wish to instantiate some objects within the console process and call methods which will faithfully recreate the output that would have been created if you called it from within the AppDomain where the web site usually runs.
Web applications often rely on initialization and state that is created when the application starts up (e.g. setting up the composition root/IoC, or setting up Automapper, that sort of thing), so you'd have to run the initialization of your web site. A specific view may rely on contextual information such as the URL, cookies, and querystring parameters; it may rely on configuration; it may call internal services, which also rely on configuration, as well as the AppDomain account being set up a certain way; it may need to use things like client certificates which may be set up in the service account's personal store, etc.
Here is the general procedure of what the console app would have to do:
Instantiate the site's global object, calling its constructor, which may attempt to wire up events to the pipeline.
You will need to mock the pipeline and handle any events raised by the site. You will also need to raise events in a manner that simulates the way the ASP.NET pipeline works.
You will need to implement any quirks in the ASP.NET pipeline, e.g. in addition to raising events you will also need to call handlers that aren't subscribed to the events if they have certain predefined names, such as Application_Start.
You will need to emulate the HTTP request by constructing or mocking pipeline objects, such as HttpContext.
You will need to fire request-specific events at your code in the correct order to simulate HTTP traffic.
You will need to run your routing logic to determine the appropriate controller to instantiate, then instantiate it.
You will need to read metadata from your action methods to determine which filters to apply, then instantiate them, and allow them to subscribe to yet more events, which you must publish.
In the end you will need to get the ActionResult object that results from the action method and call its ExecuteResult method.
I don't think this is a feasible approach, but I'd like to hear back from you if you succeed at it.
What you really ought to do
Your console application should simply fire HTTP requests at your application to populate the cache in a manner consistent with actual end user usage. This is how everyone else does it.
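The warm-up program can be very small. A minimal sketch, assuming a hypothetical list of slow view URLs (the paths are placeholders for your own):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CacheWarmer
{
    static void Main()
    {
        WarmAsync().GetAwaiter().GetResult();
    }

    static async Task WarmAsync()
    {
        // Placeholder URLs for the expensive views; requesting each one
        // populates the output cache exactly as a real visit would.
        string[] urls =
        {
            "https://example.com/reports/summary",
            "https://example.com/reports/detail"
        };

        using (var client = new HttpClient())
        {
            foreach (var url in urls)
            {
                HttpResponseMessage response = await client.GetAsync(url);
                Console.WriteLine($"{url} -> {(int)response.StatusCode}");
            }
        }
    }
}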
If you wish to replace the cached page before it has expired, you can invalidate the cache by restarting the app pool, or by using a dependency.
If you are worried about your response time statistics, change the manner in which you measure them so that you exclude any time window where this refresh is occurring.
If you are worried about impacts to a Google crawl, you can modify the host load schedule and set it to 0 during your reset window.
If you really don't want to exercise the site
If you insist that you don't want to exercise the site to create the cache, I suggest you make the views lighter weight, and look at caching at lower layers in your application.
For example, if the reason your views take so long to render is that they must run complicated queries with a lot of joins, consider implementing a database cache in the form of a denormalized table. You can run SQL Agent jobs to populate the denormalized table on a nightly basis, thus refreshing your cache. This way the view can be lightweight and you won't have to cache it on the web server.
For another example, if your web application calls RESTful services that take a long time to run, consider implementing cache-control headers in your service, and modify your REST client to honor them, so that repeated requests for the same representation won't actually require a service call. See Caching your REST API.
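For example, in ASP.NET Web API the service side of that contract is just a response header. A minimal sketch, with a hypothetical controller and payload:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ReportsController : ApiController
{
    [HttpGet]
    public HttpResponseMessage GetSummary()
    {
        // Hypothetical payload; in practice this is your representation
        var response = Request.CreateResponse(HttpStatusCode.OK, new { Total = 42 });

        // Declare the representation fresh for ten minutes, so a
        // well-behaved client can skip repeated calls entirely.
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromMinutes(10)
        };

        return response;
    }
}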
Currently we are using Breeze.js and Angular to develop our applications. Due to some persistent legacy issues, we have two databases ('Kenya' and 'Rwanda') that cannot be merged at this time, but have the same schema and metadata. Most of the time, the client knows which database to hit and passes the request through the .withParameters() function or the .saveOptions() function. Sometimes we want to request the same query from both databases (for example, if we are requesting a list of all available countries), and we use an EntityManager wrapper on the client to manage this and request the same query from each database. This is implemented through a custom EFContextProvider, which uses the data returned to determine the appropriate database and creates the appropriate context in CreateContext().
To further complicate things, in some instances one or the other database won't exist (these are local deployments created through filtered replication), but the client won't know this. Therefore, when querying for a list of all countries, it issues two requests and one will cause failures because the context cannot be instantiated properly.
This is easy enough to detect on the Server. What I would like to do is to detect whether the requested context is available and, if not, return a 200 response and an empty set.
I can detect this in the Breeze DBContextProvider CreateContext() method, but cannot figure out how to cause the request to fall back gracefully to an empty-set response.
Thanks
Not exactly what I was looking for, but it probably makes more sense since most of the work is being done on the client side:
Instead of trying to change the controller, I added a getAvailableDatabases action to the C# controller and use it to determine which of the databases I will query from the client.
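A minimal sketch of such an action, assuming hypothetical names and a CanCreateContext check standing in for whatever detection the server already does in CreateContext():

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class TracesController : ApiController
{
    private static readonly string[] KnownDatabases = { "Kenya", "Rwanda" };

    // Hypothetical action: the client calls this first and only issues
    // queries against the databases that exist in this deployment.
    [HttpGet]
    public IEnumerable<string> GetAvailableDatabases()
    {
        return KnownDatabases.Where(CanCreateContext);
    }

    // Placeholder for the existence check the custom EFContextProvider
    // already performs when instantiating a context.
    private bool CanCreateContext(string database)
    {
        return true; // assumed: replace with the real detection logic
    }
}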
I'm having a bit of a performance problem here: the code below is part of my custom VirtualPathProvider. I've overridden GetCacheKey, GetFileHash, and GetCacheDependency so my Razor views can be cached properly.
public override string GetCacheKey(string virtualPath)
{
    var key = string.Empty;
    var fileResult = VerifyFilePath(virtualPath);
    if (fileResult.RefinedAccessPath.IsNotNullOrEmpty())
        key = EncryptHelper.MD5Encrypt(fileResult.RefinedAccessPath);
    else
        key = EncryptHelper.MD5Encrypt(fileResult.VirtualPath);
    return key;
}

public override string GetFileHash(string virtualPath, System.Collections.IEnumerable virtualPathDependencies)
{
    var fileResult = VerifyFilePath(virtualPath);
    var hash = string.Empty;
    if (fileResult.RefinedAccessPath.IsNotNullOrEmpty())
        hash = EncryptHelper.MD5Encrypt(fileResult.RefinedAccessPath);
    else
        hash = Previous.GetFileHash(fileResult.VirtualPath, virtualPathDependencies);
    return hash;
}

public override System.Web.Caching.CacheDependency GetCacheDependency(string virtualPath, System.Collections.IEnumerable virtualPathDependencies, DateTime utcStart)
{
    var fileResult = VerifyFilePath(virtualPath);
    switch (fileResult.Result)
    {
        case ExistenceResult.FoundInCloudAfterRebuildPath:
        case ExistenceResult.FoundInCloudDirectly:
            return new OSiteCacheDependency(fileResult.LastModified, ositeVirtualPathHelper.SiteID.ToString(), utcStart);
        default:
            if (fileResult.RefinedAccessPath.IsNotNullOrEmpty())
                return new System.Web.Caching.CacheDependency(fileResult.RefinedAccessPath);
            else
                return null;
    }
}
However, I'm currently a bit concerned about whether my code is correct, because when I test it on my local PC it works perfectly, yet when I upload it to Azure Websites it takes ages to get the pages rendered.
The views are stored in Azure Blob Storage, and I put log entries in GetFile and found the files are cached. However, it does look like the website is getting constantly recompiled on each page (yes, each page: once a page is compiled, refreshing it displays it instantly, but other pages that I haven't visited still take ages).
So my first guess was that Azure Websites performance is very poor, but then I upgraded to a P3 Large instance Web App service plan and still got the same problem. That made me wonder whether I have an error in the VirtualPathProvider again. As the GetFile() method is not always hit and a visited page displays immediately after refresh, I'm sure the caching is also working, so it leaves me wondering whether there's any other compilation happening during the process that causes each page to take so much time on its first load.
Can anyone help please...
Thanks in advance.
Hi, for those who are interested in the outcome of this issue:
Well, I didn't find a perfect solution, but I did find that deploying the website to a Cloud Service solved the issue almost immediately...
There's no difference in code, deployment, or anything in the views, but I guess there's a difference in dedicated resources between Azure Websites and Cloud Services (even though I had tried a Large instance Azure Website, it didn't compete with the Standard medium-size Cloud Service instance for some reason).
I'm sure millions are using Azure, so there's no doubt my code has some critical performance issue. I have to first make it function by using a Cloud Service, and then try to find a way to optimize it after deployment (otherwise our clients will drive us mad!)
So what happened:
Complexity 1) Our application is actually a multi-tenant ASP.NET MVC website that renders websites for our customers (i.e. a website builder). We allow customers to have their own code in Razor views, and that comes with a sacrifice of compilation performance (hence our problem in this topic).
Complexity 2) All our clients' websites are totally different, and the views are stored in Azure Blob Storage! (We have a separate back-end system for them to manage these websites.) Therefore, we cannot use the local file system for the views; i.e. the default ASP.NET MVC view engine won't work, and a basic custom view engine will not work either, which led us to implement our own VirtualPathProvider and CacheDependency.
Complexity 3) Our websites are managed using our own internal OAuth server, so basically all data retrieved in the website comes through internal API calls, which delays the view rendering time too.
We've been working weekends and nights over the past few weeks solving Complexity 2) and are now getting extremely frustrated by the poor performance.
We literally sat together at 2 am thinking about what went wrong and identified the fundamental key factors above. We will tackle Complexity 3) for sure, but for Complexity 1) we have to temporarily choose a Cloud Service to solve the issue (still slow, but at least the websites can be opened in an acceptable time).
Frustration also came from the fact that on our own dedicated server, VPS, and even an Azure VM, everything worked with no issue (websites rendered by the platform opened within 2 seconds in our local debug mode, never mind the remote connection to Azure SQL and Blob Storage).
So the mystery is still there, and the ultimate solution has not been found yet. But the decision has been made that we will work with the current speed and the current Cloud Service for now, and investigate further later, once we settle our customers first...
If anyone has any clue or realizes what we've done wrong based on my writing above, please let me know; it would be highly appreciated!
I am trying to convert my ASP.NET MVC 4 app, which makes fairly heavy use of SessionState, into a stateless app. I understand that I can store this information in the DB, and I intend to do so.
My question, though, is about my particular architecture. My app has a main 'page' consisting of a number of partial view panels, each with actions that can affect the other panels. What I've been doing up to now is storing the entire state of the viewModel (lots of inter-related EF list collections and 'record' objects) in the session, and it's been working great. Except when the session just randomly dies.
So, I need to get this data out of the session and into the DB, where I can rebuild it as needed. My concern is that, if I store the info in the database, every single action done on screen might affect 3-5 different panels, each with its own state updates; that's a minimum of 10 round trips to the DB for every interaction!
What are some strategies I can use to make this idea more scalable?
EXTRA INFO
The view in question here is a sort of POS shopping cart system. There are panels for selecting events, selecting/adding items to the cart, editing cart items, selecting contacts, editing contacts, displaying the cart items, displaying the cart 'subtotals', and finally, a panel with a [checkout] button.
Selecting a new event will change the list of available items. Selecting an item to add to the cart will change the cart item list, subtotals, as well as the checkout panel. Same for editing a cart item.
The main concern is how to recover from a lost session, as I've found the built-in ASP.NET session code too unreliable. My testers have encountered issues with sessions timing out, and my app then not having any kind of recovery process. When it's installed on 1500 sites, each with an average of 10 users, it's going to be a plague of lost-session issues, and I need to combat that before it becomes a real problem.
I agree that I'm not going stateless; that was the wrong choice of words, used in a rush. I'm just trying to move that state into a form I can rely on past the session failure. My main idea at present is to continue using the session as the local cache for the viewModel data, but to have a fallback operation that can rebuild the viewModel from the DB if the session copy is lost somehow.
You shouldn't necessarily be using a database to store (what sounds like) data that only needs to be persisted in the short term.
If these changes to the other partials are only relevant in the context of the current "master view," then I would suggest using jQuery AJAX to send off the requests, parse the response JSON and update the other views. Tutorials on jQuery AJAX and ASP.NET MVC are easy to find, if you don't already have the knowledge:
http://www.codeproject.com/Articles/41828/JQuery-AJAX-with-ASP-NET-MVC
This way, you don't need to make a bunch of round trips. If the changes need to be persisted beyond the context of the current view, make ONE round trip to the database to perform the update and then simply update all of the other partials from the in-memory response from the AJAX call.
You don't need to read from secondary storage multiple times when you already have all of the information you need in-memory. Just do the reading and writing once.
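A minimal sketch of the "one round trip" idea, assuming a hypothetical AddCartItem action and _cartService: the single POST persists the change and returns everything the other panels need to redraw themselves:

public class CartController : Controller
{
    // Hypothetical action: one POST performs the database update and
    // returns the data for every affected panel, so the client can update
    // the cart list, subtotals, and checkout panel from a single response.
    [HttpPost]
    public ActionResult AddCartItem(int eventId, int itemId, int quantity)
    {
        var cart = _cartService.AddItem(eventId, itemId, quantity); // assumed service

        return Json(new
        {
            cartItems = cart.Items,
            subtotals = cart.Subtotals,
            checkoutEnabled = cart.Items.Any()
        });
    }
}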
I decided to go with a hybrid approach. I'm still using session, but I'm building out a DB 'recovery' option, so that if the session portion is lost, the DB will be able to provide the values needed to rebuild the session seamlessly.
Seems to be working well, so far.
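For anyone curious, a minimal sketch of that hybrid read path, assuming a hypothetical CartViewModel and repository; the session acts as a cache in front of the durable copy:

public class CartStateService
{
    private const string SessionKey = "CartViewModel";
    private readonly ICartRepository _repository; // assumed persistence layer

    public CartStateService(ICartRepository repository)
    {
        _repository = repository;
    }

    // Try the cheap session copy first; fall back to rebuilding from
    // the database if the session has been lost.
    public CartViewModel GetViewModel(HttpSessionStateBase session, int userId)
    {
        var viewModel = session[SessionKey] as CartViewModel;
        if (viewModel == null)
        {
            viewModel = _repository.LoadViewModel(userId); // assumed method
            session[SessionKey] = viewModel;               // re-prime the cache
        }
        return viewModel;
    }

    // Write-through: persist first, then refresh the session copy.
    public void SaveViewModel(HttpSessionStateBase session, int userId, CartViewModel viewModel)
    {
        _repository.SaveViewModel(userId, viewModel);
        session[SessionKey] = viewModel;
    }
}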
I have a Silverlight 3 business app set up with RIA Services. I use a domain datasource to connect to the backend and fetch the data and populate a series of dataforms and grids bound to this datasource.
The issue is that we require tight security, and currently, when a user logs out and another logs back in on the same machine, the forms/grids briefly display information from the last login before the DDS gets the new set.
What's the best approach to wiping out all the data when the user logs out in Silverlight? For legal reasons we can't chance any data hanging around, so is there a way to tell Silverlight to go back to its initial state?
Thanks,
Found my solution: it turns out that by simply removing NavigationCacheMode="Enabled" from each of the pages, logging out now correctly clears all the form/grid data cleanly and simply.
Before finding this solution, though, I did run across a method to clear all the form/grid data on a page, but for my case the solution above works much better.
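For reference, the same fix can be applied in code rather than XAML; a minimal sketch for a hypothetical Silverlight navigation page:

using System.Windows.Controls;
using System.Windows.Navigation;

public partial class AccountPage : Page // hypothetical page name
{
    public AccountPage()
    {
        InitializeComponent();

        // Equivalent to removing NavigationCacheMode="Enabled" from the XAML:
        // the page is recreated on each navigation, so no stale form/grid
        // data survives a logout.
        NavigationCacheMode = NavigationCacheMode.Disabled;
    }
}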