Insert to asp.net mvc outputcache from console program - asp.net-mvc

I am using Redis for ASP.NET MVC output cache. Some of my views take a fair bit of processing, so I currently have an overnight process that generates the required data for the views and puts it in the Redis cache so the views render much more quickly. However, that data is only in the cache for the initial render of the view; after that, the view itself is cached according to the output cache config.
It would be MUCH better if I could just render the view and put it directly into the cache from the overnight console program. How would I do this? I gather I would need to insert into Redis with the same key that ASP.NET MVC would use and call whatever internal render method ASP.NET MVC uses?
I don't need instructions for inserting to Redis, rather what is the render method I need to call and how are the key names constructed for asp.net MVC OutputCache.
I am using ASP.NET MVC 5; however, bonus kudos if you can also answer for Core to future-proof the answer!
Please, no suggestions of generating static files; that's not what I want. Thanks.

How are the key names constructed for ASP.NET MVC OutputCache?
This part is easy to answer if you consult the source code for OutputCacheAttribute. The keys depend on the settings (e.g. they will contain more data if you have set VaryByParam). You can determine the keys by checking how the attribute populates uniqueId for your use case. Notice that the pieces are concatenated, then hashed (since the combined key could get very long), and then Base64-encoded. Here is the relevant code:
internal string GetChildActionUniqueId(ActionExecutingContext filterContext)
{
    StringBuilder uniqueIdBuilder = new StringBuilder();

    // Start with a prefix, presuming that we share the cache with other users
    uniqueIdBuilder.Append(CacheKeyPrefix);

    // Unique ID of the action description
    uniqueIdBuilder.Append(filterContext.ActionDescriptor.UniqueId);

    // Unique ID from the VaryByCustom settings, if any
    uniqueIdBuilder.Append(DescriptorUtil.CreateUniqueId(VaryByCustom));
    if (!String.IsNullOrEmpty(VaryByCustom))
    {
        string varyByCustomResult = filterContext.HttpContext.ApplicationInstance.GetVaryByCustomString(HttpContext.Current, VaryByCustom);
        uniqueIdBuilder.Append(varyByCustomResult);
    }

    // Unique ID from the VaryByParam settings, if any
    uniqueIdBuilder.Append(GetUniqueIdFromActionParameters(filterContext, SplitVaryByParam(VaryByParam)));

    // The key is typically too long to be useful, so we use a cryptographic hash
    // as the actual key (better randomization and key distribution, so small vary
    // values will generate dramatically different keys).
    using (SHA256Cng sha = new SHA256Cng())
    {
        return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(uniqueIdBuilder.ToString())));
    }
}
You'll notice later that the uniqueId is used as a key into the internal cache:
ChildActionCacheInternal.Add(uniqueId, capturedText, DateTimeOffset.UtcNow.AddSeconds(Duration));
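If you want to reproduce such a key from a console process, the final hashing step is easy to copy; the hard part is supplying exactly the same inputs (CacheKeyPrefix, the action descriptor's UniqueId and the VaryBy fragments) that MVC builds internally. Here is a minimal sketch of just that last step, where all three arguments are assumptions you would have to obtain or hard-code yourself:

using System;
using System.Security.Cryptography;
using System.Text;

internal static class CacheKeySketch
{
    // Sketch only: reproduces the concatenate/hash/Base64 step shown above.
    // The arguments must match, byte for byte, what OutputCacheAttribute and
    // DescriptorUtil produce inside the web application.
    public static string ComputeChildActionCacheKey(string cacheKeyPrefix, string actionDescriptorUniqueId, string varyByFragments)
    {
        var uniqueIdBuilder = new StringBuilder();
        uniqueIdBuilder.Append(cacheKeyPrefix);
        uniqueIdBuilder.Append(actionDescriptorUniqueId);
        uniqueIdBuilder.Append(varyByFragments);

        using (var sha = new SHA256Cng())
        {
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(uniqueIdBuilder.ToString())));
        }
    }
}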
What is the render method I need to call?
Short answer: ExecuteResult.
Long answer: Holy crap, you are asking a lot here. Essentially you wish to instantiate some objects within the console process and call methods which will faithfully recreate the output that would have been created if you called it from within the AppDomain where the web site usually runs.
Web applications often rely on initialization and state that is created when the application starts up (e.g. setting up the composition root/IoC, or setting up Automapper, that sort of thing), so you'd have to run the initialization of your web site. A specific view may rely on contextual information such as the URL, cookies, and querystring parameters; it may rely on configuration; it may call internal services, which also rely on configuration, as well as the AppDomain account being set up a certain way; it may need to use things like client certificates which may be set up in the service account's personal store, etc.
Here is the general procedure of what the console app would have to do:
1. Instantiate the site's global object, calling its constructor, which may attempt to wire up events to the pipeline.
2. You will need to mock the pipeline and handle any events raised by the site. You will also need to raise events in a manner that simulates the way the ASP.NET pipeline works.
3. You will need to implement any quirks in the ASP.NET pipeline, e.g. in addition to raising events you will also need to call handlers that aren't subscribed to the events if they have certain predefined names, such as Application_Start.
4. You will need to emulate the HTTP request by constructing or mocking pipeline objects, such as HttpContext.
5. You will need to fire request-specific events at your code in the correct order to simulate HTTP traffic.
6. You will need to run your routing logic to determine the appropriate controller to instantiate, then instantiate it.
7. You will need to read metadata from your action methods to determine which filters to apply, then instantiate them, and allow them to subscribe to yet more events, which you must publish.
8. In the end you will need to get the ActionResult object that results from the action method and call its ExecuteResult method.
I don't think this is a feasible approach, but I'd like to hear back from you if you succeed at it.
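For illustration only, here is a very rough sketch of just that last step (hand-building a context and calling ExecuteResult). HomeController and Index are hypothetical, and in a real console process this will still fall over unless routing, filters, view engine lookup and application state have all been set up as described above:

using System.IO;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

internal static class RenderSketch
{
    // Sketch only: captures whatever the ActionResult writes to the response.
    public static string RenderHomeIndex()
    {
        var output = new StringWriter();
        var httpContext = new HttpContext(
            new HttpRequest("", "http://localhost/home/index", ""),
            new HttpResponse(output));
        HttpContext.Current = httpContext;

        var routeData = new RouteData();
        routeData.Values["controller"] = "Home";
        routeData.Values["action"] = "Index";

        var controller = new HomeController(); // hypothetical controller from your site
        controller.ControllerContext = new ControllerContext(
            new HttpContextWrapper(httpContext), routeData, controller);

        ActionResult result = controller.Index(); // hypothetical action
        result.ExecuteResult(controller.ControllerContext);

        return output.ToString();
    }
}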
What you really ought to do
Your console application should simply fire HTTP requests at your application to populate the cache in a manner consistent with actual end user usage. This is how everyone else does it.
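For example, a minimal warm-up sketch; the base address and paths are placeholders for whatever views you want pre-rendered into the output cache:

using System;
using System.Net.Http;
using System.Threading.Tasks;

internal class CacheWarmer
{
    private static async Task Main()
    {
        // Placeholder base address and paths -- substitute your site's URLs.
        using (var client = new HttpClient { BaseAddress = new Uri("https://myapp.example.com/") })
        {
            string[] pathsToWarm = { "reports/annual", "dashboard/summary" };
            foreach (var path in pathsToWarm)
            {
                var response = await client.GetAsync(path);
                Console.WriteLine($"{path}: {(int)response.StatusCode}");
            }
        }
    }
}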
If you wish to replace the cached page before it has expired, you can invalidate the cache by restarting the app pool, or by using a dependency.
If you are worried about your response time statistics, change the manner in which you measure them so that you exclude any time window where this refresh is occurring.
If you are worried about impacts to a Google crawl, you can modify the host load schedule and set it to 0 during your reset window.
If you really don't want to exercise the site
If you insist that you don't want to exercise the site to create the cache, I suggest you make the views lighter weight, and look at caching at lower layers in your application.
For example, if the reason your views take so long to render is that they must run complicated queries with a lot of joins, consider implementing a database cache in the form of a denormalized table. You can run SQL Agent jobs to populate the denormalized table on a nightly basis, thus refreshing your cache. This way the view can be lightweight and you won't have to cache it on the web server.
For another example, if your web application calls RESTful services that take a long time to run, consider implementing cache-control headers in your service, and modify your REST client to honor them, so that repeated requests for the same representation won't actually require a service call. See Caching your REST API.
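As an illustration of that last point, if the service were built with ASP.NET Web API, the header could be set along these lines (the controller and data are hypothetical):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

// Hypothetical Web API action: marks the representation as cacheable for ten
// minutes so a cache-aware REST client (or an intermediary) can reuse it
// instead of calling the service again.
public class ReferenceDataController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, new[] { "item1", "item2" });
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromMinutes(10)
        };
        return response;
    }
}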

Related

Breeze.js: Returning empty set when requested database does not exist

Currently we are using Breeze.js and Angular to develop our applications. Due to some persistent legacy issues, we have two databases ('Kenya' and 'Rwanda') that cannot be merged at this time but have the same schema and metadata. Most of the time the client knows which database to hit and passes the request through the .withParameters() function or the .saveOptions() function. Sometimes we want to request the same query from both databases (for example, if we are requesting a list of all available countries), and we use an EntityManager wrapper on the client to manage this and request the same query from each database. This is implemented through a custom EFContextProvider which uses the data returned to determine the appropriate database and creates the appropriate context in CreateContext().
To further complicate things, in some instances one or the other database won't exist (these are local deployments created through filtered replication), but the client won't know this. Therefore, when querying for a list of all countries, it issues two requests and one will cause failures because the context cannot be instantiated properly.
This is easy enough to detect on the Server. What I would like to do is to detect whether the requested context is available and, if not, return a 200 response and an empty set.
I can detect this in the Breeze DBContextProvider CreateContext() method, but cannot figure out how to make the request fall back gracefully to an empty-set response.
Thanks
Not exactly what I was looking for, but it probably makes more sense since most of the work is being done on the client-side:
Instead of trying to change the controller, I added a getAvailableDatabases action to the C# controller and use that to determine which of the databases I will query from the client.
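A hypothetical sketch of that idea (the controller name and the EF6 existence check are assumptions, not the poster's actual code):

using System.Data.Entity;
using System.Linq;
using System.Web.Http;

// Hypothetical: expose which databases exist in this deployment so the client
// can decide which ones to query.
public class LookupController : ApiController
{
    private static readonly string[] KnownDatabases = { "Kenya", "Rwanda" };

    [HttpGet]
    public string[] GetAvailableDatabases()
    {
        // Assumption: connection string names match the database names;
        // Database.Exists is the EF6 check -- substitute whatever fits your setup.
        return KnownDatabases.Where(name => Database.Exists(name)).ToArray();
    }
}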

MVC 3 passing large data (5MB)

We have an MVC project that is going to handle displaying a trace logging mechanism that collects data from several applications that all relate to one master application suite. The Trace logging tool is a service that collects exceptions and other various logging information and places them in a database for later consumption. This MVC project is part of that consumption.
As I'm sure you can tell, there is a lot of data returned via Entity Framework / LINQ. Right now the developer is getting all the data back and is using a session variable to hold it (I think he said it's a good 3-5 MB of data being returned). Only 512 traces are sent back to the view/browser. The user then has the ability to filter by anything they type via an AJAX call. The developer is using the old Session["name"] object to hold the data and is using LINQ to filter through it on the server so that he is not hitting the tracing service every time a filter is selected or typed.
It works locally, but not remotely. I'm thinking there is an issue on IIS, but haven't looked into that yet.
I was wondering if sessions are the best approach for large data like this, or if there is a better recommendation than sessions. I know MVC is stateless and I try to keep it clean of anything but TempData as best I can, but I'm unsure how to tackle this otherwise.
It could be better to use HttpRuntime.Cache to store the data, as it has flexibility in how it is expired, especially if the traces are global to the application, e.g.:
private List<string> GetApplicationSuiteTraces()
{
    var applicationSuiteTraces = HttpRuntime.Cache["ApplicationSuiteTraces"] as List<string>;
    if (applicationSuiteTraces == null)
    {
        applicationSuiteTraces = Service.GetTraces();
        HttpRuntime.Cache.Add("ApplicationSuiteTraces", applicationSuiteTraces, null,
            DateTime.Now.AddSeconds(600), Cache.NoSlidingExpiration, CacheItemPriority.High, null);
    }
    return applicationSuiteTraces;
}

How Can I let the End User Control AppSettings Themselves?

I have a system I've built in MVC 3 that currently provides a yearly submission cycle where the system proceeds through a series of seven steps tied to dates stored in the web.config as AppSettings. However, each year I always have to roll the system back and forth between previous steps in order to accommodate the end users. I would like to give the administrator the ability to control the system status without having to contact a developer. What is the best way to do this?
I plan to build a page with proper validation that lets the administrator set the dates. I've considered a couple of options for how I should store those dates, but none of them seems correct. Our entire permission system uses these dates, and various bits of text on the pages turn on and off based on what period we're currently in.
So far I've come up with two options:
Option 1: Create a database table – This was my first thought. I've set up properties on the MvcApplication class in the global.asax and pulled them from the database. Using a lazy loader, I can set the properties the first time they're needed. However, when they change in the database, I don't have a way to force the system to "reset" and read the date changes. If I do this on Begin_Request(), I'm constantly opening the connection and resetting the properties for each file that the web browser requests from the server, regardless of whether it's static content or not.
I could directly fetch the dates from the database every time I need one of them, but then I'm having to redo a lot of functionality to reduce repeated database calls. I'd like to cache the dates for each request and only pull them when I need them.
Option 2: Allow editing a config file through the application – I've looked up how to split the web.config file so I can have a separate file that just contains the appSettings. Then I could just update the new config file from a controller action. I think this would work nicely, and not require me to rewrite any of the existing functionality, but it feels like I would be introducing a bad design pattern into the code.
I'd vote for the database. For the sake of performance you can cache those parameter values in a static class inside your app and provide a method in that same class to reread them from the DB. So:
When a user makes a request, check whether those properties are already cached. If they are, use the cached values; if not, read them from the DB.
When the administrator changes those parameters, store them in the database and force your static caching class to reread them from the DB (see the sketch below).
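A rough sketch of such a static caching class; LoadFromDatabase is a placeholder for your existing data access, and the admin's save action would call Reload after writing the new dates:

using System;
using System.Collections.Generic;

// Sketch only: caches the cycle dates in memory and lets the admin page force a re-read.
public static class CycleDateCache
{
    private static readonly object Sync = new object();
    private static IDictionary<string, DateTime> _dates;

    public static DateTime Get(string key)
    {
        if (_dates == null)
        {
            lock (Sync)
            {
                if (_dates == null)
                {
                    _dates = LoadFromDatabase();
                }
            }
        }
        return _dates[key];
    }

    // Called by the admin controller after it saves new dates.
    public static void Reload()
    {
        lock (Sync)
        {
            _dates = LoadFromDatabase();
        }
    }

    private static IDictionary<string, DateTime> LoadFromDatabase()
    {
        // Placeholder: replace with your actual data access (EF, ADO.NET, ...).
        throw new NotImplementedException();
    }
}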
I would suggest an approach that doesn't care whether the settings are stored in database or key/value pairs in config file.
Since you want the settings to be accessed globally by all users, you can cache them, and the cache implementation should be generic and distributed. There are plenty of online resources on how to create such an interface.
Since you want the cache to stay in sync with the underlying data, you have to set up cache dependencies (AppFabric doesn't support SQL cache dependency, see this thread, while NCache supports both SQL and file dependencies).
I would store the values in a database and use a distributed cache to persist the data across the web farm. MS AppFabric Caching has worked well for me. You will need to implement a standard caching pattern (check the cache; if null, load from the DB and insert into the cache). I would probably just create a static Load() method that abstracts this logic away. When the admins update the DB you could update the cache or just delete the cache key.
There are other considerations beyond performance. Namely, if you modify the config file the application pool is reinitialized, while the database solution doesn't cause application reinitialization.
...so do you need to reinitialize the app after the changes or not? If there is no way to avoid the reinitialization without drastic changes to the application, the config file solution is probably better.

Rails - Store unique data for each open tab/window

I have an application that has different data sets depending on which company the user has currently selected (dropdown box on sidebar currently used to set a session variable).
My client has expressed a desire to have the ability to work on multiple different data sets from a single browser simultaneously. Hence, sessions no longer cut it.
Googling seems to imply that passing GET or POST data along with every request is the way to go, which was my first guess. Is there a better/easier/Rails way to achieve this?
You have a few options here, but as you point out, the session system won't work for you since it is global across all instances of the same browser.
The standard approach is to add something to the URL that identifies the context in which to execute. This could be as simple as a prefix like /companyx/users instead of /users where you're fetching the company slug and using that as a scope. Generally you do this by having a controller base class that does this work for you, then inherit from that for all other controllers that will be affected the same way.
Another approach is to move the company identifying component from the URL to the host name. This is common amongst software-as-a-service providers because it makes sharding your application much easier. Instead of myapp.com/companyx/users you'd have companyx.myapp.com/users. This has the advantage of preserving the existing URL structure, and when you have large amounts of data, you can partition your app by customer into different databases without a lot of headache.
The answer you found, tagging all the URLs with a GET token or a POST field, is not going to work very well. For one, it's messy, and secondly, a site where every link is a POST is very annoying to work with, as it makes navigating with the back button or forcing a reload troublesome. The reason it has seen use is that, out of the box, PHP and ASP do not have support for routes, so people have had to make do.
You can create a temporary database table, or use a key-value database, and store all the data you need in it. The unique key can be used as a window id. Furthermore, you have to add this window id to each link, so you can retrieve the corresponding data for each browser tab from the database and store it in the session, an object, ...
If you have an object, let's say #data, you can store it in the database using Marshal.dump and get it back with Marshal.load.

When do Symfony's user attributes get written to session?

I have a Symfony app that populates the "widgets" of a portal application, and I'm noticing something that seems odd. The portal app has iframes that make calls to the Symfony app. On each of those calls, a random user key is passed on the query string. The Symfony app stores that key in its session using myUser->setAttribute(). If the incoming value is different from what it has in session, it overwrites the session value.
In pseudo-code (and applying a synchronous nature for clarity even though it may not exist):
# Widget request arrives with ?foo=bar
if the user attribute 'foo' does not equal 'bar'
    overwrite the user attribute 'foo' with 'bar'
end
What I'm noticing is that, on a portal page with multiple widgets (read: multiple requests coming in more or less simultaneously) where the value needs to be overwritten, each request is trying to overwrite. Is this a timing problem? When I look at the log prints, I'd expect the first request that arrives to overwrite and subsequent requests to see that the user attribute they received matches what was just put into cache by the initial request.
In this scenario, it could be that subsequent requests begin (and are checked) even before the first one--the one that should overwrite the cached value--has completely finished. Are session values not really available to subsequent requests until one request has completed entirely or could there be something else that I'm missing?
Thanks.
Attributes of the user do not get written to storage until the end of the request (in sfUser::shutdown). Attributes get loaded into sfUser at the beginning of a request. So in this case, the second request would have to be initiated after the first request has finished. Your best options are probably:
Add hardRead and hardWrite methods to sfUser (look at what sfUser::initialize and sfUser::shutdown do respectively).
Use another method of storing the information that has better support for concurrency. The database or potentially the caching system you're using could work. For example, I think this could be done using APC cache.
Note that depending on what class you're using for storage, user attributes may not get written to $_SESSION at all. Symfony supports using many methods for storing user attributes (e.g. database, cache).
