I've noticed that every time we alter the response cookies through:
HttpContext.Response.Cookies.Add(myCookie)
Header becomes:
Cache-Control: public, no-cache="Set-Cookie"
and Output Cache is invalidated.
It is very annoying, and I was wondering if anyone has noticed similar issues while output caching.
You could always switch to using a server-side caching model, such as System.Web.Caching.Cache or System.Runtime.Caching.MemoryCache, which would share caching of objects between users while still allowing communication with the browser.
Frankly, this server-side model is the first caching model I used. I only recently started using output caching, and I find it very limited by comparison. Its only advantages are that it caches the page on the client side under certain scenarios and that it caches content rather than the data that generates the content (saving some CPU cycles). Its main disadvantage is that you have to disable it under certain conditions, such as during authentication or when writing cookies. You never have to disable server-side caching - not even for application pool recycles - because it doesn't hinder communication with the browser.
For the best of both worlds, you could combine both approaches, so that whatever backend process you don't want executed multiple times provides cached data when the view is generated. Then you would have client-side caching in most cases, and would fall back on server-side caching when updating cookies. It could take more memory to use this approach, but that tradeoff might be worth it in your case.
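To make the server-side half of that combination concrete, here is a minimal fetch-or-generate TTL cache sketched in plain Ruby (the `TtlCache` class and its `fetch` method are invented names for illustration, not any framework's API); the same pattern is what `System.Web.Caching.Cache` or `MemoryCache` gives you in .NET:

```ruby
# Minimal sketch of a server-side data cache with a max age, so an
# expensive backend call runs at most once per interval while the page
# itself can still be rendered (and set cookies) on every request.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
    @lock = Mutex.new
  end

  # Returns the cached value for +key+, or runs the block and caches
  # its result for +ttl+ seconds.
  def fetch(key, ttl: 60)
    @lock.synchronize do
      entry = @store[key]
      return entry.value if entry && Time.now < entry.expires_at
      value = yield
      @store[key] = Entry.new(value, Time.now + ttl)
      value
    end
  end
end

calls = 0
cache = TtlCache.new
2.times { cache.fetch(:report, ttl: 300) { calls += 1; "expensive result" } }
# the backend block ran only once; the second fetch came from memory
```

The view still executes per request, so cookies and headers go out normally; only the expensive data generation is shared between users.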
I'd like my iOS application (at least certain endpoints) to have the following network behavior:
Always use the cache, whenever it's available, no matter the age (draw the UI right away)
If the data is stale, also make a network request (the UI has stale data during this period, but it's still probably pretty close)
If network data returns, update the cache and make any UI updates that are required.
I prefer a behavior like this because I can then set my caching policy very aggressively (long cache times). For data that updates infrequently, this results in rapid UI returns in the common case and a model layer that is kept up to date essentially in the background (from the user's perspective).
I'm reading about NSURLCache, but I don't see a cache policy, or even a combination of two policies that I'm confident in.
Options:
Use ReturnCacheDataDontLoad to always get cache. If failure or old cache use ReloadIgnoringLocalCacheData for the HTTP fetch. (have to check myself? age is inspectable?)
Use ReturnCacheDataDontLoad to always get cache. Then use UseProtocolCachePolicy with the cache time set to very low and ignore the response if it returns from cache (can I tell if it returns from cache? this says not reliably)
Separate the two concerns. Use ReturnCacheDataDontLoad for all user-initiated requests, only firing a network request right away if there is no cache at all. Separately, have a worker that keeps an eye on stored models, updating them in the background whenever they appear old.
Extend NSURLCache
Use something off-the-shelf that already does this? (AFNetworking just uses NSURLSession caching; EVURLCache forces disk caching but expects the data to be seeded on app install.)
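For what it's worth, option 3 can be sketched independently of NSURLCache. Below is a language-agnostic sketch (written in Ruby with invented names - this is not iOS API) of the cache-first, revalidate-after flow: the UI callback fires immediately with whatever is cached, then again once the network returns:

```ruby
# Sketch of option 3: yield cached data immediately if present, then
# fetch fresh data, update the cache, and yield again so the UI can
# refresh. All names here are invented for illustration.
class StaleWhileRevalidate
  def initialize(&fetch_fresh)
    @fetch_fresh = fetch_fresh   # the (slow) network call
    @cached = nil
  end

  def load
    yield @cached, :cache unless @cached.nil?  # draw the UI right away
    fresh = @fetch_fresh.call                  # on iOS: off the main thread
    @cached = fresh                            # update the cache
    yield fresh, :network                      # refresh the UI if needed
  end
end
```

A separate background worker (as in option 3) would just call `load` on stored models whenever they look old, so user-initiated requests almost always hit the cache.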
We are thinking of using HttpRuntime.Cache for storing data accessed frequently by all users, but wanted to know what are the performance implications of using HttpRuntime.Cache? Are the contents of the cache transported in every http request and response? How much information can be reasonably stored in there?
What are the performance implications of using HttpRuntime.Cache?
Normally, the Cache is stored in the server's memory; note that if the server is configured as a Web Farm or Web Garden, each process or server keeps its own separate copy. As a result, reading from the Cache is very fast compared to querying a database.
Are the contents of the cache transported in every http request and response?
No.
How much information can be reasonably stored in there?
Virtually, there is no limit. However, you only want to cache information that is needed often and does not change very often. In addition, you do not want to cache images or files - the Cache is not meant for that.
I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data on a web page through multiple API calls, which in turn are multiple connections with the server, with all the overhead etc.
For example, if I use, let's say, 8 different API calls on one page, I can't imagine it won't have an impact on the web page's performance.
So, have I misunderstood something? Or is this kind of overhead negligible - or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.
Well, we did it. A Web API server providing REST access to all the data. Independent UI Clients consuming it as the only access point to the underlying persistence.
The first request takes some time; it is significantly longer, because it must initialize all the UI Client machinery and fetch the minimum data needed from the server (menu, user, access rights, metadata... list-view data).
The point, the real advantage, is hidden in the second, the third... request. A lot of the data is already there on the UI Client. And even if something is requested again, caching (server-side, client-side, or both) can be introduced.
So this means more requests (at least during UI Client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden; it should be obvious) in the Separation of Concerns. On the server, we are no longer wrestling with where to place user-data handling - the base controller or a child controller... should there be a Master page, a Layout controller...
Solved. We take care of single, specific pieces of functionality, published via REST. One method, one business operation. And that's the dream if we'd like to keep the application alive and be its repairman and extender.
One aspect is that you can display the page to the end user very fast. Once the page is loaded, use jQuery async calls and a JavaScript templating tool (like angularjs or mustacheJs) to call the Web API simultaneously and build the client-side page views.
I have used this approach in multiple projects, and the user experience has been tremendous.
Most modern browsers support 6-8 parallel connections to the same site. So you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections. Or ensure the calls are called asynchronously by different events to reduce the chance of parallel connections.
Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you how that might impact in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.
I use rails to present automated hardware testing results; our tests are run mainly via TCL. Recently, we have implemented a "log4TCL" which is basically a translated version of log4J. The log files have upwards of 40000 lines, each of which is written to the database as a logline record, and load time for the view is too long to be considered usable. I have tried to use ajax requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use the rails included page caching because each log report is a different instance of "log_viewer". The report is generated using a test_run_id parameter. Rails-included page caching only caches one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html". I have implemented a way of doing this. The reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates the report with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementations such as this anywhere which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, the "Dynamic" content of this report is what takes too long to load. I need to cache multiple instances of what action/fragment caching is not concerned with.
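The per-`test_run_id` scheme described above can be sketched in plain Ruby, independent of Rails' built-in page caching (the directory layout and the one-week age here are assumptions mirroring the description, not a drop-in for the actual app):

```ruby
require "fileutils"

# Sketch of a per-report page cache: one cached HTML file per
# test_run_id, purged after a max age, regenerated on demand.
class ReportCache
  def initialize(dir, max_age: 7 * 24 * 3600)  # assumed one-week age-out
    @dir = dir
    @max_age = max_age
    FileUtils.mkdir_p(dir)
  end

  def path_for(test_run_id)
    File.join(@dir, "log_viewer_#{test_run_id}.html")
  end

  # Returns the cached HTML if present, otherwise regenerates it with
  # the block (the slow report render) and resets the age-out timer.
  def fetch(test_run_id)
    purge_stale
    path = path_for(test_run_id)
    return File.read(path) if File.exist?(path)
    html = yield
    File.write(path, html)
    html
  end

  # Delete cached reports older than max_age to save disk space.
  def purge_stale
    Dir.glob(File.join(@dir, "log_viewer_*.html")).each do |f|
      File.delete(f) if Time.now - File.mtime(f) > @max_age
    end
  end
end
```

This is essentially hand-rolled page caching keyed by parameter, which is why nothing in Rails' stock page caching covers it directly.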
Caching is something that I kind of ignored for a long time, as projects that I worked on were on local intranets with very little activity. I'm working on a much larger Rails 3 personal project now, and I'm trying to work out what and when I should cache things.
How do people generally determine this?
If I know a site is going to be relatively low-activity, should I just cache every single page?
If I have a page that calls several partials, is it better to do fragment caching in those partials, or page caching on those partials?
The Ruby on Rails guides did a fine job of explaining how caching in Rails 3 works, but I'm having trouble understanding the decision-making process associated with it.
Don't ever cache for the sake of it, cache because there's a need (with the exception of something like the homepage, which you know is going to be super popular.) Launch the site, and either parse your logs or use something like NewRelic to see what's slow. From there, you can work out what's worth caching.
Generally though, if something takes 500ms to complete, you should cache, and if it's over 1 second, you're probably doing too much in the request, and you should farm whatever you're doing out to a background process - for example, fetching a Twitter feed, or manipulating images.
EDIT: See apneadiving's answer too, he links to some great screencasts (albeit based on Rails 2, but the theory is the same.)
You'll want to think about caching several kinds of things:
Requests that are hit a lot, and seldom change
Requests that are "expensive" to draw, lots of database calls, etc. Also hopefully these seldom change.
The other side of caching that shouldn't go without mention is expiration. It's also often the harder part. You have to know when a cache is no longer good, and clear it out so fresh content will be generated. Sweepers or Observers, depending on how you implement your cache, can help you with this. You could also do it just based on a time value: allow caches to have a max-age and clear them after that no matter what.
As for fragment vs full page caching, think of it in terms of how often those parts are updated. If 3 partials of a page are never updated, and one is, maybe you want to cache those 3 and allow that one to be fetched live so you can have up-to-the-second accuracy. Or the different partials of a page might need different caching rules: maybe a "timeline" section is cached, but with a cache age of 1 minute, while the "friends" partial is cached for 12 hours.
Hope this helps!
If the site is relatively low activity you shouldn't cache any page. You cache because of performance problems, and performance problems come about because you have too much data to query, too many users, or worse, both of those situations at the same time.
Before you even think about caching, the first thing you do is look through your application for the requests that are taking up the most time. Not the slowest requests, but the requests your application spends the most aggregate time performing. That is, if you have a request A that runs 10 times at 1500ms and a request B that runs 5000 times at 250ms, you work on optimizing B first.
It's actually pretty easy to grep through your production.log and extract rendering times and URLs to combine them into a simple report. You can even do that in real-time if you want.
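A rough sketch of that report in Ruby follows. The log-line regex is an assumption for illustration - match it to your Rails version's output, which may put the URL on the "Started GET ..." line rather than the "Completed" line:

```ruby
# Rank URLs by aggregate time spent, not by the single slowest request.
# NOTE: the line format matched here is an assumed example, not the
# exact production.log format of every Rails version.
LINE = /Completed .* in (\d+)ms .* \[(\S+)\]/

def slowest_aggregate(lines)
  totals = Hash.new { |h, k| h[k] = { count: 0, total_ms: 0 } }
  lines.each do |line|
    next unless (m = LINE.match(line))
    totals[m[2]][:count] += 1           # m[2] is the URL
    totals[m[2]][:total_ms] += m[1].to_i # m[1] is the duration in ms
  end
  totals.sort_by { |_, stats| -stats[:total_ms] }  # biggest offender first
end
```

Ten 1500ms hits on one URL total 15,000ms, while five thousand 250ms hits on another total 1,250,000ms; a report like this surfaces the second one first.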
Once you've identified a problematic request, you go about picking apart what it's doing to service the request. The first thing is to look for any queries that can be combined by using eager loading or by looking ahead a bit more to anticipate what you'll need. The next thing is to ensure you're not loading data that isn't used.
So many times you'll see code to list users and it's loading 50KB per person of biographical data, their Facebook and Twitter handles, literally everything about them, and all you use is their name.
Fetch as little as you need, and fetch it in the most efficient way you can. Use connection.select_rows when you don't need models.
The next step is to look at what kind of queries you're running, and how they're under-performing. Ensure your indexes are all set properly and are being used. Check that you're not doing complicated JOIN operations that could be resolved by a bit of tactical de-normalization.
Have a look at what data you are storing in your application, and try and find things that can be removed from your production database and warehoused somewhere else. Cycle your data out regularly when it's no longer relevant, preserve it in a separate database if you need to.
Then go over and have a look at how your database server is tuned. Does it have sufficiently large buffers? Is it on hardware that could be upgraded with more memory at a nominal cost? Too many people are running a completely un-tuned database server and with a few simple settings they can get ten-fold performance increases.
If, and only if, you still have a performance problem at this point then you might want to consider caching.
You know why you don't cache first? It's because once you cache something, that cached data is immediately stale. If parts of your application use this data under the assumption it's always up to date, you will have problems. If you don't expire this cache when the data does change, you will have problems. If you cache the data and never use it again, you're just clogging up your cache and you will have problems. Basically you'll have lots of problems when you use caching, so it's often a last resort.