Cache Location in MVC - asp.net-mvc

I am learning the concept of caching in MVC and was going through this article. There is a property of the OutputCache attribute, 'Location', with the following possible values:
Any (Default): Content is cached in three locations: the web server, any proxy servers, and the web browser.
Client: Content is cached on the web browser.
Server: Content is cached on the web server.
ServerAndClient: Content is cached on the web server and the web browser.
None: Content is not cached anywhere.
I want to know when we would use the Location values Client and Server, and why we would prefer one over the other.

If you want to cache user-specific information, go for the Server option. For example, if your view returns the logged-in user, then you need to use the Server option. If your view content is the same for all users, use the Client option; it will be cached in the browser.
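As a minimal sketch of how that choice is expressed (the controller, action names, and durations are made up for illustration), the Location property goes on the OutputCache attribute:

    using System.Web.Mvc;
    using System.Web.UI;

    public class ReportsController : Controller
    {
        // Same content for every user: let the browser cache it.
        [OutputCache(Duration = 300, Location = OutputCacheLocation.Client, VaryByParam = "none")]
        public ActionResult PublicReport()
        {
            return View();
        }

        // User-specific content: cache on the server only, and vary the cached copy per user
        // (VaryByCustom = "user" assumes GetVaryByCustomString is overridden in Global.asax).
        [OutputCache(Duration = 60, Location = OutputCacheLocation.Server, VaryByCustom = "user")]
        public ActionResult MyDashboard()
        {
            return View();
        }
    }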

Related

Using Service Worker - MVC view that uses fetched data when online

I am using the Service Worker example on this page. I have implemented the service worker (ASP.NET MVC application running within an SSL localhost site) and I receive confirmation (console message) that it is registered.
Within the URLs to cache I have added:
an index page (MyController/Index), which performs a "fetch" (part of the Fetch API)
as well as the URL that is fetched (MyController/GetData), which goes and gets the data from the database
All looks OK when I am online - i.e. all pages I have specified in the URLs to cache are successfully retrieved (as per the Chrome|DevTools|Network tab).
When I go offline (via the Chrome|DevTools|Application|Service-Workers|Offline checkbox), if I go to pages that I have not listed in the URLs to cache, I get the "Offline" page I have specified in the service worker (which is correct). However, when I navigate to the Index page (MyController/Index, mentioned above) which I have listed in the URLs to cache, the view appears but the "fetch" (to MyController/GetData) on that page fails. I wondered what the expected result of this is.
I was under the assumption that the data (retrieved through MyController/GetData) would be cached, and when I go offline that cached data would be substituted if the fetch failed (i.e. the NetworkFirst strategy).
Can anyone point me in the direction of what should occur, and ideally an MVC example?
Justin was right: my MyController/GetData method was actually using a 'Post' method, and my fetch handler was attempting to always fetch non-GET requests from the network, hence the failure when offline. I just changed the request to a GET request and my service worker fallback logic got the data from the cache.
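For reference, the server-side half of that change might look like the following (the action shape and the data-access call are assumptions based on the URLs above); note that returning JSON to a GET request in MVC also requires JsonRequestBehavior.AllowGet, and the client-side fetch call correspondingly drops its POST method:

    using System.Web.Mvc;

    public class MyController : Controller
    {
        // Was [HttpPost]; exposing it as a GET lets the service worker's
        // cache/network fallback logic serve it from the cache when offline.
        [HttpGet]
        public ActionResult GetData()
        {
            var data = LoadDataFromDatabase(); // placeholder for the real database query
            return Json(data, JsonRequestBehavior.AllowGet);
        }

        private object LoadDataFromDatabase()
        {
            return new { message = "data from the database" };
        }
    }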

HTTP Redirect on a browser without showing intermediate window

I have two servers: someserver.com and anotherserver.com.
What I need is that when a user clicks on someserver.com, he or she will be redirected to anotherserver.com.
Currently, when I do a redirect programmatically on the server (ASP.NET MVC, IIS), what a user sees is: 1) someserver.com is loaded, 2) anotherserver.com is loaded.
What I want is that when a user clicks on someserver.com, he sees only anotherserver.com in his browser.
Does the HTTP protocol allow it?
Thanks!
There are a bunch of approaches to this, but the simplest one, if you own the domain, is to just use domain forwarding at the DNS level. Then it won't involve your web server at all. Otherwise, the browser will always have to load the first site, if only briefly, before loading the second one. You can optimize this by just sending a redirect, which should be barely noticeable. Another option would be client-side JavaScript, but if you know on the client that you want to go to the new URL, you could just use a standard hyperlink to it at that point (so I assume this is not an option).
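If the redirect has to come from the ASP.NET MVC application itself, a minimal sketch of the "just send a redirect" option (the controller, action, and target URL are placeholders) is a permanent redirect:

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // 301 Moved Permanently: the browser still issues one request to someserver.com,
        // but immediately follows the Location header to the other site.
        public ActionResult Index()
        {
            return RedirectPermanent("https://anotherserver.com/");
        }
    }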

Nginx Auth Proxy

I have multiple services, each with its own web server, listening on different ports, e.g.:
http://127.0.0.1:5000 (service A)
https://127.0.0.1:3000 (service B)
I need a way to restrict access to them without tweaking each of them individually. So, I have an OAuth server hosted as well (on port 2333). I have configured the OAuth server to redirect you to a certain URL if you successfully authenticate through it. So, for example, if I access this URL:
https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
It will ask for authentication (or look for a cookie) and redirect me to the desired URL. This works OK if I manually access that URL, but I need it automated (every time you try accessing the initial URL, you get redirected to the OAuth server).
I need the following scenario:
Insert URL http://127.0.0.1:5000 in browser
Get redirected to https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
The OAuth server takes care of the rest
For this, I was thinking of using nginx to redirect, but I don't know how to configure it. Any ideas?
How are those services hosted? You have a couple of options:
Doing a redirect to https://127.0.0.1:2333 on the / path of service A (in the source code).
Doing the redirect in the server configuration (which is lower level and should be faster).
The first option gives you more control, and you can do other things easily (like checking if the user is logged in). The second option is faster, but some things are harder to implement, as you will be modifying the server configuration.
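If nginx is placed in front of service A, a sketch of the configuration-level option might look like the following (the server_name, the cookie name auth_token, and the exact redirect contract are assumptions; your OAuth server may expect something different):

    server {
        listen 80;
        server_name someservice.example;

        location / {
            # If the cookie set by the OAuth server is missing, send the browser
            # to the authorisation endpoint.
            if ($cookie_auth_token = "") {
                return 302 "https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000";
            }

            # Otherwise pass the request through to service A.
            proxy_pass http://127.0.0.1:5000;
        }
    }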

Error in proxy server when caching HTML page

Problem:
UserA and UserB are in a network behind a proxy server.
UserA opens the page "www.myapp.com/initPage.htm".
If UserB opens the same page, then he will see the page with information from UserA.
To the proxy server it is the same page, so it returns the information it has cached.
More Info:
Each user has a different JSESSIONID, stored in the Set-Cookie attribute of the response header.
The URL is the same for the two users, but the information depends on the JSESSIONID.
The proxy server doesn't cache the JSON calls, only the HTML pages.
I tried to solve the problem with this solution, but it did not work.
Architecture:
My application is implemented with Spring Security 3.1 and Struts2.
It runs on an Apache2 server, which is connected to Tomcat7 through the mod_jk module and configured with a "workers.properties" file.
How can I tell the proxy server to never cache the HTML page?
Best regards and thanks.
Finally, this solution worked, but only after adding the filter in the first position in the web.xml.
Best regards and thanks.
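For context, "first position" refers to the order of the filter-mapping declarations, since servlet filters run in the order they are mapped in web.xml. A hypothetical sketch (the filter name, class, and URL pattern are placeholders for whatever the linked solution defines, e.g. a filter that adds Cache-Control: no-store, private to the HTML responses):

    <!-- Declared (and mapped) before the Spring Security and Struts2 filter mappings so it runs first. -->
    <filter>
        <filter-name>cacheControlFilter</filter-name>
        <filter-class>com.example.web.CacheControlFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>cacheControlFilter</filter-name>
        <url-pattern>*.htm</url-pattern>
    </filter-mapping>

    <!-- existing mappings (springSecurityFilterChain, struts2, ...) follow here -->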

Make an ASP.NET MVC application Web Farm Ready

What is the most efficient way to make an ASP.NET MVC application web-farm ready?
Most importantly, sharing the current user's information (Context), and (not so important) cached objects such as look-up items (states, street types, counties, etc.).
I have heard of/read about MemCache but haven't seen a simple, applicable way (documentation) of how to implement and test it.
Request context
Any request that hits a web farm gets served by an available IIS server. Context gets created there and the whole request gets served by the same server. So context shouldn't be a problem. A request is a stateless execution pipeline so it doesn't need to share data with other servers in any way shape or form. It will be served from the beginning to the end by the same machine.
User information is read from a cookie and processed by the server that serves the request. It then depends on whether you cache the complete user object somewhere.
Session
If you use the TempData dictionary you should be aware that it's stored inside the Session dictionary. In a server farm that means you should use something other than InProc sessions, because they're not shared between the IIS servers across the farm. You should configure a session state provider that uses either a database or another shared store (State Server, etc.).
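As an illustration (the host name, port, and timeout are placeholders), moving away from InProc is a web.config change, pointing every server in the farm at the same ASP.NET State Service or SQL Server:

    <system.web>
      <!-- Out-of-process session state shared by every server in the farm. -->
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=state-server-host:42424"
                    timeout="20" />
      <!-- Or: mode="SQLServer" sqlConnectionString="Data Source=...;Integrated Security=True" -->
    </system.web>

In a farm you would typically also configure a matching <machineKey> on every server so that forms-authentication cookies issued by one server validate on the others.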
Cache
When it comes to the cache it's a different story. To make it as efficient as possible, the cache should be shared as well. By default it's not. But a cache miss simply means the data should be read and stored in the cache, so if a particular server in the farm doesn't have some cached object, it will create it. In time, all of them will have cached some of the shared, publicly used data.
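In other words, the usual per-server pattern is cache-aside: check the local cache and, on a miss, load the data and store it. A minimal sketch using System.Runtime.Caching (the key, the loader delegate, and the expiration are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    public static class LookupCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Cache-aside: each server in the farm fills its own cache on first use.
        public static IList<string> GetStates(Func<IList<string>> loadFromDatabase)
        {
            var states = Cache.Get("lookup:states") as IList<string>;
            if (states == null)
            {
                states = loadFromDatabase(); // hypothetical lookup, e.g. a repository call
                Cache.Set("lookup:states", states,
                          new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
            }
            return states;
        }
    }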
Or you could use libraries like memcached (as you mentioned) and take advantage of a shared cache. There are several examples on the net of how to use it.
But these solutions all bring the additional overhead of several things (network round-trips, processing in a separate process, data fetching, etc.), if nothing else. So the default local cache is the fastest, and if you explicitly need a shared cache then decide on one. Don't share the cache unless it's really necessary.
