ASP.NET caching startup values

I'm converting an application from C# WebForms to MVC.
The application gets settings from a centralized location using Web Services. These are settings you would typically find in a Web.Config, but the desire of the company is to store these values in a centralized location for all apps.
Currently, any time you request an application setting, it checks HttpContext.Cache to see if you've already retrieved the settings. If you haven't, it makes the web service call, and stores the settings (100+ objects that are essentially key/values) in the HttpContext.Cache. So the call to get application settings only occurs once.
Should I be looking at another way to do this? I was thinking the settings should just be a REST service call where you pass the key and get a value back (the current service is an *.ashx, which is not ideal for exception handling among other reasons). But obviously this would result in more web requests. What is considered a best practice here? Is the current method fine, and should I just leave the code working the same way in the MVC app?

It is better to load all resources in one call:
Fewer HTTP requests are better for performance. A single HTTP request can easily carry around 50 ms of latency, so fetching 100 values one by one would take roughly 50 ms x 100 = 5000 ms, i.e. 5 seconds.
If the external web service goes down, your application still works, because you have already downloaded and cached all the values.
I would keep the current solution if it works and focus on new things instead of rewriting working code.
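For reference, a minimal sketch of this load-once-and-cache pattern, assuming a hypothetical SettingsServiceClient proxy that returns all key/value pairs in a single call (the names are illustrative, not the asker's actual code):

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class AppSettings
{
    private const string CacheKey = "AppSettings.All";
    private static readonly object SyncRoot = new object();

    public static string Get(string key)
    {
        var cache = HttpRuntime.Cache;
        var settings = cache[CacheKey] as Dictionary<string, string>;

        if (settings == null)
        {
            lock (SyncRoot)
            {
                // Re-check inside the lock so only one request pays for the web service call.
                settings = cache[CacheKey] as Dictionary<string, string>;
                if (settings == null)
                {
                    // One call fetches all 100+ key/value pairs (hypothetical proxy).
                    settings = new SettingsServiceClient().GetAllSettings();

                    // Cache for an hour; tune the expiration to how often
                    // the centralized values actually change.
                    cache.Insert(CacheKey, settings, null,
                        DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
                }
            }
        }

        string value;
        return settings.TryGetValue(key, out value) ? value : null;
    }
}

Whether the values live in HttpContext.Cache, HttpRuntime.Cache, or a plain static dictionary matters less than the shape of the pattern: one bulk fetch, cached, with a defined refresh policy.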

Related

Zero downtime/blue-green deployment of Single Page Application (SPA)

Yesterday, together with the team, we were discussing the possibility of using zero-downtime deployments to support our single-page application.
While discussing it we identified one edge case for it.
After a user loads the page in the browser, it cannot be removed from memory until the page is reloaded. That means if a user loads the page and starts working with the site (for example, starts typing a long article, as I am doing now), they cannot receive an updated version until they reload the page.
We could ignore the fact that the user sees an old version of the application in the browser, but there are the points listed below.
If we introduce a breaking change to the HTTP API that serves the SPA, the user will not be able to save their article (data loss!) or may receive some other error when performing another backend-related action.
When the user navigates to a new page without reloading the SPA, they can receive a template for the next page, or for some control, that is incompatible with the old outer container. This can lead to broken markup or application logic.
We cannot force the user to log in again, as they may be in the middle of typing their article, and it is simply bad UX.
Taking all these points into account, one could propose the following solution:
1. User 1 loads v1 of the SPA into their browser.
2. Along with the auth token, version information is sent to the browser (using a JWT, for example).
3. We want to deploy v2 of our application. We spin up v2 but do not disable v1.
4. User 2 loads v2 of the SPA into their browser.
5. User 1 goes to the next page in the SPA. The load balancer checks the version information in their token and routes user 1's traffic to a v1 server.
6. User 2 gets routed in the same way to v2.
7. User 1 logs out of the app and closes the browser.
8. User 1 logs back in; this time they receive v2.
9. Once the v1 application has not received any traffic for a long time, it gets disposed.
With this approach, however, it is possible to have more than two versions alive at once (for example, if a user stays online for a whole day or two). That means we will not be able to migrate the database to the new schema until the last such user logs out (imagine how that would work for a site like Facebook). Having multiple versions running is not a problem in itself, however; tools such as Docker and Rancher make that easy.
Also, in step 7 the user needs to reload the page or close the browser; otherwise they will still be working with v1, and we cannot force them onto the next version.
The question I have is what approach do you use to do zero downtime/blue-green deployment of single page applications?
How do you manage the lifetime of the "blue" version of your application when you are switching traffic to the "green" version, especially with respect to existing "blue" client applications?
Did you solve these issues, do you know any other solution?
I've been struggling with this problem for quite some time and have tried several approaches; one in particular worked really well:
Use hashed names when bundling the SPA (including images and other assets)
Use a static asset bucket (e.g. AWS S3) and upload all assets to it before the deployment process kicks in
Enforce internal guidelines that minimize breaking API contracts (e.g. fields from an endpoint should only be removed after X releases)
Deploy with the usual blue/green strategy
Rationale
Using a bucket with hashed bundles ensures that if a customer gets the old version of the SPA, all of its assets will be available before/during/after any deployment process.
Enforcing internal guidelines not to break API compatibility is sometimes tricky, but it comes from the very same principles applied to any public API. Embracing or adapting an API deprecation policy from the big players helps when communicating with the team, since it gives you a concrete example to point to.
One approach you might consider is gradually reloading the SPA at a moment when it is not burdensome for (or is even unnoticeable to) the end user.
Suggested approach:
Colored versions of the system (the components providing the back-end services, the API, and the front-end) "know" their "color" (the runtimes are provided with it). The component that serves the front-end application embeds this color information into the SPA. The color is then sent (via a cookie or a custom HTTP header) with every request the SPA makes to the backend.
The component that routes API calls (API gateway, load balancer, nginx, HAProxy, a custom Zuul-based router, etc.) is aware of this color information and uses it to direct traffic to the infrastructure of the proper color.
Additionally, there is a public URL (not served by the "colored" infrastructure; for example, an S3 file served via CloudFront or another proxy) that exposes the color of the latest version. The SPA checks this version at a fixed interval (every 60 or 120 seconds). If the version does not match the one the SPA was given when it loaded, then on the next major route change the page is reloaded "physically" instead of the navigation being handled in the browser only.
You can choose which route changes verify this version so that the reload is least obtrusive to the user (possibly almost unnoticeable).
If you choose routes that all users hit every day, then pretty soon all users will migrate to the latest color. Those who keep an unused browser window open for long periods (a computer hibernated for two weeks?) can be handled by forcing a reload after a certain period of inactivity.
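The approach above suggests serving the "latest color" from outside the colored infrastructure (an S3 file behind CloudFront, for example). If you instead expose it from a small service that sits outside the blue/green pool, the endpoint itself is tiny; a hypothetical ASP.NET MVC sketch, with made-up names and config key:

using System.Configuration;
using System.Web.Mvc;
using System.Web.UI;

public class DeploymentController : Controller
{
    // The color/version stamp is set at deploy time; "blue" is just a fallback.
    private static readonly string DeployedColor =
        ConfigurationManager.AppSettings["Deployment.Color"] ?? "blue";

    // GET /deployment/version - the SPA polls this every 60-120 seconds and
    // forces a full page reload on the next major route change if the value
    // differs from the color it was bootstrapped with.
    [HttpGet]
    [OutputCache(Duration = 30, Location = OutputCacheLocation.Any)]
    public ActionResult Version()
    {
        return Json(new { color = DeployedColor }, JsonRequestBehavior.AllowGet);
    }
}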
I hope I managed to make myself at least a bit coherent :-)
Regards,
Wojtek
I'm not sure why you would go for a complete overhaul of your UI, since there is always a learning curve involved. Practically speaking, in the real world it would be a bad idea to switch over to a new UI immediately. You would allow customers to switch over to the new interface over a period of time and then disable the older version after a forewarning. It is not worth the effort of building such a real-time switch. A/B testing could be a way to introduce customers to the new interface and then do an actual rollout.
The technique you're describing is called blue-green deployment; you start with your existing server (blue) and add your updated server (green). All new traffic from that point on is redirected to the green environment. The blue environment is only there to service existing HTTP connections and to allow an optional "roll back" in case the green environment hits major problems. Eventually the "blue" environment can be retired once it has finished servicing all of its requests.
This technique requires that the two systems be somewhat similar. Differences in database schema, for instance, may make it impractical.

How to keep the application alive without a client

My goal is to send an email out every 5 min from my application even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works up until the server decides to kill the application due to inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: The server is load balanced, so a solution could make use of that.
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.
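A hypothetical ping endpoint for this (the controller name is made up): the important part is that it executes managed code, so the monitor's request keeps the worker process warm and the in-process FluentScheduler timers alive.

using System;
using System.Web.Mvc;

public class KeepAliveController : Controller
{
    // GET /keepalive
    // Point Pingdom (or any external uptime monitor) at this route every few
    // minutes so IIS never idles out the application pool between visitors.
    [HttpGet]
    public ActionResult Index()
    {
        // Hitting a controller action (not a static file) exercises the ASP.NET
        // pipeline and resets the idle countdown for the worker process.
        return Content("OK " + DateTime.UtcNow.ToString("o"), "text/plain");
    }
}

Note that IIS may still recycle the application pool on its own schedule, so the scheduler should tolerate being restarted; the ping only prevents the idle shutdown.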

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the replication lag concept, which basically says that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
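For concreteness, the routing scheme described in the question might be sketched like this; the connection-string names are invented, and SqlClient is used purely for illustration (the same idea applies to any ADO.NET provider pointed at an RDS endpoint):

using System.Configuration;
using System.Data.SqlClient;

public static class DbConnectionFactory
{
    // "Master" and "ReadReplica" are hypothetical connection-string entries;
    // the replica entry would point at the RDS read-replica endpoint.
    public static SqlConnection Open(bool isReadOnlyRequest)
    {
        var name = isReadOnlyRequest ? "ReadReplica" : "Master";
        var connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings[name].ConnectionString);
        connection.Open();
        return connection;
    }
}

The answers below deal with the part this sketch ignores: the replica lagging behind the master right after the /create-account write.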
I worked with an application that used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves, and from the master only in selected cases.
As an example: when the user updated their password, the application would always ask the master during the authentication prompt. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on the environment:
After an update, compute the entity checksum, store it in the application cache, and when fetching the data always check it against that checksum
Use browser storage or a cookie to store the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind that it is subject to the CAP theorem.
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did:
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all of your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
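At the application level, the same "pin reads to the master for roughly 20 seconds after a write" trick could be sketched in ASP.NET MVC like this (the cookie name and window length are arbitrary; Facebook does the check in the load balancer, but the idea is the same):

using System;
using System.Globalization;
using System.Web;

public static class ReplicationPinning
{
    private const string CookieName = "last_write_utc";            // hypothetical name
    private static readonly TimeSpan Window = TimeSpan.FromSeconds(20);

    // Call after any successful database write (e.g. at the end of /create-account).
    public static void MarkWrite(HttpResponseBase response)
    {
        response.SetCookie(new HttpCookie(CookieName, DateTime.UtcNow.ToString("o")));
    }

    // Call before choosing a connection for a read-only request.
    public static bool MustReadFromMaster(HttpRequestBase request)
    {
        var cookie = request.Cookies[CookieName];
        DateTime lastWrite;
        return cookie != null
            && DateTime.TryParse(cookie.Value, CultureInfo.InvariantCulture,
                   DateTimeStyles.RoundtripKind, out lastWrite)
            && DateTime.UtcNow - lastWrite < Window;
    }
}

Combined with the connection factory sketched under the question, a read-only controller would use the replica only when ReplicationPinning.MustReadFromMaster(Request) returns false.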

IIS instances sharing data

I don't know if this is common or something, but I wanted to check. I am building a site on an IIS 7 server and coming across a weird problem. Whenever I have two clients accessing the site, it seems they are sharing info. Here is an example: when one client does a search for a particular item, the other client goes to the search page and sees the results of the first client's search. I am using a global class to store this information in my code-behind.
So here is my question: my understanding of servers was that if two clients accessed the server, they would be running on different instances of the site, meaning that even if I have a global class in my code, it would be as if two separate machines were running it. Am I wrong in this understanding?
Also are there settings in IIS that I need to change for this to work?
In ASP.NET, you can use Session variables, which are per-user values stored in server memory and tied to the client by a session token. You can store HTML form info in the session so another page on your site can read it.
The syntax in your MVC controller action to create a Session would be:
Session["MyFormData"] = someObject;
http://msdn.microsoft.com/en-us/library/ms178581.aspx
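To illustrate the difference the question is running into: a static (global) field exists once per application domain and is shared by every client, while Session is keyed to the individual user. A minimal contrast, with made-up names:

using System.Web.Mvc;

public class SearchController : Controller
{
    // BUG: static state lives once per worker process, not once per client.
    // Every visitor sees whatever the most recent search stored here.
    private static string _lastSearchShared;

    public ActionResult RunSearch(string query)
    {
        _lastSearchShared = query;          // shared across all clients
        Session["LastSearch"] = query;      // private to the current client's session
        return RedirectToAction("Results");
    }

    public ActionResult Results()
    {
        // Reading from Session gives each client only their own query.
        ViewBag.Query = Session["LastSearch"] as string;
        return View();
    }
}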

High Availability ASP.NET MVC

When building an ASP.NET MVC application with a goal of high availability, is it a good practice to keep the session state on the SQL Server, if there is no state server available?
The point here really is that you have 2-3 web servers, as you mentioned in the comment to Craig's answer.
One way is to use SQL Server session state, which has its own problems: http://idunno.org/articles/277.aspx.
If you have just that one SQL Server, I would be careful, because the session state DB will put a heavy load on it. Each request will write to the DB.
We use two web servers and a load balancer with sticky sessions. If your first request ends up on server 1, then all your requests are handled by server 1. (It's a bit more sophisticated than that, but you get the idea.)
This might not always be the best solution, but at least on our site (it's a shop where users typically stay 20-30 minutes) it works well. We use only a little session state and have most of the user-specific data stored by the Profile system. But I guess the Profile system will also fail if requests go to different servers.
I'd suggest AppFabric Caching (f.k.a. Velocity) instead.
