Handling MVC Session State on Multi-Server Production Environment - asp.net-mvc

So I started working for a company in my first production job as a developer, where I'm writing an MVC web application. I was told that for certain data we didn't want any persistent storage, so I should just keep it in session. I just finished setting up the production environment so that we can do automatic deployment to the servers.
I do so in a rolling deployment manner: drain connections from a machine, take it down, deploy new code, bring it back up, then do the next machine.
From my first test this seems to kill the session data, which is what I was worried about. Is there a way within IIS to transfer session data when a user switches machines, or do I need some sort of shared storage to recover from in case a user has to be moved off a machine so new code can be loaded onto it?
I am using BIG-IP for my load balancer, which does the draining, and I don't think it necessarily knows anything about IIS. This is my first experience with the complexities of production-level deployment with no-downtime requirements. I imagine I'll need a file-storage backup to 'recover' from if necessary. Just want to make sure I'm not missing something.

Best practice for a server farm (even if it's just two servers) is to NOT use session for anything. Apart from the performance problems it can cause (session access is serialized, which slows things down as load increases), session is considered transient storage and can literally disappear at any time. When the IIS app pool is restarted (intentionally or not), session is lost. IIS can also dump sessions before they expire if it starts running low on resources.
Session is basically unpredictable and unreliable. Anything you put in session should be rebuildable by your app if it finds the session data is gone.
Of course, this is for in-proc session. You can use an out-of-process state server or a database-backed session store, but those affect performance too, especially if you make heavy use of session.
In general, design your apps to NOT use session, unless it's for trivial things that can easily be recreated if they are no longer found.
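To make the "rebuild it if it's gone" advice concrete, here is a minimal sketch of a session accessor that treats session purely as a cache in front of a real data source. It assumes classic ASP.NET MVC (HttpSessionStateBase); the key name and the rebuild delegate are hypothetical, standing in for whatever authoritative source your app has.

using System;
using System.Web;

// A minimal sketch: treat Session as a disposable cache, never as the source
// of truth. If an entry has vanished (app-pool recycle, failover to another
// server in the farm, resource pressure), rebuild it transparently.
public static class SessionCache
{
    public static T GetOrRebuild<T>(HttpSessionStateBase session,
                                    string key,
                                    Func<T> rebuild) where T : class
    {
        var value = session[key] as T;
        if (value == null)
        {
            value = rebuild();      // e.g. re-query the database or recompute
            session[key] = value;   // re-prime the cache for later requests
        }
        return value;
    }
}

Usage from a controller action would look like SessionCache.GetOrRebuild(Session, "Cart", () => LoadCartFor(User.Identity.Name)), where LoadCartFor is a hypothetical query against your database of record. With this shape, a rolling deploy that lands a user on a fresh server costs one rebuild query instead of a broken request.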

Related

Is it possible to receive a webhook to my app before Heroku Postgres goes read-only?

I have an application that handles some data in memory.
I'd like to close down operations and persist the data into the DB so that a reboot won't destroy it.
My app also opens some resources in various third-party services, and I'd like to close them. After that the app can happily go down and wait until it reboots.
What I found is that Heroku has various webhooks for application deployment state changes and so on, but I couldn't find a way to trigger a webhook before the DB becomes read-only.
I would like a webhook that tells me "in 5 minutes PostgreSQL will become read-only". The app can reboot later; that doesn't matter for now.
I also couldn't find any info on whether this is even possible, nor an email address for support.
Is there a way to do it? Is it even possible?
(I have an event-sourced app that saves event data into the DB but keeps its working state in memory as it runs, so I don't want to continuously bash all of my state into the DB.)
It sounds like there is some confusion about how dyno and database uptime work on Heroku.
Firstly, a database going into read-only mode is a very rare event, usually associated with a critical failure. Based on the behavior you're seeking and some of your comments, it seems you may be confusing database state changes with dyno state changes. Dynos (the servers for your application runtime) are restarted roughly once every 24 hours, and they are ephemeral, so their memory is blown away. The "roughly" accounts for fuzzing, so that all of your dynos don't restart at the same time, which would cause availability issues.
I don't think you actually need a webhook here. Conveniently, shortly before a dyno is due to be cycled (and your memory blown away), it receives a SIGTERM and is given 30 seconds to clean up after itself. You can trap that SIGTERM and save your data to the database.
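To make that concrete, here is a minimal sketch of trapping SIGTERM before the dyno dies. The question doesn't name a runtime, so purely for consistency with the rest of this page it's shown in C#, using the PosixSignalRegistration API from .NET 6+; PersistStateToDatabase is a hypothetical stand-in for flushing your in-memory state.

using System;
using System.Runtime.InteropServices;
using System.Threading;

class Program
{
    static readonly ManualResetEventSlim Shutdown = new ManualResetEventSlim();

    static void Main()
    {
        // Heroku sends SIGTERM ~30 seconds before it kills the process.
        using var registration = PosixSignalRegistration.Create(
            PosixSignal.SIGTERM,
            context =>
            {
                context.Cancel = true;       // suppress the default immediate termination
                PersistStateToDatabase();    // flush in-memory state while we still can
                Shutdown.Set();              // then let the process exit normally
            });

        // ... normal application work would run here ...
        Shutdown.Wait();
    }

    // Hypothetical: write the event-sourced snapshot to Postgres.
    static void PersistStateToDatabase()
    {
        Console.WriteLine("state persisted");
    }
}

The same trap-and-flush idea applies in any runtime (e.g. Signal.trap("TERM") in Ruby); the important part is finishing the write within Heroku's 30-second grace period.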

How to optimize Rails for one kiosk user?

I am playing around with using Rails to underpin a kiosk. This is a terminal where there is only one local user at a time.
Under this system, a browser like Chrome would access the Rails app.
Things I assume would be helpful:
Super-fast, very lightweight Rails server (I'm using Puma).
Eliminating standard processes/assumptions that are meant for internet website contexts (caching, CDNs, middleware, etc.).
Preferably in some level of detail: how should one set up a Rails app for maximum performance in a single-user kiosk?
This might sound like a non-answer, but the approach I would take is to use Rails in its default (production) configuration, and optimise performance issues as they arise in your test bed. Running Rails in production mode will likely give you more than enough performance if you have a dedicated machine for a single user (often you'll have many clients to a single Rails instance). Without testing the application, you could sink a considerable amount of time into optimisations that don't impact the user experience.
It may be worth sitting Rails behind Apache/nginx (Passenger is a well-understood way to get a Rails app running on Apache) to serve your static assets, but from the information provided so far I'd be surprised if performance optimisation were necessary at this stage.
A challenge that might be worth considering at this stage is how you'll deploy changes to your kiosk or set of kiosks. Will they be brought in for updates, or will changes need to be applied over the air? That will likely affect how you deploy the app onto the machine, and in my experience it is a harder thing to change later on.

SSH tunneling to a remote DB from Heroku?

I'm thinking of deploying a small Rails app on Heroku. In an effort to save money, I'd like my app to use an external database (to which I have free access), rather than a Heroku-hosted database. The trouble is that the free database only accepts local connections. To access it from Heroku, I'd need to do so via an SSH tunnel.
Is it possible for a Heroku app to persist its data in an external DB accessed via SSH? If so, how?
(For bonus points, here's a second question: is this a good idea? On the one hand, this scheme would save me from paying for a Heroku database. On the other hand, it means having to encrypt all my database traffic. I imagine that this would massively slow down my web dynos, and reduce the number of requests they can serve. Would the money I save on the database get used up paying for more dynos? Am I likely to come out ahead by doing this?)
Yes, you can. It is possible to set up a tunnel from Heroku to an external database.
That said, you don't want to do it for the reason the OP mentions (to avoid paying for a Heroku database), for the reasons @sgrif mentions: it would be painfully slow and probably wouldn't really save anything.
But there are legitimate reasons for wanting to tunnel to an external database, for example if data resides in a legacy system that you need to analyze.
Rather than simply repeat myself (it's long), here's a link to the recipe that worked for me: SSH tunneling from Heroku
No, and even if it were an option it would be a really bad idea, as you'd be adding massive latency to every request: for all intents and purposes you'd have to open a new tunnel for every request.
Your best option is likely Heroku's development or starter tiers. The free development tier will work if your database has fewer than 10,000 rows. The $15/mo starter tier works for up to 1,000,000 rows.

ASP.NET MVC Session state using state partitioning, MongoDB or Memcached or...?

My team is currently building a new SaaS application for our company (Amilia.com). We are in "alpha" release, and the application was built to be deployed on a web farm.
For our session provider we are using SQL Server mode (in DEV and TEST), and it doesn't seem to be "scalable", so we are looking for the best solution for handling sessions in ASP.NET (MVC 3 in our case). We are currently on SQL Server but would like to switch to another system because of licensing costs.
We target 20,000 [EDITED, was 100k before] concurrent users. In session we store a GUID, a string, and a Cart object (we try to keep it as small as possible; this object saves us 3 queries on each request).
Here are the different solutions I've found:
ASP.NET built-in solutions:
No session: impossible in our case (eliminated).
In-Proc mode: can't be used in a web farm (eliminated).
StateServer mode: can be used in a web farm, but if the server goes down I lose all my sessions (eliminated).
StateServer mode with a PartitionResolver using multiple servers (http://msdn.microsoft.com/en-ca/magazine/cc163730.aspx#S8): if I understand correctly, if one of these servers goes down, only a part of my users will lose their sessions.
SqlServer mode: can be used in a web farm; if the server goes down I can recover my sessions, but the process is quite slow, and that database becomes a bottleneck under heavy load.
SqlServer mode with a PartitionResolver using multiple servers (http://www.bulletproofideas.net/2011/01/true-scale-out-model-for-aspnet-session.html): if one of these servers goes down, only a part of my users will lose their sessions. If the user was idle during the downtime, he will recover his previous session; otherwise he will be redirected to the sign-in screen.
Custom solutions:
Use MongoDB as session storage (http://www.adathedev.co.uk/2011/05/mongodb-aspnet-session-state-store.html): it seems to be a good trade-off, but my knowledge of NoSQL is quite rudimentary, so I can't see the cons.
Use Memcached: the problem is the same as with StateServer mode; if the memcached server goes down, all my sessions are lost. Furthermore, I think memcached isn't really meant for storing session state?
Use distributed memcached like ScaleOut (http://highscalability.com/product-scaleout-stateserver-memcached-steroids): seems to be the best solution, but it costs money.
Use repcached with memcached (http://repcached.lab.klab.org/): I've never seen an implementation of this solution.
We could easily move to Microsoft Azure and use the tools it provides, but we have only one application, so if Microsoft doubles the price, we immediately double our infrastructure cost (but that's another subject).
So, what's the best way, or at least what's your opinion about this?
SQL Server session is pretty good. Since you already have a SQL Server database to store your primary data, you can just create another database and store the ASP.NET Session there.
About scalability: I would say that if you have 100,000 concurrent users, your user base must be 10 million or more. You should do some practical estimating to see how long it will really take to reach such a concurrent load. In my previous startup we had millions of users all around the world, 24x7, but we hardly ever reached 10K concurrent users, even though people used our site continuously for hours every day.
If you really do have 100,000 concurrent users, license cost will be the least of your worries. With the right business model, having 100K concurrent users means you have at least $10M revenue/year.
I built myoffice.bt.com, which uses SQL Server session and keeps all primary data on a single SQL Server instance, in two databases. Between 8 AM and 10 AM, millions of users hit our site, and we hardly have any performance issues. With a dual-core server and 8 GB RAM, you can happily run a SQL Server instance and support such a load, as long as you code it right. If you follow performance best practices, you can easily scale to millions of users on a single database server.
Take a look at my performance suggestions from:
http://omaralzabir.com/tag/performance/
I have used memcached clusters, but only to cache frequently used data, never for session, for good reason. There have been several occasions where a memcached server had to be rebooted, and if we had used memcached for session, we would have lost all the sessions stored on that instance. So I would not recommend storing sessions in memcached. But then again, how important is it for your app to keep data in session? If you have a shopping cart, then as users add products to the cart, it must be persisted in the database, not in session. Session is for short-term storage; any transactional data should never be kept in session but stored directly in relational tables.
I am always in favor of not using Session. Developers abuse session all the time: whenever they want to pass data from one page to another, they just put it in Session, and it results in bad design. If you truly want to scale to a 100K-concurrent-user base, design your app not to use session at all. Any transactional data must be stored in the database. A Cart is a transactional object, and thus not suitable for holding in Session; at some point you will need to know how many carts get started but never get placed, so you will need to store them permanently in the database.
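As a rough sketch of that pattern (hypothetical names and schema throughout): the cart lives in a permanent table, and session, or equally a cookie, carries only the cart's key, so losing the session costs nothing and abandoned carts remain countable.

using System;
using System.Web;   // classic ASP.NET, matching the MVC 3 context of the question

public class Cart { public Guid Id { get; set; } /* line items elided */ }

// Hypothetical data-access stub; in a real app this is a table plus your ORM.
public class CartRepository
{
    public Cart Find(Guid id) { /* SELECT ... WHERE Id = @id */ return null; }
    public void Save(Cart cart) { /* INSERT or UPDATE the cart row */ }
}

public class CartService
{
    private readonly CartRepository _repo = new CartRepository();

    // Fetch the user's cart, creating and persisting one on first use.
    public Cart GetOrCreateCart(HttpSessionStateBase session)
    {
        var cartId = session["CartId"] as Guid?;
        if (cartId.HasValue)
        {
            var existing = _repo.Find(cartId.Value);
            if (existing != null) return existing;
        }

        // Session empty or stale: start a new cart in the database of record.
        // (In a real app you would also look the cart up by the signed-in user,
        // so that a lost session doesn't mean a lost cart.)
        var cart = new Cart { Id = Guid.NewGuid() };
        _repo.Save(cart);             // permanent row; abandoned carts stay countable
        session["CartId"] = cart.Id;  // session holds only the key, cheap to lose
        return cart;
    }
}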
Remember, database-backed session is nothing but database-backed serialization. Think very carefully about what you serialize into the database. You will also have to clean it up yourself, since Session_End won't fire for database-backed session, or in fact for most out-of-proc session modes. Essentially you are giving devs the ability to serialize data into the database and bypass the relational model, which always results in bad coding.
With permanent relational storage, fronted by a high-performance cache like memcached, you have a much better design for supporting a large user base.
Hope this helps.

Rescue rails app from server failure

I have a Rails app which is hosted on a dedicated server. Today something happened: the app doesn't respond, I have no SSH access, restarting doesn't help, and I am waiting for tech support to respond. But that is not the question; I just need this app to stay online even if the server fails. What is the easiest option? Can I create a second server at a different host and serve from there in case of failure? If so, how do I sync the DB and files? The application is not heavily loaded; I just need it to be available.
Difficult problem to solve. There's no one proven way to make this happen, but in general you need "no single point of failure".
There's an entire science devoted to reliability in web applications; there's no way to get that fully answered in an SO question.
You can take frequent backups of your database and store them on S3 (and/or somewhere else). You can then:
have an image of your application server at your host,
spin it up when your server dies,
restore the database, and
have the new application server take over responsibility (easiest way: assume the old server's IP address).
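As a rough sketch of the backup step: the app in question is Rails, so in practice this would likely be a cron'd shell script; it's written in C# here only for consistency with the rest of this page, and it assumes a Postgres database (pg_dump/pg_restore) and the AWS CLI on the PATH, neither of which is stated in the question. The database and bucket names are hypothetical.

using System;
using System.Diagnostics;

class BackupJob
{
    // Run an external command and fail loudly if it fails.
    static void Run(string cmd, string args)
    {
        var process = Process.Start(new ProcessStartInfo(cmd, args) { UseShellExecute = false });
        process.WaitForExit();
        if (process.ExitCode != 0)
            throw new Exception(cmd + " failed with exit code " + process.ExitCode);
    }

    static void Main()
    {
        var file = "app-" + DateTime.UtcNow.ToString("yyyyMMdd-HHmm") + ".dump";

        // 1. Dump the database in pg_dump's restorable custom format.
        Run("pg_dump", "--format=custom --file=" + file + " myapp_production");

        // 2. Ship it off the box; the bucket name is hypothetical.
        Run("aws", "s3 cp " + file + " s3://myapp-db-backups/" + file);

        // Restoring on the standby server is the reverse:
        //   aws s3 cp s3://myapp-db-backups/<latest> .
        //   pg_restore --dbname=myapp_production <latest>
    }
}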
