ASP.NET MVC - how would I maintain user state in Azure in my application

I know there are a few questions like this, but this one is about a specific situation.
I'm developing a platform for taking tests online. A test is a set of images and their associated questions. It's hosted on Azure and uses MVC 4.
What I would love is that if the user has taken half the test and the browser crashes, or something else makes them leave the test, they get the option to resume when they come back.
I have one idea myself, but would like to know if there are other options. I was considering using localStorage: when a user starts a test, the information for the test is saved in localStorage, and every time they move on to a new image, the local state is updated. Then, when the test player loads, it checks whether any ongoing tests are available.
How could I do it? Has anyone solved a similar problem?

Local Storage is not a good choice, because it is specific to each instance. That means if you have two instances of a Web Role (the recommended minimum), each instance would have its own local storage. They are not shared, and there is no way to access local storage on a specific machine.
You really have two options. You could use a database like SQL Azure, or use Azure Caching. Azure Caching is probably easier, since it's super easy to serialize/deserialize complex objects, but the downside is that a cached object is only kept for 72 hours: if it isn't accessed or updated in that time, it gets purged.
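For illustration, here is a minimal sketch of that caching approach, assuming the (now legacy) Azure Caching client (Microsoft.ApplicationServer.Caching); the TestProgress type and the key format are invented for the example.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

[Serializable]
public class TestProgress
{
    public Guid TestId { get; set; }
    public int CurrentImageIndex { get; set; }
}

public class TestProgressStore
{
    // "default" is the cache name configured for the role.
    private static readonly DataCache Cache = new DataCacheFactory().GetCache("default");

    public void Save(string userId, TestProgress progress)
    {
        // Put overwrites any existing entry; the 72-hour eviction mentioned
        // above still applies if the entry is never touched again.
        Cache.Put("test-progress:" + userId, progress);
    }

    public TestProgress Load(string userId)
    {
        // Returns null when nothing is cached, i.e. there is no test to resume.
        return (TestProgress)Cache.Get("test-progress:" + userId);
    }
}
```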

I would not recommend storing this information in the client browser. The user has access to local storage, cookies, etc., and could modify it. You could store the test start time in your database on the server. Then, every time the user sends a request to answer a question, you would verify whether the test is still active or the maximum allowed time has elapsed. A sketch of such a check follows.
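A hedged sketch of that server-side check as an MVC action filter; TestRepository is a hypothetical stub standing in for your real data access, and the 60-minute limit is an example value.

```csharp
using System;
using System.Web.Mvc;

public static class TestRepository
{
    // Hypothetical: would read the user's test start time from the database,
    // e.g. SELECT StartedAtUtc FROM Tests WHERE UserName = @userName
    public static DateTime? GetTestStartUtc(string userName)
    {
        return null;
    }
}

public class RequireActiveTestAttribute : ActionFilterAttribute
{
    private static readonly TimeSpan MaxDuration = TimeSpan.FromMinutes(60);

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var userName = filterContext.HttpContext.User.Identity.Name;
        DateTime? startedAtUtc = TestRepository.GetTestStartUtc(userName);

        if (startedAtUtc == null || DateTime.UtcNow - startedAtUtc.Value > MaxDuration)
        {
            // No active test, or the time limit has elapsed: reject the answer.
            filterContext.Result = new HttpStatusCodeResult(403, "Test expired.");
        }
    }
}
```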

Related

Preventing Rails from connecting to database during initialization

I am quite new to Ruby/Rails. I am building a service that makes an API available to users and ends up with some files created in the local filesystem, without any need to connect to a database. Then, once every few hours, I want to run a piece of Ruby code that takes these local files, uploads them to Amazon S3, and registers their location in a Postgres database.
Right now both pieces of code live together in the same project. I am observing that every time a user does something, the system connects to the database. I have seen this answer, which recommends eliminating all traces of ActiveRecord from my code, but given that I want my background bookkeeping process to connect to the database, I am stuck on what to do.
Is it possible to define two different profiles (one with a database and one without) and specify which profile a certain function call should run under? Would this work?
I'm a bit confused by this: the app does not magically connect to the database for kicks on every request; it does so because a specific request requires it, generally through ActiveRecord, but not exclusively.
If your system is connecting every time you make a request, that implies you have some sort of user metric or authorisation-based code in there. Just killing off the database will cause this to fail, and you'll likely have to find it anyway to get your system working. I'd advise locating it.
Things to look for are before_filters in controllers, or database session management, for example. Or look at what is in the logs - the query should appear - and that will tell you what is being loaded or modified.
It might even work to stop your database just before performing a user activity and see where the error leads you. Rinse and repeat until the user activity works without the database.

MVC Azure storage, auto-delete storage after a certain time

I'm developing an Azure website where users can upload blobs and metadata. I want uploaded content to be deleted after some time.
The only way I can think of is going for a cloud app instead of a website, with a worker role that checks every hour or so whether an uploaded file has expired and, if so, deletes it. However, I'm going for a simple website here, without worker roles.
I have a function that checks whether an uploaded item should be deleted, and if the user does something on the page I can easily call this function. BUT: if the user isn't doing anything when the time runs out, the function is never called and the storage is never deleted. How would you solve this?
Thanks
Too broad to give one right answer, as you can solve this in many ways. But since you're using Web Sites, I do suggest you look at WebJobs and see whether they might be the right tool for you (they give you the ability to run periodic jobs without the bulk of extra VMs in a web/worker configuration). You'll still need a way to manage your metadata to know what to delete.
Regarding other Azure-specific built-in mechanisms, you can also consider queuing delete messages with an invisibility time equal to the time the content is to be available. Once that time expires, the queue message becomes visible, and any queue consumer would then see the message and be able to act on it. This can be your WebJob (which has SDK support for queues) or really any other mechanism you build.
Again, a very broad question with no single right answer, so I'm just pointing out the Azure-specific mechanisms that could help solve this particular problem.
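A rough sketch of the queue idea above, using the classic Microsoft.WindowsAzure.Storage SDK; the queue name and the choice to carry the blob name in the message are example decisions, not requirements.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class DeleteScheduler
{
    public static void ScheduleDelete(string connectionString, string blobName, TimeSpan lifetime)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("pending-deletes");
        queue.CreateIfNotExists();

        // The message stays invisible for `lifetime`; once it becomes visible,
        // a consumer (e.g. a WebJob) reads it and deletes the named blob.
        queue.AddMessage(
            new CloudQueueMessage(blobName),
            timeToLive: null,
            initialVisibilityDelay: lifetime);
    }
}
```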
Like David said in his answer, there can be many solutions to your problem. One solution could be to rely on the blob itself. In this approach you periodically fetch the list of blobs in the blob container and decide whether each blob should be removed. The periodic fetching could be done through an Azure WebJob (if the application is deployed as a website) or through an Azure worker role. The worker role approach is independent of how your main application is deployed; it could be deployed as a cloud service or as a website.
With that, there are two possible approaches you can take (a sketch of the first follows the list):
Rely on the blob's Last Modified date: whenever a blob is updated, its Last Modified property gets updated. You can use that to identify whether the blob should be deleted. This approach works best if the uploaded blob is never modified.
Rely on the blob's custom metadata: whenever a blob is uploaded, you could set the upload date/time in the blob's metadata. When you fetch the list of blobs, you compare the upload date/time metadata value with the current date/time and decide whether the blob should be deleted.
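A sketch of the first approach, using the classic Microsoft.WindowsAzure.Storage SDK; the container name is an example, and the metadata variant would additionally pass BlobListingDetails.Metadata when listing.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobSweeper
{
    public static void DeleteExpired(string connectionString, TimeSpan maxAge)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");

        foreach (var item in container.ListBlobs(useFlatBlobListing: true))
        {
            var blob = item as CloudBlockBlob;
            if (blob == null) continue;

            // Listing populates blob properties, including Last Modified.
            var lastModified = blob.Properties.LastModified;
            if (lastModified.HasValue && DateTimeOffset.UtcNow - lastModified.Value > maxAge)
            {
                blob.DeleteIfExists();
            }
        }
    }
}
```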
Another approach might be to use the container name as the "expiry date".
This might make deletion easier, as you could then just remove expired containers.
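A sketch of that idea, where an "exp-yyyyMMdd" container naming convention (invented for this example) lets a periodic job drop whole containers at once:

```csharp
using System;
using System.Globalization;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ContainerSweeper
{
    public static void DeleteExpiredContainers(string connectionString)
    {
        var client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();

        foreach (var container in client.ListContainers(prefix: "exp-"))
        {
            // Container "exp-20240115" expires at the end of 2024-01-15.
            var datePart = container.Name.Substring("exp-".Length);
            var expiry = DateTime.ParseExact(datePart, "yyyyMMdd", CultureInfo.InvariantCulture);

            if (expiry.Date < DateTime.UtcNow.Date)
                container.Delete(); // removes the container and every blob in it
        }
    }
}
```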

How Can I let the End User Control AppSettings Themselves?

I have a system I've built in MVC 3 that currently provides a yearly submission cycle where the system proceeds through a series of seven steps tied to dates stored in the web.config as appSettings. However, each year I always have to roll the system back and forth between previous steps to accommodate the end users. I would like to give the administrator the ability to control the system status without having to contact a developer. What is the best way to do this?
I plan to build a page with proper validation that lets the administrator set the dates. I've considered a couple of options for how to store those dates, but none of them seems right. Our entire permission system uses these dates, and various bits of text on the pages turn on and off based on what period we're currently in.
So far I've come up with two options:
Option 1: Create a database table – This was my first thought. I've set up properties on the MvcApplication class in the global.asax and pulled them from the database. Using a lazy loader, I can set the properties the first time they're needed. However, when they change in the database, I don't have a way to force the system to "reset" and read the changes. If I do this on Begin_Request(), I'm constantly opening a connection and resetting the properties for each file the browser requests from the server, regardless of whether it's static content.
I could fetch the dates from the database every time I need one of them, but then I'd have to redo a lot of functionality to reduce repeated database calls. I'd like to cache the dates for each request and only pull them when I need them.
Option 2: Allow editing a config file through the application – I've looked up how to split the web.config file so I can have a separate file that contains just the appSettings. Then I could update that config file from a controller action (see the sketch below). I think this would work nicely and wouldn't require me to rewrite any of the existing functionality, but it feels like I'd be introducing a bad design pattern into the code.
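For reference, a minimal sketch of what Option 2's runtime update could look like with the standard System.Web.Configuration API. Note that saving web.config (or a config file it references) recycles the application pool, which matters for the answers below.

```csharp
using System.Web.Configuration;

public static class AppSettingsWriter
{
    public static void Set(string key, string value)
    {
        // Opens the root web.config of the current application.
        var config = WebConfigurationManager.OpenWebConfiguration("~");
        config.AppSettings.Settings[key].Value = value; // assumes the key already exists
        config.Save();
    }
}
```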
I'd vote for the database. For the sake of performance, you can cache those parameter values in a static class inside your app and provide a method in the same class to reread them from the DB. So:
When a user makes a request, check whether those properties are already cached. If they are, use the cached values; if not, read them from the DB.
When the administrator changes those parameters, store them in the database and force your static caching class to reread them from the DB. A sketch of such a class follows.
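A minimal sketch of that static caching class; SettingsRepository is a hypothetical stand-in for your own data access, and only one of the seven dates is shown.

```csharp
using System;

// Hypothetical stub; replace with your real persistence code,
// e.g. SELECT Value FROM Settings WHERE [Key] = @key
public static class SettingsRepository
{
    public static DateTime GetDate(string key)
    {
        return DateTime.MinValue;
    }
}

public static class CachedSettings
{
    private static readonly object Sync = new object();
    private static DateTime? _submissionStart;

    public static DateTime SubmissionStart
    {
        get
        {
            lock (Sync)
            {
                // Lazy load: hit the database only on the first access
                // after startup or after Invalidate() was called.
                if (_submissionStart == null)
                    _submissionStart = SettingsRepository.GetDate("SubmissionStart");
                return _submissionStart.Value;
            }
        }
    }

    // Call this from the admin controller after saving new dates.
    public static void Invalidate()
    {
        lock (Sync) { _submissionStart = null; }
    }
}
```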
I would suggest an approach that doesn't care whether the settings are stored in a database or as key/value pairs in a config file.
Since you want the settings to be accessed globally by all users, you can cache them, and the cache implementation should be generic and distributed. There are plenty of online resources on how to create such an interface.
Since you want the cache to stay in sync with the underlying data, you have to set cache dependencies (AppFabric doesn't support SQL cache dependencies, see this thread, while NCache supports both SQL and file dependencies).
I would store the values in a database and use a distributed cache to persist the data across the web farm. MS AppFabric Caching has worked well for me. You will need to implement a standard caching pattern (check the cache; if null, load from the DB and insert into the cache). I would probably just create a static Load() method that abstracts this logic away. When the admins update the DB, you could update the cache or just delete the cache key.
There are other considerations besides performance. Namely, if you modify the config file, the application pool is reinitialized, while the database solution doesn't cause application reinitialization.
...so do you need to reinitialize the app after the changes or not? If there is no way to avoid the reinitialization without drastic changes to the application, the config file solution is probably better.

Letting visitors try out user-only features that write to DB

My site lets people create database entries (as most Rails apps do), and I realized there's a huge drop-off between landing on the site and actually signing up to try it. Basically, the service lets users build their own document by combining different components. I'm thinking about adding an interface where visitors who are not yet registered can try out the features (building stuff), and asking them to sign up at the last stage, when they're about to publish their document.
The first thing that comes to mind is HTML5 local storage, but then another idea came to me: maybe I could create a temporary user whenever a visitor tries out the features, and later remove them from the database if they don't sign up. I'm not sure if this is safe, but it seems easier than dealing with all the local storage issues.
What would be the best practice for this type of situation?
HTML5 storage would be an option, though it would most likely mean a lot of client-side coding.
Another option would be to keep a duplicate table of these 'demo' documents, which you can clear every now and again of documents from users who did not sign up. You could also just store the document in the user's session, since you don't need it permanently stored, and then write it to the database once they have signed up.

How many users is too much when using session variables?

I'm currently using session variables as a cache to cut down on calls to the database.
I'm wondering at roughly how many concurrent users this stops working: 1,000? 10,000? 100,000? Also, will IIS start flaking out at a certain load? And are there any alternatives?
I know it depends on how much data I'm storing per user, but I want to hear about other people's experiences.
I do have it set up so that when the code tries to access a timed-out session, it reloads from the database.
I'm currently using IIS 6, but I could easily move to IIS 7 if it handles sessions better.
Edit: yes, I'm using application variables for non-user-specific data.
If this is a concern for you, use a State Server or the SQL Server storage option for session state. For almost all applications, though, it will not prove to be a problem.
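For reference, switching to an out-of-process session store is a web.config change along these lines (the connection string is a placeholder; StateServer mode with a stateConnectionString is the other option):

```xml
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YOUR_DB_SERVER;Integrated Security=True"
                timeout="20" />
</system.web>
```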
You should probably be looking at Memcached if you're getting to this point.
If you have more than 124,889 users, your server will begin to be unresponsive.
Edit: if your data does not change and can be reused, try caching it in an application-scoped variable, e.g. reference data.
It's very unlikely that session variable capacity will ever be the limiting resource for your server. Any particular reason you're asking?
How about using the Cache instead? It allows automatic cache invalidation.
Cache invalidation can happen both on a timeout and when an item is evicted under resource pressure.
You can use the Cache on a per-user basis by giving each item a user-specific key.
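A small sketch of per-user entries in the ASP.NET Cache; the key prefix and the 20-minute sliding window are arbitrary example values.

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class PerUserCache
{
    public static void Store(string userId, object data)
    {
        HttpContext.Current.Cache.Insert(
            "user-data:" + userId,
            data,
            null,                          // no cache dependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20));     // evicted after 20 minutes idle
    }

    public static object Load(string userId)
    {
        // Returns null if the entry expired or was evicted under memory pressure.
        return HttpContext.Current.Cache["user-data:" + userId];
    }
}
```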
