When to choose between MVC POST and AJAX - asp.net-mvc

We have a system that was created using MVC 3 and has a LOT of AJAX calls from our views.
There are a number of performance issues (not linked to the AJAX), so we are looking at potentially starting from scratch.
Primarily the screens are setup screens, so we get some data back, edit and save.
I'm having a difficult time finding any worthwhile material on when to use AJAX and when to stick with good old POSTs.
Does anyone have any input on a good rule of thumb, or links as to when to use what?
If we did go down the re-write route, it would be using MVC 4.

For a fast and slick UI response, use AJAX, as it does not reload the page each time it performs an operation.
Use GET requests for viewing information, and POST requests for editing/saving.
Now, AJAX requests can be made through either GET or POST. GET requests are used for viewing something without editing; POST requests are used when you wish to edit something, or when you do not wish to expose sensitive data. With POST, the data of the request goes in the body of the request, whereas with GET the data is appended to the URL as a query string.
E.g. a GET request:
GET /blog/?name1=value1&name2=value2 HTTP/1.1
Host: example.com
and a POST request:
POST /blog/ HTTP/1.1
Host: example.com

name1=value1&name2=value2
Moreover, a user login page, which contains sensitive information, will be authenticated using a POST request, whereas queries on Google are GET requests, and we can verify this by seeing our search terms appended to the google.com URL.

Use AJAX when your boss says the screen flickers.

This is largely a question of usability and behavior. As such, it's subjective. You have to ask yourself (or your users): how do you want the page (or its elements) to behave? If you don't care that there is a round trip, then a standard post/redirect/get may be in order. If you want to keep the current page state after an operation, then an AJAX call may be a better choice.
They both do the same thing; they just do it in different ways. You have to decide which way you want it to behave.

I'd say partial posts (AJAX) make sense when the result is that your page does not change significantly - if you're staying on the same page, only posting a small thing, and maybe rebuilding a small segment of the page.
If you're rebuilding the entire page with new data, or obviously if you're redirecting elsewhere, a full post makes sense.
AJAX calls are significantly smaller and faster, still go through the same server-side plumbing (Session, authentication, etc.), and can still return partial views based on a model, so you don't even have to lose your MVC pattern. It's a little more JavaScript, but if all you're doing is making a small post and expecting a small change to your page, AJAX can dramatically improve the user experience while reducing bandwidth.
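As a rough illustration of that last point (the controller, action and view names below are invented, not part of the original answer), the server side of such an AJAX post in ASP.NET MVC could look something like this:

using System.Web.Mvc;

public class NotesController : Controller
{
    // A hypothetical "save one small thing" action targeted by an AJAX form post.
    [HttpPost]
    public ActionResult Save(string text)
    {
        // ... persist the change here ...

        if (Request.IsAjaxRequest())
            return PartialView("_NoteRow", text); // small fragment the script swaps into the page
        return RedirectToAction("Index");         // graceful fallback for a plain full-page post
    }
}

The client-side script posts the form with XMLHttpRequest (e.g. jQuery's $.post) and replaces only the affected element with the returned markup, so the rest of the page state is untouched.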

Related

Detecting a change in the page including refresh

I am working in a .tpl file, meaning I am open to JS, HTML and PHP answers. What I want to do is: whenever a person refreshes the page, experiences a change in the URL or exits the browser, my site would take an action based on this change of state. So basically, when they leave that specific page of mine in any way, I would call a function. The reason I want this is because I am saving an editable image on my site, and whenever they leave the page I want the image they created to be autosaved.
This task splits into client-side and server-side parts. On the client side you should bind to the interesting browser events, triggering background HTTP requests to some service URLs of your website; this is the JS part. On the server side you should provide the corresponding reaction to these requests, which is the PHP part.
Since these service URLs will be called intermittently by various visitors, be sure to keep track of which request came from which client's window. PHP sessions should help you.
I'd propose working on this in stages: first get the saving machinery working by binding everything to explicit big buttons on the page (page close, URL change, etc.), then replace each button with a binding to the exact JS event. Keep in mind the differences among browsers.

Generic Back Functionality

I need to implement "back" functionality in my project. What I am doing is maintaining the last URL in ViewData["RetUrl"], and on the next page I get the previous URL from that ViewData["RetUrl"]. That is how I had implemented this functionality. The idea fails when the number of page levels increases, i.e. page1 > page2 > page3: there is no way to go back from page3 to page1, as I can only maintain one level.
Now I am thinking of a generic kind of implementation which I can easily reuse in my next project. Please help me with your ideas about this.
I am working in ASP.NET MVC.
It also gets complicated if you recall that not all page requests are GETs; some are POSTs and possibly other verbs.
I once wanted to do something similar but then abandoned the idea - it's not really that needed. As an idea on how to approach the problem...
At each request, record the current page and the verb together as a pair, like:
GET /users
GET /users/add-user
POST /users/add-user
GET /users
You can store this information in the TempData collection, read it at each request and update it by adding the current request's details. Then you implement some framework method that will scan the collection, skipping all POSTs (or whatever you need), and give you the previous GET URL.
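A minimal sketch of that idea as an ASP.NET MVC action filter (the attribute and method names are invented for illustration, and it uses Session rather than TempData so the history survives more than one request without having to be re-kept):

using System.Collections.Generic;
using System.Web;
using System.Web.Mvc;

public class NavigationHistoryAttribute : ActionFilterAttribute
{
    private const string HistoryKey = "NavHistory";

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Record the verb and URL of every request as a pair, e.g. ("GET", "/users").
        var request = filterContext.HttpContext.Request;
        var history = filterContext.HttpContext.Session[HistoryKey]
                          as List<KeyValuePair<string, string>>
                      ?? new List<KeyValuePair<string, string>>();

        history.Add(new KeyValuePair<string, string>(request.HttpMethod, request.RawUrl));
        filterContext.HttpContext.Session[HistoryKey] = history;

        base.OnActionExecuting(filterContext);
    }

    // Scan backwards, skipping the current request and anything that is not a GET,
    // to find the URL a "Back" link should point to.
    public static string PreviousGetUrl(HttpSessionStateBase session)
    {
        var history = session[HistoryKey] as List<KeyValuePair<string, string>>;
        if (history == null)
            return null;

        for (int i = history.Count - 2; i >= 0; i--)
            if (history[i].Key == "GET")
                return history[i].Value;

        return null;
    }
}

Register the attribute globally (or on a base controller) and call PreviousGetUrl from a view or action to build the Back link.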

Showing status of current request by AJAX

I'm trying to develop an application which modifies a couple of tasks of the famous Online-TODO List RememberTheMilk (rememberthemilk.com) using the REST API.
Unfortunately the modification takes a lot of time, so I want to give feedback to the users.
My idea was just to display a couple of text lines (e.g. "modifying task 1 of n...").
Therefore I used periodically_call_remote on my page and called an action which reads a Singleton.
In the request I store the text that should be displayed in that same Singleton. But I found out that, once I submit a request, periodically_call_remote does not update the specified div.
My questions on this:
1. Is this a good way to implement this behaviour?
2. If it is, how do I get periodically_call_remote to work during a submit?
Using a Singleton is most definitely a bad idea. In an advanced production setup it isn't guaranteed that subsequent requests will go to the same process or to the same machine (and they will consequently see a different Singleton). Plus, if you have many users, I don't even want to think about what will happen to those poor Singletons.
Does any of this stuff actually need to go through your Rails app? It seems like you can call the RTM API via Javascript from the page the user is on and then update the page when the XHR request is complete.

Why should you delete using an HTTP POST or DELETE, rather than GET?

I have been working through Microsoft's ASP.NET MVC tutorials, ending up at this page
http://www.asp.net/learn/mvc/tutorial-32-cs.aspx
The following statement is made towards the bottom of this page:
In general, you don’t want to perform an HTTP GET operation when invoking an action that modifies the state of your web application. When performing a delete, you want to perform an HTTP POST, or better yet, an HTTP DELETE operation.
Is this true? Can anyone offer a more detailed explanation for the rationale behind this statement?
Edit
Wikipedia states the following:
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server [...]
Jon Skeet's answer is the canonical answer. But suppose you have a link:
href="/myApp/DeleteImportantData.aspx?UserID=27"
and the Googlebot comes along and indexes your page? What happens then?
GET is conventionally free of side-effects - in other words, it doesn't change the state. That means the results can be cached, bookmarks can be made safely etc.
From the HTTP 1.1 RFC 2616
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
Apart from purist issues around being idempotent, there is a practical side: spiders/bots/crawlers etc. will follow hyperlinks. If you have your "delete" action as a hyperlink that does a GET, then Google can merrily delete all your data. See "The Spider of Doom".
With POSTs, this isn't a risk.
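In ASP.NET MVC terms, the usual way to keep a delete off GET looks roughly like the sketch below (the controller and views are invented for illustration): GET only shows a confirmation page, and the destructive work happens only on POST.

using System.Web.Mvc;

public class ArticlesController : Controller
{
    // GET /articles/delete/2 - shows a confirmation page, changes nothing.
    [HttpGet]
    public ActionResult Delete(int id)
    {
        return View(id);
    }

    // POST /articles/delete/2 - the actual destructive operation,
    // reachable only from a submitted form, never from a crawled hyperlink.
    [HttpPost, ActionName("Delete")]
    public ActionResult DeleteConfirmed(int id)
    {
        // ... delete the article here ...
        return RedirectToAction("Index");
    }
}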
Another example..
http://example.com/admin/articles/delete/2
This will delete the article if you are logged in and have the right privileges. If your site accepts comments, for example, and a user submits that link as an image, like so:
<img src="http://example.com/admin/articles/delete/2" alt="This will delete your article."/>
Then when you yourself, as the admin user, come to browse through the comments on your site, the browser will attempt to fetch that image by sending off a request to that URL. But because you are logged in while the browser is doing this, the article will get deleted.
You may not even notice without looking at the source code, as most browsers won't show anything if they can't find an image.
Hope that makes sense.
Please see my answer here. It applies equally to this question.
Prefetch: A lot of web browsers will use prefetching, which means a page is loaded before you click on the link, anticipating that you will click it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests are not supposed to change state, and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times, gives the same result, i.e. there are no side effects. For this reason, GET HTTP requests are tightly tied to caching.
The HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. So you will have undefined behavior from a slew of random programs if you don't follow it.
In addition to spiders, and requests having to be idempotent, there's also a security issue with GET requests. Someone can easily send your users an e-mail with
<img src="http://yoursite/Delete/Me" />
in the text, and the browser will happily go along and try to access the resource. Using POST isn't a cure for such things (as you can put together a form post in JavaScript pretty easily), but it's a good start.
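In ASP.NET MVC that "good start" is usually paired with the framework's anti-forgery token, roughly like this (a sketch, not anyone's actual code from this thread):

// Inside the controller:
[HttpPost]
[ValidateAntiForgeryToken] // rejects POSTs that do not carry the token issued with your own form
public ActionResult Delete(int id)
{
    // ... delete the resource here ...
    return RedirectToAction("Index");
}

// In the view, the form that performs the delete renders the matching token:
// @using (Html.BeginForm("Delete", "Articles")) { @Html.AntiForgeryToken() /* submit button */ }

Since a third-party page cannot read the token tied to your user's session, a forged cross-site POST fails validation.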
On this topic (HTTP method usage), I recommend reading this blog post: http://blog.codevader.com/2008/11/02/why-learning-http-does-matter/
It is actually about the opposite problem: why not to use POST when no data is changed.
Apart from all the excellent reasons mentioned here, GET requests could be logged by the receiving server, such as in access.log. If you send sensitive data such as passwords in the request, they'll get logged as plaintext.
Even if they are hashed/salted for secure DB storage, a breach (or someone looking over the IT guy's shoulder) could reveal them. Such data should go in the POST body.
Let's say we have an internet banking application and we visit the transfer page. The logged in user chooses to transfer $10 to another account.
Clicking on the submit button redirects (as a GET request) to https://my.bank.com/users/transfer?amount=10&destination=23lk3j2kj31lk2j3k2j
But the internet connection is slow and/or the server is busy, so after hitting the submit button the new page loads slowly.
The user gets frustrated and starts hitting F5 (refresh page) furiously. Guess what will happen? More than one transfer will occur, possibly emptying the user's account.
Now, if the request is made as a POST (or anything other than GET), on the first F5 (refresh) the browser will gently ask "are you sure you want to do that? It can have side effects [bla bla bla] ...".
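Accepting the transfer only as a POST and then redirecting (the post/redirect/get pattern mentioned in an earlier answer) avoids both problems. A rough sketch with invented action names:

// Inside a hypothetical TransfersController:
[HttpPost]
public ActionResult Transfer(decimal amount, string destination)
{
    // ... perform the transfer exactly once ...

    // Redirect so that F5 on the resulting page re-issues a harmless GET,
    // not another transfer.
    return RedirectToAction("TransferComplete");
}

[HttpGet]
public ActionResult TransferComplete()
{
    return View();
}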
Another issue with GET is that the command goes to the browser's address bar. So if you refresh the page, you issue the command again, be it "delete last stuff", "submit the order" or similar.

ASP.NET MVC: Do GET requests on private web pages have to be nondestructive?

In ASP.NET MVC it seems to be common practice not to use GET requests for calls to a controller that modify the model. For example, deleting a customer should not be possible by clicking a simple HTML link.
The only reason for this rule I am aware of is to safeguard against web crawlers, which might inadvertently alter the database. GET requests are commonly regarded as safe, whereas POST requests are not.
Does this mean that this rule does not apply to non-public portions of a website (Example: Your password-protected user administration area)? Or is there any other reason not to use destructive GET requests?
This is generally part of HTTP. From the HTTP 1.1 RFC 2616
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
In other words, it's not enforced, but it's really bad form for a GET request to have side-effects. Imagine if a user bookmarks a URL which updates something, for example - they probably wouldn't expect that to happen.
Another good reason is accelerator plug-ins for browsers. These attempt to speed up page loads by pre-fetching links on the current page. Imagine if you had a bunch of GET requests to delete all the objects in a list - the plug-in would delete them!
The short of it is that you can't predict what a browser will do with GET requests; if it looks like a plain old hyperlink, then it's fair game for a browser to go and fetch it.
Yes.
It's not just about web crawlers, it's about CSRF - Cross-Site Request Forgery.
So imagine that someone is logged into your web site, and browses to www.hax0rs.com
In the source for hax0rs.com is the following tag
<img src="http://mysite.com/members/statusChange?status=I%20am%20looking%20for%20a%20gimp%20mask" height="0" width="0">
Because your user is logged in, and because the request is going to your site, the authentication cookie goes with it. And bang, suddenly your user's status has changed.
What fun :)
But I suppose you can still do some sort of "non-retrieval" action on GET requests - for example, updating "LastVisit" records, which can be considered nondestructive and relatively safe.
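If you do that, it is best kept in one place, such as a small action filter, so the GET actions themselves stay free of explicit side effects. A sketch with invented names:

using System;
using System.Web.Mvc;

public class TrackLastVisitAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var http = filterContext.HttpContext;

        // Only piggy-back on safe requests; repeating this is harmless.
        if (http.Request.HttpMethod == "GET" && http.User.Identity.IsAuthenticated)
        {
            http.Session["LastVisit"] = DateTime.UtcNow;
            // Persisting the timestamp to the user's database record would go here as well.
        }

        base.OnActionExecuted(filterContext);
    }
}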
