Generic Back Functionality - asp.net-mvc

I need to implement back functionality in my project. What I am doing is keeping the last URL in ViewData["RetUrl"], and on the next page I read the previous URL from ViewData["RetUrl"]. I implemented the functionality this way, but the idea fails as soon as the number of page levels increases: with page1 > page2 > page3 there is no way to go back from page3 to page1, so I can only maintain one level.
Now I am thinking of a generic kind of implementation that I could easily reuse in my next project. Please help me with your ideas about this...
I am working on ASP.NET MVC.

It also gets complicated if you recall that not all page requests are GETs; some are POSTs, and possibly other verbs.
I once wanted to do something similar but then abandoned the idea. It's not really that needed. As an idea on how to approach the problem...
At each request, record the current URL and the verb together as a pair, like:
GET /users
GET /users/add-user
POST /users/add-user
GET /users
You can store this information in the TempData collection, read it at each request and update it by adding the current request's details. Then you implement some framework method that will scan the collection, skipping all POSTs (or whatever you need), and give you the previous GET URL.
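A rough sketch of that idea in ASP.NET MVC terms, written as an action filter. Note that TempData normally only survives the next request, so this sketch keeps the history in Session instead; the attribute name, session key and helper method are all invented for illustration:

using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

public class NavigationHistoryAttribute : ActionFilterAttribute
{
    private const string HistoryKey = "NavHistory";

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        var session = filterContext.HttpContext.Session;

        // Record verb + URL for every request, e.g. ("GET", "/users").
        var history = session[HistoryKey] as List<KeyValuePair<string, string>>
                      ?? new List<KeyValuePair<string, string>>();
        history.Add(new KeyValuePair<string, string>(request.HttpMethod, request.RawUrl));
        session[HistoryKey] = history;

        base.OnActionExecuting(filterContext);
    }

    // Scan backwards, ignoring the current request and skipping non-GETs,
    // to find the URL a generic "Back" link should point to.
    public static string PreviousGetUrl(HttpSessionStateBase session)
    {
        var history = session[HistoryKey] as List<KeyValuePair<string, string>>;
        if (history == null || history.Count < 2)
            return null;

        return history.Take(history.Count - 1)
                      .LastOrDefault(h => h.Key == "GET")
                      .Value;
    }
}

Registered as a global filter, this records every request, and the view that renders the Back link can call NavigationHistoryAttribute.PreviousGetUrl(Session) to get the previous GET URL no matter how many levels deep the user is.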

Related

Rails routing, deep nesting for external resources

I understand that it's a good practice to only include in the URL the parameters needed to determine the object of the model.
If I have 2 models, Post and Comment... a Post has many comments and a comment belongs to one post. A URL for a comment can be
/comment/:comment_id
and from associations I can determine which Post it belongs to but
Some Rails apps need to access external resources (via APIs, for example). If the Rails app needs to replicate a part of another external source, what is the right way to handle URLs and routing?
If for example a post has some comments, The URL for a comment can be
/post/:post_id/comment/:comment_id
or
/comment/:comment_id
The latter has one disadvantage: I can't determine which post the comment belongs to if the external source's API doesn't provide that, and this would cause some problems with navigation through the app. On the other hand, it's a short URL and it allows the user to easily manipulate the URL to get another comment (which I see as an advantage). Using the first (long) form makes the URL quite long, but I can always know which post the comment belongs to.
The only solution I can think of is to make both possible, but the user would never know that the short one exists if I make the long one the default. What do you think?
I always use the longer / spelled-out version myself. I don't mind that it's long and I can only see good things come from it as you're discovering here. I also think it's an advantage because then you can do things like this:
post = Post.find_by_id(params[:post_id])
comment = post.comments.find_by_id(params[:id])
The point being that you can't go "comment fishing" this way. You have to have the right post context in order to get at a specific comment. This may not matter a whole lot if comments aren't at all sensitive, but there are many things in a web app that may be. So scoping finds by a root object (like post here) gives you a quick, reusable permissions check without having to verify parent objects separately.
Anyway, that's my 2 cents. I never understood why people take offense at the longer URLs. If they work for you then don't be afraid to use them!

when to choose between mvc post and ajax

We have a system that was created using mvc 3 and has a LOT of ajax calls from our views.
There are a number of performance issues (not linked to the ajax) so we are looking at potentially starting from scratch.
Primarily the screens are setup screens so we get some data back, edit and save.
I'm having a difficult job finding any worthwhile material on when to use ajax and when to stick with good old posts.
Does anyone have any input on a good rule of thumb or links as to when to use what...?
If we did go down the rewrite route, it would be using MVC 4.
For a fast and slick UI response, use AJAX, as it does not reload the page each time it performs an operation.
Use GET requests for viewing information, and POST requests for editing/saving.
Now, AJAX requests themselves can go through either GET or POST. GET requests are used for viewing something without editing it, and POST requests are used when you wish to edit something. POST is also used when you do not wish to expose sensitive data: with POST the request data goes in the body of the request, whereas with GET it is appended to the URL.
E.g. a GET request:
GET /blog/?name1=value1&name2=value2 HTTP/1.1
Host: example.com
A POST request:
POST /blog/ HTTP/1.1
Host: example.com

name1=value1&name2=value2
Moreover, a user login page, which contains sensitive information, will be authenticated using a POST request, whereas queries on Google are GET requests, and you can verify that by seeing your search terms appended to the google.com URL.
Use AJAX when your boss says the screen flickers.
This is largely a question of usability and behavior. As such, it's subjective. You have to ask yourself (or your users): how do you want the page (or its elements) to behave? If you don't care if there is a round trip, then a standard post/redirect/get may be in order. If you want to keep the current page state after an operation, then an AJAX call may be a better choice.
They both do the same thing; they only do it in different ways. You have to decide which way you want it to behave.
I'd say partial posts (AJAX) make sense when the result is that your page does not change significantly (if you're staying on the same page and only posting a small thing and maybe rebuilding a small segment of the page).
If you're rebuilding the entire page with new data, or obviously if you're redirecting elsewhere, a full post makes sense.
AJAX calls are significantly smaller and faster, can still provide the same server stuff (Session, authentication, etc.), and can still return partial views based on a model, so you don't even have to lose your MVC pattern. It's a little more javascript, but if all you're doing is making a small post and expecting a small change to your page, AJAX can dramatically improve user experience, while at the same time reducing bandwidth.
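As a small sketch of that last case in ASP.NET MVC (controller, model and partial view names are invented): the action handles the AJAX post and returns only the fragment the page needs, and the client-side script (e.g. jQuery's $.post) appends the returned markup to the page.

using System.Web.Mvc;

public class Note { public string Text { get; set; } }

public class NotesController : Controller
{
    [HttpPost]
    public ActionResult Add(string text)
    {
        // ... persist the note here ...

        // Return just the new fragment instead of redirecting and
        // rebuilding the whole page; the caller injects this HTML.
        return PartialView("_NoteRow", new Note { Text = text });
    }
}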

Play Framework - GET vs POST

New to web development, my understanding is that GET is used to get user input and POST to give them output. If I have a hybrid page, eg. on StackOverflow, if I write a question, it POSTs a page with my question, but also has a text box to GET my answer. In my routes file, what method would the URL associated with my postQgetA() method specify - GET or POST?
From a technical point of view you could use only GET to perform almost every operation, but...
GET is the most common method; it's used when you e.g. click on a link to get data (without modifying it on the server), optionally sending the id of the resource to get (if you need the data of a single user).
POST is most often used for sending new data to the server, e.g. from a form, to store it in your database (or process it in some other way).
There are also other request methods (e.g. DELETE, PUT) you can use with Play; however, some of them need to be 'emulated' via e.g. AJAX, as it is not possible to set the method of a common link to DELETE. How to use non-GET/POST methods in Play! is described elsewhere. (Note that Julien suggests there using GET for a delete action; although it is possible, it breaks the semantics.)
There are also other discussions on StackOverflow where you can find examples and suggestions for choosing the correct method for your routes.
BTW, if you send some request, let's say a POST, you don't need to perform a separate GET, as sending a request generates a response. In other words, after sending a new question with POST you first try to save it to the DB, and if there are no errors you render the page and send it back in the response.
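For the original question, that usually means the routes file pairs a GET that renders the form with a POST that receives the submission; a rough Play 2-style sketch (controller and action names are invented):

# conf/routes
GET     /questions/new      controllers.Questions.showForm()
POST    /questions          controllers.Questions.submit()

The page returned by the POST can itself contain the next form, so no separate GET is needed just to render it.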

Why should you delete using an HTTP POST or DELETE, rather than GET?

I have been working through Microsoft's ASP.NET MVC tutorials, ending up at this page
http://www.asp.net/learn/mvc/tutorial-32-cs.aspx
The following statement is made towards the bottom of this page:
In general, you don’t want to perform an HTTP GET operation when invoking an action that modifies the state of your web application. When performing a delete, you want to perform an HTTP POST, or better yet, an HTTP DELETE operation.
Is this true? Can anyone offer a more detailed explanation for the rationale behind this statement?
Edit
Wikipedia states the following:
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server
Jon Skeet's answer is the canonical answer. But: Suppose you have a link:
href = "/myApp/DeleteImportantData.aspx?UserID=27"
and the google-bot comes along and indexes your page? What happens then?
GET is conventionally free of side-effects - in other words, it doesn't change the state. That means the results can be cached, bookmarks can be made safely etc.
From the HTTP 1.1 RFC 2616
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
Apart from purist issues around being idempotent, there is a practical side: spiders/bots/crawlers etc will follow hyperlinks. If you have your "delete" action as a hyperlink that does a GET, then google can merrily delete all your data. See "The Spider of Doom".
With POSTs, this isn't a risk.
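A minimal ASP.NET MVC sketch of that convention (the controller name and the placeholder delete step are invented): the GET action only renders a confirmation page, while the actual deletion answers POST only, so a crawler following plain hyperlinks can never trigger it.

using System.Web.Mvc;

public class ProductsController : Controller
{
    // GET /Products/Delete/5 : safe, just shows a confirmation page.
    [HttpGet]
    public ActionResult Delete(int id)
    {
        return View(id);
    }

    // POST /Products/Delete/5 : the destructive part, reachable only from a form submit.
    [HttpPost, ActionName("Delete")]
    public ActionResult DeleteConfirmed(int id)
    {
        // ... delete the record here ...
        return RedirectToAction("Index");
    }
}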
Another example..
http://example.com/admin/articles/delete/2
This will delete the article if you are logged in and have the right privileges. If your site accepts comments, for example, and a user submits that link as an image, like so:
<img src="http://example.com/admin/articles/delete/2" alt="This will delete your article."/>
Then when you yourself, as the admin user, come to browse through the comments on your site, the browser will attempt to fetch that image by sending off a request to that URL. But because you are logged in while the browser is doing this, the article will get deleted.
You may not even notice without looking at the source code, as most browsers won't show anything if they can't find an image.
Hope that makes sense.
Please see my answer here. It applies equally to this question.
Prefetch: A lot of web browsers will use prefetching, which means they will load a page before you click on a link, anticipating that you will click on it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests are not supposed to change state and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times, gives the same result, i.e. there are no side effects. For this reason GET HTTP requests are tightly tied to caching.
HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. So you will have undefined behavior from a slew of random programs if you don't follow it.
In addition to spiders, and requests having to be idempotent, there's also a security issue with GET requests. Someone can easily send your users an e-mail with
<img src="http://yoursite/Delete/Me" />
in the text and the browser will happily go along and try and access the resource. Using POST isn't a cure for such things (as you can put together a form post in javascript pretty easily) but it's a good start.
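In ASP.NET MVC, that "good start" is usually combined with the built-in anti-forgery token; a sketch with an invented controller and action:

using System.Web.Mvc;

public class ArticlesController : Controller
{
    // The form in the view renders @Html.AntiForgeryToken(); posts that
    // arrive without a matching token are rejected before this runs.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Delete(int id)
    {
        // ... perform the destructive work here ...
        return RedirectToAction("Index");
    }
}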
About this topic (HTTP methods usage), I recommend reading this blog post: http://blog.codevader.com/2008/11/02/why-learning-http-does-matter/
It actually covers the opposite problem: why not to use POST when no data is changed.
Apart from all the excellent reasons mentioned on here, GET requests could be logged by the recipient server, such as in the access.log. If you send across sensitive data such as passwords in the request, they'll get logged as plaintext.
Even if they are hashed/salted for secure DB storage, a breach (or someone looking over the IT guy's shoulder) could reveal them. Such data should go in the POST body.
Let's say we have an internet banking application and we visit the transfer page. The logged in user chooses to transfer $10 to another account.
Clicking on the submit button redirects (as a GET request) to https://my.bank.com/users/transfer?amount=10&destination=23lk3j2kj31lk2j3k2j
But the internet connection is slow and/or the server(s) is (are) busy, so after hitting the submit button the new page loads slowly.
The user gets frustrated and starts hitting F5 (refresh page) furiously. Guess what will happen? More than one transfer will occur possibly emptying the user's account.
Now, if the request is made as a POST (or anything other than GET), on the first F5 (page refresh) the browser will gently ask "are you sure you want to do that? It can have side effects [bla bla bla] ..."
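The usual remedy, sketched here in ASP.NET MVC terms (the Post/Redirect/Get pattern; controller and action names are invented), is to answer the POST with a redirect, so that a refresh only repeats a harmless GET:

using System.Web.Mvc;

public class TransferController : Controller
{
    [HttpPost]
    public ActionResult Transfer(decimal amount, string destination)
    {
        // ... perform the transfer exactly once here ...

        // Redirect, so F5 on the next page re-issues a GET of the
        // confirmation screen rather than repeating the transfer.
        return RedirectToAction("Confirmation");
    }

    [HttpGet]
    public ActionResult Confirmation()
    {
        return View();
    }
}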
Another issue with GET is that the command goes to the browser's address bar. So if you refresh the page, you issue the command again, be it "delete last stuff", "submit the order" or similar.

ASP.NET MVC: Do GET requests on private web pages have to be nondestructive?

In ASP.NET MVC it seems to be common practice not to use GET requests for calls to a controller that modify the model. For example, deleting a customer should not be possible by clicking a simple HTML link.
The only reason for this rule I am aware of is to safeguard against web crawlers which might inadvertently alter the database. GET requests are commonly regarded as safe, whereas POST requests are not.
Does this mean that this rule does not apply to non-public portions of a website (Example: Your password-protected user administration area)? Or is there any other reason not to use destructive GET requests?
This is generally part of HTTP. From the HTTP 1.1 RFC 2616
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
In other words, it's not enforced, but it's really bad form for a GET request to have side effects. Imagine if a user bookmarks a URL which updates something, for example; they probably wouldn't expect that to happen.
Another good reason is accelerator plug-ins for browsers. These attempt to speed up page loads by pre-fetching links on the current page. Imagine if you had a bunch of GET requests to delete all the objects in a list, the plug-in would delete them!
The short of it is that you can't predict what a browser will do with GET requests; if it looks like a plain old hyperlink then it's fair game for a browser to go fetch it.
Yes.
It's not just about web crawlers, it's about CSRF - Cross Site Request Forgery.
So imagine that someone is logged into your web site, and browses to www.hax0rs.com
In the source for hax0rs.com is the following tag
<img src="http://mysite.com/members/statusChange?status=I%20am%20looking%20for%20a%20gimp%20mask" height="0" width="0">
Because your user is logged in, and because the request is going to your site, the authentication cookie goes with it. And bang, suddenly your user's status has changed.
What fun :)
But I suppose you can still do some sort of "non-retrieval" action on GET requests. For example, updating a "LastVisit" record, which can be considered non-destructive and relatively safe.
