Using GET instead of POST to delete data behind authenticated pages [closed] - asp.net-mvc

I know you should use POST whenever data will be modified on a public website. There are several reasons, including the fact that search engines will follow links and may inadvertently modify data.
My question is: do you think it is OK to use GET behind authenticated pages, in something like an admin interface?
One example would be a list of products with a delete link on each row. Since the only way to reach the page is to be logged in, is there any harm in just using a link with the product ID in the query string?
Elaboration for comments:
I personally don't have any issues or difficulties implementing the deletes with POST. I have just seen several examples of code in ASP.NET and ASP.NET MVC for "admin-like" pages that use GET instead of POST, and I am curious about people's opinions on the matter.

The temptation of using GETs is that you can create a bunch of delete links without creating dozens of forms per page, or resorting to JavaScript. Yet for various reasons that have already been mentioned, the web depends on GETs not being destructive.
The best practice, if generating one tiny form per delete link on the server is impractical, is to use a GET link to load a confirmation page from the server, which has a POST form that performs the delete. Then do some progressive enhancement on an ordinary link, for example:
<a href="/controller/delete/x" class="delete-link">Delete</a>
Script can then intercept clicks on links with that class, confirm with the user, and submit a POST instead of following the GET.
If the server gets a GET to /controller/delete/x then serve up a confirmation page with a POST form. If the server gets a POST (or maybe a DELETE) request then do the deletion.
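A rough ASP.NET MVC sketch of that GET-confirm / POST-delete split (ProductsController and the repository are illustrative names, not from the original answer):

public class ProductsController : Controller
{
    private readonly IProductRepository repository; // hypothetical data access

    // GET /products/delete/5 - never deletes, only serves the confirmation page
    [HttpGet]
    public ActionResult Delete(int id)
    {
        var product = repository.Find(id);
        if (product == null)
            return HttpNotFound();
        return View(product); // the view renders a POST form back to this URL
    }

    // POST /products/delete/5 - performs the actual deletion
    [HttpPost, ActionName("Delete")]
    public ActionResult DeleteConfirmed(int id)
    {
        repository.Delete(id);
        return RedirectToAction("Index");
    }
}

The [ActionName("Delete")] attribute lets both methods answer at the same URL, so the GET serves the confirmation and the POST performs the deletion.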

Some people learned some time ago that it's a very bad idea.
Google launched an app to "speed up browsing" (Google Web Accelerator) that prefetched linked pages in the browser (no attacks, no third party involved), and when someone logged in to such protected pages, the app looked at all those links and said: "Hey, I'll prefetch those too, so the page is ready when the user requests it."
They have since changed the behavior, but anyone could do something similar any day.

It is still bad practice to use GET for destructive operations, even if it is hidden behind authentication, as it makes it possible (or at least easier) for someone with knowledge of that URL to exploit it (for example, using XSS). And of course it is a bad design/coding practice as well, especially if you are trying to create a RESTful service.
There are probably many other reasons as well...

GET ought to be used to retrieve data safely and idempotently, and POST ought to be used to update data non-idempotently. That's all. It's certainly not a "best practice" to interchange the methods.
As to XSS and CSRF risks: to prevent the first, just HTML-escape any user-controlled input when (re)displaying it; to prevent the second, make use of request-based tokens and/or captchas.
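To illustrate the escaping half of that advice in C# (a toy example; in Razor views, @-output is already HTML-encoded by default):

using System;
using System.Web;

class EscapeDemo
{
    static void Main()
    {
        string userComment = "<script>alert('xss')</script>";
        // HtmlEncode turns markup into inert text such as &lt;script&gt;...,
        // so user input is displayed rather than executed.
        Console.WriteLine(HttpUtility.HtmlEncode(userComment));
    }
}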

Yes.
Code may rely (correctly) on GET not being destructive. That code could run in the browser, and thus will be authenticated (link prefetching comes to mind).

It would be a bad practice to delete data based on a GET request. Technically you can do it, but you'll be out of sync with most well-written websites. You are basically creating a new set of rules for your user interface if you use GET requests for deletes. I consider the URLs of your website part of the user interface. If you sent somebody a link like http://www.fakesite.site/posts/delete?ID=1, they would expect to be shown a page asking whether they want to delete the post with ID #1, not to have the delete performed immediately.

I do this on pages where I know someone is logged in and I can verify the user's right to delete something based on other data I keep in the session. I would still suggest adding a confirmation step: "Are you sure you want to delete this thingy?"

GET and POST are very similar, except that GETs are limited in the length of the HTTP request because everything is carried in the URL.
Since you won't be providing access to people who haven't authenticated, I don't believe using GETs is problematic.

Related

Can I make a website without a user login

I'm planning to make a simple one- or two-page website on travel experiences. Guests can send me those details by form and I can post them on the website.
The short answer is, yes you can.
From what I understand you want any visitor to your site to be able to type up a travel experience on the site, submit it, you then moderate and check it, and decide to publish it or not.
As much as that describes a "simple one or two page website", there is a lot that needs to happen for you to accomplish that:
You will need a database to store the user submissions in;
You probably want some kind of protection mechanism so that a malicious user or bot cannot just submit millions of rubbish entries;
You will want to send commands to your database in a way that prevents "SQL injection", whereby a user can hide malicious actions (like deleting all your data in the database) inside his submission (see the sketch at the end of this answer).
I can carry on, but I think you get the point: what you want to do is a simple technical exercise for someone who already knows how to build dynamic websites, but quite a challenge for someone with little or no experience.
That does not mean that it won't be a worthwhile exercise and a most valuable learning experience, but it won't be a quick couple of days' work for someone without the experience and knowledge.
There are tons of free resources on the web that you can use to learn to do exactly what you envision, so I encourage you to go for it. Good luck!
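On the SQL injection point above, the usual defence is a parameterized query; a minimal C# sketch (the connection string, table and variable names are invented for illustration):

using System.Data.SqlClient;

// Parameters keep user input as data, so it can never be executed as SQL.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "INSERT INTO Submissions (AuthorName, Story) VALUES (@author, @story)",
    connection))
{
    command.Parameters.AddWithValue("@author", authorName); // values taken from the form
    command.Parameters.AddWithValue("@story", storyText);
    connection.Open();
    command.ExecuteNonQuery();
}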
There is no need for a user login for visitors to send posts to you. You can simply design a Submit Post page and collect the posts in your own view. After that you can publish or reject the submitted posts.
But there are some problems:
You cannot verify the users who are submitting the posts.
The accuracy of the posts will be reduced because the submissions are unverified.

Rails routing, deep nesting for external resources

I understand that it's good practice to include in the URL only the parameters needed to identify the object of the model.
If I have 2 models, Post and Comment... a Post has many comments and a comment belongs to one post. A URL for a comment can be
/comment/:comment_id
and from the associations I can determine which Post it belongs to.
But some Rails apps need to access external resources (via APIs, for example). If the Rails app needs to replicate part of another external source, what is the right way to handle URLs and routing?
If for example a post has some comments, The URL for a comment can be
/post/:post_id/comment/:comment_id
or
/comment/:comment_id
The latter has one disadvantage: I can't determine which post the comment belongs to if the external source's API doesn't provide that, which would cause some problems with navigation through the app. On the other hand, it's a short URL and lets the user easily manipulate it to get another comment (which I see as an advantage). Using the first (long) form makes the URL longer, but I always know which post the comment belongs to.
The only solution I can think of is to make both possible but the user would never know that the short one exists if I make the long one the default. What do you think?
I always use the longer, spelled-out version myself. I don't mind that it's long, and I can only see good things coming from it, as you're discovering here. I also think it's an advantage because then you can do things like this:
post = Post.find_by_id(params[:post_id])
comment = post.comments.find_by_id(params[:id])
The point is that you can't go "comment fishing" this way: you have to have the right post context in order to get at a specific comment. This may not matter much if comments aren't at all sensitive, but many things in a web app may be. Scoping finds by a root object (like post here) allows a quick permissions check that can be reused, without having to check parent objects separately.
Anyway, that's my 2 cents. I never understood why people take offense at longer URLs. If they work for you, don't be afraid to use them!

Persisting data in MVC for the duration of a user's session

Apologies in advance, as this topic has no doubt been asked before, but I couldn't find any post that answers my specific query.
Bearing in mind that I'm new to MVC this is where I have got to. I've got a project developed under VS 2010 using the MVC 3 framework. I've got a search page which consists of 6 fields and a nested model which itself holds around 3 fields.
I can successfully post all this data back to itself, and the data is successfully passed as a model and back again, so the fields keep the data the user has supplied.
Before I move on to actually using this search criteria on another view, a thought hit me: I want to keep this search criteria, and possibly even the search results, in memory for the duration of the user's session.
The reasoning behind this is simply to save my users time by:
a) negating the need to keep re-inputting their search criteria regardless of how they enter or leave the search page
b) speeding up the user experience by presenting the search results more quickly
The latter isn't as important as the first requirement.
I've done some Google searches and had a look through this site on similar topics. From what I've read, using sessions (which I would typically use if developing a PHP site) is a no-no. The reasons I've read for avoiding sessions seem valid, and I'm happy to go along with them.
But now I'm left scratching my head, wondering what exactly is best practice for achieving this simple goal, one that could be applied to similar situations later in the project.
I also looked at the OutputCache attribute, and that didn't behave as I expected. In a test I set the timeout to 30 seconds. After submitting a search, I clicked the link to my search page to see if the fields would auto-populate; they didn't. But on clicking the search button, the values in the cache were retrieved. I thought I was making progress, but when I tried to submit a new value, the old value from the cache came back, i.e. I couldn't actually change my search criteria with the cache enforced. So I've discounted this as an avenue to explore.
The last option seems to suggest cookies as the most likely candidate, but rightly or wrongly I feel this isn't the best solution. I would have thought the MVC 3 design pattern would have an easier, recommended method of persisting values. I'm sure there is one; I've just not discovered it yet.
I have started to use jQuery, and again this has been mentioned, but I'm not sure it's the right direction to take either.
So in summary, my question really comes down to what the wider community considers best practice for persisting data in my situation. Efficiency, scalability and resiliency are paramount, as I'll have a large global user base using this web app.
Thanks in advance!
Pete
I'd just use cookies. They're simple to use, you can persist them for as long as you want or have them expire when the user closes their browser, and it doesn't sound like you are storing anything sensitive in them.
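For what it's worth, a minimal sketch of that approach in an MVC 3 controller (the cookie name and the SearchCriteria fields are made up for illustration):

// Saving the criteria when a search is posted
[HttpPost]
public ActionResult Search(SearchCriteria criteria)
{
    var cookie = new HttpCookie("search-criteria")
    {
        Expires = DateTime.Now.AddDays(7) // omit Expires for a session-only cookie
    };
    cookie.Values["keywords"] = criteria.Keywords;
    cookie.Values["category"] = criteria.Category;
    Response.Cookies.Add(cookie);
    return View(criteria); // run the search and render the results here
}

// Re-populating the form when the page is next requested
[HttpGet]
public ActionResult Search()
{
    var criteria = new SearchCriteria();
    var cookie = Request.Cookies["search-criteria"];
    if (cookie != null)
    {
        criteria.Keywords = cookie.Values["keywords"];
        criteria.Category = cookie.Values["category"];
    }
    return View(criteria);
}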

When to check if account should be allowed to use the web application? [closed]

So I have this web app that in theory may one day become a for-pay application - if anyone actually finds it useful and worth it.
I have all the logic to handle payment, check to see if the account is overdue etc. in place. It is all stored in RavenDB (RavenHQ actually) - not that this should matter to the question at hand.
Now, I am trying to follow best practices, and I want my application to be performant, i.e. not micro-optimizing, but I want to do things in a way that will scale relatively well with load (if it takes off it will be hosted - I would love to not have to pay for more servers than is strictly necessary).
My app uses something close to the default login/account model. Users log in securely using forms authentication over https.
At what point should I check that a user is actually allowed (with regards to payment status etc - a domain model concern really) to be using the web application? Consider that this will mean requesting a single document from the RavenDB backend and checking if the current payment period has expired.
Should I:
Check every time the user logs in, and make them unable to "Remember me" for more than x hours, where x is a relatively small number?
Check in a few central controller actions that the user would visit relatively often - the application would essentially be severely restricted if these actions were not available.
Do a global action filter that checks on every request, then redirects to the "Pay nooooow!" page as soon as the subscription expires (sketched after the answer below)?
Another option?
RavenDB does clever caching, so I don't think a request for this document would kill performance, but should the application really take off (unlikely, but one can dream), an extra database request per http request will probably lead to Ayende hunting me down and mercilessly beating me. I don't want that.
It seems to me like this is something that others would have thought about and solved, so I am asking - what would be the right way to handle this?
Thanks for any insights!
I don't think this is strictly a framework issue; it's more about how you want your site to behave, and then using the framework to support that. Generally speaking, you want to make the site usable and not too restrictive except where necessary, e.g. let users surf the site with no restriction whatsoever, but make checking out very secure.
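If you do go with the global action filter (the third option in the question), a rough ASP.NET MVC 3 sketch might look like this; SubscriptionService and the Billing route are assumptions, not from the question:

using System.Web.Mvc;
using System.Web.Routing;

public class RequireActiveSubscriptionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var user = filterContext.HttpContext.User;
        // SubscriptionService is a hypothetical lookup; its result could be
        // cached per user to avoid one RavenDB request per HTTP request.
        if (user.Identity.IsAuthenticated &&
            !SubscriptionService.IsActive(user.Identity.Name))
        {
            filterContext.Result = new RedirectToRouteResult(
                new RouteValueDictionary(new { controller = "Billing", action = "PayNow" }));
        }
    }
}

// Registered once in Global.asax.cs so it runs for every action:
// GlobalFilters.Filters.Add(new RequireActiveSubscriptionAttribute());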

Why should you delete using an HTTP POST or DELETE, rather than GET?

I have been working through Microsoft's ASP.NET MVC tutorials, ending up at this page
http://www.asp.net/learn/mvc/tutorial-32-cs.aspx
The following statement is made towards the bottom of this page:
In general, you don’t want to perform an HTTP GET operation when invoking an action that modifies the state of your web application. When performing a delete, you want to perform an HTTP POST, or better yet, an HTTP DELETE operation.
Is this true? Can anyone offer a more detailed explanation for the rationale behind this statement?
Edit
Wikipedia states the following:
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server, or external side effects.
Jon Skeet's answer is the canonical one. But suppose you have a link:
href="/myApp/DeleteImportantData.aspx?UserID=27"
and the Googlebot comes along and indexes your page? What happens then?
GET is conventionally free of side-effects - in other words, it doesn't change the state. That means the results can be cached, bookmarks can be made safely etc.
From the HTTP 1.1 spec, RFC 2616:
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
Apart from purist issues around being idempotent, there is a practical side: spiders/bots/crawlers etc. will follow hyperlinks. If you have your "delete" action as a hyperlink that does a GET, then Google can merrily delete all your data. See "The Spider of Doom".
With POSTs, this isn't a risk.
Another example:
http://example.com/admin/articles/delete/2
This will delete the article if you are logged in and have the right privileges. If your site accepts comments, for example, and a user submits that link as an image, like so:
<img src="http://example.com/admin/articles/delete/2" alt="This will delete your article."/>
Then when you, as the admin user, come to browse through the comments on your site, the browser will attempt to fetch that image by sending a request to that URL. But because you are logged in while the browser is doing this, the article will get deleted.
You may not even notice without looking at the source code, as most browsers won't show anything if they can't load an image.
Hope that makes sense.
Please see my answer here. It applies equally to this question.
Prefetch: a lot of web browsers will use prefetching, which means a page can be loaded before you ever click the link, in anticipation that you will click it later.
Bots: there are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests are not supposed to change state, and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times, gives the same result, i.e. there are no side effects. For this reason, GET HTTP requests are tightly tied to caching.
The HTTP standard says so: the HTTP standard says what each HTTP method is for. Several programs are built around the HTTP standard, and they assume that you will use it the way you are supposed to. If you don't, you will get undefined behavior from a slew of random programs.
In addition to spiders, and requests having to be idempotent, there's also a security issue with GET requests. Someone can easily send your users an e-mail with
<img src="http://yoursite/Delete/Me" />
in the text, and the browser will happily go along and try to access the resource. Using POST isn't a cure for such things (you can put together a form POST in JavaScript pretty easily), but it's a good start.
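In ASP.NET MVC specifically, that "good start" is usually combined with the built-in anti-forgery token: the form emits a hidden token with @Html.AntiForgeryToken(), and the action refuses POSTs that don't carry it. A sketch (the repository is a made-up stand-in):

[HttpPost]
[ValidateAntiForgeryToken] // rejects POSTs lacking the token emitted by @Html.AntiForgeryToken()
public ActionResult Delete(int id)
{
    articleRepository.Delete(id); // hypothetical data access
    return RedirectToAction("Index");
}

A hostile <img> tag or scripted form post from another site can't read the token, so the request fails before the delete runs.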
On this topic (HTTP method usage), I recommend reading this blog post: http://blog.codevader.com/2008/11/02/why-learning-http-does-matter/
It looks at the opposite problem: why you should not use POST when no data is changed.
Apart from all the excellent reasons mentioned here, GET requests may be logged by the receiving server, such as in the access.log. If you send sensitive data such as passwords in the request, they'll be logged as plaintext.
Even if they are hashed/salted for secure DB storage, a breach (or someone looking over the IT guy's shoulder) could reveal them. Such data should go in the POST body.
Let's say we have an internet banking application and we visit the transfer page. The logged in user chooses to transfer $10 to another account.
Clicking on the submit button redirects (as a GET request) to https://my.bank.com/users/transfer?amount=10&destination=23lk3j2kj31lk2j3k2j
But the internet connection is slow and/or the server is busy, so after hitting the submit button the new page loads slowly.
The user gets frustrated and starts hitting F5 (refresh page) furiously. Guess what will happen? More than one transfer will occur, possibly emptying the user's account.
Now, if the request is made as a POST (or anything other than GET), then on the first F5 the browser will gently ask: "Are you sure you want to do that? It can have side effects [bla bla bla]..."
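The server-side half of the fix for that refresh problem is the Post/Redirect/Get pattern; a rough ASP.NET MVC sketch (the controller and bankService names are invented):

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Transfer(decimal amount, string destination)
{
    bankService.Transfer(amount, destination); // stand-in for the real banking logic

    // Redirect so that F5 re-issues a harmless GET of the confirmation page,
    // not a second copy of the transfer POST.
    return RedirectToAction("Confirmation");
}

[HttpGet]
public ActionResult Confirmation()
{
    return View(); // safe to refresh any number of times
}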
Another issue with GET is that the command ends up in the browser's address bar. So if you refresh the page, you issue the command again, be it "delete last item", "submit the order" or similar.
