I want to escape all outgoing content sent to the browser. Unfortunately, it is not possible to add a tag and modify the JSPs at this stage. I have an interceptor which can be modified, but I'm not sure how I can get hold of the Result, as it has not yet been generated when the last interceptor runs.
Is there any way to get hold of the content sent back to the browser, so that I can escape it? It need not be an interceptor; all I want is to have this escaping code run on all outgoing content.
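One direction I was considering (completely untested; the class names here are my own) is a plain servlet Filter mapped in front of Struts that buffers the rendered output and post-processes it before writing to the real response, roughly like this:

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Hypothetical filter: buffers whatever the Result/JSP writes and lets you
// post-process it before it reaches the browser. Only getWriter() is handled
// here; a real version would also cover getOutputStream().
public class OutputEscapingFilter implements Filter {

    private static class BufferingResponseWrapper extends HttpServletResponseWrapper {
        private final CharArrayWriter buffer = new CharArrayWriter();

        BufferingResponseWrapper(HttpServletResponse response) {
            super(response);
        }

        @Override
        public PrintWriter getWriter() {
            return new PrintWriter(buffer);
        }

        String getCapturedOutput() {
            return buffer.toString();
        }
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        BufferingResponseWrapper wrapper =
                new BufferingResponseWrapper((HttpServletResponse) res);

        // Let the framework render the Result into the buffer instead of the
        // real output stream.
        chain.doFilter(req, wrapper);

        // Post-process the captured markup (escaping/sanitising) and write it out.
        String processed = escape(wrapper.getCapturedOutput());
        PrintWriter out = res.getWriter();
        out.write(processed);
        out.flush();
    }

    // Placeholder for whatever escaping/sanitising logic is needed.
    private String escape(String html) {
        return html;
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}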
A few XSS-related issues were taken care of in the latest release of Struts 2 (2.3.1). I don't have much idea about XSS myself, but have a look at the following issues; maybe they can give you some ideas:
XSS vulnerability in javatemplates plugin
Struts 2 XSS vulnerability
Given a Vaadin application where a user can add and remove elements of a list that is also rendered in the browser, I am wondering what the most efficient way of handling such manipulations would be. Currently, I am simply using the add and remove methods.
My experience so far is only with Apache Wicket, where one should avoid manipulating the component tree for performance reasons. In the documentation, I only found a section on how to handle repeated elements in Polymer, but nothing on how this can be done using the "simple" API.
Am I choosing the right approach?
The Vaadin UI code runs on the server, so the add/remove operations don't affect the DOM directly. When a response is sent back to the browser, Vaadin looks at the difference between the previous UI state and the current one and sends appropriate instructions to the browser client to update the DOM. In this case, the instruction would be something like "remove the following components: ...". The actual DOM manipulation is handled by Vaadin and is not something you can affect yourself.
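For reference, the straightforward approach you describe would look roughly like this (a sketch using Vaadin Flow; the route name and all identifiers are invented):

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.html.Div;
import com.vaadin.flow.component.html.Span;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

// Server-side view: mutating the component tree is all that is needed; Vaadin
// diffs the UI state and sends only the corresponding DOM instructions.
@Route("items")
public class ItemListView extends VerticalLayout {

    private final Div list = new Div();

    public ItemListView() {
        Button addButton = new Button("Add item", e ->
                // Only an "insert this element" instruction goes to the browser.
                list.add(new Span("Item " + (list.getChildren().count() + 1))));

        Button removeButton = new Button("Remove last", e ->
                // Only a "remove this element" instruction goes to the browser.
                list.getChildren().reduce((first, second) -> second)
                        .ifPresent(list::remove));

        add(addButton, removeButton, list);
    }
}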
If you run into performance issues, help us out by filing an issue ticket on GitHub so we can take a look at it: https://github.com/vaadin/flow/issues
I would like to enforce a dynamic parameter (a timestamp) on every URL of the application.
I would like to use this parameter to solve the recurring problem of the browser Back button, or of a URL invoked from history, by comparing the current page's timestamp with the invoked URL's timestamp.
Any clue is highly appreciated.
Hossam Khalil
What's the "current page timestamp"? Do you mean checking against the server's current time?
You'd need to have a timestamp in every link, which could be done with a custom tag.
Each form would need a timestamp, which could also be done via custom tag.
A custom request processor would be the Struts 1 way, although you may just be able to use a filter.
You may need to provide more details regarding what exact problem you're trying to solve.
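That said, to make the filter idea concrete, here's a rough, untested sketch; the "ts" parameter, the session attribute name and the redirect target are all invented:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Every rendered link/form carries a "ts" parameter (written by a custom tag);
// the filter rejects requests whose timestamp is older than the one the server
// handed out for the most recently rendered page.
public class StaleRequestFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        HttpSession session = request.getSession();

        String submitted = request.getParameter("ts");
        Long current = (Long) session.getAttribute("pageTimestamp");

        if (submitted != null && current != null
                && Long.parseLong(submitted) < current) {
            // Came from the Back button or browser history: send the user forward.
            response.sendRedirect(request.getContextPath() + "/stale.do");
            return;
        }

        // Issue a fresh timestamp for the page about to be rendered; the custom
        // tag would read this attribute when writing URLs and form actions.
        session.setAttribute("pageTimestamp", System.currentTimeMillis());
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}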
A common web problem is where a user clicks the submit button of a form multiple times so the server processes the form more than once. This can also happen when a user hits the back button having submitted a form and so it gets processed again.
What is the best way of stopping this from happening in ASP.NET MVC?
Possibilities as I see it are:
Disable the button after submit - this gets round the multiple clicks but not the navigation
Have the receiving action redirect immediately - browsers seem to leave these redirects out of the history
Place a unique token in the session and on the form - if they match process the form - if not clear the form for a fresh submit
Are there more?
Are there some specific implementations of any of these?
I can see the third option being implemented as an ActionFilter with an HtmlHelper extension, in a similar manner to the anti-forgery stuff.
Looking forward to hearing from you MVC'ers out there.
Often people overlook the most conventional way to handle this, which is to use nonce keys.
You can use PRG as others have mentioned but the downside with PRG is that it doesn't solve the double-click problem, it requires an extra trip to the server for the redirect, and since the last step is a GET request you don't have direct access to the data that was just posted (though it could be passed as a query param or maintained on the server side).
I like the JavaScript solution because it works most of the time.
Nonce keys, however, work all the time. The nonce key is a random unique GUID generated by the server (and also saved in the database) and embedded in the form. When the user POSTs the form, the nonce key is posted with it. As soon as a POST comes in to the server, the server verifies that the nonce key exists in its database. If it does, the server deletes the key from the database and processes the form. Consequently, if the user POSTs twice, the second POST won't be processed because the nonce key was deleted after processing the first POST.
The nonce key has an added advantage in that it brings additional security by preventing replay attacks (where a man in the middle sniffs your HTTP request and then replays it to the server, which treats it as legitimate).
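To make this concrete: the question is about ASP.NET MVC, but the pattern is framework-agnostic, so here is a minimal sketch in plain Java servlet terms, with an in-memory set standing in for the database and all names invented:

import java.io.IOException;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TransferFormServlet extends HttpServlet {

    // Nonces issued but not yet consumed (a real application would persist these).
    private static final Set<String> issuedNonces = ConcurrentHashMap.newKeySet();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Generate a nonce, remember it, and embed it in the form.
        String nonce = UUID.randomUUID().toString();
        issuedNonces.add(nonce);
        resp.setContentType("text/html");
        resp.getWriter().printf(
                "<form method='post'>"
              + "<input type='hidden' name='nonce' value='%s'/>"
              + "<button type='submit'>Submit</button>"
              + "</form>", nonce);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String nonce = req.getParameter("nonce");
        // remove() succeeds only for the first POST carrying this nonce, so a
        // double submit (or a replayed request) is rejected.
        if (nonce == null || !issuedNonces.remove(nonce)) {
            resp.sendError(HttpServletResponse.SC_CONFLICT,
                    "Duplicate or replayed submission");
            return;
        }
        // ... process the form exactly once ...
        resp.getWriter().println("Processed.");
    }
}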
You should always return a redirect as the HTTP response to a POST. This will prevent the POST from occurring again when the user navigates back and forth with the Forward/Back buttons in the browser.
If you are worried about users double-clicking your submit buttons, just have a small script disable them immediately on submit.
You might want to look at the Post-Redirect-Get (PRG) pattern.
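For illustration, a bare-bones Post-Redirect-Get sketch (again in framework-neutral Java servlet terms; in ASP.NET MVC the equivalent is returning a RedirectToAction result after the POST):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OrderServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // ... save the order ...

        // Respond with a redirect instead of rendering a page directly, so that
        // Refresh/Back re-issues a harmless GET rather than re-posting the form.
        resp.sendRedirect(req.getContextPath() + "/order-confirmation");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The confirmation page (it could equally live in a separate servlet or
        // JSP); safe to reload as many times as the user likes.
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Thanks, your order was received.</p>");
    }
}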
This really isn't MVC-specific, but the pattern we follow on our web pages is that actions are performed with AJAX calls rather than full-page POSTs. So navigating to a URL never performs an action, it just displays the form, and the AJAX call won't be in the history.
Along with disabling the buttons, you can add a transparent div over the entire web page so that clicking does nothing. We do this at my work and add a friendly little label saying "Processing request...".
The most elegant solution I found was to use ActionFilters:
Blog post is here
I have been working through Microsoft's ASP.NET MVC tutorials, ending up at this page
http://www.asp.net/learn/mvc/tutorial-32-cs.aspx
The following statement is made towards the bottom of this page:
In general, you don’t want to perform an HTTP GET operation when invoking an action that modifies the state of your web application. When performing a delete, you want to perform an HTTP POST, or better yet, an HTTP DELETE operation.
Is this true? Can anyone offer a more detailed explanation for the rationale behind this statement?
Edit
Wikipedia states the following:
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server, or external side effects such as financial transactions or transmission of email.
Jon Skeet's answer is the canonical answer. But suppose you have a link:
href="/myApp/DeleteImportantData.aspx?UserID=27"
and the Googlebot comes along and indexes your page? What happens then?
GET is conventionally free of side-effects - in other words, it doesn't change the state. That means the results can be cached, bookmarks can be made safely etc.
From the HTTP 1.1 RFC 2616:
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
Apart from purist issues around being idempotent, there is a practical side: spiders/bots/crawlers etc will follow hyperlinks. If you have your "delete" action as a hyperlink that does a GET, then google can merrily delete all your data. See "The Spider of Doom".
With POSTs, this isn't a risk.
Another example:
http://example.com/admin/articles/delete/2
This will delete the article if you are logged in and have the right privileges. If your site accepts comments, for example, and a user submits that link as an image, like so:
<img src="http://example.com/admin/articles/delete/2" alt="This will delete your article."/>
Then when you, as the admin user, come to browse through the comments on your site, the browser will attempt to fetch that image by sending a request to that URL. Because you are logged in while the browser is doing this, the article will get deleted.
You may not even notice without looking at the source code, as most browsers won't show anything if they can't find an image.
Hope that makes sense.
Please see my answer here. It applies equally to this question.
Prefetch: A lot of web browsers use prefetching, which means they will load a page before you click on the link, anticipating that you will click it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests are not supposed to change state, and they should be idempotent. Idempotent means that issuing a request once or issuing it multiple times gives the same result, i.e. there are no side effects. For this reason, GET HTTP requests are tightly tied to caching.
HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. So you will have undefined behavior from a slew of random programs if you don't follow it.
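To put the "keep GET safe" point in code terms, here is a minimal illustration in Java servlet style (the paths and method names are invented): reads stay on GET, destructive work goes behind POST.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ArticleServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Safe and idempotent: just render the article. Caches, prefetchers
        // and crawlers can hit this as often as they like.
        resp.setContentType("text/html");
        resp.getWriter().println(loadArticle(req.getParameter("id")));
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The state-changing operation lives behind POST, which crawlers and
        // prefetchers do not issue on their own.
        deleteArticle(req.getParameter("id"));
        resp.sendRedirect(req.getContextPath() + "/articles");
    }

    private String loadArticle(String id) {
        return "<p>Article " + id + "</p>";
    }

    private void deleteArticle(String id) {
        // delete from the data store
    }
}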
In addition to spiders and requests having to be idempotent, there's also a security issue with GET requests. Someone can easily send your users an e-mail with
<img src="http://yoursite/Delete/Me" />
in the text, and the browser will happily go along and try to access the resource. Using POST isn't a cure for such things (as you can put together a form post in JavaScript pretty easily), but it's a good start.
About this topic (HTTP methods usage), I recommend reading this blog post: http://blog.codevader.com/2008/11/02/why-learning-http-does-matter/
This is actually about the opposite problem: why not use POST when no data is changed.
Apart from all the excellent reasons mentioned on here, GET requests could be logged by the recipient server, such as in the access.log. If you send across sensitive data such as passwords in the request, they'll get logged as plaintext.
Even if they are hashed/salted for secure DB storage, a breach (or someone looking over the IT guy's shoulder) could reveal them. Such data should go in the POST body.
Let's say we have an internet banking application and we visit the transfer page. The logged in user chooses to transfer $10 to another account.
Clicking the submit button sends a GET request to https://my.bank.com/users/transfer?amount=10&destination=23lk3j2kj31lk2j3k2j
But the internet connection is slow and/or the server(s) are busy, so after hitting the submit button the new page loads slowly.
The user gets frustrated and starts hitting F5 (refresh page) furiously. Guess what will happen? More than one transfer will occur, possibly emptying the user's account.
Now if the request is made as a POST (or anything other than GET), on the first F5 (refresh page) the browser will gently ask "Are you sure you want to do that? It can have side effects [bla bla bla] ...".
Another issue with GET is that the command goes to the browser's address bar. So if you refresh the page, you issue the command again, be it "delete last stuff", "submit the order" or similar.
In ASP.NET MVC it seems to be common practice not to use GET requests for calls to a controller that modify the model. For example, deleting a customer should not be possible by clicking a simple HTML link.
The only reason for this rule I am aware of is to safeguard against web crawlers, which might inadvertently alter the database. GET requests are commonly regarded as safe, whereas POST requests are not.
Does this mean that this rule does not apply to non-public portions of a website (Example: Your password-protected user administration area)? Or is there any other reason not to use destructive GET requests?
This is generally part of HTTP. From the HTTP 1.1 RFC 2616:
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
In other words, it's not enforced, but it's really bad form for a GET request to have side-effects. Imagine if a user bookmarks a URL which updates something, for example - they probably wouldn't expect that to happen.
Another good reason is accelerator plug-ins for browsers. These attempt to speed up page loads by pre-fetching links on the current page. Imagine if you had a bunch of GET links to delete all the objects in a list - the plug-in would delete them!
The short of it is that you can't predict what a browser will do with GET requests; if it looks like a plain old hyperlink, then it's fair game for a browser to go and fetch it.
Yes.
It's not just about web crawlers, it's about CSRF - Cross-Site Request Forgery.
So imagine that someone is logged into your web site, and browses to www.hax0rs.com
In the source for hax0rs.com is the following tag
<img src="http://mysite.com/members/statusChange?status=I%20am%20looking%20for%20a%20gimp%20mask" height="0" width="0">
Because your user is logged in, and because the request is going to your site, the authentication cookie goes with it. And bang, suddenly your user's status has changed.
What fun :)
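The usual mitigation is a per-session anti-forgery token that the attacker's page cannot read; ASP.NET MVC ships this as Html.AntiForgeryToken() and [ValidateAntiForgeryToken]. A rough, framework-neutral sketch of the idea in Java servlet terms, with all names invented:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class StatusChangeServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpSession session = req.getSession();
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            token = UUID.randomUUID().toString();
            session.setAttribute("csrfToken", token);
        }
        resp.setContentType("text/html");
        // The state change itself sits behind POST, so the hidden <img> trick
        // (a GET) does nothing; the token below guards the POST against forged
        // forms, since hax0rs.com cannot read it across origins.
        resp.getWriter().printf(
                "<form method='post'>"
              + "<input type='hidden' name='csrfToken' value='%s'/>"
              + "<input type='text' name='status'/>"
              + "<button type='submit'>Update status</button>"
              + "</form>", token);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String sessionToken = (String) req.getSession().getAttribute("csrfToken");
        String submitted = req.getParameter("csrfToken");
        if (sessionToken == null || !sessionToken.equals(submitted)) {
            // A forged cross-site request carries the session cookie but not
            // the token, so it is rejected.
            resp.sendError(HttpServletResponse.SC_FORBIDDEN,
                    "Invalid anti-forgery token");
            return;
        }
        // ... apply the status change ...
    }
}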
But I suppose you can still do some sort of "non-retrieval" actions on GET requests. For example, updating a "LastVisit" record, which can be considered non-destructive and relatively safe.