I get the exception
A potentially dangerous Request.Form value was detected from the client
when I include HTML in the input. I don't want to accept HTML, but I also don't want to get an exception. Is it possible for the form to just show a validation error (like "HTML is not allowed") instead of throwing one?
In ASP.NET 4.0, request validation is enabled for all requests by default.
http://www.asp.net/whitepapers/aspnet4/breaking-changes
You can still force your app to ignore this check.
See this link for details:
ValidateRequest="false" doesn't work in Asp.Net 4
However, I wouldn't advocate this strategy. It's much better to validate the text with JavaScript before sending it to the server, to ensure it doesn't contain any characters that trip the ValidateRequest behaviour.
This question covers all of those characters:
What characters or character combinations are invalid when ValidateRequest is set to true?
If your user does not have JavaScript enabled, you'll still hit the error. In that (rare) case, you can fall back on customErrors so that you at least show something prettier than the yellow error page.
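If you do relax the built-in check as described in the link above, you can replicate the same dangerous-character test yourself on the server and turn it into an ordinary validation message instead of an exception. A rough sketch (the InputChecker name is made up; the regex mirrors the "<" followed by a letter/!//? and "&#" patterns that request validation rejects, per the question linked above):

using System.Text.RegularExpressions;

public static class InputChecker
{
    // Roughly the same patterns ASP.NET request validation looks for.
    private static readonly Regex Dangerous =
        new Regex(@"<[a-zA-Z!/?]|&#", RegexOptions.Compiled);

    public static bool ContainsHtml(string value)
    {
        return !string.IsNullOrEmpty(value) && Dangerous.IsMatch(value);
    }
}

You can then call InputChecker.ContainsHtml(...) on the posted value in your handler and show a friendly "HTML is not allowed" message instead of letting the exception surface.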
You can handle application behavior on errors in your web.config file:
<system.web>
  …
  <customErrors defaultRedirect="/ErrorPage" mode="RemoteOnly"></customErrors>
  …
</system.web>
Moreover, besides specifying whether or not to show detailed errors, you can also redirect the user to a specific page so they see custom error pages that you have designed.
For more information, check the MSDN explanation of the customErrors element.
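If the site is MVC, the page behind /ErrorPage can be an ordinary controller and view. A minimal sketch (the names simply mirror the defaultRedirect above and are otherwise arbitrary):

using System.Web.Mvc;

public class ErrorPageController : Controller
{
    // GET /ErrorPage
    public ActionResult Index()
    {
        Response.StatusCode = 500;  // keep an error status code so monitoring still notices
        return View();              // Views/ErrorPage/Index.cshtml with a friendly message
    }
}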
Related question:
How do I handle the request when a user directly enters HTML content in the URL?
I want to redirect to an error page when the user enters an HTML tag in the URL; is that possible in MVC?
I have tried handling this from the BeginExecute event by creating an override method.
Please give some suggestions.
Thanks.
Maybe you can use a RouteHandler, for cases when a user needs to be redirected to an external page, to shorten long URLs, or to make URLs more user-friendly.
Please check my answer:
Error handling ASP.NET MVC
You can always set customErrors mode="On" in web.config and configure it with your error controller.
Custom errors will redirect any invalid or malicious link or content to your error controller, so you can handle it the way you want.
You can use request validation to do this. It prevents un-encoded HTML/XML etc. from being accepted from the client and validates all the data passed from the client to the server. To use this feature, set requestValidationMode to 4.5 in web.config, like this:
<httpRuntime requestValidationMode="4.5" />
For more information please see this article.
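Another option that answers the question above directly: request validation throws HttpRequestValidationException, so you can catch it in Global.asax and redirect to your error page yourself. A rough sketch (the "~/ErrorPage" path is just an example; use whatever route your error controller answers on):

// In Global.asax.cs, inside the HttpApplication-derived application class.
protected void Application_Error(object sender, EventArgs e)
{
    var ex = Server.GetLastError();
    if (ex is HttpRequestValidationException)
    {
        Server.ClearError();              // suppress the yellow error page
        Response.Redirect("~/ErrorPage"); // show the friendly page instead
    }
}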
I've got a public MVC 5 web-site, using the anti-forgery token. Every day a large number of errors are logged in the form of "The anti-forgery cookie token and form field token do not match.", and a lesser number in the form of "The required anti-forgery cookie "__RequestVerificationToken" is not present.".
The problem is not reproducible, it occurs for different people on different pages at different times. Closing the browser resolves the problem - sometimes just using the Back button and re-trying resolves the problem.
As the website works for the vast majority of users, I can rule out missing ValidateAntiForgeryToken attributes in controllers; likewise, I can rule out missing or duplicate @Html.AntiForgeryToken() code in the views.
The website runs on a single server, so I can rule out different machinekeys in the web.config (I've tried running the website with and without this setting anyway).
The application pool is set to restart each night, and there's plenty of spare resource on the server, so I can rule out the application pool restarting and invalidating sessions (especially as this isn't logged in the event log or anywhere else).
I've hit the problem very rarely - I definitely have cookies enabled, so I can rule out cookies being disabled. I can also rule out JavaScript being disabled, as users can only progress so far into the site without JS - and errors occur on pages beyond this point.
I've disabled all caching, setting no-cache, no-store, etc. This seemed to reduce the occurrence of the issue, but it still persisted (I had to re-enable caching for a variety of other reasons).
What other options are there to consider?
I am so frustrated by this I am considering turning off anti-forgery protection and contributing to the global weakening of security.
Make sure you have the anti-forgery attributes in the controller and the matching token in the forms.
If you are doing an AJAX post, you can send the __RequestVerificationToken as a parameter:
$('input[name=__RequestVerificationToken]').val()
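For reference, the attribute/form pair mentioned in the first point looks roughly like this (a sketch; the action and view-model names are made up):

// Controller: reject posts where the hidden form token and the cookie token don't match.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginViewModel model)
{
    // ... sign the user in ...
    return RedirectToAction("Index", "Home");
}

// View (Razor): the token must be rendered inside the same form that posts back.
// @using (Html.BeginForm("Login", "Account")) {
//     @Html.AntiForgeryToken()
//     ...
// }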
Also, maybe somebody is attacking your site or using bots to scrape content or post forms.
I have a security constraint declared in web.xml:
<security-constraint>
  <web-resource-collection>
    <web-resource-name>LoggedIn</web-resource-name>
    <url-pattern>/screens/*</url-pattern>
  </web-resource-collection>
  <auth-constraint/>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
After logging in, when I make a GET request against the application I get the expected behavior e.g.
https://localhost:8443/Patrac/screens/user.xhtml --> results in access denied.
However, when I do a postback e.g.
<rich:menuItem submitMode="ajax" label="User" action="/screens/user"/>
I can view the screen. If I do a second identical postback, I get the access denied message. Each time I submit a postback the result alternates between displaying the screen and issuing a 403. The URL displayed in the browser alternates between the following:
https://localhost:8443/Patrac/screens/user.xhtml --> browser URL when access denied
https://localhost:8443/Patrac/public/403.xhtml --> browser URL when user screen is displayed
I understand the way the displayed browser URL in JSF lags behind the screen that is currently displayed, so that's no mystery. But I don't understand how I'm able to view the screen every other time the same postback is submitted. Again, GET requests are always denied.
EDIT :
I did try post-redirect-get and that made the strange behavior go away, as expected.
<rich:menuItem submitMode="ajax" label="User" action="/screens/user?faces-redirect=true"/>
However, I don't want to do PRG every time, and besides, PRG doesn't eliminate the security problem.
What am I missing here? Thanks for any insights!
The security constraint isn't checked on forwards, but on requests. This is by design.
So you definitely need the PRG pattern or, better, normal GET links. It'll also instantly make your webapp more SEO-friendly and better bookmarkable. Using POST for page-to-page navigation is bad design anyway.
The "alternating behaviour" you're seeing is because the forward isn't checked, but any subsequent (postback) request on the same page is a full-fledged request and is thus checked.
I'm trying to solve a "A potentially dangerous Request.Form value was detected from the client" problem, and SO answers and Scott Hanselman recommend setting
<httpRuntime requestValidationMode="2.0" />
in Web.config (along with adding an attribute to problematic methods).
I realize this changes the validation mode to ASP.NET 2.0's, but what does that mean?
Also, does this change have any side effects I should be aware of?
Thanks.
Check out the description at MSDN's HttpRuntimeSection.RequestValidationMode Property.
2.0. Request validation is enabled only for pages, not for all HTTP requests. In addition, the request validation settings of the pages
element (if any) in the configuration file or of the @ Page directive
in an individual page are used to determine which page requests to
validate.
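In other words, requestValidationMode="2.0" makes the page-level ValidateRequest setting, and (in MVC) the per-action [ValidateInput(false)] attribute the question mentions, take effect again, so you can turn validation off only where you genuinely need to accept markup. A minimal sketch of the MVC side (the controller, action, and parameter names are illustrative):

using System.Web.Mvc;

public class CommentsController : Controller
{
    [HttpPost]
    [ValidateInput(false)] // skip request validation for this action only
    public ActionResult Save(string body)
    {
        // Treat the raw markup as untrusted: encode it before storing or rendering.
        var encoded = Server.HtmlEncode(body);
        TempData["Preview"] = encoded;
        return RedirectToAction("Index");
    }

    public ActionResult Index()
    {
        return View();
    }
}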
Take a look at ASP.NET Request Validation:
The request validation feature in ASP.NET provides a certain level of
default protection against cross-site scripting (XSS) attacks. In
previous versions of ASP.NET, request validation was enabled by
default. However, it applied only to ASP.NET pages (.aspx files and
their class files) and only when those pages were executing.
In ASP.NET 4, by default, request validation is enabled for all
requests, because it is enabled before the BeginRequest phase of an
HTTP request. As a result, request validation applies to requests for
all ASP.NET resources, not just .aspx page requests. This includes
requests such as Web service calls and custom HTTP handlers. Request
validation is also active when custom HTTP modules are reading the
contents of an HTTP request.
As a result, request validation errors might now occur for requests
that previously did not trigger errors. To revert to the behavior of
the ASP.NET 2.0 request validation feature, add the following setting
in the Web.config file:
<httpRuntime requestValidationMode="2.0" />
However, we recommend that you analyze any request validation errors
to determine whether existing handlers, modules, or other custom code
accesses potentially unsafe HTTP inputs that could be XSS attack
vectors.
I'm using ELMAH to handle errors in my MVC sites and I've noticed over the past couple of weeks that I'm getting some CryptographicExceptions thrown. The message is:
System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed.
System.Web.Mvc.HttpAntiForgeryException: A required anti-forgery token was not supplied or was invalid. ---> System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. --->
The application is not running in a cluster and I can't seem to reproduce these errors. They look like valid requests -- not a hand-crafted post -- and do contain the __RequestVerificationToken cookie. I do have the required HTML helper on the page, inside the form (my login form).
I haven't had any user complaints, yet, so I'm assuming that eventually it works for whoever is trying to login, but I'm left wondering why this could be happening.
Is anyone else seeing this behavior, or does anyone have ideas on how to diagnose the exception? Like I said, I can't get it to fail. Deleting the cookie in FF comes up with a different error. Modifying the cookie (changing or removing the contents) also results in a different error, as does modifying the contents of the hidden token input on the page.
I'm not sure if there is a correlation, but after adding a robots.txt file that excludes my login actions, I am no longer seeing these errors. I suspect that it has to do with a crawler hitting the page and trying to invoke the login action.
EDIT: I've also seen this issue when receiving old cookies after the application pool has recycled. I've resorted to setting the machineKey explicitly so that changes to the validation/decryption keys on application restarts don't affect old cookies that may be resent.
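For reference, the shape of that setting is roughly the following; the key values here are placeholders, so generate real ones (for example with the IIS "Machine Key" feature) and keep them secret:

<system.web>
  <machineKey validationKey="GENERATED-VALIDATION-KEY"
              decryptionKey="GENERATED-DECRYPTION-KEY"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>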
After updating the site and going to a fixed machineKey I found that I was still getting these errors from people who had cookies from the previous version. As a temporary work around I've added the following Application_Error handler:
public void Application_Error()
{
    // Unwrap the error; the CryptographicException is typically the base exception.
    var exception = Server.GetLastError().GetBaseException();
    if (exception is System.Security.Cryptography.CryptographicException)
    {
        // Swallow the error so the user doesn't see the yellow error page.
        Server.ClearError();
        if (Request.IsAuthenticated)
        {
            // FormsAuthenticationWrapper is the application's own wrapper around
            // FormsAuthentication; sign the user out and drop the session.
            var form = new FormsAuthenticationWrapper();
            form.SignOut();
            Session.Clear();
        }
        // Clear the outgoing cookie collection and send the user back to the home page.
        Response.Cookies.Clear();
        Response.Redirect("~");
    }
}
I'm not sure this has anything specifically to do with the anti-forgery system. The inner exception states 'Validation of viewstate MAC failed.', and from what I can tell, the default infrastructure for the anti-forgery system has a dependency on the viewstate (if you take a look here you'll see the dependency, and the horror, in the CreateFormatterGenerator method at the bottom).
As for why the viewstate MAC is failing on the fake request, I'm not sure, but given the horror that exists in deserializing the anti-forgery token (processing an entire fake request), it doesn't surprise me at all.