Manual/non-JS reCAPTCHA not translating - localization

When JavaScript is disabled, reCAPTCHA falls back to a JS-independent captcha (obviously). But the problem is that its text/links are not translated.
In fact, for the manual challenge the following text is additionally displayed:
"We need to make sure you are a human. Please solve the challenge below, and click the I'm a Human button to get a confirmation code. To make this process easier in the future, we recommend you enable Javascript."
The text above is in English even though I've set "lang: 'es'" (but obviously this doesn't work because it's set in JavaScript).
Is there a way to specify the language for the manual challenge (as distinct from the JS lang: 'es' setting)?
Or can custom translations be written for the manual challenge?

You can only add the hl=es parameter to the link used for the noscript fallback; see the example below.
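
A sketch of what that looks like, assuming the classic noscript fallback markup from the old reCAPTCHA docs (your_public_key is a placeholder); the hl parameter is appended to the iframe URL:

<noscript>
  <!-- hl=es asks Google to serve the challenge UI in Spanish -->
  <iframe src="https://www.google.com/recaptcha/api/noscript?k=your_public_key&hl=es"
      height="300" width="500" frameborder="0"></iframe><br>
  <textarea name="recaptcha_challenge_field" rows="3" cols="40"></textarea>
  <input type="hidden" name="recaptcha_response_field" value="manual_challenge">
</noscript>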

Related

Show disclaimer site when opening external link

I'm running TYPO3 v7.6.4.
I already looked into existing plugins and even into writing my own... but I can't find a solution.
My goal is pretty simple:
Show a simple disclaimer page whenever the user clicks a link to any external page.
Is there an easy way to accomplish this?
The easiest way would in fact be to add an on('click') event handler to all links; a sketch follows below. This would be additional JavaScript and would work with all existing content. Figuring out whether a link refers to an external site should be easy (exclude relative URLs and match absolute URLs against your base URL).
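
A minimal sketch of that approach in plain JavaScript (the confirm() dialog stands in for whatever disclaimer page or modal you actually use, and the hostname comparison is the simplest form of the base-URL check):

document.addEventListener('click', function (event) {
  var link = event.target.closest('a[href]');
  if (!link) return;

  // Relative URLs resolve to our own hostname, so anything else is external.
  if (link.hostname && link.hostname !== window.location.hostname) {
    if (!confirm('You are leaving this site. Continue to ' + link.hostname + '?')) {
      event.preventDefault();
    }
  }
});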
However, if this is a legal requirement, you should decide if JavaScript works for you, because with disabled JS the disclaimer would not be triggered.

Prevent XSS attacks and still use Html.Raw

I have a CMS system where I am using CKEditor to enter data. Now if a user types in <script>alert('This is a bad script, data');</script> then CKEditor does a fair job and encodes it correctly, passing &lt;script&gt;alert('This is a bad script, data');&lt;/script&gt; to the server.
But if the user goes into the browser developer tools (using Inspect Element) and adds the raw, unencoded script there, that is when all the trouble starts. Now, after the content is retrieved from the DB and displayed in the browser, it presents the alert box.
So far I have tried many different things; one of them is:
Encode the contents using AntiXssEncoder [HttpUtility.HtmlEncode(Contents)], store them in the database, and when displaying back in the browser decode and display them using MvcHtmlString.Create [MvcHtmlString.Create(HttpUtility.HtmlDecode(Contents))] or Html.Raw [Html.Raw(Contents)]. As you may expect, both of them display the JavaScript alert.
I don't want to replace the <script> tags manually in code, as that is not a comprehensive solution (search for "And the encoded state:").
So far I have referred to many articles (sorry, not listing them all here, just adding a few as proof that I put in sincere effort before writing this question), but none of them has code which shows the answer. Maybe there is some easy answer and I am not looking in the right direction, or maybe it is not that simple at all and I need to use something like Content Security Policy.
ASP.Net MVC Html.Raw with AntiXSS protection
Is there a risk in using @Html.Raw?
http://blog.simontimms.com/2013/01/21/content-security-policy-for-asp-net-mvc/
http://blog.michaelckennedy.net/2012/10/15/understanding-text-encoding-in-asp-net-mvc/
To reproduce what I am saying, go to *this URL, type <script>alert('This is a bad script, data');</script> in the text box, and click the button.
*This link is from Michael Kennedy's blog
It isn't easy and you probably don't want to do this. May I suggest you use a simpler language than HTML for end-user formatted input? What about Markdown, which (I believe) is used by Stack Overflow? Or one of the existing wiki or other lightweight markup languages?
If you do allow Html, I would suggest the following:
only support a fixed subset of Html
after the user submits content, parse the Html and filter it against a whitelist of allowed tags and attributes.
be ruthless in filtering and eliminating anything that you aren't sure about.
There are existing tools and libraries that do this. I haven't used it, but I did stumble on http://htmlpurifier.org/. I assume there are many others. Rick Strahl has posted one example for .NET, but I'm not sure if it is complete.
About ten years ago I attempted to write my own whitelist filter. It parsed and normalized the entered Html, then removed any elements or attributes that were not on the allowed whitelist. It worked pretty well, but you never know what vulnerabilities you've missed. That project is long dead, but if I had to do it over I would use an existing, simpler markup language rather than Html.
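
To make the whitelist idea concrete, here is a toy version in browser JavaScript (illustrative only: the tag and attribute lists are assumptions, and a vetted library should always be preferred over hand-rolled filtering):

const ALLOWED_TAGS = new Set(['B', 'I', 'EM', 'STRONG', 'P', 'UL', 'OL', 'LI', 'A']);
const ALLOWED_ATTRS = { A: ['href'] };

function sanitizeHtml(html) {
  // Parse into a detached document so nothing executes while we filter.
  const doc = new DOMParser().parseFromString(html, 'text/html');
  for (const el of [...doc.body.querySelectorAll('*')]) {
    if (!ALLOWED_TAGS.has(el.tagName)) {
      el.replaceWith(...el.childNodes); // drop the tag, keep its contents
      continue;
    }
    for (const attr of [...el.attributes]) {
      const allowed = (ALLOWED_ATTRS[el.tagName] || []).includes(attr.name);
      // Remove non-whitelisted attributes and javascript: URLs outright.
      if (!allowed || /^\s*javascript:/i.test(attr.value)) {
        el.removeAttribute(attr.name);
      }
    }
  }
  return doc.body.innerHTML;
}

Even a filter like this misses whole attack classes (CSS payloads, encoded URLs, SVG/MathML quirks), which is exactly the point being made here.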
There are so many ways for users to inject nasty stuff into your pages that you have to be fierce to prevent it. In older browsers, even CSS could be used to inject executable expressions into your page, like:
<STYLE type="text/css">BODY{background:url("javascript:alert('XSS')")}</STYLE>
Here is a page with a list of known attacks that will keep you up at night. If you can't filter and prevent all of these, you aren't ready for untrusted users to post formatted content viewable by the public.
Right around the time I was working on my own filter, MySpace (wow, I'm old) was hit by an XSS worm known as Samy. Samy used style attributes with an embedded background URL that had a JavaScript payload. It is all explained by the author.
Note that your example page says:
This page is meant to accept and display raw HTML by trusted editors.
The key issue here is trust. If all of your users are trusted (say employees of a web site), then the risk here is lower. However, if you are building a forum or social network or dating site or anything that allows untrusted users to enter formatted content that will be viewable by others, you have a difficult job to sanitize Html.
I managed to resolve this issue using the HtmlSanitizer in NuGet:
https://github.com/mganss/HtmlSanitizer
as recommended by the OWASP Foundation (as good a recommendation as I need):
https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet#RULE_.236_-_Sanitize_HTML_Markup_with_a_Library_Designed_for_the_Job
First, add the NuGet Package:
> Install-Package HtmlSanitizer
Then I created an extension method to simplify things:
using Ganss.XSS;
...
// Strips unsafe markup, keeping only HtmlSanitizer's default whitelist
// of tags and attributes.
public static string RemoveHtmlXss(this string htmlIn, string baseUrl = null)
{
    if (htmlIn == null) return null;

    // The optional baseUrl is used to resolve relative URLs in the content.
    var sanitizer = new HtmlSanitizer();
    return sanitizer.Sanitize(htmlIn, baseUrl);
}
I then validate within the controller when the HTML is posted:
var cleanHtml = model.DodgyHtml.RemoveHtmlXss();
AND for completeness, sanitise whenever you present it to the page, especially when using Html.Raw():
<div>@Html.Raw(Model.NotSoSureHtml.RemoveHtmlXss())</div>

Sending data to form, but can't work out encrypted post data - workaround

I'm trying to send some data to a form on a site where I'm a member using cURL, but when I look at the headers being sent, they seem to have been encrypted.
Is there a way I can get around this by making the computer/server visit the site, actually add the data to the inputs on the form, and then hit submit, so that it generates the correct data and posts the form?
You have got a few options:
reverse engineer the JavaScript that does the encryption (or possibly just encoding) process
get a browser engine (e.g. the Gecko engine), and add some scripting to it to fill in the forms and push the submit button - of course you would need JavaScript support within the page itself
parse the HTML using an HTML parser, feed the JavaScript in it to a JavaScript runtime with the correct libraries, fill in the "form" and hit the submit button
It's probably easiest to go for the first option. The JavaScript must be out in the open to be executable in the browser, but it may take some time to reverse-engineer, as it is likely obfuscated. Once isolated, the routine can be reused directly, as sketched below.
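
A hypothetical sketch of that reuse from Node.js, assuming you have extracted the site's encryption routine into a local file (encryptPayload, the field names, and the endpoint are all placeholders, not any real site's API):

// site-crypto.js holds the de-obfuscated routine lifted from the page.
const { encryptPayload } = require('./site-crypto.js');

// Produce the same encrypted body the page's own JavaScript would send.
const body = encryptPayload({ user: 'me', message: 'hello' });

// Node 18+ ships a built-in fetch; any HTTP client works equally well.
fetch('https://example.com/submit', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: body,
}).then(res => console.log(res.status));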
You can use a framework to automate user interaction on the web pages, like Selenium.
This would enable you to not bother reverse engineering anything.
Selenium has bindings in various languages, including Python and Java; a sketch follows below.
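
For instance, a minimal sketch using the selenium-webdriver package for Node (the URL, field names, and success check are placeholders):

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  // Drive a real browser so the site's own JavaScript runs unmodified.
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://example.com/member-form');             // placeholder URL
    await driver.findElement(By.name('message')).sendKeys('hello');  // placeholder field
    await driver.findElement(By.css('form [type="submit"]')).click();
    await driver.wait(until.urlContains('confirmation'), 10000);     // placeholder check
  } finally {
    await driver.quit();
  }
})();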
Provided the JavaScript is visible on the website in question, you should be able to simply copy and paste their encryption routines to prepare the headers exactly as they do.
A hacky fix, if you can isolate the function that encodes the data you type into the form, is to use something like PyV8 to execute the JS inside Python.
Use AutoHotkey and actually have it use the browser normally. It can read from files and do repetitive tasks indefinitely. You can also set a flag to make it act only within that application, which means you can have the browser minimized and still perform the action.
You seem to be having issues with them encrypting the headers and such, so why not use that to your advantage? You're still pushing the same data in, but now you're working around their system, with little to no side effect for you.

How can I get it to work: Embedding Twitter's Web Intent

Normally Web Intent is used as a pop-up. According to Twitter, it also provides embedding functionality.
"Some sites may prefer to embed the unobtrusive Web Intents pop-up Javascript inline or without a dependency to platform.twitter.com. The snippet below will offer the equivalent functionality without the external dependency."
The snippet can be found at https://gist.github.com/894540#file_intents.html
See: http://dev.twitter.com/pages/intents
However, I can't get this snippet to work. I copied the snippet (JavaScript) code into an HTML file and opened it in a browser. Nothing happened! What should I do to make it work?
I had the same problem during NYC Startup Weekend.
The snippet they provide does open up the Twitter popup, as required, but the ability of the Twitter popup window to pass a message back to your web page is a little more complicated. You will need to understand how their widgets.js code works and reproduce what's necessary to set up the RPC framework. My short-term workaround was to include a slightly modified (un-obfuscated) version of widgets.js that would not replace my button with theirs.
I will be tackling this in a week or two, if you can wait.
... or you can just include their widgets.js directly :)
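
For reference, the standard include is a one-line script tag; with it on the page, plain intent links like the one below are upgraded to popups automatically (the tweet text is just an example):

<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<a href="https://twitter.com/intent/tweet?text=Hello%20world">Tweet this</a>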
Welcome to the vague world of the Twitter API documentation. I think you're misunderstanding, due in no small part to the incredibly poor wording on the API page:
Embedding that javascript code allows you to open the twitter web intent pages in a "twitter style" popup without requiring a dependency on platform.twitter.com. It does not allow you to embed a twitter intent.
You can see it at work by adding that JavaScript and then adding an intent link to your page, something like:
<a href="https://twitter.com/intent/tweet">New</a>
Clicking "New" will open it in a popup. Remove the JavaScript, and clicking "New" will navigate the page away to the tweet box.
It's incredibly frustrating to me that they don't, at least to my knowledge, allow proper embedding. I know why they opted for this method, but it's frustrating nonetheless.

prevent outlook stationery from showing up in my email (Outlook 2007)

There are some people in my office who insist on using cute stationery and some of it makes messages difficult to read. I really just want to read email on a white background with no distractions. Is there a way to disable stationery on incoming mail in Outlook? (Without switching to "plain text only")
yeah, I yanked that description from here, but it is very accurate. However, I've had no luck in finding a solution. Most solutions I see solve the problem by pushing something out to a bunch of users,
like: this
I don't really have the authority to do that. Not only that, it only prevents ME from setting stationery.
This has been asked before, to no avail:
I don't have time to deal with this, so hopefully there is something I have overlooked.
Without switching to "plain text only", I want to be able to change a setting on my computer (it can be a reg hack, I don't care) that will prevent Outlook stationery from showing up in my email.
It would also be helpful to know how to do it for Outlook 2003 as well.
No such setting/reghack exists. You would need to override the Item_Open event and change the message format from HTML or RTF (if either is detected) to plain text. This is the only way you could reliably strip out all the formatting junk without losing the text.
That, or write a custom parsing agent (which would seem to be a bit harder).
Either solution involves coding an add-in to handle the open event and change the message format before displaying the message.
I'm not aware of a setting, but could you copy the text and paste it in Notepad?
I use that all the time to remove obnoxious formatting.
