I have a Rails/Ember one-page app. Burp reports that
The value of the 'content_type' JSON parameter is copied into the HTML document as plain text between tags. The payload da80b<script>alert(1)</script>4f31e was submitted in the content_type JSON parameter. This input was echoed unmodified in the application's response.
I can't quite parse this message's use of "is copied into" and "was submitted in", but basically what is happening is:
A PUT or POST from the client contains ...<script>...</script>... in some field.
The server handles this request, and sends back the created object in JSON format, which includes the string in question.
The client then displays that string, using the standard Ember/Handlebars {{content_type}}, which HTML-escapes the string and inserts it into the DOM, so the browser displays it on the screen as originally entered (and of course does NOT execute it).
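To make the distinction concrete, here is a minimal Handlebars sketch (illustrative only, not taken from the actual app): the double-stash form escapes its value, while the triple-stash form would insert it raw and create a genuine XSS sink.

<!-- escaped: a <script> payload is emitted as &lt;script&gt; and merely displayed -->
{{content_type}}
<!-- raw: the triple-stash form would inject the markup into the DOM unescaped -->
{{{content_type}}}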
So yes, the input was indeed echoed unmodified in the application's response. However, the application's response was not HTML (in which case there would indeed be a problem) but JSON, containing strings which, when rendered by Handlebars, will always be escaped properly for display in the browser.
So my question is: is this in fact a vulnerability? I have taken great care with my Ember app and can prove that no data from JSON objects is ever inserted "raw" into the DOM. Or is this a false positive arising from the mere fact that the unescaped string can be found in the response by an unintelligent string comparison, one that does not take into account that the JSON will be processed/escaped by the client-side framework?
To put it a different way, in a classic webapp spitting out HTML from the server, we know that user input such as the above must be escaped/sanitized properly. Unsanitized data "on the wire" in and of itself represents a vulnerability. However, in a one-page app based on JSON coming back from the server, the escaping/sanitization occurs in the client; the JSON on the "wire" may contain unsanitized data, and this is as expected. Am I missing something here?
There are subtle ways in which you can trick IE9 and older into treating JSON as HTML. So even if the server's response has a Content-Type header of application/json, IE will second-guess it. This is called content type sniffing, and can be disabled by adding the X-Content-Type-Options: nosniff header.
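For illustration, a response hardened this way would carry headers along these lines (a sketch, not captured from the app in question):

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Content-Type-Options: nosniff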
JSON is not an executable format so your understanding is correct.
I did a demo of this exact problem in my talk on securing single-page web apps at OWASP AppSec EU 2013, which someone put up on YouTube here: http://m.youtube.com/watch?v=Femsrx0m9bU
I just pulled out a piece of code which I wrote a few months ago. The code fetches an XML document from a web server and parses it using JAXB. The last time I tried it, it worked flawlessly; now I am getting an exception:
org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 50; White spaces are required between publicId and systemId.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121)
Looking around, this suggests some issue with the XML header data, namely the <!DOCTYPE ...> declaration. The answer there notes that the message is misleading: in the case described, systemId was missing altogether, despite the error merely complaining about a missing whitespace in front of it.
However, if I get the XML document with a web browser, it doesn’t even contain the <!DOCTYPE ...> header.
Parsing an XML document I retrieved a few months back works without issues.
If I diff the document I retrieved today and the one from a few months back, both are exactly the same up to the start of the root element.
Capturing the HTTP traffic finally provided the answer (unencrypted connections come in handy at times): Apparently the service switched from HTTP to HTTPS in the last few months, with URLs remaining unchanged otherwise.
Requests to the old URL are answered with 301 Moved Permanently and the new URL.
When reading from a URL with java.net.URL.openStream(), the underlying HttpURLConnection follows redirects by default, but it will not follow a redirect that switches protocols, such as HTTP to HTTPS. Thus, the data it returns is the body of the 301 response rather than valid XML, leading to the error message.
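A sketch of the exchange on the wire (host and path are hypothetical):

GET /data.xml HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: https://example.com/data.xml

openStream() then hands the parser the body of that 301 response, typically a short HTML notice, which is not the XML the parser expects.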
Lesson learned for today: "White spaces are required between publicId and systemId" is really just a cryptic way of saying: Something’s wrong with the XML data you supplied, but we didn’t bother to dig any deeper.
First of all, I'd like to thank the team for this amazing project; it is indeed exciting to be able to start writing server-side software in Swift.
I'm successfully running a POC using PerfectServer on an Ubuntu VM and working on the API to interact with the mobile client.
There is one aspect I haven't quite understood yet, and that is accessing the request body data from my PerfectServer Handler.
Here is the workflow I have in mind:
The client submits a POST request to PerfectServer including some JSON-encoded body data.
Once that hits the "valuesForResponse:" of my server-side Handler, I retrieve the WebRequest representation of my request successfully.
The request object does expose many properties of the HTTP request, including headers and the URL-like formatted query parameters.
Unfortunately, I cannot see a way to retrieve the underlying request body data. I would expect that to be some kind of public property exposing the raw data, which my handler could retrieve and decode in order to process the request.
The only example provided in the Examples workspace that comes with the project and sends a POST request that includes a body is in the Authenticator project. Here the HTTP body takes the form of a UTF-8 encoded string where the values are formatted like query params:
name=Matteo&password=mypassword
This gets somehow exposed on the server handler by the WebRequest "param" property, which in the inner implementation of HTTPServer seems to expect an "&"-separated string of key-value pairs.
What I would expect is to have a way to provide body data in whatever form / encoding needed, in my case a JSON form:
{"name":"Matteo", "password":"psw"}
and be able to access that data from the WebRequest in my handler, decode it and use it to serve the request.
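For clarity, a sketch of what such a request would look like on the wire (host and path are hypothetical):

POST /api/login HTTP/1.1
Host: example.com
Content-Type: application/json
Content-Length: 35

{"name":"Matteo", "password":"psw"}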
To summarise, I assume you could say that a WebRequest.bodyData public property is what I am after here :).
Is there something I am missing here?
Thanks in advance for any clarification!
I'm trying to figure out what is wrong with my POST data, or my Ajax call.
I'm using Symfony to create a form, and Ajax to collect and pass the data. Each time I do a POST request, I use Firebug's Net panel to look at my POST data. My POST is breaking somewhere, but I can't tell where. The only thing I can see is that in Firebug the POST looks different for each example (the parameters are present in one and not in the other), but they should be identical, right? Is this a clue? I don't know how to interpret this; I don't know enough about Firebug, and it's obviously not intuitive enough for this particular issue.
Is this telling me my data isn't encoded correctly?
Here is a non-working example. Notice, "Parameters" is missing. All I see is the "Source" serialized/encoded data:
Now, in the example below, this is what I expect to see. Notice, this one not only contains the "Source" portion, with source data that looks identical (though a second pair of eyes on this would help), but there is also another section called "Parameters". Why is this missing in the first example, and what does the missing "Parameters" section mean?
I'm attaching the headers here, too. Maybe this will explain the problem. Posting these here now, I do see the different Content-Type, but I think most of my testing was done before I was sending that header.
broken form headers
working form headers
Either something is wrong with the POST data, or might I be missing the Ajax dataType: 'json' option, or something?
If the wrong content type is set in the headers, then when the data is sent and inspected in Firebug, it can't pull the values apart as parameters, because it doesn't know it is form-encoded data. If the header declares a different type, then even if the data is indeed form-url-encoded, the browser doesn't parse it as such and therefore can't break it apart into its parameter elements.
So when you make your call, be sure the content type is being sent as 'application/x-www-form-urlencoded' in your Ajax call.
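A minimal jQuery sketch of such a call (the URL and form id are hypothetical):

$.ajax({
    url: '/form/submit',
    type: 'POST',
    // jQuery's default content type, stated explicitly here; this is what
    // lets Firebug break the body apart into its "Parameters" section
    contentType: 'application/x-www-form-urlencoded; charset=UTF-8',
    // .serialize() produces name=value&name2=value2, matching that content type
    data: $('#myForm').serialize(),
    dataType: 'json',
    success: function (response) {
        console.log(response);
    }
});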
I've made an iPad app that posts data to an ASP script. Data is then stored as UTF-8 in the MySQL db. Today one of the users posted data which caused an error:
Data posted:
Jeanette Sjösvärd, Uttke Renata, Håkan Giljam
Data saved in log and db:
Jeanette Sjösvärd, Uttke Renata, Håkan Giljam
When reading data from the database, the text is full of "Ã¥ ä ö" (should be "å ä ö")
The log also saves the original post data the way it arrives at the server, in percent-encoded form:
Jeanette%20Sj%C3%B6sv%C3%A4rd%2C%20Uttke%20Renata
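As a quick sanity check, a JavaScript sketch (runnable in any browser console) shows that these percent-encoded bytes are valid UTF-8, so the corruption must happen when the server decodes them:

// %C3%B6 is the UTF-8 byte sequence for "ö", %C3%A4 for "ä"
decodeURIComponent("Jeanette%20Sj%C3%B6sv%C3%A4rd"); // "Jeanette Sjösvärd"
// interpreting those same bytes as Windows-1252/Latin-1 instead yields "Jeanette Sjösvärd"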
When posting all the data again (copy and paste of that percent-encoded block) from an ASP page, the data gets saved without any encoding issues.
Facts
Full posted data is about 18kB
All ASP pages contain <%@LANGUAGE="VBSCRIPT" CODEPAGE="65001"%> at the top
Questions/ideas
Why does the ASP script read the data in different ways, depending on the sender?
Why did it just happen 1 out of 100 times?
Is there any encoding information that is/should be sent along with a POST request?
Could it be depending on some single special character in the data?
Does the iPad use any other encoding than UTF-8 as a standard? (The iPad is set to Swedish)
Edit
Found this thread: What determines the encoding of the data you receive from an HTTP POST?
Will look into the encoding header of the post request. But how could I detect if posted data isn't UTF-8 and in that case convert it?
Follow-up in this post: https://stackoverflow.com/questions/15656710/how-to-detect-post-encoding-and-convert-to-utf-8-in-asp
In my browser.xul code, what I am trying to do is fetch data from an HTML file and insert it into my div element.
I am trying to use div.innerHTML but I am getting an exception:
Component returned failure code: 0x804e03f7
[nsIDOMNSHTMLElement.innerHTML]
I tried to parse the HTML using Components.interfaces.nsIScriptableUnescapeHTML and to append the parsed HTML to my div, but my problem is that style (both the attribute and the tag) and script aren't parsed.
First a warning: if your HTML data comes from the web then you are trying to build a security hole into your extension. HTML code from the web should never be trusted (even when coming from your own web server and via HTTPS) and you should really use nsIScriptableUnescapeHTML. Styles should be part of your extension, using styles from the web isn't safe. For more information: https://developer.mozilla.org/En/Displaying_web_content_in_an_extension_without_security_issues
As to your problem, this error code is NS_ERROR_HTMLPARSER_STOPPARSING, which seems to mean a parsing error. I guess that you are trying to feed it regular HTML code rather than XHTML (which would be XML-compliant). Either way, a better way to parse XHTML code is DOMParser; this gives you a document that you can then insert into the right place.
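A minimal DOMParser sketch (xhtmlString and div are placeholders for your data and target element):

var parser = new DOMParser();
// parseFromString requires well-formed XML, so this only works for XHTML
var doc = parser.parseFromString(xhtmlString, "application/xml");
// import the parsed root into your own document before inserting it
div.appendChild(document.importNode(doc.documentElement, true));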
If the point is really to parse HTML code (not XHTML) then you have two options. One is using an <iframe> element and displaying your data there. You can generate a data: URL from your HTML data:
frame.src = "data:text/html;charset=utf-8," + encodeURIComponent(htmlData);
If you don't want to display the data in a frame you will still need a frame (can be hidden) that has an HTML document loaded (can be about:blank). You then use Range.createContextualFragment() to parse your HTML string:
// parse htmlData in the context of the frame's HTML document
var range = frame.contentDocument.createRange();
range.selectNode(frame.contentDocument.documentElement);
var fragment = range.createContextualFragment(htmlData);
// import the fragment into your own document before inserting it into the div
div.appendChild(document.importNode(fragment, true));
XML documents don't have innerHTML, and nsIScriptableUnescapeHTML is one way to get the HTML parsed, but it's designed for cases where the HTML might not be safe; as you've found out, it throws away the script nodes (and a few other things).
There are a couple of alternatives, however. You can use the responseXML property, although this may be suboptimal unless you're receiving XHTML content.
You could also use an iframe. It may seem old-fashioned, but an iframe's job is to take a URL (the src property) and render the content it receives, which necessarily means parsing it and building a DOM. In general, when an extension running as chrome does this, it has to take care not to give the remote content the same chrome privileges. Luckily that's easily managed; just put type="content" on the iframe. However, since you're looking to import the DOM into your XUL document wholesale, you must have already ensured that this remote content will always be safe. You're evidently using an HTTPS connection, and you've taken extra care to verify the identity of the server by making sure it sends the right certificate. You've also verified that the server hasn't been hacked and isn't delivering malicious content.
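For reference, a sketch of such a frame in XUL (the id is hypothetical):

<iframe id="contentFrame" type="content" src="about:blank" hidden="true"/>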