I'm seeing this in the Chrome JavaScript console when loading a Fusion Tables map embedded in an iframe:
Invalid 'X-Frame-Options' header encountered when loading 'https://www.google.com/fusiontables/embedviz?viz=MAP&q=select+col4%3E%3E0+f…%3E0+%3D+'Y'&h=false&lat=40.0&lng=-100.0&z=4&t=1&l=col4%3E%3E0&y=2&tmplt=3': '' is not a recognized directive. The header will be ignored.
The page is here: http://new.oto-usa.org/wordpress/locations/
Anybody know what's up with this?
Just guessing here; I haven't used embedviz. But you have a single quote inside your single-quoted iframe src. Look for +'Y'.
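If that is the culprit, one workaround is to percent-encode the quotes in the query string (or to quote the src attribute with the character that doesn't appear in the URL). A rough sketch with a simplified URL, not your exact embed code:
<!-- %27 stands in for the single quotes around Y, so the attribute isn't terminated early -->
<iframe src="https://www.google.com/fusiontables/embedviz?q=select+col4+where+col5+%3D+%27Y%27&amp;h=false&amp;lat=40.0&amp;lng=-100.0&amp;z=4&amp;t=1" width="500" height="300"></iframe>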
Validating my feed, it has an enclosure with a URL of
https://archive.org/download/NigelFarageAPersonalMessageToNorthernIrelandVoters./Nigel%20Farage,%20a%20personal%20message%20to%20Northern%20Ireland%20voters..mp3
I know it is a bit convoluted... but what is wrong with it? The full stop in the directory name? The double dot in the file name? The comma? All of them?
I have looked at the RFC on URLs but can't make it out(!).
This feed does not validate.
line 441, column 2: url must be a full URL: https://archive.org/download/NigelFarageAPersonalMessageToNorthernIrelandVoters./Nigel%20Farage,%20a%20personal%20message%20to%20Northern%20Ireland%20voters..mp3 (4 occurrences) [help]
<enclosure type="audio/mpeg" url="https://archive.org/download/NigelFarage ...
^
** edit **
A useful (even if incorrect) answer was added (and removed...) showing the result from the W3C Link Checker - https://validator.w3.org/checklink
This Link Checker looks for issues in links, anchors and referenced objects in a Web page, CSS style sheet, or recursively on a whole Web site. For best results, it is recommended to first ensure that the documents checked use Valid (X)HTML Markup and CSS. The Link Checker is part of the W3C's validators and Quality Web tools.
If you find this question, you may find the link checker a useful resource!
The problem seems to be that it's an HTTPS URL instead of an HTTP URL.
The linked error documentation, foo attribute of bar must be a full URL, says:
If this is a link to a web page, you must include the "http://" at the beginning and immediately follow it with a valid domain name.
The RSS 2.0 spec says about <enclosure>:
The url must be an http url.
If you change https://archive.org/download/… to http://archive.org/download/…, it validates.
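As a sketch, the enclosure would then look something like the following; the length value is a placeholder here, since the original attribute isn't shown in the question:
<enclosure type="audio/mpeg" url="http://archive.org/download/NigelFarageAPersonalMessageToNorthernIrelandVoters./Nigel%20Farage,%20a%20personal%20message%20to%20Northern%20Ireland%20voters..mp3" length="12345678"/>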
And if you don't use HTTPS, then on an SSL page the browser reports that your content isn't secure. #feedvalidator, step up. There are a ton of feedback posts and complaints about this on the support forum here: https://groups.google.com/forum/#!forum/feedvalidator-users
More specifically here: https://github.com/rubys/feedvalidator/issues/16
Disclaimer: I am aware of the Content-Disposition header that can be sent back to the client to set the downloaded file name; however, my problem is a little more complicated than just that.
I have an application (Ruby on Rails 3.1.3) that is essentially a document search/view application (search for documents and then render them in the browser). This is accomplished using an iframe.
<iframe src="<%= @frameURL %>" width="100%" height="100%"></iframe>
@frameURL is a call to the plugin function of our Documents controller. The plugin function makes a RESTful call to our back-end API to retrieve the referenced document, and then sends the document contents back to the browser for rendering inside the iframe.
This works perfectly for documents like JPEG, PDF, TXT, etc. However, when the browser does not know how to handle the content type (like a Word document; we run Mac OS X), the browser downloads the returned file as plugin.doc <- NOTE this is without setting the Content-Disposition header.
Since we want to name the file appropriately when it needs to be downloaded, we set the Content-Disposition header:
response.headers['Content-Disposition'] = 'attachment; filename="filename.extension"'
Now the file gets downloaded as filename.doc. However, with this header set, even files like JPEG, which the browser can render internally, get downloaded.
Questions:
Does anyone know where Rails or the browser gets the name plugin.extension from when we don't set the Content-Disposition header?
Is there a way to set Content-Disposition but have it applied only if the browser can't render the document? In other words, the default should be that the browser handles everything it can, and only as a fallback does it use the Content-Disposition value to name the downloaded file.
Thanks!
If you are calling some Rails function like "send_file", then search the source code of your version of Rails to find the source of that function and see what headers it sets. You have to follow the call stack down a couple of levels, but you should be able to find out how it sets the headers; I have done this before. As for the browser, I think that if it doesn't find a file name in the Content-Disposition header, it will more or less use the last portion of the URL (here, plugin) as the filename.
Try using "inline" instead of "attachment" in the header.
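A minimal sketch of combining the two ideas, assuming a hand-maintained list of types the browser is expected to render; mime_type and filename are placeholders, not names from your code:
# Types we expect the browser to render inline; everything else is offered as a download.
INLINE_TYPES = %w[image/jpeg image/png image/gif application/pdf text/plain].freeze

def disposition_for(mime_type)
  INLINE_TYPES.include?(mime_type) ? "inline" : "attachment"
end

# In the controller action that streams the document back to the iframe:
response.headers["Content-Disposition"] =
  %(#{disposition_for(mime_type)}; filename="#{filename}")
With inline, the browser still renders the types it understands in place, but when it does fall back to downloading, it has a sensible name to use.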
One of my GET parameters was named ordf, and today when I was expecting to test the rendering of a page in IE9 vs Chrome, I found instead that IE9 wouldn't even load the right data because it was converting &ordf to "ª" (no quotes), which is the &ordf; HTML special character.
Is there anything I can do to prevent IE from doing this? Chrome and Firefox don't have any problems with the URL and display it fine. The only fix I could think of was changing the parameter name from ordf to something else, but I was hoping there was something easier or quicker.
You probably need to escape the & in the URL in the HTML file as &amp;
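As an illustration (the markup below is hypothetical, not taken from your page): when the URL appears inside an HTML attribute, the ampersand that introduces the parameter has to be written as the &amp; entity, otherwise a lenient parser may read &ordf as the entity for ª.
<!-- risky: &ordf may be decoded to ª by the HTML parser -->
<a href="results.html?foo=1&ordf=2">results</a>
<!-- safe: the ampersand is escaped in the markup -->
<a href="results.html?foo=1&amp;ordf=2">results</a>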
I have a div in my browser.xul code; what I am trying to do is to fetch data from an HTML file and insert it into my div element.
I am trying to use div.innerHTML but I am getting an exception:
Component returned failure code: 0x804e03f7
[nsIDOMNSHTMLElement.innerHTML]
I tried to parse the HTML using Components.interfaces.nsIScriptableUnescapeHTML and to append the parsed HTML to my div, but my problem is that style (both the attribute and the tag) and script aren't parsed.
First a warning: if your HTML data comes from the web, then you are trying to build a security hole into your extension. HTML code from the web should never be trusted (even when coming from your own web server and via HTTPS) and you should really use nsIScriptableUnescapeHTML. Styles should be part of your extension; using styles from the web isn't safe. For more information: https://developer.mozilla.org/En/Displaying_web_content_in_an_extension_without_security_issues
As to your problem, this error code is NS_ERROR_HTMLPARSER_STOPPARSING, which seems to mean a parsing error. I guess that you are trying to feed it regular HTML code rather than XHTML (which would be XML-compliant). Either way, a better way to parse XHTML code would be DOMParser; this gives you a document that you can then insert into the right place.
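A rough sketch of the DOMParser route, assuming xhtmlData is a well-formed XHTML string and div is the target element (both names are placeholders):
var parser = new DOMParser();
// parseFromString returns a separate Document; importNode copies its root into your XUL document
var doc = parser.parseFromString(xhtmlData, "application/xhtml+xml");
div.appendChild(document.importNode(doc.documentElement, true));
Depending on the Gecko version, you may need to instantiate the parser via XPCOM instead of the constructor.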
If the point is really to parse HTML code (not XHTML) then you have two options. One is using an <iframe> element and displaying your data there. You can generate a data: URL from your HTML data:
frame.src = "data:text/html;charset=utf-8," + encodeURIComponent(htmlData);
If you don't want to display the data in a frame you will still need a frame (can be hidden) that has an HTML document loaded (can be about:blank). You then use Range.createContextualFragment() to parse your HTML string:
var range = frame.contentDocument.createRange();
range.selectNode(frame.contentDocument.documentElement);
var fragment = range.createContextualFragment(htmlData);
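The fragment then needs to be moved into your XUL document before you append it to the div; a sketch, with div standing in for whatever element you are targeting:
// importNode (deep copy) avoids cross-document issues when appending the parsed nodes
div.appendChild(document.importNode(fragment, true));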
XML documents don't have innerHTML, and nsIScriptableUnescapeHTML is one way to get the HTML parsed, but it's designed for uses where the HTML might not be safe; as you've found out, it throws away the script nodes (and a few other things).
There are a couple of alternatives, however. You can use the responseXML property, although this may be suboptimal unless you're receiving XHTML content.
You could also use an iframe. It may seem old-fashioned, but an iframe's job is to take a URL (the src property) and render the content it receives, which necessarily means parsing it and building a DOM. In general, when an extension running as chrome does this, it will have to take care not to give the remote content the same chrome privileges. Luckily that's easily managed; just put type="content" on the iframe. However, since you're looking to import the DOM into your XUL document wholesale, you must have already ensured that this remote content will always be safe. You're evidently using an HTTPS connection, and you've taken extra care to verify the identity of the server by making sure it sends the right certificate. You've also verified that the server hasn't been hacked and isn't delivering malicious content.
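A sketch of what that kind of frame might look like in the XUL markup; the id and src values are placeholders:
<!-- type="content" keeps the loaded page from running with chrome privileges -->
<iframe id="previewFrame" type="content" src="about:blank" flex="1"/>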
I am trying to auto-scroll to a div using:
/index.php#tabletabs2?contact_added=1
when I use:
/index.php#tabletabs2
it works. How can I have both a query variable and the auto-scroll working in my URL?
The query part of the URL needs to be before the #. Browsers only send the part before the # to the server. The part after is for auto-scrolling to elements via their id or name attribute, e.g.
/index.php?contact_added=1#tabletabs2
See also the "Syntax" section of Wikipedia's "Uniform Resource Locator" article, especially the description of fragment identifiers.
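For the fragment to scroll anywhere, the page also needs an element carrying that id (or an anchor with that name); illustrative markup, not taken from your page:
<div id="tabletabs2">...</div>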
You have to have them in the correct order; ? goes before #:
/index.php?contact_added=1#tabletabs2