According to Wikipedia, in the "Browser Add-Ons and Extensions Exemption" section:
CSP should not interfere with the operation of browser add-ons or
extensions installed by the user...
But unfortunately it is blocking the external scripts injected by my add-on.
I could always move this injected code into the content script, but I'm wondering if there is another way to overcome this.
You should indeed put your code into a content script. If you insert a <script> tag into a page, it works exactly the same as if the web page itself had inserted it - the browser has no way of knowing that this code belongs to your extension, so the page's CSP applies to it. What's worse, this code isn't safe from manipulation by the webpage - e.g. the webpage can redefine the window.alert() method and your code won't be able to show messages. Extension code and content scripts, on the other hand, aren't affected by this; they see only the raw DOM objects, without any JavaScript-induced changes.
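For illustration, a minimal sketch of the content-script approach (the link-highlighting logic is just a placeholder for whatever your injected script did):

// content.js - runs in the extension's own context, so the page's CSP does not apply to it
var links = document.querySelectorAll('a');
for (var i = 0; i < links.length; i++) {
    links[i].style.outline = '1px solid red';   // do the DOM work directly here instead of injecting a <script> tag
}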
I've been using HTTPBuilder as a way of obtaining a site's HTML content. As an example, this is how I've used it:
def http = new HTTPBuilder(url)
def root = http.get([:])
// Really just the standard approach.
Now this has worked very well for static HTML sites; however, I'm now attempting to take data from sites where JavaScript is executed on load, which populates the page. For example, this page.
My question is, does Grails / Groovy have a native way of waiting until all JavaScript has executed before returning the HTML content? If not native, then third party?
Research I have already attempted
I've had a look at libraries that attempt to mock a browser. I thought that if I could get the library to execute the Javascript and only return the result, I could mimic the behaviour I wanted. My research into this has been somewhat limited, as the libraries I have found only give you control over things like your User-Agent.
The method you are using only gets the raw HTML content from the server, so nothing ever downloads or executes the page's JavaScript. Selenium might work (or Geb, a Groovy wrapper around it), but the getPageSource documentation says that whether you get the HTML content post-JavaScript depends on the driver. You might find that one of the drivers (Chrome, Firefox, etc.) does return the results post-JavaScript. If that doesn't work, try using PhantomJS (blog post on what you want).
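If you end up with PhantomJS, a minimal sketch of a script that prints the HTML after the page's JavaScript has run (the URL and the fixed 2-second delay are illustrative; a page that keeps loading data may need a smarter wait):

// render.js - run with: phantomjs render.js
var page = require('webpage').create();
page.open('http://example.com/js-heavy-page', function (status) {
    if (status !== 'success') {
        phantom.exit(1);
    }
    // give the page's JavaScript a moment to populate the DOM (crude but simple)
    window.setTimeout(function () {
        console.log(page.content);   // the HTML after JavaScript has executed
        phantom.exit();
    }, 2000);
});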
Long story short:
We've had errors being logged concerning a JQuery/JQueryUI based system for some time. At its core we're doing a pretty basic click link -> JQuery AJAX GET -> open JQueryUI modal pattern.
The error we were getting appeared simple - "Object doesn't support property or method 'dialog'" - leading us to believe there was an error with JQueryUI. After expending a lot of time ruling out browser incompatibilities, bad code on JQuery's end, bad code on our end, angry code gods... we caught a lucky break. A 100% repro on one of the machines in the office.
Turns out the thing was riddled with adware - specifically [an older version of] easyinline - http://www.easyinline.com. When the user clicked any link, a cascade of javascript files would be loaded, including reloading JQuery from Google's CDN.
For most links this isn't really a problem - they take you off the page anyway and everything reloads. But for our modals it meant that every modal link would stamp over our JQuery at the point the request was sent, resulting in the response trying to make use of the 'new' $ which would now be missing JQueryUI and any other plugins.
Initially we thought about making another global var ($$ or something) for 'our' JQuery and explicitly using that in our code instead of just $. The issue with that is that we were using a few other 3rd party tools which rely on $ and the adware-loaded $ is a different (older) version. So it's important that we preserve $ correctly.
Any ideas? I'm aware of JQuery's noConflict() method but after a cursory glance don't think it fits the bill.
Ultimately we've decided to re-establish our JQuery integrity when we receive any ajax responses (i.e. just before the open modal code is executed). All our ajax stuff is wrapped in our own handler so this was fairly easy to inject across the board.
Basically:
We have the original JQuery 'saved' - we've got it in scope thanks to our handler, but it could easily be put into a separate global (like $$) just after it is loaded. In our ajax response handler we've got a fairly straightforward check:
if (window.$ !== $$) {
window.$ = window.jquery = window.jQuery = $$;
}
This will reset the global jquery back to what it should be.
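For completeness, a sketch of how $$ might be captured up front (this is an assumption about where it lives; in our case it is simply in scope via the handler):

// run this immediately after jQuery and its plugins (jQuery UI etc.) have loaded,
// before any third-party/adware script gets a chance to reload jQuery over the top
window.$$ = window.jQuery;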
Well, this is just a workaround and not a full-fledged solution.
You can try multiple things here:
1. If you have control over what the adware loads, guard its load with a check like if (!window.$) where it tries to load jQuery, so jQuery is only reloaded when it isn't already present.
2. Try loading your plugins at the end of the page.
3. Even if the end of the page doesn't work, try injecting the link (a script tag, e.g. via document.write) to the plugin's CDN from the jQuery document-ready event. This would ensure the plugin code is loaded last, once jQuery has already loaded (not a preferred thing); see the sketch below.
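A rough sketch of option 3 (the CDN URL is illustrative, and it uses createElement/appendChild rather than document.write, since document.write after the document has finished loading would wipe the page):

$(function () {
    // once whichever jQuery ends up winning has loaded, pull in the UI plugin last
    var s = document.createElement('script');
    s.src = 'https://code.jquery.com/ui/1.10.4/jquery-ui.min.js';   // illustrative plugin CDN URL
    document.body.appendChild(s);
});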
When debugging my MVC3 app in Visual Studio using IE9 I see lots of small "script block" entries for my page. My page relies heavily on AJAX, and some actions result in replacing sections of the DOM with partial views coming back from the server.
What I'm seeing is a growing list of these "script block" entries - should I be worried about this? Will this ultimately be a performance problem when the app is live?
Note: the script blocks are quite small bits of code - I've moved most of my significant javascript into their own .js files.
Mm, I think it's more of a personal style thing with modern browsers, but if nothing else, trying to contain all the script for a view in one block at the bottom of the page will make for easier debugging, and your future self will thank you for it!
As a general rule of thumb, I only have script blocks in pages that need to use document.ready or variables from my view model. Otherwise, I move all the functions into their own .js file. It helps keep the views cleaner, and the browser will load the page faster since it won't stop to fetch and parse as many script tags. Plus, it makes debugging easier, since you can go straight to the .js file instead of having to find the function within the HTML.
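As a sketch of that split (the view model property, the config object and initOrdersPage are all hypothetical names):

<script>
    // the only inline block this view keeps: values only the server knows
    var ordersPageConfig = { customerId: '@Model.CustomerId' };
    $(function () {
        initOrdersPage(ordersPageConfig);   // defined in /Scripts/orders.js
    });
</script>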
I have worked with Chrome extensions, which have a so-called background page - an HTML page that the browser loads once in the background. You can store some JavaScript variables there, access the extension's own localStorage, and communicate back and forth with content scripts (scripts injected into pages).
Is there anything similar in Firefox and how do I use it for the tasks listed above?
If you are using the (relatively) new Add-on SDK, then the main JavaScript file residing in your lib directory is the equivalent of a Chrome extension's background page - a persistent script that runs in the background and spawns/creates/inserts panels, widgets and content scripts.
Regarding your specific asks:
1. localStorage: Add-ons in Firefox cannot access localStorage directly. However, you can use simple-storage to store data in much the same way.
2. Communication with content scripts: add-ons can communicate with content scripts using port or postMessage (a sketch of the port approach follows below).
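A minimal sketch of a lib/main.js doing both, assuming a data/content.js content script and a made-up "save"/"init" message pair (on older SDK versions the module names may be "simple-storage", "page-mod" and "self" without the "sdk/" prefix):

// lib/main.js - the add-on's persistent "background" script
var ss = require("sdk/simple-storage");
var pageMod = require("sdk/page-mod");
var self = require("sdk/self");

pageMod.PageMod({
    include: "*",                                      // attach to every page (illustrative)
    contentScriptFile: self.data.url("content.js"),
    onAttach: function (worker) {
        // content script -> add-on: persist a value across browser restarts
        worker.port.on("save", function (value) {
            ss.storage.lastValue = value;
        });
        // add-on -> content script: push the stored value back out
        worker.port.emit("init", ss.storage.lastValue);
    }
});

On the content-script side, data/content.js would use the global self.port, e.g. self.port.emit("save", someValue) and self.port.on("init", function (value) { ... }).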
From the point of view of a traditional Firefox extension, the browser itself is just another window containing a document, although this is a XUL document rather than an HTML document. So you can store per-window variables, although you have to be careful not to overwrite other extensions' variables, which usually means declaring a single top-level object and adding all your variables as properties of that object.
Sharing variables between windows used to be a little harder but fortunately JavaScript modules solve that problem in simple cases (primitive types).
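If you go the JavaScript code module route, a minimal sketch (the resource:// URL, file name and object name are all hypothetical; the URL would be registered with a resource line in chrome.manifest):

// modules/shared.jsm - state shared by every window that imports it
var EXPORTED_SYMBOLS = ["MyExtShared"];
var MyExtShared = {
    clickCount: 0
};

// in each window's overlay script:
Components.utils.import("resource://myext/shared.jsm");
MyExtShared.clickCount++;   // every window sees the same object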
Extensions can communicate with content scripts although there are some wrappers in place to prevent you from accidentally doing something silly.
My Firefox extension loads content from a 3rd party site into an overlay panel. This content is user generated and will sometimes, for instance, have an image tag that does not close, which causes a mismatched tag error to be thrown and the extension fails. Is there any way I can sandbox this content so that these kinds of errors are not an issue? I was thinking maybe load the content into a blank iframed page... but was wondering if there might be a cleaner solution.
Unfortunately, unless you're getting back XML, there is no XPCOM solution for parsing. Your best bet is what you suggested - placing the content in an iframe.
You can find some more discussion about the topic at: http://www.mozdev.org/pipermail/greasemonkey/2005-April/001255.html
Your guess about an iframe was correct; there's no better way to do it (as of Firefox 3.5): Parsing HTML From Chrome on MDC
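A sketch of the iframe approach from chrome code (the panel element and the userHtml variable are hypothetical; in a XUL overlay, document.createElement produces a XUL iframe, which is what we want here):

// in the overlay's script; 'panel' is the XUL element hosting the 3rd-party content
var iframe = document.createElement("iframe");
iframe.setAttribute("type", "content");   // keep the loaded document unprivileged
iframe.setAttribute("src", "data:text/html;charset=utf-8," + encodeURIComponent(userHtml));
iframe.addEventListener("load", function () {
    // by now the forgiving HTML parser has repaired the mismatched tags
    var doc = iframe.contentDocument;
    // ...read whatever is needed out of doc...
}, true);
panel.appendChild(iframe);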