why would a userscript wrapped in a firefox addon be slower than the same script in greasemonkey?

I've been working on converting a Greasemonkey userscript into a Firefox addon. I'm using the page-mod module and it appears to work as expected.
EXCEPT that it is noticeably slower!
The first action that is slower is the load of the script itself. Even though I've set my contentScriptWhen to ready, the XPI version (which, among other things, inserts a checkbox for toggling its actions) takes much longer to load and show its checkbox.
The second action that is slower is its toggle action: the effect of the toggle takes noticeably longer to appear.
The script is long and involved so I haven't included it here. But in general, it uses jQuery (pasted into the referenced contentScriptFile) to make a number of modifications to the page. Those mods are turned on and off by the aforementioned toggle.
Can anyone think of general reasons why the same userscript, when loaded via an XPI addon, would be considerably and noticeably slower than that same script is when loaded via Greasemonkey?

Page-mods and userscripts are implemented differently: the former is fully proxified and more secure, but also slower in some cases. The better written your page-mod is, the more it will benefit.
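As a rough illustration of what "better written" can mean here (the names below are made up, not taken from the asker's script): under the SDK, every DOM access from a content script goes through a security proxy, so caching lookups and batching changes pays off far more than it does under Greasemonkey.

// content-script.js -- a sketch, not the asker's code.
// Query the proxied DOM once, then reuse the results on every toggle.
var $box = $('<input type="checkbox" id="my-toggle">'); // hypothetical id
$(document.body).prepend($box);

var $targets = $('.thing-to-modify'); // cache instead of re-querying
$box.on('change', function () {
  // one class flip instead of many individual style writes
  $targets.toggleClass('modified', this.checked);
});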

Related

Is it right that ASP.NET bundles get generated on every request?

We hit a performance issue recently that highlighted something that I need to confirm.
When you include a bundle like this:
@Scripts.Render("~/jquery")
This appears to run through the following (identified using dotTrace):
Microsoft.Ajax.Utilities.MinifyJavascript()
for every single request, both to the page that has the include and to the request for the script itself.
I appreciate that in a real-world scenario there will only be one hit to the script, as the client will cache it. However, it seems inefficient to say the least.
The question is: is this expected behavior? If it isn't, I'd like to fix it (so any suggestions are welcome), but if it is, we can pre-minify the scripts.
UPDATE
So, even if I change the compilation mode to debug, it still fires the minify method. It outputs the individual URLs, but still tries to minify them.
However, if I remove all the references to the render methods, it doesn't try to minify anything, runs rapidly, doesn't balloon the app pool, and doesn't max out the CPU on the web server.
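If it is expected, here is a sketch of the pre-minify fallback (assuming System.Web.Optimization; the file name is hypothetical): a plain Bundle has no transforms attached, so it concatenates without ever calling the minifier, whereas ScriptBundle adds the JsMinify transform.

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // A plain Bundle applies no IBundleTransform, so
        // MinifyJavascript() never runs for it at request time.
        var jquery = new Bundle("~/jquery");
        jquery.Include("~/Scripts/jquery.min.js"); // ship a pre-minified file
        bundles.Add(jquery);
    }
}

This would be registered as usual from Application_Start via BundleConfig.RegisterBundles(BundleTable.Bundles).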

Getting strange Capybara issues

So I'm using Capybara to test my Backbone app. The app uses jQuery animations to do slide transitions.
I have been getting all kinds of weird issues, like elements not being found (even when using the waiting finders and disabling the jQuery animations). I switched from the Chrome driver back to Firefox and that fixed some of them. My current issues include:
Sometimes it doesn't find elements if the browser window is not maximized, even though they return true for .visible? when I inspect with pry (this is a fixed-width slide with no responsive stuff),
and the following error:
Failure/Error: click_link "Continue"
Selenium::WebDriver::Error::StaleElementReferenceError:
Element not found in the cache - perhaps the page has changed since it was looked up
Basically, my questions are:
What am I doing wrong to trigger these issues?
Can you tell me if I have any other glaring issues in my code?
When using a waiting finder, do I need to chain my click onto the returned element to ensure it has waited correctly, or can I just find the element and call the click on another line?
Do I have to chain like this
page.find('#myDiv a').click_link('continue')
Or does this work?
page.find('h1').should have_content('Im some headline')
click_link('continue')
Here is my code: http://pastebin.com/z94m0ir5
I've also seen issues with off-screen elements not being found. I'm not sure exactly what causes this, but it might be related to the overflow CSS property of the container. We've tried to work around this by ensuring that windows are opened at full size on our CI server, or in some cases scrolling elements into view by executing JavaScript. This seems to be a Selenium limitation: https://code.google.com/p/selenium/issues/detail?id=4241
It's hard to say exactly what's going wrong, but I'm suspicious of the sleep statements and the heavy use of evaluate_script/execute_script; these are often bad signs. With Capybara's waiting finder and assertion methods, sleeps shouldn't be necessary (though you may need to set longer wait times for some actions). JavaScript execution, besides being a poor simulation of how the user interacts with the page, doesn't wait at all; and when you use jQuery, actions on selectors that don't match anything will silently fail, which can leave the page in a state you didn't expect.
You do not have to chain. Waiting methods in Capybara are all synchronous.
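To make that concrete, here is a small sketch (the selector and text are hypothetical, not from the asker's app) of leaning on the waiting methods instead of sleeps:

# find and the have_* matchers retry until they succeed or time out,
# so no chaining is needed before the next action.
page.find('#my-div')                      # waits for the slide to finish
page.should have_content('Some headline') # also waits
click_link 'continue'                     # safe to call on its own line

# If an animation needs longer than the default, raise the timeout:
# Capybara.default_wait_time = 5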

Loading additional javascript code from firefox add-on content script

I'm writing something that I want to release as both a chrome extension and a firefox add-on.
The chrome extension is already available on github. I've factored my code into several modules using a module load format similar to what requirejs uses; I did this to separate the parts that were chrome-specific from the parts I hoped to re-use in the firefox add-on.
Specifically, I split up not only the backend work, but also the content scripts.
In chrome, when my content script needs to load another module, it sends a message to the background page saying "please load this module"; the script on the background page then does:
function onLoadLibrary(request, sender, sendResponse) {
  var allFrames = request.allFrames || false;
  chrome.tabs.executeScript(
    sender.tab.id,
    {file: request.library.toLowerCase() + '.js', allFrames: allFrames},
    function () {
      sendResponse({});
    });
  return true;
}
That is, I'm able to load additional javascript into the same sandbox as the content script that asked for that code. This is necessary to make module dependencies work.
In Firefox, I can't figure out how to do this. I attach my initial content scripts through page-mods and by calling tab.attach from the "ready" event of tabs. That seems straightforward, but if that content script then needs to load more code, I can't see how to do it.
There doesn't seem to be a way to access the sandbox my content script is running in from the main.js file so that I might inject more code into it. Even if I somehow kept a reference to the relevant tab instance (which only lets me inject into the top frame in any case), it appears that each new call to tab.attach puts injected code into a new sandbox. The tab object that's passed to my ready event handler isn't a real XUL tab that I could pass to require("tabs/util").getBrowserForTab; if it were, I think I could follow through enough of the SDK code to create my own sandbox, though I'd worry about leaving accidental memory leaks behind.
I considered passing the code back to the content script through an "eval-this-code" message, but I really don't want to use eval in my extension because of security concerns; I also worry that using eval would make it difficult, if not impossible, to get my Firefox add-on approved for AMO. (Also, how would that interact with sites that set a Content Security Policy?)
The use of traits to define the add-on API seems to close off access to objects, such that I can't reach inside a Worker to get a reference to the sandbox my content script is executing in. At this point, it appears that I'd need to include nearly a full copy of the SDK in my add-on just to expose one method on WorkerSandbox.
Note: I'm using the Add-on SDK (the project formerly known as Jetpack). I'm willing to use Components.utils.import if someone can tell me how to use it from inside an SDK-managed content script.
Content scripts do not expose a public API for attaching more scripts to a content-script sandbox after it has been initialized. You should probably file an enhancement bug and state your use case, if there isn't one filed already (search first), and/or even come up with some patches yourself.
In cases where there is a DOM that your add-on owns (a widget, say), it's just a matter of attaching another script tag.
For things like page-mods, where there is no DOM you own, you're left with a couple of options, none of which is really satisfying. As you already found out yourself, the use of traits prohibits you from accessing "private" properties/methods.
Fork page-mod/tab/content-worker to provide the functionality you need. That would require creating your own copies of the modules and exposing the necessary APIs to inject scripts into existing workers.
This has a steep learning curve (but given that you already figured out details such as traits, it should be doable for you); more importantly, it is hard to maintain, as you need to make sure you keep up with the upstream. And AMO editors will not like you very much for it :p
On the plus side, you could try to get your stuff committed upstream, fixing this problem for everybody and become a hero to many authors using the Add-on SDK.
The eval method you propose: not only is eval a major source of security issues, it may also be a performance killer, since right now (IIRC) evaled code will not use the JIT. And, of course, it will make us AMO editors cringe, even if used "correctly".
Do not use lazy loading at all, and specify all content scripts from the very beginning (see the sketch after this answer). This is what add-ons usually do (I'm almost inclined to say "always"). However, it conflicts with your current design and, depending on your add-on, may pose a serious performance penalty for loading stuff you didn't really need in the end.
You could use the require mechanism to have most scripts as SDK module and not content-scripts. This is not always feasible, of course, e.g. when dealing with code that would normally modify the DOM in your content-script, but might work for some other stuff.
Replace page-mod, etc. with your own Greasemonkey-like, enhanced API. This means lots of work; it is error-prone, security-sensitive, and has to be maintained. So it's not really a viable solution, IMO...
Components.utils.import does not help you. It isn't available to content-scripts anyway.
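As a sketch of the "specify everything up front" option (file names are illustrative): the SDK accepts an array of content scripts and loads them into one sandbox, in order, so ordinary dependency ordering can replace lazy loading.

// main.js -- load all content scripts into the same sandbox,
// in dependency order. Module names are hypothetical.
var data = require("self").data;

require("page-mod").PageMod({
  include: "*",
  contentScriptWhen: "ready",
  contentScriptFile: [
    data.url("module-shim.js"), // the require()-like loader
    data.url("util.js"),        // shared helpers
    data.url("content.js")      // entry point that pulls in the rest
  ]
});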

Am I missing potential problems with custom page caching in Rails 3?

I use Rails to present automated hardware testing results; our tests are run mainly via TCL. Recently, we implemented a "log4TCL", which is basically a translated version of log4j. The log files have upwards of 40,000 lines, each of which is written to the database as a logline record, and load time for the view is too long to be considered usable. I have tried using Ajax requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use the rails included page caching because each log report is a different instance of "log_viewer". The report is generated using a test_run_id parameter. Rails-included page caching only caches one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html". I have implemented a way of doing this. The reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates the report with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementations such as this anywhere which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, the "Dynamic" content of this report is what takes too long to load. I need to cache multiple instances of what action/fragment caching is not concerned with.
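For what it's worth, a minimal sketch of that scheme as a controller action (paths, the model name, and the one-week window are illustrative):

# Sketch: one cached page per test_run_id, regenerated when stale or purged.
CACHE_DIR = Rails.root.join('public', 'test_runs', 'log_viewer_cache')

def show
  id   = params[:test_run_id].to_i   # to_i also guards against path tricks
  path = CACHE_DIR.join("log_viewer_#{id}.html")
  if File.exist?(path) && File.mtime(path) > 1.week.ago
    send_file path.to_s, :type => 'text/html', :disposition => 'inline'
  else
    @loglines = Logline.where(:test_run_id => id)          # hypothetical model
    html = render_to_string(:action => 'show', :layout => true)
    FileUtils.mkdir_p(CACHE_DIR)
    File.open(path, 'w') { |f| f.write(html) }             # fresh age-out timer
    render :text => html
  end
end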

How do I get webrat / selenium to "wait for" the CSS of the page to load?

When I use webrat in selenium mode, visit returns quickly, as expected. No prob.
I am trying to assert that my styles get applied correctly (by looking at background images on different elements). I am able to get this information via JS, but it seems like the stylesheets have not loaded and/or gotten applied during my test.
I see that you can "wait" for elements to appear, but I don't see how I can wait for all the styles to get applied. I can put in a general delay, but that seems like built-in flakiness or slowness, which I am trying to avoid.
Obviously since I know what styles I'm looking for I can wait for them to appear. I'll write such a helper, but I was thinking there might be a more general mechanism already in place that I haven't seen.
Is there an easy way detect that the page is really really "ready"?
That's strange. I know that wait_for_page_to_load waits for the whole page, stylesheets included.
If you still think it's not waiting as it should, you can use wait_for_condition, which will execute a piece of JavaScript and wait until it returns true. Here's an example:
@selenium.wait_for_condition "selenium.browserbot.getCurrentWindow().document.body.style.backgroundColor == 'white'", "60000"
We ran into this when a page reported as loaded even though a ColdFusion portion was still querying a database for info to display; subsequent processing would then occur too soon.
Look at the abstract Wait class in the Selenium API. You can write your own custom until() clause that tests for certain text to appear, for text to go away (in the case of a floating message that disappears when loading is done), or for any other event you can test for in the Selenium repertoire. The API page even has a nice example that helps a lot in getting it set up.
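For instance, along the lines of the asker's own idea, a helper like this (names and selector are hypothetical) can poll a computed style via wait_for_condition until the stylesheet has actually taken effect:

# Sketch: block until a given CSS property reaches its expected value.
# The JS string is evaluated repeatedly by Selenium RC until it is true.
def wait_for_style(selenium, element_id, property, expected, timeout_ms = 60_000)
  js = <<-JS
    (function () {
      var win = selenium.browserbot.getCurrentWindow();
      var el  = win.document.getElementById('#{element_id}');
      return !!el && win.getComputedStyle(el, null)
                        .getPropertyValue('#{property}') == '#{expected}';
    })()
  JS
  selenium.wait_for_condition(js, timeout_ms.to_s)
end

# e.g. wait_for_style(@selenium, 'header', 'display', 'block')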
