How do I get webrat / selenium to "wait for" the CSS of the page to load? - ruby-on-rails

When I use webrat in selenium mode, visit returns quickly, as expected. No prob.
I am trying to assert that my styles get applied correctly (by looking at background images on different elements). I am able to get this information via JS, but it seems like the stylesheets have not loaded and/or gotten applied during my test.
I see that you can "wait" for elements to appear, but I don't see how I can wait for all the styles to get applied. I can put in a general delay, but that seems like built-in flakiness or slowness, which I am trying to avoid.
Obviously since I know what styles I'm looking for I can wait for them to appear. I'll write such a helper, but I was thinking there might be a more general mechanism already in place that I haven't seen.
Is there an easy way to detect that the page is really, really "ready"?

That's strange. I know that wait_for_page_to_load waits for the whole page, stylesheets included.
If you still think it's not waiting as it should, you can use wait_for_condition, which will execute a piece of JavaScript and wait until it returns true. Here's an example:
selenium.wait_for_condition "selenium.browserbot.getCurrentWindow().document.body.style.backgroundColor == 'white'", "60000"
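A general version of this polling pattern can be wrapped in a plain-Ruby helper (a sketch; wait_until is a hypothetical name, not part of Webrat or Selenium):

```ruby
# Poll a block until it returns truthy or a timeout elapses.
# `wait_until` is a hypothetical helper, not a Webrat/Selenium API.
def wait_until(timeout: 10, interval: 0.25)
  deadline = Time.now + timeout
  loop do
    result = yield
    return result if result
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Usage with the Selenium client above might look like:
#   wait_until(timeout: 60) do
#     selenium.get_eval("window.document.body.style.backgroundColor") == "white"
#   end
```

Because it returns as soon as the block is truthy, it avoids the fixed-delay flakiness the question mentions.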

We ran into this when a page reported itself as loaded even though a ColdFusion portion was still querying a database for information to display. Subsequent processing would then occur too soon.
Look at the abstract Wait class in the Selenium API. You can write your own custom until() clause that tests for certain text to appear, for text to go away (in the case of a floating message that disappears when loading is done), or for any other event you can test for in the Selenium repertoire. The API page even has a nice example that helps a lot in getting it set up.

Related

Is it right that ASP.NET bundles get generated on every request?

We hit a performance issue recently that highlighted something that I need to confirm.
When you include a bundle like this:
@Scripts.Render("~/jquery")
This appears to be running through (identified using dotTrace, and seen it running through this):
Microsoft.Ajax.Utilities.MinifyJavascript()
for every single request to both the page that has the include, and also the call to the script itself.
I appreciate that in a real-world scenario there will only be one hit to the script, as the client will cache it. However, it seems inefficient to say the least.
The question is: is this expected behavior? If it isn't, I'd like to fix it (so any suggestions are welcome), but if it is, we can pre-minify the scripts.
UPDATE
So, even if I change the compilation mode to debug, it's still firing the minify method. It outputs the individual URLs, but still tries to minify them.
However, if I remove all the references to the render methods, it doesn't try to minify anything, runs rapidly, doesn't balloon the app pool, and doesn't max out the CPU on the web server.

Getting inconsistent "Unable to find css" errors with Rspec + Capybara + Ember

What's Happening
In our Rspec + Capybara + selenium (FF) test suite we're getting A LOT of inconsistent "Capybara::ElementNotFound" errors.
The problem is they only happen sometimes. Usually they won't happen locally; they'll happen on CircleCI, where I expect the machines are much beefier (and so faster).
Also, the same errors usually won't happen when the spec is run in isolation, for example by running rspec with a particular line number, like :42.
Bear in mind, however, that there is no consistency. The spec won't consistently fail.
Our current workaround - sleep
Currently the only thing we can do is litter the specs with sleeps. We add them whenever we get an error like this and it fixes it. Sometimes we have to increase the sleep times, which is making our tests very slow, as you can imagine.
What about capybara's default wait time?
It doesn't seem to be kicking in, I imagine, as the test usually fails in less than the allocated wait time (currently 5 seconds).
Some examples of failure.
Here's a common failure:
visit "/#/things/#{@thing.id}"
find(".expand-thing").click
This will frequently result in:
Unable to find css ".expand-thing"
Now, putting a sleep in between those two lines fixes it. But a sleep is too brute-force. I might put in a second, but the code might only need half a second.
Ideally I'd like Capybara's wait time to kick in because then it only waits as long as it needs to, and no longer.
Final Note
I know that Capybara can only do the wait thing if the selector doesn't exist on the page yet. But in the example above you'll notice I'm visiting the page and then selecting, so the element is not on the page yet and Capybara should wait.
What's going on?
Figured this out. So, when looking for elements on a page you have a few methods available to you:
first('.some-selector')
all('.some-selector') # returns an array, of course
find('.some-selector')
.first and .all are super useful as they let you pick from non unique elements.
HOWEVER .first and .all don't seem to auto-wait for the element to be on the page.
The Fix
The fix then is to always use .find(). .find WILL honour the capybara wait time. Using .find has almost completely fixed my tests (with a few unrelated exceptions).
The gotcha of course is that you have to use more unique selectors as .find MUST only return a single element, otherwise you'll get the infamous Capybara::Ambiguous exception.
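The difference can be modeled in a few lines of plain Ruby (a toy sketch of the semantics described here, not Capybara internals; `dom` is just a hash standing in for the page):

```ruby
# Waiting finder: retry the lookup until the node appears or a
# timeout elapses -- roughly the behavior this answer describes
# for Capybara's find.
def find_with_wait(dom, selector, timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    node = dom[selector]
    return node if node
    raise "Unable to find css #{selector.inspect}" if Time.now > deadline
    sleep interval
  end
end

# Non-waiting finder: a single lookup that fails immediately --
# the behavior this answer attributes to first/all.
def first_without_wait(dom, selector)
  dom[selector] or raise "Unable to find css #{selector.inspect}"
end
```

If the element is added to the page a moment after the lookup starts, the waiting version succeeds while the non-waiting one raises, which matches the flaky "Unable to find css" failures described above.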
Ember works asynchronously. This is why Ember generally recommends using QUnit. They've tied in code to allow the testing to pause/resume while waiting for the asynchronous functions to return. Your best bet would be to either attempt to duplicate the pause/resume logic that's been built up for QUnit, or switch to QUnit.
There is a global promise used during testing you could hook up to: Ember.Test.lastPromise
Ember.Test.lastPromise.then(function () {
  // continue
});
Additionally visit/click return promises, you'll need some manner of telling capybara to pause testing before the call, then resume once the promise resumes.
visit('foo').then(function () {
  click('.expand-thing').then(function () {
    assert('foobar');
  });
});
Now that I've finished ranting, I'm realizing you're not technically running these tests from inside the browser; you're running them through Selenium, which means it's not technically in the browser (unless Selenium has made some change since I last used it, which is possible). Either way, you'll need to watch that last promise and wait on it before you can continue testing after an asynchronous action.
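Since these specs drive the browser through Selenium, one hedged option is to poll the promise state from the test process. A sketch, assuming a Capybara session `page` and the `Ember.Test.lastPromise` hook described above (the `__emberSettled` flag is an invented name, not an Ember API):

```ruby
# Ask the page to flip a window flag when the last Ember test
# promise settles, then poll that flag from Ruby. `page` is any
# object responding to execute_script/evaluate_script, such as a
# Capybara session; `__emberSettled` is a hypothetical flag name.
def wait_for_ember(page, timeout: 10)
  page.execute_script(<<-JS)
    window.__emberSettled = true;
    if (window.Ember && Ember.Test && Ember.Test.lastPromise) {
      window.__emberSettled = false;
      Ember.Test.lastPromise.then(function () {
        window.__emberSettled = true;
      });
    }
  JS
  deadline = Time.now + timeout
  until page.evaluate_script("window.__emberSettled")
    raise "Ember promise still pending after #{timeout}s" if Time.now > deadline
    sleep 0.1
  end
end
```

Called after visit or click, this would block the spec until the pending promise resolves, instead of sleeping for a fixed interval.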

Getting strange Capybara issues

So I'm using capybara to test my backbone app. The app uses jquery animations to do slide transitions.
So I have been getting all kinds of weird issues: stuff like element not found (even when using the waiting finders and disabling the jQuery animations). I switched from the Chrome driver back to Firefox and that fixed some of the issues. My current issues include:
Sometimes it doesn't find elements if the browser window is not maximized, even though they return true for .visible? if I inspect with pry (this is a fixed-width slide with no responsive stuff),
and the following error:
Failure/Error: click_link "Continue"
Selenium::WebDriver::Error::StaleElementReferenceError:
Element not found in the cache - perhaps the page has changed since it was looked up
Basically, my questions are:
What am I doing wrong to trigger these issues?
Can you tell me if I have any other glaring issues in my code?
And when using a waiting finder, do I need to chain my click to the returned element to ensure it has waited correctly, or can I just find the element and call the click on another line?
Do I have to chain like this
page.find('#myDiv a').click_link('continue')
Or does this work?
page.find('h1').should have_content('Im some headline')
click_link('continue')
Here is my code: http://pastebin.com/z94m0ir5
I've also seen issues with off-screen elements not being found. I'm not sure exactly what causes this, but it might be related to the overflow CSS property of the container. We've tried to work around this by ensuring that windows are opened at full size on our CI server, or in some cases scrolling elements into view by executing JavaScript. This seems to be a Selenium limitation: https://code.google.com/p/selenium/issues/detail?id=4241
It's hard to say exactly what's going wrong, but I'm suspicious of the use of sleep statements and the heavy use of evaluate_script/execute_script. These are often bad signs. With the waiting finder and assertion methods in Capybara, sleeps shouldn't be necessary (you may need to set longer wait times for some actions). JavaScript execution, besides being a poor simulation of how the user interacts with the page, doesn't wait at all, and when you use jQuery, actions on selectors that don't match anything will silently fail, so that could leave the page in an incorrect state.
You do not have to chain. Waiting methods in Capybara are all synchronous.
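The StaleElementReferenceError from the question typically means a DOM node was replaced (for example, by an animation) between lookup and action. One common mitigation is to retry the whole find-and-act step; a sketch in plain Ruby, where StaleError stands in for Selenium::WebDriver::Error::StaleElementReferenceError and retrying_on_stale is a hypothetical helper:

```ruby
# Retry a block when a stale-element error is raised, which can
# happen when animations replace DOM nodes between find and click.
# StaleError stands in for
# Selenium::WebDriver::Error::StaleElementReferenceError.
class StaleError < StandardError; end

def retrying_on_stale(attempts: 3)
  tries = 0
  begin
    yield
  rescue StaleError
    tries += 1
    retry if tries < attempts
    raise
  end
end

# Hypothetical usage in a spec:
#   retrying_on_stale { find("#myDiv a").click }
```

Because the lookup happens inside the block, each retry fetches a fresh element reference rather than reusing the stale one.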

why would a userscript wrapped in a firefox addon be slower than the same script in greasemonkey?

I've been working on converting a Greasemonkey userscript into a Firefox addon. I'm using the page-mod module and it appears to work as expected.
EXCEPT that it is noticeably slower!
The first action that is slower is the load of the script. Even though I've set my contentScriptWhen to ready, the xpi version (which, among other things inserts a checkbox for toggling its actions) takes much longer to load and show its checkbox.
The second action that is slower is its toggle action. The effect of the toggle takes noticeably longer to execute.
The script is long and involved so I haven't included it here. But in general, it uses jQuery (pasted into the referenced contentScriptFile) to make a number of modifications to the page. Those mods are turned on and off by the aforementioned toggle.
Can anyone think of general reasons why the same userscript, when loaded via an XPI addon, would be considerably and noticeably slower than that same script is when loaded via Greasemonkey?
Page-mods and userscripts are implemented differently; the former is proxified and more secure, but also slower in some cases. The better written your page-mod is, the more it will benefit.

Problems with multiline EditBox widget in World of Warcraft AddOn

When I try to set the width of a multiline EditBox widget, it flickers for a moment, then gets set.
Is there a way to get rid of the flickering? Or, alternatively, is there a workaround?
It might be a problem with the way the UI rendering is optimized. Try changing your UIFaster setting as described here: http://www.wowwiki.com/CVar_UIFaster
I've usually seen this as a result of multiple calls to :SetWidth() occurring in quick succession. There are two ways this can happen — (a) it's genuinely getting called multiple times, or (b) it's been hooked/replaced with another function which is internally causing multiple calls. As a quick test, try running the following command (or equivalent) via the WoW chat window while your edit box is visible:
/script MyEditBox:SetWidth(100)
If the size changes without flicker, you've got scenario A — go over your addon's logic paths and make sure :SetWidth() is only being called when appropriate (and only once). If it does flicker, you're probably looking at scenario B (or, of course, the UI issue Cogwheel mentions). This may be harder to debug, unless you're hooking/replacing SetWidth yourself, but a good first step would be to disable all other addons and see if the problem resolves itself. If not, my first guess would be a library issue (assuming you're using any).
