So I'm using Capybara to test my Backbone app. The app uses jQuery animations to do slide transitions.
I have been getting all kinds of weird issues, such as element-not-found errors (even when using the waiting finders and disabling the jQuery animations). I switched from the Chrome driver back to Firefox and that fixed some of the issues. My current issues include:
Sometimes it doesn't find elements if the browser window is not maximized, even though they return true for .visible? when I inspect with pry. (This is a fixed-width slide with no responsive behavior.)
and the following error:
Failure/Error: click_link "Continue"
Selenium::WebDriver::Error::StaleElementReferenceError:
Element not found in the cache - perhaps the page has changed since it was looked up
Basically, my questions are:
What am I doing wrong to trigger these issues?
Can you tell me if I have any other glaring issues in my code?
And when using a waiting finder, do I need to chain my click to the returned element to ensure it has waited correctly, or can I just find the element and call the click on another line?
Do I have to chain like this:
page.find('#myDiv a').click_link('continue')
Or does this work?
page.find('h1').should have_content('Im some headline')
click_link('continue')
Here is my code: http://pastebin.com/z94m0ir5
I've also seen issues with off-screen elements not being found. I'm not sure exactly what causes this, but it might be related to the overflow CSS property of the container. We've tried to work around this by ensuring that windows are opened at full size on our CI server, or in some cases scrolling elements into view by executing JavaScript. This seems to be a Selenium limitation: https://code.google.com/p/selenium/issues/detail?id=4241
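For reference, the scroll-into-view workaround looks something like this (a sketch; the '#myDiv a' selector is just borrowed from the first snippet above, adjust to your own markup):
# scroll the element into view before interacting with it
page.execute_script("document.querySelector('#myDiv a').scrollIntoView()")
page.find('#myDiv a').click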
It's hard to say exactly what's going wrong, but I'm suspicious of the sleep statements and the heavy use of evaluate_script/execute_script. These are often bad signs. With the waiting finder and assertion methods in Capybara, sleeps shouldn't be necessary (though you may need to set longer wait times for some actions). JavaScript execution, besides being a poor simulation of how the user interacts with the page, doesn't wait at all, and when you use jQuery, actions on selectors that don't match anything fail silently, so the page can end up in an incorrect state.
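For example, a waiting assertion can usually replace a sleep (a sketch; '.slide.active' is a made-up selector standing in for whatever your finished transition leaves in the DOM):
# instead of: sleep 2
page.should have_css('.slide.active')            # retries up to Capybara.default_max_wait_time
page.should have_css('.slide.active', wait: 10)  # or pass a longer wait for a slow transition (Capybara 2.1+)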
You do not have to chain. Waiting methods in Capybara are all synchronous.
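So your second example is fine as written (a minimal sketch reusing the lines from the question):
page.find('h1').should have_content('Im some headline')  # waits for the h1 and its text to appear
click_link('continue')                                    # runs only after the line above has finished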
I'm trying to run a web scraper that scrapes indeed.com and applies for jobs. What really gets me is the inconsistent, seemingly random errors. I'm not a programmer, but as far as I understand, if 2+2=4, then it should always be 4.
Here is the script I'm trying to run:
https://github.com/jmopr/job-hunter/blob/master/scraper.rb
It seems to only work with Firefox v45.0.2 because of geckodriver.
My own fixes in scraper.rb if you wish to execute the script yourself:
config.allow_url("indeed.com")
JobScraper.new('https://www.indeed.com/', ARGV[0], ARGV[3]).scrape(ARGV[1], ARGV[2])
ERRORS
Example 1
def perform_search
  # For indeed
  save_and_open_page
  fill_in 'q', :with => @skillset
  fill_in 'l', :with => @region
  find('#fj').click
  sleep(1)
end
Error: Unable to find class #fj. So it was able to find q and l, but not fj. q and l are form fields while fj is a button. How was it able to find the form fields but not the button? I re-executed the code via the terminal command rails server and the error went away; later it came back again. How random in nature! How is this possible? I can't even predict when it will happen so that I can save_and_open_page.
Example 2: This error comes up when you run a search; no jobs get posted.
Error: block passed to #window_opened_by opened 0 windows instead of 1 (Capybara::WindowError)
I re-execute the code, the error goes away, and later it comes back...
To clarify on example 2:
That error sometimes comes up because I have a Canadian IP address and the site redirects me to indeed.ca. However, when I used a US IP address via a VPN, that error occurred 100% of the time. In an attempt to work around this, I modified the code to go to the US version of the site; again, that error occurs 100% of the time. Any idea why this window is not popping up when I'm on the US version of indeed.com?
Summary:
I'm not necessarily looking for solutions, but an understanding of what is going on. Why the randomness in the errors?
2+2=4 under a given set of assumptions and conditions. Browsers and scrapers unfortunately aren't that predictable, with random delays, page throttling, changing pages, varying support levels for different technologies, etc.
In your current case the reason for the window_opened_by error could be that Capybara.default_max_wait_time isn't set long enough (it determines how long Capybara will wait for the window to open). However, if you try the search manually you'll see that indeed no longer opens the job description in a new window if the current window is wide enough to show it in a right-hand panel. Basically, the code you're trying to use is no longer fully compatible with indeed.com due to changes in how indeed.com works. You could fix this by setting the driver's window size to a size at which indeed.com will always open a new window, or by making the window big enough that job descriptions open on the same page and rewriting the code to not look for a new window.
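A rough sketch of the first option (the window size, the wait time, and the '.job-link' selector are assumptions, not taken from the script):
Capybara.default_max_wait_time = 10       # give the new window longer to appear
page.current_window.resize_to(800, 600)   # narrow enough that indeed opens job descriptions in a new window

job_window = window_opened_by { find('.job-link').click }  # '.job-link' is a placeholder for a job result link
within_window(job_window) do
  # fill in and submit the application here
end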
As for the no '#fj' issue, the easiest way to debug that is to put
save_and_open_screenshot if page.has_no_css?('#fj')
before the find('#fj').click and see what the page looks like when there is no '#fj' element on it. Doing that shows indeed.com is randomly returning the mobile site. Why this is happening I have no idea, but it could just be what indeed.com does when it doesn't recognize the current user agent. If that's the case, you can probably work around it by setting the user agent the capybara-webkit driver uses, or you could just switch to calling click_button('Find Jobs'), which should click the button on both the mobile and non-mobile pages.
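For example (the user-agent string is only an example, and page.driver.header is the capybara-webkit way of setting request headers, if I remember the API correctly):
# pretend to be a desktop browser so the non-mobile page is served
page.driver.header('User-Agent',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0 Safari/537.36')

# ...or skip the '#fj' id entirely and click the button by its label,
# which works on both the mobile and non-mobile layouts
click_button('Find Jobs')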
When the F1 key is pressed, the Win32 API first sends the appropriate key message and then sends a WM_HELP message to the control that has the focus.
As the control does not process it, the message gets sent up the parent chain all the way to the form, which reacts to it.
In Delphi (XE7) this happens because of calls to CallWindowProc inside Vcl.Controls.TWinControl.DefaultHandler.
While this works fine in pretty much all locations inside my applications, there is one place where WM_HELP never reaches the top form.
Trying to reproduce it, I came up with a test application that you may find here:
http://obones.free.fr/wm_help.zip
After having built the application and started it, place the focus inside the In SubLevel or Level 1 edits and press F1.
You will see that WM_HELP is caught by the form.
Now, if you do the same inside the In SubLevel2 or Level 15 edits, you will see that nothing is logged; the form never sees WM_HELP.
Tracing into the VCL, I found out that for those deep levels, the call to CallWindowProc inside Vcl.Controls.TWinControl.DefaultHandler returns immediately on one of the controls in the hierarchy, thus preventing the form from ever receiving the message.
However, I couldn't figure out why the Win32 API code thinks it should not propagate the message anymore, except for one thing: If I remove the WH_CALLWNDPROC hook, then everything is back to normal.
You can see the effect of disabling it if you uncheck the Use hook checkbox.
Now, some will argue that I shouldn't have such deep hierarchies of components, and I agree. However, the structure in the center, with two frames inside one another, is directly inspired by the application where I noticed the issue.
This means that it can be quite easy to trigger the problem without actually noticing it. Fortunately, in my case, I can remove a few panels and go back below the limit.
But did anyone encounter the situation before? If yes, were you able to solve it? Or is this a known behavior of the Win32 API?
This is caused by a "Windows kernel stack overflow" that happens if you send window messages recursively. On 64-bit Windows the kernel stack overflow happens much faster than on 32-bit Windows.
This bug also caused the VCL to not resize deeply nested controls correctly before it got fixed by changing the recursive AlignControls code to (my) iterative version (more about the stack overflow: http://news.jrsoftware.org/news/toolbar2000/msg07779.html)
What's Happening
In our RSpec + Capybara + Selenium (Firefox) test suite we're getting A LOT of inconsistent "Capybara::ElementNotFound" errors.
The problem is they only happen sometimes. Usually they won't happen locally; they'll happen on CircleCI, where I expect the machines are much beefier (and so faster).
Also, the same errors usually won't happen when the spec is run in isolation, for example by running rspec with a particular line number (e.g. :42).
Bear in mind, however, that there is no consistency: the spec won't fail every time.
Our current workaround - sleep
Currently the only thing we can do is litter the specs with sleeps. We add them whenever we get an error like this and it fixes it. Sometimes we have to increase the sleep times, which is making our tests very slow, as you can imagine.
What about Capybara's default wait time?
It doesn't seem to be kicking in, I imagine because the test usually fails in less than the allocated wait time (currently 5 seconds).
Some examples of failure.
Here's a common failure:
visit "/#/things/#{@thing.id}"
find(".expand-thing").click
This will frequently result in:
Unable to find css ".expand-thing"
Now, putting a sleep in between those two lines fixes it. But a sleep is too brute force. I might put in a second, but the code might only need half a second.
Ideally I'd like Capybara's wait time to kick in because then it only waits as long as it needs to, and no longer.
Final Note
I know that Capybara can only do the wait thing if the selector doesn't exist on the page yet. But in the example above you'll notice I'm visiting the page and then selecting, so the element is not on the page yet, so Capybara should wait.
What's going on?
Figured this out. So, when looking for elements on a page you have a few methods available to you:
first('.some-selector')
all('.some-selector') #returns an array of course
find('.some-selector')
.first and .all are super useful as they let you pick from non-unique elements.
HOWEVER, .first and .all don't seem to auto-wait for the element to be on the page.
The Fix
The fix then is to always use .find(). .find WILL honour the Capybara wait time. Using .find has almost completely fixed my tests (with a few unrelated exceptions).
The gotcha, of course, is that you have to use more unique selectors, as .find MUST match only a single element, otherwise you'll get the infamous Capybara::Ambiguous exception.
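A minimal sketch of the difference, based on the spec above (this is just how first/all behave on the Capybara version we're using):
visit "/#/things/#{@thing.id}"

# find() retries until the element shows up or the wait time expires
find(".expand-thing").click

# first()/all() return immediately, so on a slow page they can run
# before the element has rendered:
# first(".expand-thing").click   # may fail without waiting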
Ember works asynchronously, which is why Ember generally recommends using QUnit. They've tied in code that allows the tests to pause/resume while waiting for asynchronous functions to return. Your best bet would be either to attempt to duplicate the pause/resume logic that's been built up for QUnit, or to switch to QUnit.
There is a global promise used during testing that you could hook into: Ember.Test.lastPromise
Ember.Test.lastPromise.then(function(){
  //continue
});
Additionally, visit/click return promises; you'll need some way of telling Capybara to pause testing before the call, then resume once the promise resolves.
visit('foo').then(function(){
  click('.expand-thing').then(function(){
    assert('foobar');
  });
});
Now that I've finished ranting, I realize you're not technically running these tests from inside the browser; you're running them through Selenium, which means they're not technically in the browser (unless Selenium has changed since I last used it, which is possible). Either way, you'll need to watch that last promise and wait on it before you can continue testing after an asynchronous action.
When I use Webrat in Selenium mode, visit returns quickly, as expected. No prob.
I am trying to assert that my styles get applied correctly (by looking at background images on different elements). I am able to get this information via JS, but it seems like the stylesheets have not loaded and/or gotten applied during my test.
I see that you can "wait" for elements to appear, but I don't see how I can wait for all the styles to get applied. I can put in a general delay, but that seems like built-in flakiness or slowness, which I am trying to avoid.
Obviously since I know what styles I'm looking for I can wait for them to appear. I'll write such a helper, but I was thinking there might be a more general mechanism already in place that I haven't seen.
Is there an easy way to detect that the page is really, truly "ready"?
That's strange. I know that wait_for_page_to_load waits for the whole page, stylesheets included.
If you still think it's not waiting as it should, you can use wait_for_condition, which will execute a piece of JavaScript and wait until it returns true. Here's an example:
@selenium.wait_for_condition "selenium.browserbot.getCurrentWindow().document.body.style.backgroundColor == 'white'", "60000"
We ran into this when a page was reporting as loaded even though a ColdFusion portion was still accessing a database for info to display. Subsequent processing would then occur too soon.
Look at the abstract Wait class in the Selenium API. You can write your own custom until() clause that tests for certain text to appear, text to go away (in the case of a floating message that disappears when loading is done), or any other event you can test for in the Selenium repertoire. The API page even has a nice example that helps a lot in getting it set up.
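If you happen to be on the Ruby WebDriver bindings rather than the old RC client, the equivalent is Selenium::WebDriver::Wait. A sketch (it assumes a driver instance, and the '#header' selector and background-image check are just example conditions):
wait = Selenium::WebDriver::Wait.new(timeout: 10)
wait.until do
  # wait until the element's computed background image has actually been applied
  driver.find_element(css: '#header').css_value('background-image') != 'none'
end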
When I try to set the width of a multiline EditBox widget, it flickers for a moment, then gets set.
Is there a way to get rid of the flickering? Or, alternatively, is there a workaround?
It might be a problem with the way the UI rendering is optimized. Try changing your UIFaster setting as described here: http://www.wowwiki.com/CVar_UIFaster
I've usually seen this as a result of multiple calls to :SetWidth() occurring in quick succession. There are two ways this can happen — (a) it's genuinely getting called multiple times, or (b) it's been hooked/replaced with another function which is internally causing multiple calls. As a quick test, try running the following command (or equivalent) via the WoW chat window while your edit box is visible:
/script MyEditBox:SetWidth(100)
If the size changes without flicker, you've got scenario A: go over your addon's logic paths and make sure :SetWidth() is only being called when appropriate (and only once). If it does flicker, you're probably looking at scenario B (or, of course, the UI issue Cogwheel mentions). This may be harder to debug, unless you're hooking/replacing SetWidth yourself, but a good first step would be to disable all other addons and see if the problem resolves itself. If not, my first guess would be a library issue (assuming you're using any).