I've got an integration test that passes flawlessly every time with the Poltergeist driver, but when I run it with Selenium it fails roughly one time in four.
def fill_in_inclusion_criteria
find("div.measure#age label[for='16']").click
find("div.measure#substance_use_met label[for='1']").click
find("div.measure#participant_consent label[for='1']").click
click_link("Next")
end
When it fails, the error that I get back is this
expected to find css "div.measure#participant_consent" but there were no matches. Also found "", which matched the selector but not all filters.
The participant consent div appears when div.measure#age label[for='16'] is clicked, so it depends on JavaScript. I can watch this happen in Firefox most of the time, but when the test errors, the div isn't visible on the page.
It seems like the test isn't waiting for the element to appear on the page before clicking it, but I thought wrapping it in a find waits for the element to be visible before trying to click it?
Any idea why this could be happening?
One confusing thing about your question is that the error message you posted isn't actually for the code you've shown; if it were, it would read expected to find css "div.measure#participant_consent label[for='1']" .... Assuming that's just a copy-paste error or left over from a slightly different earlier version of the code, and that the line you specified is where the error actually comes from:
Since the previous find/click lines are working, there are two potential reasons for the third one failing to find the label element:
The click on the age label[for='16'] element either isn't actually occurring, or is occurring before the JS that enables the show behavior has been attached. You can check for this by adding a sleep of a few seconds before it and seeing whether the failures go away.
The participant_consent find/click isn't waiting long enough for the element to appear. find waits up to Capybara.default_max_wait_time seconds for elements to appear, so if that isn't long enough you can increase the setting, or pass a :wait option to find to override it for that one call:
find("div.measure#participant_consent label[for='1']", wait: 10).click
Technically there is a third potential cause, a JS error on the page, although it's unlikely given the sporadic nature of the failure. You can check for it by rescuing the error and pausing your test so that you can look at the developer console in the browser for any errors.
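As a minimal sketch of that rescue-and-pause approach (assuming the suspect line is the participant_consent click, and that the browser is running non-headless so you can actually open its console):
begin
  find("div.measure#participant_consent label[for='1']").click
rescue Capybara::ElementNotFound
  # Pause so the browser stays open and you can check the developer console for JS errors.
  # A debugger such as binding.pry would work here too, if you have one available.
  sleep 120
  raise
end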
Related
I am trying to write an integration test in which I must check the visibility of an element after a button click. The code works perfectly on one machine and fails on the other. The element is displayed until the data comes back from the backend, so its visibility also depends on the speed of the machine. Is that the problem? This is the code:
assert page.has_css?('#my_element_id')
assert find('#my_element_id', visible: true)
But I am getting an error: expected false to be truthy.
Is there any other way to assert the visibility of the element?
You can also try:
assert find('#my_element_id').visible?
from: https://rubydoc.info/gems/capybara/0.4.0/Capybara/Element#visible%3F-instance_method
It does say however:
visible? ⇒ Boolean
Whether or not the element is visible. Not all drivers support CSS, so
the result may be inaccurate.
I assume you are talking about whether #my_element_id is visible.
EDIT:
If you are waiting for another element to become visible before checking for the element with the id #my_element_id, this post might be helpful:
How to make Capybara check for visibility after some JS has run?
So you could wait for the backend data to come through and then check visibility. If you are trying to check that it is visible before that data arrives, I am not quite sure; it seems like that would depend on the machine's connection speed.
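As a rough sketch of the waiting approach, assuming a Capybara version where has_css? accepts the :visible and :wait options (the 10 second wait is an arbitrary choice, not something from your setup):
# Retries for up to 10 seconds for a visible #my_element_id before asserting
assert page.has_css?('#my_element_id', visible: true, wait: 10)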
Don't use plain assert; use the assertions provided by Capybara, which include retrying behavior:
assert_css('#my_element_id')
By default that will match only visible elements, but if you've set Capybara.ignore_hidden_elements = false (don't do that, really don't) then you would need to also pass the :visible option.
Note: you may still have issues if the element is only visible for a very short time. In that case, if you're using Chrome, you can set the network conditions to very slow in order to increase the time the data takes to return.
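For example, a sketch of that variant (the 10 second wait is an arbitrary value, and the :visible option is only needed if ignore_hidden_elements has been changed from its default):
# Matches only visible elements, retrying for up to 10 seconds
assert_css('#my_element_id', visible: true, wait: 10)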
I'm writing test cases in Robot Framework using AppiumLibrary.
I'm importing AppiumLibrary with the following code to get a screenshot whenever something goes wrong:
AppiumLibrary.__init__(self, run_on_failure="Capture Page Screenshot")
Is there a way to NOT take a screenshot for a specific keyword? For example, this keyword will create 15 screenshots if it's not able to find Donald:
Wait Until Keyword Succeeds 30 seconds 2 seconds Element Text Should Be Username Donald
There is nothing built-in to do what you want. There are many solutions, however.
One solution would be to turn off the screenshot capture (using Register Keyword To Run On Failure) immediately before calling Wait Until Keyword Succeeds, and then turn it back on afterwards.
Or, you can register your own custom keyword instead of Capture Page Screenshot. Your keyword can use whatever logic it wants to decide whether or not to capture screenshots; for example, it could check a global variable that controls capturing.
You could also write your own keyword to use in place of Wait Until Keyword Succeeds which applies one of the other two solutions internally.
For example, create a keyword named wait until element contains text which turns off the capturing, runs Wait Until Keyword Succeeds, and then turns capturing back on. Then, in your test, you still have just a single statement:
wait until element contains text Username Donald
Register Keyword To Run On Failure NONE
${Status} Run Keyword And Return Status Wait Until Keyword Succeeds 30 seconds 2 seconds Element Text Should Be Username Donald
Register Keyword To Run On Failure Capture Page Screenshot
IF ${Status}==False
Element Text Should Be Username Donald
END
Let's walk through what's happening in the above code snippet:
Register Keyword To Run On Failure NONE (to avoid multiple screenshots when the Element Text Should Be keyword fails in the next line).
Wait Until Keyword Succeeds retries for up to 30 seconds (every 2 seconds) and the result is stored in the ${Status} variable: it is true if the keyword eventually passes and false if it fails. (Remember, no screenshots are generated in this step since we turned them off in point 1.)
Then Capture Page Screenshot is registered as the run-on-failure keyword again, so screenshots are taken from this point on.
Lastly, Element Text Should Be runs one more time; if it fails, it generates a screenshot and the test fails. (Because of the ${Status}==False condition, this runs only if Wait Until Keyword Succeeds failed; otherwise the IF block is skipped since the keyword already passed.)
We're currently working on a piece of mapping software where we use Leaflet with custom left and right sidebars, as well as a text filter for different POI features.
The flow is as follows:
A user visits a map under a unique link
The controller renders the HTML template first (no data is being published)
Inside our JavaScript, an AJAX call fetches the data and renders markers, panels, etc.
We use Capybara with Poltergeist for all our feature tests.
On our master branch everything works as it should.
In another branch I added password protection, so a Bootstrap modal pops up if a map is password protected and has not yet been unlocked within the current session.
Everything works fine except for some feature tests that have recently started failing, and after messing around with things I still don't have a clue exactly why.
Take this test, for example:
feature 'Places map filter', js: true do
  before do
    @map = create :map, :full_public
    create :place, :unreviewed, categories: 'Playground', map: @map
    visit map_path(map_token: @map.public_token)
    find('.open-sidebar').trigger('click')
  end

  scenario 'Nothing filters nothing' do
    show_places
    show_events
    show_places_list_panel
    expect(page).to <...>
  end

  ...
end
Capybara claims to be unable to find some CSS elements. Calling screenshot_and_open_image reveals that the page is still showing an overlay (hiding everything else) until all the data has been loaded. Something seems to be hanging in my JavaScript...
I've been messing around with the test environment configuration, which had an effect:
config.action_controller.asset_host = "file://#{::Rails.root}/public"
config.assets.prefix = 'assets_test'
The test passes since the data is now there. A screenshot reveals missing assets, accompanied by a warning message Not allowed to load local resource: <path>. I'm puzzled, since querying the data happens via an AJAX call from one of the very files that Capybara says is inaccessible.
I don't know how to continue, since I don't want to start skipping tests. I hope you can help guide me in finding the error.
Thanks in advance,
Andi
Update
Thanks to Thomas for his hint about ES6 features. I used Poltergeist's inspector mode and was able to find an arrow function I had introduced! That's why the JS driver couldn't deal with a callback I was passing to a promise, which then never resolved...
Firstly, ensure you have js_errors: true in your Poltergeist driver registration - https://github.com/teampoltergeist/poltergeist#customization - so that you will get runtime JS errors reported.
Secondly, if you're using any ES6+ features in your JS code, make sure you're transpiling them into ES5-compatible code, since Poltergeist/PhantomJS only supports JS <= ES5 and will silently fail at parse time if it encounters features like let.
And finally, by using trigger you are bypassing Poltergeist's checks that the button is actually clickable by the user, so make sure you're not clicking the button too early (before whatever behavior the button needs has actually been attached).
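For the first point, a rough sketch of what the driver registration might look like (everything apart from the js_errors option is just the usual Poltergeist boilerplate):
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # Raise runtime JavaScript errors instead of silently swallowing them
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
Capybara.javascript_driver = :poltergeist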
I am getting an "Element is no longer attached to the DOM" error from Geb tests. The thing that's confusing me is that the error is from within waitFor itself -- I inserted the wait specifically to allow the async activity on the page to complete before moving ahead with clicking a link, which was previously the source of the same error. If the wait itself fails, now I'm at a loss.
The code is something like
waitFor { $("div", text: "... search string ... ") }
$("a", id: "element-id").click()
and the stack trace shows that the waitFor itself is actually the problem:
at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:187)
at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:145)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:554)
at org.openqa.selenium.remote.RemoteWebElement.execute(RemoteWebElement.java:268)
at org.openqa.selenium.remote.RemoteWebElement.getText(RemoteWebElement.java:152)
at geb.navigator.NonEmptyNavigator.matches_closure28(NonEmptyNavigator.groovy:474)
at geb.navigator.NonEmptyNavigator.matches(NonEmptyNavigator.groovy:471)
at geb.navigator.NonEmptyNavigator.filter_closure2(NonEmptyNavigator.groovy:63)
at geb.navigator.NonEmptyNavigator.filter(NonEmptyNavigator.groovy:63)
at geb.navigator.NonEmptyNavigator.find(NonEmptyNavigator.groovy:48)
at geb.content.NavigableSupport.$(NavigableSupport.groovy:96)
at geb.Browser.methodMissing(Browser.groovy:193)
at geb.spock.GebSpec.methodMissing(GebSpec.groovy:51)
at [my test]_closure7([my test].groovy:147)
at [my test]_closure7([my test].groovy)
at geb.waiting.Wait.waitFor(Wait.groovy:106)
From the stack trace I can see that you use that selector inside a test class and not a module, so the possibility of a module base element being detached can be ruled out.
If this is happening consistently for you, then it means that one of the elements selected by the div selector gets removed from the DOM before its text is retrieved for filtering.
There are two reasons why this can happen:
Your selector is very slow. Selecting all div elements on a page and then filtering them by text in the JVM can take a lot of time. Assuming you use the default waiting preset, if that selector takes more than 5 seconds the waitFor {} block will simply run once, get the exception, and never retry because it has run out of time. You should do as much filtering as possible in the browser, that is, use a CSS3-compatible selector and apply Geb's text filtering to as small an element set as possible.
Your page updates asynchronously in a periodic way and changes more quickly than the selector can filter on element text. This is again plausible, because your selector looks like it could be very slow.
Basically I would suggest coming up with a more specific selector than what you have there currently.
I'm using Capybara 2.1 with Ruby 1.9.3 using the selenium driver (with Minitest and Test Unit) in order to test a web app.
I am struggling with the StaleElementReferenceException problem. I have seen quite a number of discussions on the topic but I haven't been able to find a solution to the issue that I am facing.
So basically, I'm trying to find all pagination elements on my page using this code:
pagination_elements = page.all('.pagination a')
Then I'm doing some assertions on those elements like:
pagination_elements.first.must_have_content('1')
After those assertions, I'm continuing the test by clicking on the Next Page link to make sure that my future first pagination element will be the Previous Page.
To do that, I'm retrieving the pagination elements again:
new_pagination_elements = page.all('.pagination a')
And the stale element error occurs here, because I'm reaching elements that I've already reached. (Here is the error.)
You can see the link states here.
I really have no idea how to make this common test work properly.
Do you have any tips for a better way to reach my pagination elements?
I sometimes have problems with AJAX-intensive pages; in my case this workaround solves it:
begin
...
rescue Selenium::WebDriver::Error::StaleElementReferenceError
sleep 1
retry
end
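If you use this, it can be worth bounding the retries so a genuinely broken page still fails eventually; a sketch along those lines (the limit of 3 attempts is an arbitrary choice):
attempts = 0
begin
  # ... the same assertions as above ...
rescue Selenium::WebDriver::Error::StaleElementReferenceError
  attempts += 1
  sleep 1
  retry if attempts < 3
  raise
end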
I see that the main message in the gist is:
Element not found in the cache -
perhaps the page has changed since it was looked up
I have had a similar case before. There are two solutions:
Add page.reload before checking the same elements on the new page, if you have set Capybara.automatic_reload = false in spec_helper.
Find a special element on the new page which the previous page doesn't have. This is effectively equivalent to waiting.
Another method is to use a more specific selector. For example, instead of
pagination_elements = page.all('.pagination a')
Use
pagination_elements = page.all('#post_123 .pagination a')
Scope the selector to a unique id like this and you should not run into the problem.
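A rough sketch of how that could look with your pagination example (the 'Next Page' link text and the .prev class are assumptions about your markup):
click_link('Next Page')
# has_css? retries, so waiting for an element that only exists on the new page
# also acts as the "wait" described above
assert page.has_css?('.pagination a.prev')        # assumed marker for the new page
new_pagination_elements = page.all('.pagination a')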
Here is an interesting link about this error and how to fix it: http://stefan.haflidason.com/testing-with-rails-and-capybara-methods-that-wait-method-that-wont/
Apparently, in addition to race conditions, this error also appears due to misused within blocks. For example:
within '.edit_form' do
  find('.edit_button').click
# The error will appear here if the 'edit_button' is not a
# descendant of the 'edit_form'
end
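A sketch of one way to avoid that, assuming the button really does live outside the form: click it at page level and keep the within block only for elements that are actually inside the form (the 'Name' field is made up for illustration):
find('.edit_button').click          # click at page level, outside the within block
within '.edit_form' do
  fill_in 'Name', with: 'New name'  # hypothetical field that really is inside .edit_form
end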
Have you tried using WebDriver directly rather than via Capybara? This would potentially give you more control over when to cache objects and when not to.
e.g. (apologies for the Java syntax, but it should get the idea across):
WebElement searchField = driver.findElement(By.cssSelector("input.foo"));
searchField.click();
searchField.sendKeys("foo foo");
System.out.println(searchField.getText());
//Do something elsewhere on the page which causes html to change (e.g. submit form)
.....
....
//This next line would throw stale object
System.out.println(searchField.getText());
//This line will not throw exception
searchField = driver.findElement(By.cssSelector("input.foo"));
System.out.println(searchField.getText());
Assigning "findElement" again to "searchField" means that we re-find the element. Knowing when to and when not re-assign is key went deciding how to cache your webelements.
I have not used Capybara, but I assume that it hides the caching strategy from you?