I'm getting a weird error while trying to click on a Capybara Element
I'm using find(:xpath, "//a[contains(text(),'Connect')]").click
(find(:xpath, "//a[contains(text(),'Connect')]").present? returns true)
The error I get is:
Selenium::WebDriver::Error::MoveTargetOutOfBoundsError Exception: Element cannot be scrolled into view:javascript:void(0);
I did some research, and the only solution I found is that downgrading Selenium to 2.16 may fix this issue (I'm using 2.25).
Anybody got an idea?
This may happen when the page being tested doesn't fit into the current window size. If you know which pages usually produce this error, you can explicitly scroll down before performing the operation on such hidden elements (click, clear, etc.). Here is the code to explicitly scroll the page down.
In Java:
JavascriptExecutor js = (JavascriptExecutor) driver;
// Scroll the window 250px right and 350px down before interacting.
js.executeScript("window.scrollBy(250, 350);");
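In Ruby with Capybara (the OP's stack), a rough equivalent would be the following sketch; the offsets are arbitrary and may need adjusting for your page:
# Scroll the window down first so the link lands inside the viewport.
page.execute_script("window.scrollBy(250, 350);")
find(:xpath, "//a[contains(text(),'Connect')]").click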
From the times I used Selenium WebDriver to test .NET apps, I would get that error when the issue was exactly what it sounds like: it's looking for an object on the page that it can't scroll to for some reason. In my case it was because some dialog boxes would appear without scrollbars, and the driver had no way to "scroll the object into view".
Can you watch the execution of your test and see if that's the case? I had some luck rolling back to a previous version of Firefox, because 15+ was (as of about 2 months ago when I had the issue) unsupported by WebDriver and had this problem periodically. Rolling back Selenium versions may help too.
The first step, though, is definitely to watch the execution of the test and see what's happening. A good debugging idea is also to work through your steps manually to make sure the test works by hand.
It's also worth noting that for the WebDriver to be able to execute a click, the object actually has to be visible. IsPresent doesn't require that; it just searches the page source. That was also an issue I ran into: IsPresent will still return true for objects that are not, and cannot be made, visible on the page (i.e. something at the bottom of the page that you can't see at the time).
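In Capybara terms the distinction looks like this (a sketch using the OP's selector; the visible: :all option needs a reasonably recent Capybara):
# True if the element is in the DOM at all, visible or not:
page.has_xpath?("//a[contains(text(),'Connect')]", visible: :all)
# True only if the element is actually rendered visible:
page.has_xpath?("//a[contains(text(),'Connect')]", visible: true)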
A couple of tips here:
WebDriver should ideally be on the most recent update; it's what most people use (unless you're doing Ruby automation).
Use CSS selectors: XPath, whilst it works, is almost always heavier on both resources and code.
Try defensive coding: first of all, ascertain that the element exists. There are many ways to do that depending on which package you are using. In Ruby you would do page.has_css?('css_string'); see the sketch below.
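A minimal sketch, with a hypothetical selector:
# Check first, then act; fail fast with a clear message otherwise.
if page.has_css?("a.connect-link")
  find("a.connect-link").click
else
  raise "Connect link not found"
end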
When I run our Vaadin 23 app in production mode, I get the following error in the browser console, and the corresponding page is rendered twice:
Uncaught (in promise) TypeError: [Vaadin.Router] Expected router outlet to be a valid DOM Node (but got null)
First it's rendered with a corrupt text field (the UI is written in Vaadin); the second render looks fine. When I try to debug (IntelliJ), it gets rendered correctly, so I added log messages, from which I learned that the HomeView gets initialized three times, whereas in dev mode it is initialized once. I find it hard to figure out why, since it runs in a thread (I'm far from knowing Vaadin well).
We have two apps, a back office and a webshop. The non-responsive back office does not show this issue, only the webshop. The webshop also uses two Lit web components (but even when I comment them out, I get the same error). The rest is all kept in Java.
Does anybody have an idea how to solve this, or in what direction to search and debug?
Thanks a lot! Sura
This is likely caused by a known bug with the eagerServerLoad flag. As a workaround, try disabling this flag to prevent the issue.
Add vaadin.eagerServerLoad=false to application.properties to disable the flag, assuming that your application uses Spring Boot. You can find alternative ways of setting the property in the Configuration Properties article.
I'm working on an automated test using Appium with Robot Framework on an Android device, with a scheduled run on Jenkins. My test flow is entering some data on page A and submitting, then switching to page B to check the result, then switching back to page A to enter new data. I repeat this loop around 10+ times. Everything works fine for around 4-5 rounds, but after that this error shows up:
StaleElementReferenceException: Message: Cached element 'By.xpath:
//android.widget.TextView[#text='Limit']' do not exists in DOM anymore
The TextView is on page A. I watched the run and saw that the TextView showed up, but the robot did not see it. I tried restarting the device, but the problem was not solved. I searched the internet and found some people facing the same issue, but they use different programming languages like Java or Python. I have no idea what to do next.
Development Tools :
Appium version: 1.21.0
Robot Framework version: 4.1.2 (Python 3.10.0 on win32)
First, I do not use Robot Framework, but the code should be similar according to this: https://robocorp.com/docs/languages-and-frameworks/robot-framework/try-except-finally-exception-catching-and-handling.
Second, I'm not sure if this is the best way to get around this. I think there is something you can do with the expected conditions class to get around this in a "cleaner" way, but I'm not familiar enough with it to show you. Instead, what I've done is something like this...
from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.by import By

while some_limiting_factor:
    try:
        # logic for submitting page A, assertions for page B
        ...
    except StaleElementReferenceException:
        # The cached reference went stale, so look the element up again.
        element = driver.find_element(
            By.XPATH, "//android.widget.TextView[@text='Limit']"
        )
As much as I want to cache elements in Appium, it seems that the service itself does NOT want you to, at least not in my experience. Getting fresh elements every time seems to ensure a "slow but steady" test. Hopefully someone can show me the deep Appium secrets one day.
Here is a test (I've opened the inspector during the test and that's definitely the element hierarchy):
within('table.foo') do
  find("tr#foo_#{@foo.id}").click
end
Calling click on the found element raises:
could not be scrolled into view (Selenium::WebDriver::Error::WebDriverError)
I have a pretty good idea why: the page renders a single db entry created for the purpose of the test, so the document doesn't extend past the window, making it unscrollable, which I think is what's throwing this error.
I have tried updating geckodriver to no avail.
Is there a method in Cucumber that doesn't prompt scrolling? That would be better than a) testing in a really tiny window or b) creating more test data just to stretch the document.
This sounds like a bug that exists in older versions of chromedriver when used with newer versions of Chrome. The bug is that the driver loses the ability to scroll the tab in focus.
I would recommend downloading the latest driver and replacing the one you are using.
https://sites.google.com/a/chromium.org/chromedriver/downloads
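If a newer driver doesn't help, another workaround (a sketch; it assumes a Capybara version that supports passing elements into execute_script) is to click through JavaScript, so the driver never attempts to scroll the element into view:
within('table.foo') do
  row = find("tr#foo_#{@foo.id}")
  # Clicking via JS skips WebDriver's scroll-into-view step entirely.
  page.execute_script("arguments[0].click();", row)
end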
Cheers.
Is it just me, or is the angular-material "Getting Started" example broken?
On that page (link above), there's an inline codepen to show using angular-material. But the demo doesn't work! (In particular, I don't see a button to collapse the sidebar.)
Since I used this example in my starter project, I spent quite some time troubleshooting it, to no avail. Then I realized the example itself may be broken. And sure 'nuff, it is!
Does anyone know what the actual bug is, so I can work around it on my test app? It must have worked at some time; but I can't figure out why it's broken now.
Thanks!
That particular pen is working fine for me, but I have noticed a few that are not, and it's due to the angular-material.js link being incorrect in the dependencies (under the pen's settings); the link provided redirects to the CSS.
This is intentional: the sidebar only becomes collapsible on smaller screens and stays open on larger screens. Shrinking your browser window will show the collapse button.
That being said, the Getting Started page is definitely in need of an update. It is a good guide for a basic page structure, but the individual demo pages will be a lot better if you're looking to try out some of the components. (Every demo has a CodePen link to open an editable version.)
A website of mine is behaving weirdly. The layout sometimes is fine, and sometimes it is screwy. An example page that I see the problem on is this one: link
Disclaimer: I have yet to start my investigation into the cause in earnest. I am turning to Stack Overflow because I am lazy and I hope someone will say "That happened to me once, it is probably this...". So please, don't get stuck working out this issue if it is something you have never seen before; that wouldn't be fair, as I have not put in the work myself.
Ok, some background:
The problem usually (maybe always) occurs when first viewing the page
The problem does not show up always, only sometimes
When the page shows up munged, if you refresh it usually reloads looking as it should
The site is a rails app
The CSS is passed through the neat Smurf gem, which automatically minifies the CSS and JavaScript on the page.
The layout problems happen in Firefox (both Linux and WinXP).
The CSS is served up in the production environment using the ":cache => true" option, which concatenates all the CSS files into one file.
Anyway, I am hoping that this has happened to someone before and it will be really simple to fix. If not, I'll go and investigate and return with the solution (or a request for more help).
Thanks in advance!
James.
[edit]I added the first two bullet points, inspired by the comments and first answer[/edit]
We had something similar when using HAML and SASS that resulted in the CSS being completely unavailable. It only happened on deploys. We determined it was a combination of the Rails stylesheet merging and the generation of the CSS from SASS: SASS was not done generating the CSS (which it does on the first request to the application) when Rails attempted to merge it all together. The result was a corrupt, useless CSS file. Then we stumbled upon this article, which has a solution for preventing the issue.
Based on all this, my best guess is that the Smurf gem is attempting to generate your file on the first request, but Rails is serving it out before it's done. The generation then completes, and each following request is fine. If this is the problem, the only solution I know of is to get the file generated before the first request. Of course, this assumes it is related to deployments or application restarts in some way.
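One way to do that (a rough sketch; the task name and URL are placeholders, not from the original post) is to warm the stylesheet as part of the deploy, so generation finishes before real traffic arrives:
require "net/http"

namespace :deploy do
  desc "Request the merged stylesheet once so it is generated before visitors arrive"
  task :warm_css do
    # Placeholder URL; point it at your :cache => true bundle.
    Net::HTTP.get_response(URI("http://www.example.com/stylesheets/all.css"))
  end
end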
Peer
I had such a problem. It only occurred the first time the page was loaded; just reload it and it was fine.
The problem in my case was that the images were not in the cache the first time, so the browser didn't know their dimensions when laying out the page, and that caused the problem.
If an image doesn't have a height/width assigned to it, a place is created on the page and it's put there. If the image doesn't quite fit, the browser may not know this until the page is refreshed; by then it already knows the size and can properly fit the image onto the page.
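Since this is a Rails app, a minimal sketch using the standard helper (the file name and size here are made up): giving the tag explicit dimensions lets the browser reserve the space before the image loads.
# Renders <img ... width="600" height="200">, so the layout
# doesn't shift once the image finishes downloading.
image_tag("banner.png", size: "600x200")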