Selenium WebDriver Chrome timeout and invalid URL - ruby-on-rails

I am using Selenium WebDriver with Chrome, initialized in my Rails app as follows:
host = XXX
port = XXX

Capybara.register_driver :selenium_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--proxy-server=#{host}:#{port}")
  options.add_argument("--headless") # Remove this option to run on Chrome browser
  Capybara::Selenium::Driver.new(app,
    browser: :chrome,
    options: options
  )
end
However, it always times out when I run a spec, raising this error on the visit url command:
Net::ReadTimeout
URI.parse(current_url) returns #<URI::Generic data:,>, which looks incorrect and is probably why it is timing out. I looked into the selenium-webdriver gem and added debugging to see how the request is fetched for this command, but for some reason it does not stop at the pry when the command is get_current_url.
Why does current_url look incorrect, and why would it not stop for the get_current_url command?
EDIT: the url is obtained from here and returns the following locally
[6] pry(Selenium::WebDriver::Remote::Bridge)> Remote::OSS::Bridge.new(capabilities, bridge.session_id, **opts).url
=> "data:,"
Adding a pry to the url method doesn't stop there either, so I'm wondering how it is obtaining the value.
Ruby: ruby 2.7.6p219 (2022-04-12 revision c9c2245c0a) [x86_64-darwin20]
Selenium-Webdriver: 3.142.7
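For what it's worth, the Net::ReadTimeout can also be raised through the HTTP client handed to the driver, to rule out a response that simply takes longer than the default 60 seconds through the proxy. A minimal sketch, assuming selenium-webdriver 3.x; the 120-second value is illustrative:
client = Selenium::WebDriver::Remote::Http::Default.new(read_timeout: 120) # seconds

Capybara.register_driver :selenium_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--proxy-server=#{host}:#{port}")
  options.add_argument("--headless")

  Capybara::Selenium::Driver.new(app,
    browser: :chrome,
    options: options,
    http_client: client # passed through to Selenium::WebDriver.for
  )
end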

Related

Webdrivers::NetworkError - Mac64 M1 - ChromeDriver

My Capybara Selenium WebDriver setup is failing when trying to make a connection to ChromeDriver. It appears they released a version without an M1 build available from the ChromeDriver downloads at https://chromedriver.storage.googleapis.com/index.html?path=106.0.5249.61/
Error:
Webdrivers::NetworkError:
Net::HTTPServerException: 404 "Not Found" with https://chromedriver.storage.googleapis.com/106.0.5249.61/chromedriver_mac64_m1.zip
CODE:
Capybara.register_driver :headless_chrome do |app|
  # Note: the original snippet omits the Options initialization; it is needed
  # for the add_argument calls below.
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--disable-gpu")
  options.add_argument("--headless")
  options.add_argument("--no-sandbox")
  options.add_argument("--window-size=1920,1080")

  driver = Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)

  ### Allow file downloads in Google Chrome when headless
  ### https://bugs.chromium.org/p/chromium/issues/detail?id=696481#c89
  bridge = driver.browser.send(:bridge)
  path = "/session/:session_id/chromium/send_command"
  path[":session_id"] = bridge.session_id
  bridge.http.call(:post, path, cmd: "Page.setDownloadBehavior",
                   params: {
                     behavior: "allow",
                     downloadPath: "/tmp/downloads",
                   })
  ###

  driver
end
When the application calls driver.browser I get the error above, because the file it's looking for does not exist.
Can I set a specific version of ChromeDriver, or specify which platform to look for, when initializing the driver?
A fix is posted here: https://github.com/titusfortner/webdrivers/pull/239 - this is a known issue in webdrivers.
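For reference, the webdrivers gem also lets you pin a specific driver version explicitly, which can serve as a stopgap; a minimal sketch (the version string below is illustrative only, pick one that has a build for your platform):
# e.g. in spec_helper.rb, before the driver is first used
require 'webdrivers'
Webdrivers::Chromedriver.required_version = '106.0.5249.21' # illustrative version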

watir selenium: unrecognized arguments for Browser constructor

In my Rails app I have a Nokogiri/Watir crawler that was working fine.
After I upgraded my gems (including, among others, selenium-webdriver), when I open the crawler's browser with:
BROWSER_OPTIONS = %w[--headless --no-sandbox --disable-dev-shm-usage --disable-gpu --remote-debugging-port=9230]
Watir::Browser.new :chrome, args: BROWSER_OPTIONS
I get the following error:
ArgumentError: {:args=>["--headless", "--no-sandbox", "--disable-dev-shm-usage", "--disable-gpu", "--remote-debugging-port=9230"]} are unrecognized arguments for Browser constructor
from /Users/myname/.rbenv/versions/3.0.1/lib/ruby/gems/3.0.0/gems/watir-7.1.0/lib/watir/capabilities.rb:79:in `process_browser_options'
Hope someone can help.
I solved it myself.
The solution was changing it to:
Watir::Browser.new :chrome, options: {args: BROWSER_OPTIONS}
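An equivalent form that builds an explicit Options object instead of passing an args hash is sketched below; it is not from the original post and assumes Watir 7 with Selenium 4:
options = Selenium::WebDriver::Chrome::Options.new
BROWSER_OPTIONS.each { |arg| options.add_argument(arg) } # same flags as above
Watir::Browser.new(:chrome, options: options)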

"Refused to connect" using ChromeDriver, Capybara & Docker Compose

I'm trying to make the move from PhantomJS to Headless Chrome and have run into a bit of a snag. For local testing, I'm using Docker Compose to get all dependent services up and running. To provision Google Chrome, I'm using an image that bundles both it and ChromeDriver together while serving it on port 4444. I then link it to my app container as follows in this simplified docker-compose.yml file:
web:
  image: web/chrome-headless
  command: [js-specs]
  stdin_open: true
  tty: true
  environment:
    - RACK_ENV=test
    - RAILS_ENV=test
  links:
    - "chromedriver:chromedriver"

chromedriver:
  image: robcherry/docker-chromedriver:latest
  ports:
    - "4444"
  cap_add:
    - SYS_ADMIN
  environment:
    CHROMEDRIVER_WHITELISTED_IPS: ""
Then, I have a spec/spec_helper.rb file that bootstraps the testing environment and associated tooling. I define the :headless_chrome driver and point it to ChromeDriver's local binding: http://chromedriver:4444. I'm pretty sure the following is correct:
Capybara.javascript_driver = :headless_chrome

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :headless_chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless disable-gpu window-size=1440,900] },
  )

  Capybara::Selenium::Driver.new app,
    browser: :chrome,
    url: "http://chromedriver:4444/",
    desired_capabilities: capabilities
end
We also use VCR, but I've configured it to ignore any connections to the port used by ChromeDriver:
VCR.configure do |c|
  c.cassette_library_dir = 'spec/vcr_cassettes'
  c.default_cassette_options = { record: :new_episodes }
  c.ignore_localhost = true
  c.allow_http_connections_when_no_cassette = false
  c.configure_rspec_metadata!
  c.ignore_hosts 'codeclimate.com'
  c.hook_into :webmock, :excon

  c.ignore_request do |request|
    URI(request.uri).port == 4444
  end
end
I start the services with Docker Compose, which triggers the test runner. The command is pretty much this:
$ bundle exec rspec --format progress --profile --tag 'broken' --tag 'js' --tag '~quarantined'
After a bit of waiting, I encounter the first failed test:
1) Beta parents code redemption: redeeming a code on the dashboard when the parent has reached the code redemption limit does not display an error message for cart codes
Failure/Error: fill_in "code", with: "BOOK-CODE"
Capybara::ElementNotFound:
Unable to find field "code"
# ./spec/features/beta_parents_code_redemption_spec.rb:104:in `block (4 levels) in <top (required)>'
All specs have the same error. So, I shell into the container to run the tests myself manually and capture the HTML they're testing against. I save it locally and open it up in my browser, only to be welcomed by a Chrome "refused to connect" error page. It would seem ChromeDriver isn't evaluating the spec's HTML because it can't reach it, so it attempts to run the tests against this error page.
Given the above information, what am I doing wrong here? I appreciate any and all help as moving away from PhantomJS would solve so many headaches for us.
Thank you so much in advance. Please, let me know if you need extra information.
The issue you're having is that Capybara, by default, starts the AUT bound to 127.0.0.1 and then tells the driver to have the browser request from the same address. In your case, however, 127.0.0.1 isn't where the app is running (from the browser's perspective), since it's in a different container than the browser. To fix that, you need to set Capybara.server_host to whatever the external interface of the "web" container is (which is reachable from the "chromedriver" container). That will cause Capybara to bind the AUT to that interface and tell the driver to have the browser make requests to it.
In your case that probably means you can specify 'web':
Capybara.server_host = 'web'
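Put into the spec helper, that might look like the sketch below; the fixed port is an assumption for illustration, not something from the original answer:
# spec/spec_helper.rb
Capybara.server_host = 'web'   # interface of the "web" container, reachable from "chromedriver"
Capybara.server_port = 3001    # optional: a fixed, predictable port (illustrative value)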

"No such session" / "chrome not reachable" error when running Chrome in parallel in Jenkins

I am having an issue running automation tests on Chrome in Jenkins using the parallel_tests gem (together with Capybara and Selenium, in Ruby). I'm running them in headless mode with Xvfb. However, most of the test scenarios fail due to 'no such session' or 'chrome not reachable' errors.
This is the run command for the test job in Jenkins:
xvfb-run -a --server-args='-screen 0 1680x1050x24' bundle exec parallel_cucumber features/ -n 4 -o '-t ~#ignore -p jenkins_chrome'
This is my register_driver in env.rb:
Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :chrome,
    desired_capabilities: {
      "chromeOptions" => {
        "args" => %w{ --start-maximized --disable-impl-side-painting --no-sandbox }
      }
    })
end
And this is error message:
(Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Linux 4.0.5 x86_64) (Selenium::WebDriver::Error::NoSuchDriverError)
./features/step_definitions/view_a_profile.rb:204:in `/^user has signed in as "([^"]*)"$/'
Some say the errors are due to Xvfb; others say it's because Chrome can't run in parallel.
Has anyone experienced this issue? How can I solve it?
Chrome with ChromeDriver and Xvfb won't run on any SUSE system (probably not related here).
Usually on build servers everything gets executed as root, which causes this exact error for me with Chrome (I am still searching for a workaround for this myself).
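As a sketch of a possible workaround (not from the answers above): giving each parallel_tests worker its own Chrome profile directory can help avoid session collisions between parallel browsers. TEST_ENV_NUMBER is set by parallel_tests; the /tmp path is illustrative.
Capybara.register_driver :chrome do |app|
  worker = ENV['TEST_ENV_NUMBER'].to_s # "", "2", "3", "4" for four workers
  Capybara::Selenium::Driver.new(app,
    browser: :chrome,
    desired_capabilities: {
      "chromeOptions" => {
        "args" => ["--start-maximized", "--no-sandbox",
                   "--user-data-dir=/tmp/chrome-profile#{worker}"]
      }
    })
end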

Getting an error when running headless jasmine tests in rails

I've got a rake task set up to run headless jasmine tests on a build server and output the results in junit format. Here's the task:
namespace :jasmine do
  desc "Runs Jasmine tests headlessly and writes out junit xml."
  task :headless_junit do |t, args|
    run_jasmine_tests(Dir.pwd)
  end
end

def run_jasmine_tests(output_dir)
  require 'headless'
  require 'jasmine'
  require 'rspec'
  require 'rspec/core/rake_task'

  output_file = "#{output_dir}/jasmine_results.xml"

  Headless.ly do
    RSpec::Core::RakeTask.new(:jasmine_continuous_integration_runner) do |t|
      t.rspec_opts = ['--format', 'RspecJunitFormatter', '--out', output_file]
      t.verbose = true
      t.rspec_opts += ["-r #{File.expand_path(File.join(::Rails.root, 'config', 'environment'))}"]
      t.pattern = [Jasmine.runner_filepath]
    end
    Rake::Task['jasmine_continuous_integration_runner'].invoke
  end
end
When I run this I get this error:
TypeError: jasmine.getEnv(...).currentSpec is null in http://localhost:34002/assets/jquery.js?body=true (line 1129)
expect#http://localhost:34002/assets/jquery.js?body=true:1129
#http://localhost:34002/__spec__/activity_catalog_search_filters_spec.js:15
jasmine.Block.prototype.execute#http://localhost:34002/__jasmine__/jasmine.js:1064
jasmine.Queue.prototype.next_#http://localhost:34002/__jasmine__/jasmine.js:2096
jasmine.Queue.prototype.next_/onComplete/<#http://localhost:34002/__jasmine__/jasmine.js:2086
... LOTS MORE ...
I'm using rails 3.2.13, jasmine 1.3.2, headless 1.0.1, rspec 2.14.1 and Jasmine-jQuery 1.5.8
I think it could be similar to the problem this guy is having:
TypeError: jasmine.getEnv().currentSpec is null
Turns out the issue was with a test that was using jQuery.get to load a URL into the DOM. An empty string was being passed down as the URL (since the test writer didn't really care what was loaded, I guess), but that caused jQuery to fetch the current page (the Jasmine tests themselves) and load that into the DOM. Massive chaos ensued.
The more interesting (and perhaps more helpful) thing was how we figured that out. Turns out the fancy rake task was not the issue. It's just that the headless tests use Firefox, and I usually load them manually in Chrome, where this error didn't seem to happen. Once I had the error reproduced in Firefox, it was easy enough to track down the cause with the debugger.
So the bottom line is: if your CI tests are failing and you can't reproduce it, try loading them manually in Firefox.
