"Refused to connect" using ChromeDriver, Capybara & Docker Compose - ruby-on-rails

I'm trying to make the move from PhantomJS to Headless Chrome and have run into a bit of a snag. For local testing, I'm using Docker Compose to get all dependent services up and running. To provision Google Chrome, I'm using an image that bundles both it and ChromeDriver together, serving it on port 4444. I then link it to my app container as follows in this simplified docker-compose.yml file:
web:
  image: web/chrome-headless
  command: [js-specs]
  stdin_open: true
  tty: true
  environment:
    - RACK_ENV=test
    - RAILS_ENV=test
  links:
    - "chromedriver:chromedriver"

chromedriver:
  image: robcherry/docker-chromedriver:latest
  ports:
    - "4444"
  cap_add:
    - SYS_ADMIN
  environment:
    CHROMEDRIVER_WHITELISTED_IPS: ""
Then, I have a spec/spec_helper.rb file that bootstraps the testing environment and associated tooling. I define the :headless_chrome driver and point it at ChromeDriver's binding, http://chromedriver:4444. I'm pretty sure the following is correct:
Capybara.javascript_driver = :headless_chrome

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :headless_chrome do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: %w[headless disable-gpu window-size=1440,900] },
  )

  Capybara::Selenium::Driver.new app,
                                 browser: :chrome,
                                 url: "http://chromedriver:4444/",
                                 desired_capabilities: capabilities
end
We also use VCR, but I've configured it to ignore any connections to the port used by ChromeDriver:
VCR.configure do |c|
  c.cassette_library_dir = 'spec/vcr_cassettes'
  c.default_cassette_options = { record: :new_episodes }
  c.ignore_localhost = true
  c.allow_http_connections_when_no_cassette = false
  c.configure_rspec_metadata!
  c.ignore_hosts 'codeclimate.com'
  c.hook_into :webmock, :excon

  c.ignore_request do |request|
    URI(request.uri).port == 4444
  end
end
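(As an aside, since the ChromeDriver container is addressed by name, ignoring the host rather than the port would be an equivalent, arguably more explicit, configuration; a minimal sketch:)

VCR.configure do |c|
  # Alternative to the port-based ignore_request above:
  # skip any request addressed to the "chromedriver" service by name.
  c.ignore_hosts 'chromedriver'
end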
I start the services with Docker Compose, which triggers the test runner. The command is pretty much this:
$ bundle exec rspec --format progress --profile --tag 'broken' --tag 'js' --tag '~quarantined'
After a bit of waiting, I encounter the first failed test:
1) Beta parents code redemption: redeeming a code on the dashboard when the parent has reached the code redemption limit does not display an error message for cart codes
   Failure/Error: fill_in "code", with: "BOOK-CODE"

   Capybara::ElementNotFound:
     Unable to find field "code"

   # ./spec/features/beta_parents_code_redemption_spec.rb:104:in `block (4 levels) in <top (required)>'
All specs fail with the same error. So, I shell into the container to run the tests manually and capture the HTML they're run against. I save it locally, open it in my browser, and am greeted by Chrome's "refused to connect" error page. It would seem the browser can't reach the app under test, so the specs end up running against this error page.
Given the above information, what am I doing wrong here? I appreciate any and all help, as moving away from PhantomJS would solve so many headaches for us.
Thank you so much in advance. Please, let me know if you need extra information.

The issue you're having is that Capybara, by default, starts the app under test (AUT) bound to 127.0.0.1 and then tells the driver to have the browser request pages from that same address. In your case, however, 127.0.0.1 isn't where the app is running from the browser's perspective, since the browser is on a different container. To fix that, you need to set Capybara.server_host to the external interface of the "web" container (which is reachable from the "chromedriver" container). That will cause Capybara to bind the AUT to that interface and tell the driver to have the browser make requests to it.
In your case, that probably means you can just specify 'web':
Capybara.server_host = 'web'
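Putting it together, a minimal sketch of the relevant spec_helper.rb lines, assuming the service names from the docker-compose.yml above (the fixed port is illustrative; any port reachable from the chromedriver container works):

# Bind the app under test to an interface the chromedriver container
# can reach, instead of the default 127.0.0.1.
Capybara.server_host = 'web'
Capybara.server_port = 3001 # illustrative; Capybara picks a random free port if unset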

Related

Webdrivers::NetworkError - Mac64 M1 - ChromeDriver

My Capybara Selenium WebDriver setup fails when trying to connect to ChromeDriver. It appears a ChromeDriver release went out without an M1 build; see the ChromeDriver downloads at https://chromedriver.storage.googleapis.com/index.html?path=106.0.5249.61/
Error:
Webdrivers::NetworkError:
Net::HTTPServerException: 404 "Not Found" with https://chromedriver.storage.googleapis.com/106.0.5249.61/chromedriver_mac64_m1.zip
CODE:
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new # this line was missing from the snippet
  options.add_argument("--disable-gpu")
  options.add_argument("--headless")
  options.add_argument("--no-sandbox")
  options.add_argument("--window-size=1920,1080")

  driver = Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)

  ### Allow file downloads in Google Chrome when headless
  ### https://bugs.chromium.org/p/chromium/issues/detail?id=696481#c89
  bridge = driver.browser.send(:bridge)
  path = "/session/:session_id/chromium/send_command"
  path[":session_id"] = bridge.session_id
  bridge.http.call(:post, path, cmd: "Page.setDownloadBehavior",
                   params: {
                     behavior: "allow",
                     downloadPath: "/tmp/downloads",
                   })
  ###

  driver
end
When the application calls driver.browser, I get the error above because the file it's looking for does not exist.
Can I set a specific ChromeDriver version, or tell it which platform binary to look for, when initializing the driver?
The fix is posted here: https://github.com/titusfortner/webdrivers/pull/239 - this is a known issue in webdrivers.
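Until that fix is released, the webdrivers gem also supports pinning an explicit driver version, which sidesteps the broken download URL; a minimal sketch (the version string is illustrative, pick one that ships a binary for your platform):

require 'webdrivers'

# Pin chromedriver instead of auto-resolving the latest release;
# the version below is illustrative, not prescriptive.
Webdrivers::Chromedriver.required_version = '105.0.5195.52'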

Selenium WebDriver Chrome timeout and invalid URL

I am using Selenium WebDriver with Chrome, initialized in my Rails app as follows:
host = XXX
port = XXX

Capybara.register_driver :selenium_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--proxy-server=#{host}:#{port}")
  options.add_argument("--headless") # Remove this option to run in headed Chrome
  Capybara::Selenium::Driver.new(app,
                                 browser: :chrome,
                                 options: options)
end
However, it always times out when I run a spec, giving this error upon running the command visit url:
Net::ReadTimeout
URI.parse(current_url) returns #<URI::Generic data:,>, which looks incorrect and is probably why it is timing out. I looked into the selenium-webdriver gem and added debugging to see how the request is fetched for this command, but for some reason execution does not stop at the pry when the command is get_current_url.
Why does current_url look incorrect, and why does it not stop for the command get_current_url?
EDIT: the URL is obtained from here and returns the following locally:
[6] pry(Selenium::WebDriver::Remote::Bridge)> Remote::OSS::Bridge.new(capabilities, bridge.session_id, **opts).url
=> "data:,"
Adding a pry to the url method doesn't stop execution there, so I'm wondering how it is obtaining the value.
Ruby: ruby 2.7.6p219 (2022-04-12 revision c9c2245c0a) [x86_64-darwin20]
Selenium-Webdriver: 3.142.7
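For context, data:, is the blank URL headless Chrome sits on before any navigation succeeds, so a current_url of data:, suggests the visit never completed (e.g. the proxy at host:port is unreachable). A hypothetical probe that bypasses Capybara's helpers:

# Hypothetical sanity check: drive the underlying Selenium browser directly.
# If current_url is still "data:,", the browser never navigated at all.
browser = Capybara.current_session.driver.browser
browser.navigate.to("http://example.com")
puts browser.current_url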

SSL Error on RSpec system tests in dockerized Rails 5 app with selenium/standalone-chrome

Coming from Rails 3 and having never used Capybara/Selenium, I am now running a Rails 5 application in Docker and want to use the "updated" way of writing tests, i.e. using system, request and model specs rather than controller specs. I've added another Docker container running the same image with Guard to trigger the appropriate tests whenever something changes. And since RSpec's system tests need Selenium, I'm running the official selenium/standalone-chrome image for Docker. Everything seems to be plugged in correctly; however, the system tests fail because Selenium appears to be trying to use SSL. Why is that the case, if I've clearly requested http? Is there any way to switch that off?
I'm using docker-compose to get everything up and running, with the important parts of the docker-compose.yml being:
version: '3.5'
services:
  app:
    build: .
    volumes:
      - .:/srv/mldb
    ports:
      - "3000:3000"
    depends_on:
      - db
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    tty: true
  guard:
    build: .
    volumes:
      - .:/srv/mldb
    depends_on:
      - app
      - chrome
    command: bundle exec guard --no-bundler-warning --no-interactions
    tty: true
  chrome:
    image: selenium/standalone-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "4444:4444"
And here are all relevant lines from the spec/rails_helper.rb that I use to set up Capybara and Selenium:
selenium_url = "http://chrome:4444/wd/hub"

Capybara.register_driver :selenium_remote do |app|
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: selenium_url,
    desired_capabilities: :chrome)
end

RSpec.configure do |config|
  config.before(:each, type: :system, js: true) do
    driven_by :selenium_remote
    host! "http://app:3000"
  end
end
These are the errors I get when running an example test that simply tries to fill in an input field and press a submit button.
From the Guard container running the system spec:
1) Movie management given correct input values allows user to create movie
   Capybara::ElementNotFound:
     Unable to find button "New Movie"
From the app container running rails:
2019-04-06 16:28:22 +0000: HTTP parse error, malformed request (): #<Puma::HttpParserError: Invalid HTTP format, parsing fails.>
The content of the log-file:
(0.4ms) SELECT "schema_migrations"."version" FROM "schema_migrations" ORDER BY "schema_migrations"."version" ASC
(0.2ms) BEGIN
(0.2ms) ROLLBACK
And the screenshot that Capybara takes shows an error page.
All other tests (request and model specs) run perfectly and testing everything manually works perfectly as well.
Here is the system spec that fails:
require 'rails_helper'

RSpec.describe "Movie management", type: :system, js: true do
  context "given correct input values" do
    it "enables users to create movies" do
      visit "/movies"
      expect(page).to_not have_text("Hellboy")
      click_button "New Movie"
      fill_in "Titel", with: "Hellboy"
      fill_in "Jahr", with: "2004"
      click_button "Film anlegen"
      expect(page).to have_current_path("/movies")
      expect(page).to have_text("Film erfolgreich angelegt.")
      expect(page).to have_text("Hellboy")
      expect(page).to have_text("2004")
    end
  end
end
The problem is that Google owns the .app TLD and forces browsers to use HTTPS for it. Here's a link about it:
https://superuser.com/questions/1276048/starting-with-chrome-63-urls-containing-app-redirects-to-https
The solution is to not use app as the container name for your website. After naming my website's container website instead, my tests ran fine.
More info about Google's TLDs at this link:
https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json#285
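For illustration, the only test-side change after renaming the Compose service (website here is just an example name) would be the host override in rails_helper.rb:

RSpec.configure do |config|
  config.before(:each, type: :system, js: true) do
    driven_by :selenium_remote
    # "website" is an example service name; any name that doesn't end in a
    # Google-owned, HTTPS-forced TLD such as .app or .dev avoids the upgrade.
    host! "http://website:3000"
  end
end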

PhantomJS, Capybara, Poltergeist failed to reach server

I'm working with Poltergeist for the first time, so I don't really know what I'm doing, and I couldn't find any solution on the web. Please tell me if any information is missing.
Error message:
Capybara::Poltergeist::StatusFailError: Request failed to reach server, check DNS and/or server status
This issue doesn't happen in production, only in the development and staging environments.
This line of code is the one causing the trouble:
phantomjs_options: ['--ignore-ssl-errors=yes', '--ssl-protocol=any', '--load-images=no', '--proxy=localhost:9050', '--proxy-type=socks5']
Without '--proxy=localhost:9050' everything works perfectly in every environment, but I don't want to delete it in case it's critical for production.
I've also noticed that nothing is listening on port 9050 in staging/development, but in production there is.
Full config (capybara_drivers.rb):
Capybara.register_driver :polt do |app|
  Capybara::Poltergeist::Driver.new(
    app,
    js_errors: false,          # break on js error
    timeout: 180,              # maximum time in seconds for the server to produce a response
    debug: false,              # more verbose log
    window_size: [1280, 800],  # not responsive; used to simulate scroll when needed
    inspector: false,          # use debug breakpoint and chrome inspector
    phantomjs_options: ['--ignore-ssl-errors=yes', '--ssl-protocol=any', '--load-images=no', '--proxy=localhost:9050', '--proxy-type=socks5']
  )
end
Sounds like your production environment needs to make outbound connections through a SOCKS5 proxy and your other environments don't. You'll need to make the configuration environment-dependent:
Capybara.register_driver :polt do |app|
  phantomjs_options = ['--ignore-ssl-errors=yes', '--ssl-protocol=any', '--load-images=no']
  phantomjs_options.push('--proxy=localhost:9050', '--proxy-type=socks5') if Rails.env.production?

  Capybara::Poltergeist::Driver.new(
    app,
    js_errors: false,          # break on js error
    timeout: 180,              # maximum time in seconds for the server to produce a response
    debug: false,              # more verbose log
    window_size: [1280, 800],  # not responsive; used to simulate scroll when needed
    inspector: false,          # use debug breakpoint and chrome inspector
    phantomjs_options: phantomjs_options
  )
end

"No such session" / "chrome not reachable" error when running Chrome in parallel in Jenkins

I'm having an issue running automated tests on Chrome in Jenkins using the parallel_tests gem (with Capybara and Selenium, in Ruby). I'm running in headless mode with Xvfb. However, most test scenarios fail with 'no such session' or 'chrome not reachable' errors.
This is my run command on test job in Jenkins:
xvfb-run -a --server-args='-screen 0 1680x1050x24' bundle exec parallel_cucumber features/ -n 4 -o '-t ~#ignore -p jenkins_chrome'
This is my register_driver in env.rb:
Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app,
    browser: :chrome,
    desired_capabilities: {
      "chromeOptions" => {
        "args" => %w{ --start-maximized --disable-impl-side-painting --no-sandbox }
      }
    })
end
And this is error message:
(Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Linux 4.0.5 x86_64) (Selenium::WebDriver::Error::NoSuchDriverError)
./features/step_definitions/view_a_profile.rb:204:in `/^user has signed in as "([^"]*)"$/'
Some say the errors are due to Xvfb; others say Chrome can't run in parallel.
Has anyone experienced this issue? How can I solve it?
Chrome with ChromeDriver and Xvfb won't run on SUSE (probably not related here).
Usually on build servers everything gets executed as root, which causes this exact error for me with Chrome (I'm still searching for a workaround for that myself).
