Set up Selenium with a Docker Rails app on WSL2 - ruby-on-rails

After following a couple of tutorials I can't figure out how to configure Selenium with:
a Rails app on Debian WSL2
the gem "webdrivers"
Docker
With my local application, Selenium worked properly with
require "webdrivers"
Since I dockerized the app, everything works properly except Selenium.
When I run a js test I get
Webdrivers::NetworkError:
Net::HTTPServerException: 404 "Not Found" with https://github.com/mozilla/geckodriver/releases/download/v0/geckodriver-v0-linux64.tar.gz
It seems that the request URL is not built correctly ("v0" instead of a real version number).
When I hard code a version number in my config file like so
Webdrivers::Geckodriver.required_version = "0.32.0"
geckodriver is downloaded properly.
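For reference, the "v0" in that URL means the gem resolved the latest geckodriver version as 0, so pinning (or caching the lookup) is the usual workaround. A sketch of where such a pin typically lives (the file path is an assumption):
# spec/support/webdrivers.rb (hypothetical location)
require "webdrivers"

# Workaround: skip the broken "latest version" lookup by pinning explicitly.
Webdrivers::Geckodriver.required_version = "0.32.0"

# Optional: re-check for a newer driver at most once a day instead of every run.
Webdrivers.cache_time = 86_400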
1- How can I configure webdrivers to handle the version automatically?
Then I get this error message
Selenium::WebDriver::Error::SessionNotCreatedError:
Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line
What I understand is that it tries to locate the Firefox binary. I have Firefox installed on the Windows side and firefox-esr on the WSL2/Debian side.
2- How can I configure webdrivers and/or Docker to run my tests with Selenium?
I have tried to hard-code the Firefox path.
Here is my webdrivers config, with many settings attempts:
Capybara.register_driver :remote_firefox do |app|
  firefox_capabilities = ::Selenium::WebDriver::Remote::Capabilities.firefox
  # (
  #   "moz:firefoxOptions": {
  #     "args": %w[headless window-size=1400,1400]
  #   },
  #   "binary": "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"
  # )

  firefox_options = Selenium::WebDriver::Firefox::Options.new
  firefox_options.binary = "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://172.28.0.7:5555",
    # desired_capabilities: firefox_capabilities,
    options: firefox_options
  )
end

RSpec.configure do |config|
  config.before(:each) do
    Capybara.current_driver = :remote_firefox
    Capybara.javascript_driver = :remote_firefox
    Capybara.app_host = "http://#{IPSocket.getaddress(Socket.gethostname)}:3000"
    Capybara.server_host = IPSocket.getaddress(Socket.gethostname)
    Capybara.server_port = 5555
  end
end
###############################################
# Selenium::WebDriver::Firefox.path = "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"
# Selenium::WebDriver::Firefox.path = "/usr/bin/firefox"
# Selenium::WebDriver::Firefox.path = "//wsl.localhost/Debian/usr/bin/firefox"

# options = Selenium::WebDriver::Firefox::Options.new
# options.binary = "/usr/bin/firefox"
#
# driver = Selenium::WebDriver.for :remote,
#   url: "http://localhost:4444",
#   desired_capabilities: :firefox,
#   options: ""
#########################################
With 'desired_capabilities' I get
ArgumentError:
unknown keyword: :desired_capabilities
Without it I get
Selenium::WebDriver::Error::WebDriverError
unexpected response, code=404, content-type=""
{
  "value": {
    "error": "unknown command",
    "message": "Unable to find handler for (POST) \u002fsession",
    "stacktrace": ""
  }
}
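For context, the desired_capabilities: keyword was removed in selenium-webdriver 4.x, which is exactly what the ArgumentError says. A minimal sketch of the 4.x form of the commented-out attempt above (same URL, untested against this setup):
require "selenium-webdriver"

# selenium-webdriver 4.x style: capabilities travel inside the Options object,
# which is passed via the options: keyword.
options = Selenium::WebDriver::Firefox::Options.new
options.add_argument("-headless")

driver = Selenium::WebDriver.for(
  :remote,
  url: "http://localhost:4444", # the Selenium 4 hub accepts sessions at the root path
  options: options
)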
Here is my docker-compose file:
version: '3'

networks:
  development:
  test:

volumes:
  db_data:
  es_data:
  gem_cache:
  shared_data:
  selenium_data:

services:
  (...)
  capoeiragem_dev:
    build:
      context: .
      dockerfile: ./config/containers/Dockerfile.dev
    container_name: capoeiragem_dev
    volumes:
      - .:/var/app
      - shared_data:/usr/share
      - gem_cache:/usr/local/bundle/gems
    networks:
      - development
    ports:
      - 3000:3000
    stdin_open: true
    tty: true
    env_file: .env.development
    entrypoint: entrypoint-dev.sh
    command: [ 'bundle', 'exec', 'rails', 'server', '-p', '3000', '-b', '0.0.0.0' ]
    environment:
      RAILS_ENV: development
    depends_on:
      - capoeiragem_db
      - capoeiragem_es
  (...)
  guard:
    tty: true
    stdin_open: true
    build:
      context: .
      dockerfile: ./config/containers/Dockerfile.dev
    volumes:
      - .:/var/app
      - gem_cache:/usr/local/bundle/gems
    networks:
      - development
    environment:
      RAILS_ENV: development
    command: [ 'bundle', 'exec', 'guard', '--no-bundler-warning' ]
    ports:
      - 35729:35729
    depends_on:
      - capoeiragem_dev
      - capoeiragem_selenium
  (...)
  capoeiragem_firefox:
    image: selenium/node-firefox:dev
    networks:
      - development
    shm_size: 2gb
    depends_on:
      - capoeiragem_selenium
    environment:
      - SE_EVENT_BUS_HOST=capoeiragem_selenium
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  capoeiragem_chrome:
    image: selenium/node-chrome:dev
    networks:
      - development
    shm_size: 2gb
    depends_on:
      - capoeiragem_selenium
    environment:
      - SE_EVENT_BUS_HOST=capoeiragem_selenium
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  capoeiragem_selenium:
    image: selenium/hub:latest
    volumes:
      - selenium_data:/usr/share/selenium/data
    networks:
      - development
    container_name: capoeiragem_selenium
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
I can see the Firefox node at http://localhost:4444/ui.
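For what it's worth, a minimal sketch of a remote registration that sidesteps both questions, assuming the tests run in the guard container and the hub is the capoeiragem_selenium service from the compose file above. With browser: :remote, the Firefox binary lives in the selenium/node-firefox container, so neither the webdrivers gem nor a WSL2 Firefox path should be needed:
Capybara.register_driver :remote_firefox do |app|
  options = Selenium::WebDriver::Firefox::Options.new
  options.add_argument("-headless")

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://capoeiragem_selenium:4444", # hub service name, not a container IP
    options: options
  )
end
Capybara.app_host would then need to point at wherever the Rails test server is reachable from the node container (for example http://guard:3000 if the suite runs in the guard service); that hostname is an assumption to verify against your network setup.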

Related

Using Sidekiq with multiple APIs, but the Sidekiq server executed the wrong API's code

I've built a Rails app with docker-compose as shown below.
For example:
API A created job A1, which was pushed to Redis by Sidekiq client SA.
And API B created job B1, which was pushed to Redis by Sidekiq client SB.
But when these jobs were executed, they pointed to the application code in API A only.
So job B1 failed, because it was executed by API A.
I know that because an uninitialized constant error was raised.
I also used redis-namespace, but it still pointed to the wrong API.
Can you help me explain how the Sidekiq server executes jobs,
and how it points to the right API that the job belongs to?
Many thanks.
config_redis = {
  url: ENV.fetch('REDIS_URL_SIDEKIQ', 'redis://localhost:6379/0'),
  namespace: ENV.fetch('REDIS_NAMESPACE_SIDEKIQ', 'super_admin')
}

Sidekiq.configure_server do |config|
  config.redis = config_redis
end

Sidekiq.configure_client do |config|
  config.redis = config_redis
end
initializer/sidekiq.rb
config_redis = {
  url: ENV.fetch('REDIS_URL_SIDEKIQ', 'redis://localhost:6379/0'),
  namespace: ENV.fetch('REDIS_NAMESPACE_SIDEKIQ', 'ignite')
}

Sidekiq.configure_server do |config|
  config.redis = config_redis
end

Sidekiq.configure_client do |config|
  config.redis = config_redis
end
docker-compose.yml
version: "3.9"
services:
  ccp-ignite-api-gmv: # ----------- IGNITE SERVER
    build: ../ccp-ignite-api-gmv/.
    entrypoint: ./entrypoint.sh
    command: WEB 3001
    # command: MIGRATE # Uncomment this if you want to run db:migrate only
    ports:
      - "3001:3001"
    volumes:
      - ../ccp-ignite-api-gmv/.:/src
    depends_on:
      - db
      - redis
    links:
      - db
      - redis
    tty: true
    stdin_open: true
    environment:
      RAILS_ENV: ${RAILS_ENV}
      REDIS_URL_SIDEKIQ: redis://redis:6379/ignite
      REDIS_NAMESPACE_SIDEKIQ: ignite
  ccp-super-admin-api-gmv: # ----------- SUPER ADMIN API SERVER
    build: ../ccp-super-admin-api-gmv/.
    entrypoint: ./entrypoint.sh
    command: WEB 3005
    # command: MIGRATE # Uncomment this if you want to run db:migrate only
    ports:
      - "3005:3005"
    volumes:
      - ../ccp-super-admin-api-gmv/.:/src
    depends_on:
      - db
      - redis
    links:
      - db
      - redis
    tty: true
    stdin_open: true
    environment:
      RAILS_ENV: ${RAILS_ENV}
      REDIS_URL_SIDEKIQ: redis://redis:6379/super_admin
      REDIS_NAMESPACE_SIDEKIQ: super_admin
  db:
    image: mysql:8.0.22
    volumes:
      - ~/docker/mysql:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "3307:3306"
  redis:
    image: redis:5-alpine
    command: redis-server
    ports:
      - 6379:6379
    volumes:
      - ~/docker/redis:/data
  sidekiq_ignite:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/src
    environment:
      - REDIS_URL_SIDEKIQ=redis://redis:6379/0
      - REDIS_NAMESPACE_SIDEKIQ=ignite
  sidekiq_super_admin:
    depends_on:
      - db
      - redis
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/src
    environment:
      - REDIS_URL_SIDEKIQ=redis://redis:6379/0
      - REDIS_NAMESPACE_SIDEKIQ=super_admin
Thanks st.huber for reminding me.
The confusion came from my wrong docker-compose config in the two Sidekiq services:
I had pointed to the wrong folder in "build" and "volumes".
Before:
sidekiq_ignite:
  depends_on:
    - db
    - redis
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=ignite
sidekiq_super_admin:
  depends_on:
    - db
    - redis
  build: .
  command: bundle exec sidekiq
  volumes:
    - .:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=super_admin
Fixed:
sidekiq_ignite:
  depends_on:
    - db
    - redis
  build: ../ccp-ignite-api-gmv/.
  command: bundle exec sidekiq
  volumes:
    - ../ccp-ignite-api-gmv/.:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=ignite
sidekiq_super_admin:
  depends_on:
    - db
    - redis
  build: ../ccp-super-admin-api-gmv/.
  command: bundle exec sidekiq
  volumes:
    - ../ccp-super-admin-api-gmv/.:/src
  environment:
    - REDIS_URL_SIDEKIQ=redis://redis:6379/0
    - REDIS_NAMESPACE_SIDEKIQ=super_admin
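To illustrate why the folders mattered (a hypothetical example, class and folder names invented): a Sidekiq server can only execute job classes defined in the codebase its container was built from; the Redis namespace only scopes where jobs are stored, not which code runs them.
require "sidekiq"

# Hypothetical worker that exists only in the super_admin API's codebase.
module SuperAdmin
  class ReportWorker
    include Sidekiq::Worker

    def perform(report_id)
      # ... work that needs super_admin's models ...
    end
  end
end

# Enqueued by the super_admin API, but picked up by a Sidekiq container built
# from the other API's folder, where this class doesn't exist, the job dies with:
#   NameError: uninitialized constant SuperAdmin::ReportWorker
SuperAdmin::ReportWorker.perform_async(42)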

Couldn't connect to server for "http://mercure:7001/.well-known/mercure" // TransportException

I'm a little confused about this, so I'd like your help.
In my docker-compose I think I have configured everything the way it should be.
But the thing is, when I try to publish something to Mercure via Symfony, it doesn't work,
whereas when I try without going through Symfony (with JS), it works.
If someone has an idea, that would be great.
version: '3.8'
services:
  web:
    build:
      context: .
      target: Symfony_PHP
    container_name: web_symfony
    ports:
      - 90:90
      - 943:943
    restart: always
    volumes:
      - ./portailcse/:/var/www/app/
      - symfony-cache:/var/www/app/var/cache
      - symfony-log:/var/www/app/var/log
      - symfony-vendor:/var/www/app/vendor
    networks:
      - dev
    depends_on:
      - db
  mercure:
    image: dunglas/mercure
    container_name: mercure
    restart: unless-stopped
    environment:
      # Uncomment the following line to disable HTTPS
      SERVER_NAME: ':90'
      MERCURE_PUBLISHER_JWT_KEY: 'testjwt'
      MERCURE_SUBSCRIBER_JWT_KEY: 'testjwt'
      # PUBLISH_ALLOWED_ORIGINS: '*'
      # CORS_ALLOWED_ORIGINS: '*'
      # ALLOW_ANONYMOUS: 1
      # DEBUG: 'debug'
    # Uncomment the following line to enable the development mode
    command: /usr/bin/caddy run -config /etc/caddy/Caddyfile.dev
    ports:
      - 7001:90
    volumes:
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - dev

networks:
  dev:

volumes:
  symfony-vendor:
  symfony-log:
  symfony-cache:
  caddy_data:
  caddy_config:
###> symfony/mercure-bundle ###
# See https://symfony.com/doc/current/mercure.html#configuration
# The URL of the Mercure hub, used by the app to publish updates (can be a local URL)
MERCURE_URL=http://mercure:7001/.well-known/mercure
# The public URL of the Mercure hub, used by the browser to connect
MERCURE_PUBLIC_URL=http://mercure:7001/.well-known/mercure
# The secret used to sign the JWTs
MERCURE_JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJtZXJjdXJlIjp7InB1Ymxpc2giOlsiKiJdLCJzdWJzY3JpYmUiOlsiKiJdLCJwYXlsb2FkIjp7InJlbW90ZUFkZHIiOiIxMjcuMC4wLjEifX19.y6WSZvqaNkya67gRr2ta9R4fnPAv9-7EmPhQvfz6GJA
###< symfony/mercure-bundle ###
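One hedged observation about the values above (an assumption, not a verified fix): inside the compose network the mercure container listens on port 90 (SERVER_NAME: ':90'); the 7001:90 mapping only applies from the host. So the URL Symfony uses server-side would need port 90, while the browser keeps going through the published port:
# .env — sketch of splitting the internal and public hub URLs
# Server side (Symfony, inside the docker network): container port 90
MERCURE_URL=http://mercure:90/.well-known/mercure
# Browser side (from the host, through the published port)
MERCURE_PUBLIC_URL=http://localhost:7001/.well-known/mercure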
Thank you all.

Connect Rails app and browserless/chrome container

I am trying to use the browserless/chrome docker image and connect to it from a Ruby on Rails (6.0) app running in another docker container. I'm having difficulty getting the two to connect.
The browserless container is running properly (I know because I can access the web debug interface it comes with).
In the Rails app, I am trying to use capybara with selenium-webdriver to drive a headless chrome instance.
I am getting errors saying either that it can't connect or that it cannot POST to '/session', which I think means it's connecting properly but can't initialize a session within Chrome for some reason.
Below is my current configuration + code:
#docker-compose.yml
version: "3"
services:
  web:
    build: .
    image: sync-plus.web:latest
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    environment:
      RAILS_ENV: development
      RACK_ENV: development
      REDIS_URL: redis://redis:6379/0
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bin/puma -C config/puma.rb"
    depends_on:
      - postgres
      - mailcatcher
      - redis
  sidekiq:
    image: sync-plus.web:latest
    volumes:
      - .:/app
    environment:
      RAILS_ENV: development
      RACK_ENV: development
      REDIS_URL: redis://redis:6379/0
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bundle exec sidekiq"
    depends_on:
      - postgres
      - mailcatcher
      - web
      - redis
  guard:
    image: sync-plus.web:latest
    volumes:
      - .:/app
    environment:
      RAILS_ENV: test
      RACK_ENV: test
      REDIS_URL: redis://redis:6379/1
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bin/rspec"
    depends_on:
      - postgres
  redis:
    image: redis:alpine
    command: redis-server
    volumes:
      - 'redis:/data'
  postgres:
    image: postgres:latest
    volumes:
      - postgres_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: super
  mailcatcher:
    image: schickling/mailcatcher:latest
    ports:
      - "1080:1080"
  browserless:
    image: browserless/chrome:latest
    ports:
      - "4444:3000"

volumes:
  postgres_volume:
  redis:
#rails_helper.rb
...
require "rspec/rails"
# This require needs to be below the rspec/rails require
require 'capybara/rails'
...
I am using an RSpec test to check whether the browserless connection/driving is functional:
#browserless_spec.rb
require "rails_helper"

RSpec.describe 'Browserless', type: :feature do
  describe "Connection" do
    it "can connect to browserless container" do
      # selenium webdriver looks at localhost:4444 by default
      driver = Selenium::WebDriver.for :remote
      driver.visit("/users/sign_in")
      expect(page).to have_selector("btn btn-outline-light my-2")
    end
  end
end
I can't seem to find any solid solutions or leads elsewhere, so any help would be much appreciated!
Thanks!
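One hedged direction worth checking (assumptions: service names from the compose file above, and the WebDriver endpoint path of the browserless image): from inside the web or guard containers, localhost:4444 refers to the container itself; the browserless service is reachable as browserless:3000 on the compose network. Registering a remote Capybara driver along these lines also keeps the test on Capybara's API instead of raw Selenium:
# Sketch only: the /webdriver path is how browserless documents its
# Selenium-compatible endpoint; verify it against your image version.
Capybara.register_driver :browserless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--headless")

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://browserless:3000/webdriver", # service name, not localhost:4444
    options: options
  )
end

Capybara.javascript_driver = :browserless_chrome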

Docker/Wallaby - localhost refused to connect

I am trying to run my feature tests using Wallaby, but I keep getting a "localhost refused to connect" error.
Here is my compose.yml:
version: '2'
services:
  app:
    image: hin101/phoenix:1.5.1
    build: .
    restart: always
    ports:
      - "4000:4000"
      - "4002:4002"
    volumes:
      - ./src:/app
    depends_on:
      - db
      - selenium
    hostname: app
  db:
    image: postgres:10
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=hinesh_blogs
    ports:
      - 5432:5432
    volumes:
      - database:/var/lib/postgresql/data
  selenium:
    build:
      context: .
      dockerfile: Dockerfile-selenium
    container_name: selenium
    image: selenium/standalone-chrome-debug:2.52.0
    restart: always
    ports:
      - "4444:4444"
      - "5900:5900"
    hostname: selenium

volumes:
  database:
database:
test_helper.ex:
ExUnit.start()
Ecto.Adapters.SQL.Sandbox.mode(HineshBlogs.Repo, :auto)
{:ok, _} = Application.ensure_all_started(:ex_machina)
Application.put_env(:wallaby, :base_url, HineshBlogsWeb.Endpoint.url)
Application.put_env(:wallaby, :screenshot_on_failure, true)
{:ok, _} = Application.ensure_all_started(:wallaby)
config/test.exs:
use Mix.Config

# Configure your database
#
# The MIX_TEST_PARTITION environment variable can be used
# to provide built-in test partitioning in CI environment.
# Run `mix help test` for more information.
config :hinesh_blogs, HineshBlogs.Repo,
  username: "postgres",
  password: "postgres",
  database: "hinesh_blogs_test#{System.get_env("MIX_TEST_PARTITION")}",
  hostname: "db",
  pool: Ecto.Adapters.SQL.Sandbox,
  pool_size: 10

# We don't run a server during test. If one is required,
# you can enable the server option below.
config :hinesh_blogs, HineshBlogsWeb.Endpoint,
  http: [port: 4002],
  server: true

config :hinesh_blogs, :sql_sandbox, true

# Print only warnings and errors during test
config :logger, level: :warn

# Selenium
config :wallaby,
  otp_app: :hinesh_blogs_web,
  base_url: "http://localhost:4002/",
  driver: Wallaby.Selenium,
  hackney_options: [timeout: :infinity, recv_timeout: :infinity]
I run the tests using the command: docker-compose run app mix test
Do I need any additional configuration to run these tests, and if not, what is the best way to configure Wallaby to use docker containers?
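A hedged observation (an assumption to verify, not a tested fix): with the browser running in the selenium container, the base_url of http://localhost:4002/ points the browser at the selenium container itself, not at the Phoenix app. The compose file gives the app container the hostname app, so the config would look roughly like:
# config/test.exs — sketch only; the hostname "app" comes from the compose file above
config :wallaby,
  otp_app: :hinesh_blogs_web,
  base_url: "http://app:4002/",
  driver: Wallaby.Selenium
Note that test_helper.ex also overwrites :base_url with HineshBlogsWeb.Endpoint.url, so that line would need the same treatment for the change to stick.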

Selenium routing with subdomains

I have created a docker compose file for doing Capybara tests inside a container.
The problem I'm currently facing is that I can't find a way to route the subdomains of my lvh.me domain. When I add lvh.me to the hosts file of the Selenium container I get the same result: my tests fail. How can I add routing for subdomains so that Selenium accepts subdomains like {{user}}.lvh.me:3001?
My Capybara configuration
Capybara.register_driver :selenium do |app|
  Capybara.app_host = "http://0.0.0.0:3001"
  Capybara.server_host = '0.0.0.0'
  Capybara.server_port = '3001'
  Capybara.always_include_port = true

  args = ['--no-default-browser-check', '--headless', '--start-maximized']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://hub:4444/wd/hub",
    desired_capabilities: caps
  )
end

Capybara.configure do |config|
  config.default_driver = :rack_test
  config.javascript_driver = :selenium
end
And my docker compose file
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  redis:
    image: redis
    volumes:
      - ./tmp/redis:/var/lib/redis/data
  web:
    build: .
    environment:
      - REDIS_URL=redis://redis
      - DATABASE_HOST=db
    command: sh "/myapp/docker-entrypoint.sh"
    volumes:
      - .:/myapp
    links:
      - db
      - redis
      - hub
    depends_on:
      - db
      - redis
    ports:
      - "3001:3001"
      - "3000:3000"
  hub:
    container_name: hub
    image: selenium/hub:3.9
    ports:
      - "4444:4444"
  selenium:
    container_name: selenium
    image: selenium/node-chrome:3.9
    environment:
      HUB_PORT_4444_TCP_ADDR: hub
      HUB_PORT_4444_TCP_PORT: 4444
    depends_on:
      - hub
    links:
      - hub
Firstly, you shouldn't be specifying the Capybara config inside the driver registration. Secondly, this is going to assume you're running your tests on the web docker instance -- if you're actually trying to run your tests on the host then things would be slightly different.
Capybara.app_host needs to be set to a URL that points to where the app under test is running from the perspective of the browser. In your case the browser is running on the selenium docker instance, and the tests start the AUT on the web instance - that means Capybara.app_host should be http://web (the port is not needed since you've specified always_include_port). That means you should end up with
Capybara.register_driver :selenium do |app|
  args = ['--no-default-browser-check', '--headless', '--start-maximized']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://hub:4444/wd/hub",
    desired_capabilities: caps
  )
end

Capybara.configure do |config|
  config.app_host = "http://web"
  config.server_host = '0.0.0.0'
  config.server_port = '3001'
  config.always_include_port = true
  config.default_driver = :rack_test
  config.javascript_driver = :selenium
end
Your next issue is that you want to use lvh.me, which resolves to 127.0.0.1, but you need it to resolve to whatever IP is assigned to the web docker instance. If you have a fixed number of subdomains used in your tests, you should be able to handle that via link aliases specified in the selenium docker instance config - https://docs.docker.com/compose/compose-file/#links - or via network aliases if you specify networks in your docker compose config - https://docs.docker.com/compose/compose-file/#aliases - as sketched below. If you actually need to resolve a wildcard (*.lvh.me) then you'll need to run your own DNS server (possibly in your docker setup) with a wildcard CNAME entry that resolves *.lvh.me to web.
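For illustration, a minimal sketch of the network-aliases idea (the subdomain and network names are placeholders for whichever fixed subdomains the tests use):
# docker-compose sketch: aliases make the selenium container resolve these
# names to the web container's IP on the shared network.
services:
  web:
    networks:
      test_net:
        aliases:
          - alice.lvh.me
          - bob.lvh.me
  selenium:
    networks:
      - test_net

networks:
  test_net: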
