Selenium routing with subdomains - ruby-on-rails

I have created a docker compose file for doing Capybara tests inside a container.
The problem I'm currently facing is that I can't find a way to route the subdomains of my lvh.me domain. When I add lvh.me to the hosts file of Selenium I get the same result: my tests fail. How can I add routing for subdomains to Selenium so it accepts subdomains like {{user}}.lvh.me:3001?
My Capybara configuration
Capybara.register_driver :selenium do |app|
  Capybara.app_host = "http://0.0.0.0:3001"
  Capybara.server_host = '0.0.0.0'
  Capybara.server_port = '3001'
  Capybara.always_include_port = true
  args = ['--no-default-browser-check', '--headless', '--start-maximized']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://hub:4444/wd/hub",
    desired_capabilities: caps
  )
end

Capybara.configure do |config|
  config.default_driver = :rack_test
  config.javascript_driver = :selenium
end
And my docker compose file
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  redis:
    image: redis
    volumes:
      - ./tmp/redis:/var/lib/redis/data
  web:
    build: .
    environment:
      - REDIS_URL=redis://redis
      - DATABASE_HOST=db
    command: sh "/myapp/docker-entrypoint.sh"
    volumes:
      - .:/myapp
    links:
      - db
      - redis
      - hub
    depends_on:
      - db
      - redis
    ports:
      - "3001:3001"
      - "3000:3000"
  hub:
    container_name: hub
    image: selenium/hub:3.9
    ports:
      - "4444:4444"
  selenium:
    container_name: selenium
    image: selenium/node-chrome:3.9
    environment:
      HUB_PORT_4444_TCP_ADDR: hub
      HUB_PORT_4444_TCP_PORT: 4444
    depends_on:
      - hub
    links:
      - hub

Firstly, you shouldn't be specifying the Capybara config inside the driver registration. Secondly, this is going to assume you're running your tests on the web docker instance -- if you're actually trying to run your tests on the host then things would be slightly different.
Capybara.app_host needs to be set to a URL that points to where the app under test is running, from the perspective of the browser. In your case the browser is running on the selenium docker instance, and the tests should start the AUT on the web instance - that means Capybara.app_host should be http://web (the port is not needed since you've specified always_include_port). That means you should end up with
Capybara.register_driver :selenium do |app|
  args = ['--no-default-browser-check', '--headless', '--start-maximized']
  caps = Selenium::WebDriver::Remote::Capabilities.chrome("chromeOptions" => {"args" => args})
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://hub:4444/wd/hub",
    desired_capabilities: caps
  )
end

Capybara.configure do |config|
  config.app_host = "http://web"
  config.server_host = '0.0.0.0'
  config.server_port = '3001'
  config.always_include_port = true
  config.default_driver = :rack_test
  config.javascript_driver = :selenium
end
Your next issue is that you want to use lvh.me, which resolves to 127.0.0.1, but you need it to resolve to whatever IP is assigned to the web docker instance. If you have a fixed number of subdomains used in your tests, you should be able to handle that via link aliases specified in the selenium docker instance config - https://docs.docker.com/compose/compose-file/#links - or via network aliases if you specify networks in your docker compose config - https://docs.docker.com/compose/compose-file/#aliases. If you actually need to resolve a wildcard (*.lvh.me) then you'll need to run your own DNS server (possibly in your docker setup) with a wildcard CNAME entry that resolves *.lvh.me to web.
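For illustration, a minimal sketch of the network-alias approach, assuming the tests only ever hit two fixed subdomains (user1 and user2 are placeholders for whatever subdomains your tests actually use). The aliases go on the web service, so every container on that network - including the selenium node - resolves those names to web:

# Sketch only: merge into the compose file above.
networks:
  test_net:

services:
  web:
    networks:
      test_net:
        aliases:
          - user1.lvh.me
          - user2.lvh.me
  hub:
    networks:
      - test_net
  selenium:
    networks:
      - test_net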

Related

Set up Selenium with a docker rails app on wsl2

After following a couple of tutorials I can't figure out how to configure Selenium with:
a rails app on debian wsl2
the gem "Webdrivers"
docker
With my local application, Selenium worked properly with
require "webdrivers"
Since I have dockerized the app, everything works properly except Selenium.
When I run a js test I get
Webdrivers::NetworkError:
Net::HTTPServerException: 404 "Not Found" with https://github.com/mozilla/geckodriver/releases/download/v0/geckodriver-v0-linux64.tar.gz
It seems that the request URL is not built correctly ("v0").
When I hard code a version number in my config file like so
Webdrivers::Geckodriver.required_version = "0.32.0"
geckodriver is downloaded properly.
1- How can I configure webdrivers to handle versions automatically?
Then I get this error message
Selenium::WebDriver::Error::SessionNotCreatedError:
Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line
What I understand is that it's trying to find the Firefox binary. I have Firefox installed on the Windows side and firefox-esr on the wsl2/debian side.
2- How can I configure webdrivers and/or docker to run my tests with selenium?
I have tried to hard code the firefox path.
Here is my webdrivers config, with many settings attempts.
Capybara.register_driver :remote_firefox do |app|
  firefox_capabilities = ::Selenium::WebDriver::Remote::Capabilities.firefox
  # (
  #   "moz:firefoxOptions": {
  #     "args": %w[headless window-size=1400,1400]
  #   },
  #   "binary": "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"
  # )
  firefox_options = Selenium::WebDriver::Firefox::Options.new
  firefox_options.binary = "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://172.28.0.7:5555",
    # desired_capabilities: firefox_capabilities,
    options: firefox_options
  )
end

RSpec.configure do |config|
  config.before(:each) do
    Capybara.current_driver = :remote_firefox
    Capybara.javascript_driver = :remote_firefox
    Capybara.app_host = "http://#{IPSocket.getaddress(Socket.gethostname)}:3000"
    Capybara.server_host = IPSocket.getaddress(Socket.gethostname)
    Capybara.server_port = 5555
  end
end
###############################################
# Selenium::WebDriver::Firefox.path = "/mnt/c/Users/Fz/AppData/Local/Microsoft/WindowsApps/firefox.exe"
# Selenium::WebDriver::Firefox.path = "/usr/bin/firefox"
# Selenium::WebDriver::Firefox.path = "//wsl.localhost/Debian/usr/bin/firefox"
# options = Selenium::WebDriver::Firefox::Options.new
# options.binary = "/usr/bin/firefox"
#
# driver = Selenium::WebDriver.for :remote,
#   url: "http://localhost:4444",
#   desired_capabilities: :firefox,
#   options: ""
#########################################
With 'desired_capabilities' I get
ArgumentError:
  unknown keyword: :desired_capabilities
and without it I get
Selenium::WebDriver::Error::WebDriverError
  unexpected response, code=404, content-type=""
{
  "value": {
    "error": "unknown command",
    "message": "Unable to find handler for (POST) \u002fsession",
    "stacktrace": ""
  }
}
Here is my docker-compose file
version: '3'

networks:
  development:
  test:

volumes:
  db_data:
  es_data:
  gem_cache:
  shared_data:
  selenium_data:

services:
  (...)
  capoeiragem_dev:
    build:
      context: .
      dockerfile: ./config/containers/Dockerfile.dev
    container_name: capoeiragem_dev
    volumes:
      - .:/var/app
      - shared_data:/usr/share
      - gem_cache:/usr/local/bundle/gems
    networks:
      - development
    ports:
      - 3000:3000
    stdin_open: true
    tty: true
    env_file: .env.development
    entrypoint: entrypoint-dev.sh
    command: [ 'bundle', 'exec', 'rails', 'server', '-p', '3000', '-b', '0.0.0.0' ]
    environment:
      RAILS_ENV: development
    depends_on:
      - capoeiragem_db
      - capoeiragem_es
  (...)
  guard:
    tty: true
    stdin_open: true
    build:
      context: .
      dockerfile: ./config/containers/Dockerfile.dev
    volumes:
      - .:/var/app
      - gem_cache:/usr/local/bundle/gems
    networks:
      - development
    environment:
      RAILS_ENV: development
    command: [ 'bundle', 'exec', 'guard', '--no-bundler-warning' ]
    ports:
      - 35729:35729
    depends_on:
      - capoeiragem_dev
      - capoeiragem_selenium
  (...)
  capoeiragem_firefox:
    image: selenium/node-firefox:dev
    networks:
      - development
    shm_size: 2gb
    depends_on:
      - capoeiragem_selenium
    environment:
      - SE_EVENT_BUS_HOST=capoeiragem_selenium
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  capoeiragem_chrome:
    image: selenium/node-chrome:dev
    networks:
      - development
    shm_size: 2gb
    depends_on:
      - capoeiragem_selenium
    environment:
      - SE_EVENT_BUS_HOST=capoeiragem_selenium
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  capoeiragem_selenium:
    image: selenium/hub:latest
    volumes:
      - selenium_data:/usr/share/selenium/data
    networks:
      - development
    container_name: capoeiragem_selenium
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
I can see the firefox node at http://localhost:4444/ui
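No accepted fix is shown here, but one direction worth sketching (assumptions: a Selenium grid node supplies its own Firefox and geckodriver, so the webdrivers gem and any local binary path can be dropped when driving the browser remotely; the port 3010 below is an arbitrary free port):

Capybara.register_driver :remote_firefox do |app|
  options = Selenium::WebDriver::Firefox::Options.new
  options.add_argument('-headless')

  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://capoeiragem_selenium:4444", # the hub, by compose service name
    options: options
  )
end

# The app under test boots in the container running rspec, so advertise an
# address the Firefox node can reach; don't reuse the grid's own port here.
Capybara.server_host = '0.0.0.0'
Capybara.server_port = 3010
Capybara.app_host = "http://#{IPSocket.getaddress(Socket.gethostname)}:#{Capybara.server_port}"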

Firefox in docker - why can't my tests show activestorage blobs?

I am writing a Rails application that supports image uploads using ActiveStorage. I'm trying to write a system test, which runs in Firefox. Firefox is driven by Selenium over the network; it is installed inside a docker container.
I can write a system test that runs, and passes. The docker image I'm using supports interactively viewing the tests through noVNC - if you navigate to http://localhost:7900 in a browser, you can watch the tests run and interact with the test browser inside your host system browser.
The test I've written uploads a file, and checks that the file is uploaded by going to its dedicated "show" page. The page loads, and renders an image tag like
<img src="http://web:37193/rails/active_storage/blobs/redirect/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBVZz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--4c0a214932e528a8cea25a8f6bd6aad7b1e19994/rails-logo.svg">
Unfortunately, the image is blank in the browser and Firefox reports that it "could not load the image". The source is definitely correct, and the network appears to be able to access it. If I add a breakpoint and visit the given URL in the test browser, it downloads the image just fine. By the way, it works fine in my development environment too. "37193" is the port Capybara is running on. "web" is the name of my rails container - I'll share my docker-compose below.
In my spec/rails_helper.rb,
Capybara.server_host = '0.0.0.0'
Capybara.app_host = "http://#{ENV.fetch("HOSTNAME")}"
Capybara.default_host = "http://#{ENV.fetch("HOSTNAME")}"

config.before(:each, type: :system) do
  # To watch the system tests interactively, visit localhost:7900
  driven_by :selenium, using: :firefox, screen_size: [1000, 1000], options: { browser: :remote, url: 'http://firefox:4444' }
  default_url_options[:host] = "#{ENV.fetch("HOSTNAME")}"
end
and docker-compose.yml,
version: "3.9"
services:
db:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: app
MYSQL_USER: user
MYSQL_PASSWORD: password
web:
build: .
container_name: web
hostname: web
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/web
- gems:/usr/local/bundle/
ports:
- "3000:3000"
depends_on:
- db
- firefox
tty: true
stdin_open: true
environment:
DB_USER: root
DB_NAME: app
DB_PASSWORD: password
DB_HOST: db
firefox:
image: selenium/standalone-firefox
container_name: firefox
ports:
- "7900:7900"
- "4444:4444"
shm_size: "2gb"
restart: unless-stopped
volumes:
gems:
The issue turned out to be that I was using an SVG in tests. I only used a PNG when testing in my development environment, so never noticed the difference.
An img whose src is an SVG is valid. However, the image will only be displayed correctly if the Content-Type header identifies it as an image. To prevent XSS, ActiveStorage serves SVGs as application/octet-stream. For the case I'm testing (as the src of an img tag) that protection is unnecessary, because scripts are disabled in that context. But if a user opened the image in a new tab, they'd be vulnerable.
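If a test really needs the SVG to render, one possible workaround (a sketch - weigh the XSS trade-off described above before using it; the initializer path is just a convention) is to drop SVG from the list of content types ActiveStorage serves as binary:

# config/initializers/active_storage.rb
# Serve SVGs with their real content type instead of application/octet-stream.
# Only do this if you trust your uploads: an SVG opened directly in a tab
# can run scripts in your origin.
Rails.application.config.active_storage.content_types_to_serve_as_binary -= ["image/svg+xml"]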

Connect Rails app and browserless/chrome container

I am trying to use the browserless/chrome docker image and connect to it from a Ruby on Rails (6.0) app running in another docker container. I'm having difficulty getting the two to connect.
The browserless container is running properly (I know because I can access the web debug interface it comes with).
In the Rails app, I am trying to use capybara with selenium-webdriver to drive a headless chrome instance.
I am getting errors saying either that it can't connect, or that it cannot POST to '/session' - which I think means it is connecting but can't initialize a session within chrome for some reason.
Below is my current configuration + code:
#docker-compose.yml
version: "3"
services:
  web:
    build: .
    image: sync-plus.web:latest
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    environment:
      RAILS_ENV: development
      RACK_ENV: development
      REDIS_URL: redis://redis:6379/0
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bin/puma -C config/puma.rb"
    depends_on:
      - postgres
      - mailcatcher
      - redis
  sidekiq:
    image: sync-plus.web:latest
    volumes:
      - .:/app
    environment:
      RAILS_ENV: development
      RACK_ENV: development
      REDIS_URL: redis://redis:6379/0
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bundle exec sidekiq"
    depends_on:
      - postgres
      - mailcatcher
      - web
      - redis
  guard:
    image: sync-plus.web:latest
    volumes:
      - .:/app
    environment:
      RAILS_ENV: test
      RACK_ENV: test
      REDIS_URL: redis://redis:6379/1
      DATABASE_URL_TEST: postgres://postgres:super@postgres:5432/salesync_test
    command: "bin/rspec"
    depends_on:
      - postgres
  redis:
    image: redis:alpine
    command: redis-server
    volumes:
      - 'redis:/data'
  postgres:
    image: postgres:latest
    volumes:
      - postgres_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: super
  mailcatcher:
    image: schickling/mailcatcher:latest
    ports:
      - "1080:1080"
  browserless:
    image: browserless/chrome:latest
    ports:
      - "4444:3000"
volumes:
  postgres_volume:
  redis:
#rails_helper.rb
...
require "rspec/rails"
# This require needs to be below the rspec/rails require
require 'capybara/rails'
...
I am using an RSpec test to check whether the browserless connection/driving is functional:
#browserless_spec.rb
require "rails_helper"

RSpec.describe 'Browserless', type: :feature do
  describe "Connection" do
    it "can connect to browserless container" do
      # selenium webdriver looks at localhost:4444 by default
      driver = Selenium::WebDriver.for :remote
      driver.visit("/users/sign_in")
      expect(page).to have_selector("btn btn-outline-light my-2")
    end
  end
end
I can't seem to find any solid solutions or leads elsewhere, so any help would be much appreciated!
Thanks!
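One mismatch worth ruling out (a sketch under assumptions, not a confirmed fix): selenium-webdriver defaults to localhost:4444, but from inside the compose network the browserless container is reachable by its service name on its internal port 3000 - the 4444 in "4444:3000" only applies on the host. browserless also mounts its WebDriver-compatible endpoint under the /webdriver path (check the docs for the image version you run). A connection attempt from the web container might then look like:

# Hypothetical: address browserless by compose service name and internal port.
# /webdriver is where browserless exposes its WebDriver endpoint.
driver = Selenium::WebDriver.for(
  :remote,
  url: "http://browserless:3000/webdriver",
  desired_capabilities: :chrome
)
driver.get("http://web:3000/users/sign_in")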

Running Gitlab in Docker

I want to host a private Gitlab server on my Debian VPS. I figured using Docker would be a good setup.
I tried running Gitlab with the following code:
version: '3'
services:
  gitlab:
    image: 'gitlab/gitlab-ce'
    restart: always
    hostname: 'gitlab.MYDOMAIN.com'
    links:
      - postgresql:postgresql
      - redis:redis
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        postgresql['enable'] = false
        gitlab_rails['db_username'] = "gitlab"
        gitlab_rails['db_password'] = "gitlab"
        gitlab_rails['db_host'] = "postgresql"
        gitlab_rails['db_port'] = "5432"
        gitlab_rails['db_database'] = "gitlabhq_production"
        gitlab_rails['db_adapter'] = 'postgresql'
        gitlab_rails['db_encoding'] = 'utf8'
        redis['enable'] = false
        gitlab_rails['redis_host'] = 'redis'
        gitlab_rails['redis_port'] = '6379'
        external_url 'http://gitlab.MYDOMAIN.com:30080'
        gitlab_rails['gitlab_shell_ssh_port'] = 30022
    ports:
      # both ports must match the port from external_url above
      - "30080:30080"
      # the mapped port must match the ssh_port specified above
      - "30022:22"
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - data/gitlab/config:/etc/gitlab:rw
    #   - data/gitlab/logs:/var/log/gitlab:rw
    #   - data/gitlab/data:/var/opt/gitlab:rw
  postgresql:
    restart: always
    image: postgres:9.6.2-alpine
    environment:
      - POSTGRES_USER=gitlab
      - POSTGRES_PASSWORD=gitlab
      - POSTGRES_DB=gitlabhq_production
    # the following are hints on what volumes to mount if you want to persist data
    # volumes:
    #   - data/postgresql:/var/lib/postgresql:rw
  redis:
    restart: always
    image: redis:3.0.7-alpine
Running this (docker-compose run -d) allows me to reach Gitlab on MYDOMAIN.com:30080, but not on gitlab.MYDOMAIN.com:30080.
Have I made an error in the configuration? Or do I need to use reverse proxies (NGINX or Traefik)?
I'm pretty sure the hostname: gitlab.MYDOMAIN.com needs to match the external_url 'http://gitlab.MYDOMAIN.com:30080' exactly, up to the port.
So for example:
hostname: gitlab.MYDOMAIN.com
. . . more configuration . . .
external_url 'http://gitlab.MYDOMAIN.com:30080'
Did you check that the subdomain gitlab in DNS is pointing to the right IP? It looks like an infrastructure problem more than a docker configuration one.
Regards
I managed to fix it myself!
I totally forgot to add an A record pointing gitlab.mydomain.com to the same IP address as the apex (@) record.
I added the following block to the nginx configuration:
upstream gitlab.mydomain.com {
  server 1.2.3.4:30080; # IP address of Docker container
}

server {
  server_name gitlab.mydomain.com;

  location / {
    proxy_pass http://gitlab.mydomain.com;
  }
}
I use upstream because otherwise the url set in new Gitlab projects is set to the IP address, as mentioned here.

How to set custom IP for Solr server from Docker compose file?

In short:
I have a hard time figuring out how to set a custom IP for a Solr container from the docker-compose.yml file.
Detailed
We want to deploy local dev environments, for Drupal instances, via Docker.
The problem is that while I can access the Solr server from the browser via the "traditional" http://localhost:8983/solr, Drupal cannot connect to it this way. The internal 0.0.0.0 and 127.0.0.1 don't work either. The only way Drupal can connect to the Solr server is via the LAN IP, which obviously differs for every station, and since the configuration in Drupal needs to be updated anyway, I thought that specifying a custom IP on which they can communicate would be my best choice, but it's not straightforward.
I am aware that assigning a static IP to the container is not the best solution, but it seems more feasible than tinkering with solr.in.sh, and if someone has a different approach to achieve this, I am open to solutions.
Most likely I could use some command line parameter along with docker run, but we need to run the containers with docker-compose up -d, so that wouldn't be an optimal solution.
Ideally I'd like an example Solr container section for the compose file. Thanks.
Note:
This link shows an example of how to set it, but I can't understand it well. Please keep in mind that I am by no means an expert.
Forgot to mention that the host is based on Linux, mostly Ubuntu and Debian.
Edit:
As requested, here is my compose file:
version: "2"
services:
db:
image: wodby/drupal-mariadb
environment:
MYSQL_RANDOM_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
# command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci # The simple way to override the mariadb config.
volumes:
- ./data/mysql:/var/lib/mysql
- ./docker-runtime/mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
php:
image: wodby/drupal-php:7.0 # Allowed: 7.0, 5.6.
environment:
DEPLOY_ENV: dev
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
PHP_XDEBUG_ENABLED: 1 # Set 1 to enable.
# PHP_SITE_NAME: dev
# PHP_HOST_NAME: localhost:8000
# PHP_DOCROOT: public # Relative path inside the /var/www/html/ directory.
# PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
# PHP_XDEBUG_ENABLED: 1
# PHP_XDEBUG_AUTOSTART: 1
# PHP_XDEBUG_REMOTE_CONNECT_BACK: 0 # This is needed to respect remote.host setting bellow
# PHP_XDEBUG_REMOTE_HOST: "10.254.254.254" # You will also need to 'sudo ifconfig lo0 alias 10.254.254.254'
links:
- db
volumes:
- ./docroot:/var/www/html
nginx:
image: wodby/drupal-nginx
hostname: testing
environment:
# NGINX_SERVER_NAME: localhost
NGINX_UPSTREAM_NAME: php
# NGINX_DOCROOT: public # Relative path inside the /var/www/html/ directory.
DRUPAL_VERSION: 7 # Allowed: 7, 8.
volumes_from:
- php
ports:
- "${PORT_WEB}:80"
pma:
image: phpmyadmin/phpmyadmin
environment:
PMA_HOST: db
PMA_USER: ${MYSQL_USER}
PMA_PASSWORD: ${MYSQL_PASSWORD}
ports:
- '${PORT_PMA}:80'
links:
- db
mailhog:
image: mailhog/mailhog
ports:
- "8002:8025"
redis:
image: redis:3.2-alpine
# memcached:
# image: memcached:1.4-alpine
# memcached-admin:
# image: phynias/phpmemcachedadmin
# ports:
# - "8006:80"
solr:
image: makuk66/docker-solr:4.10.3
volumes:
- ./docker-runtime/solr:/opt/solr/server/solr/mycores
# entrypoint:
# - docker-entrypoint.sh
# - solr-precreate
ports:
- "8983:8983"
# varnish:
# image: wodby/drupal-varnish
# depends_on:
# - nginx
# environment:
# VARNISH_SECRET: secret
# VARNISH_BACKEND_HOST: nginx
# VARNISH_BACKEND_PORT: 80
# VARNISH_MEMORY_SIZE: 256M
# VARNISH_STORAGE_SIZE: 1024M
# ports:
# - "8004:6081" # HTTP Proxy
# - "8005:6082" # Control terminal
# sshd:
# image: wodby/drupal-sshd
# environment:
# SSH_PUB_KEY: "ssh-rsa ..."
# volumes_from:
# - php
# ports:
# - "8006:22"
A docker run example would be
IP_ADDRESS=$(hostname -I)
docker run -d -p 8983:8983 solr bin/solr start -h ${IP_ADDRESS} -p 8983
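If you do want the compose-section example the question asks for, a fixed address is possible with a user-defined network plus ipam (a sketch - the subnet and address below are arbitrary placeholders; pick a range that doesn't collide with your other Docker networks):

# Sketch: compose file version "2" syntax; 172.25.0.0/16 and 172.25.0.10
# are placeholders.
networks:
  solr_net:
    ipam:
      config:
        - subnet: 172.25.0.0/16

services:
  solr:
    image: makuk66/docker-solr:4.10.3
    networks:
      solr_net:
        ipv4_address: 172.25.0.10
    ports:
      - "8983:8983"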
Instead of assigning static IPs, you could use the following method to get the container's IP dynamically.
When you link containers together, they share their network information (IP, port) with each other. The information is stored in each container as environment variables.
Example
docker-compose.yml
service:
  build: .
  links:
    - redis
  ports:
    - "3001:3001"
redis:
  build: .
  ports:
    - "6369:6369"
The service container will now have the following environment variables:
Dynamic IP address stored within the "service" container:
REDIS_PORT_6379_TCP_ADDR
Dynamic port stored within the "service" container:
REDIS_PORT_6379_TCP_PORT
You can always check this by shelling into the container and looking for yourself.
docker exec -it [ContainerID] bash
printenv
Inside your nodeJS app you can use these environment variables in your connection function via process.env.
let client = redis.createClient({
  host: process.env.REDIS_PORT_6379_TCP_ADDR,
  port: process.env.REDIS_PORT_6379_TCP_PORT
});
Edit
Here is the updated docker-compose.yml "solr" section:
solr:
  image: makuk66/docker-solr:4.10.3
  volumes:
    - ./docker-runtime/solr:/opt/solr/server/solr/mycores
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
  ports:
    - "8983:8983"
  links:
    - db
In the above example the "solr" container is now linked with the "db" container. This is done using the "links" field.
You can do the same thing if you want to link the solr container to any other container within the docker-compose.yml file.
The db container's information will now be available to the solr container (via the environment variables I mentioned earlier).
Without the linking, you will not see those environment variables listed when you run printenv.
