TL;DR: Any idea how to properly configure Capybara to drive a remote Selenium browser in a Docker container with the default Rails minitest system tests?
I'm running Rails in a Dockerized environment.
Now I want to write some system tests, but since I'm running inside Docker I've run into some issues.
I'm using the default test suite (minitest?) with the capybara and selenium-webdriver gems.
I've already installed the chromedriver package in the container using the following:
RUN apt-get install -y chromedriver \
&& ln -s /usr/lib/chromium-browser/chromedriver /usr/local/bin
But running rails test:system outputs the following error: Selenium::WebDriver::Error::WebDriverError: Unable to find chromedriver.
In fact, I don't even know whether Chrome itself is installed or not:
which chrome outputs nothing.
which chromium outputs /usr/bin/chromium.
I also tried with Xvfb, without success.
So (since I had no clue) I tried to go further and actually use a Dockerized system test environment as well.
I found some Docker images from Selenium, so alongside my Rails and database containers I now run a selenium/standalone-chrome container (the actual docker-compose.yml I'm using is here).
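For reference, such a setup is roughly shaped like this (a minimal sketch, not the linked file; the service names, images and ports here are assumptions):
# docker-compose.yml (illustrative sketch)
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - selenium
  db:
    image: mysql:5.7
  selenium:
    image: selenium/standalone-chrome
    ports:
      - "4444:4444"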
Then I found some useful information about the configuration needed to let Capybara drive the remote Selenium browser.
All the examples I found on the internet use RSpec, but since I'm using the default minitest I tried to adapt the Capybara driver configuration, though I had some doubts about how to do it and where to put it.
For system tests I guessed that the best location is the file application_system_test_case.rb. I found and tried many different Capybara configurations, and ended up with the following, which seems to be the most complete (available here).
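In outline, such a configuration looks like this (a minimal sketch, assuming the Selenium service is reachable as selenium on port 4444; the driver name :remote_chrome is arbitrary):
# test/application_system_test_case.rb (illustrative sketch)
require "test_helper"

# Register a Capybara driver that talks to the remote Selenium container
# instead of looking for a local chromedriver binary.
Capybara.register_driver :remote_chrome do |app|
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://selenium:4444/wd/hub",
    desired_capabilities: :chrome
  )
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :remote_chrome
end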
At this point the test seems to run properly, since I get no errors, but it always fails.
It fails whether or not I call the driver configuration (the setup_remote method, where I defined the server host and port) before the test case.
With or without the call, I get the "site can't be reached" error (here is the screenshot).
Here is the test file I used (it tests some dynamic React rendering).
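For context, that test is roughly of this shape (the route, button and selector below are made up for illustration):
# test/system/dynamic_display_test.rb (illustrative sketch)
require "application_system_test_case"

class DynamicDisplayTest < ApplicationSystemTestCase
  test "shows the dynamic panel" do
    visit root_url
    click_on "Show details"            # hypothetical React toggle
    assert_selector ".details-panel"   # hypothetical element rendered by React
  end
end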
However, I can access the Selenium container at the given URL from the browser on my host machine, and both containers can see each other (I ran some pings from within the containers' shells).
The following SO questions, while helpful, didn't work for me:
Dockerized selenium browser cannot access Capybara test url
How can I run headless browser system tests in Rails 5.1?
So, any idea how to properly configure Capybara to drive a remote Selenium browser in a Docker container with the default Rails minitest system tests?
Thank you very much.
You have to override the host method so Capybara uses the container's IP address. Check out this post: https://medium.com/@pacuna/using-rails-5-1-system-tests-with-docker-a90c52ed0648
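Applied to a minitest system test setup, that boils down to something like this (a sketch; the server port is an assumption, and host! is the Rails 5.1 helper that sets Capybara.app_host):
# test/application_system_test_case.rb (illustrative sketch)
require "socket"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :remote_chrome # registered as in the question

  def setup
    super
    # Bind the test server to all interfaces so the Selenium container can reach it,
    # then point Capybara at this container's IP instead of 127.0.0.1.
    Capybara.server_host = "0.0.0.0"
    Capybara.server_port = 3001
    host! "http://#{IPSocket.getaddress(Socket.gethostname)}:#{Capybara.server_port}"
  end
end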
Related
My PHP project is stored in WSL, accessed with PhpStorm installed on Windows, and run with Docker Desktop installed on Windows.
The project itself is totally fine, but running tests is not possible because PhpStorm cannot find the vendor autoload or phpunit.phar in the Test Framework configuration.
Setup:
Windows 10 with WSL2 Ubuntu 20.04 LTS
PhpStorm on Windows
Docker Desktop on Windows, Docker Compose files in WSL
Code in home folder in WSL (see following screens)
I read in some older threads that Docker Compose v2 needs to be enabled in Docker Desktop. It is:
Docker is configured inside PhpStorm and shows that the connection is successful (I know it works because things like Xdebug are working without any issues):
Notice that I configured a path mapping here for the project root.
in WSL: \\wsl$\Ubuntu\home\USERNAME\workspace\PROJECTNAME-web-docker
in Docker: /var/www/PROJECTNAME-web
I can see that those paths are correct by either logging into the Docker container or by checking the Service Tab of PhpStorm and inspecting files:
This is my CLI Interpreter using the docker-compose configuration:
It does not matter whether I use the existing container or have it start a new one;
the PHP version is always detected.
And finally the error inside of Test Framework:
Here I tried different things:
use composer autoloader or phpunit.phar
it doesn't matter if I use a full path /var/www... or just vendor/...
tried different path mappings here
clicking on refresh shows this error in a small popup
Cannot parse PHPUnit version output: Could not open input file: /var/www/PROJECTNAME-web/SUBTOPIC/vendor/phpunit/phpunit/phpunit
autoload.php is definitely available and correct, phpunit is installed and available.
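For what it's worth, the exact file PhpStorm complains about can be checked from inside the container like this (the compose service name php is an assumption):
docker-compose exec php php /var/www/PROJECTNAME-web/SUBTOPIC/vendor/phpunit/phpunit/phpunit --version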
Maybe someone has a hint what is missing or wrong? Thanks!
EDIT:
How do I know that autoload is available or path mapping is correct?
I have Xdebug configured and running. When Xdebug stops in my code, I know that the path mapping is correct. The output of Debug -> Console for example shows stuff like this:
PHP Deprecated: YAML mapping driver is deprecated and will be removed in Doctrine ORM 3.0, please migrate to annotation or XML driver. in /var/www/PROJECTNAME-web/SUBTOPIC/vendor/doctrine/orm/lib/Doctrine/ORM/Mapping/Driver/YamlDriver.php on line 62
So I know the path mapping for Xdebug works, but it seems the Test Framework config does not like it.
My Python project is very Windows-centric; we want the benefits of containers, but we can't give up Windows just yet.
I'd like to be able to use the Dockerized remote Python interpreter feature that comes with IntelliJ. This works flawlessly with Python running in a standard Linux container, but appears not to work at all for Python running in a Windows container.
I've built a new image based on a standard Microsoft Server core image. I've installed Miniconda, bootstrapped a Python environment and verified that I can start an interactive Python session from the command prompt.
Whenever I try to set this up I get an error message: "Can't retrieve image ID from build stream". This occurs at the moment when IntelliJ would normally have detected the Python interpreter and its installed libraries.
I also tried giving the full path for the interpreter: c:\miniconda\envs\htp\python.exe
I've never seen any mention in the documentation that this works, but nor have I seen any mention that it does not. I totally accept that Windows containers are an oddity, so it's entirely possible that IntelliJ's remote Python feature was never tested against Python running in Windows containers.
So, has anybody got this feature working with Python running on a Windows container yet? Is there any reason to believe that it does or does not work?
Regrettably, it is not supported yet. Please vote for the feature request https://youtrack.jetbrains.com/issue/PY-45222 in order to increase its priority.
I am trying to containerize our automation tests to run in a Docker environment. When the build runs on the automation code, it creates a Docker image and updates it in DTR. I have a separate Jenkins pipeline which runs the automation commands in the Docker image and uploads the results to the workspace. All of this works fine in a non-Docker environment (i.e., in a local Mac terminal), but the same tests fail in the Docker environment. I am trying to figure out a solution, but nothing has worked so far.
I get the errors below when running the Protractor tests in the Docker environment:
After # test/cucumber/stepDefinitions/hooks.ts:31
WebDriverError: invalid session id
(Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Linux 4.9.125-linuxkit x86_64)
I built my Docker image FROM circleci/node (https://hub.docker.com/r/circleci/node/), and this image has the required libraries installed (node, npm, yarn, Chrome and ChromeDriver).
Before running the tests I made sure the protractor, cucumber and webdriver modules are installed.
Even so, I also try installing Chrome and ChromeDriver while building the image, using the apt-get package manager.
The Docker environment is Debian GNU/Linux 9.
The ChromeDriver version is 73.0.3683.75-1~deb9u1.
The Google Chrome version is 73.0.3683.103.
I am running headless.
I make sure webdriver-manager is updated before starting it.
The webdriver version is 13.0.
I run the following:
webdriver-manager update --ignore_ssl --versions.chrome 73.0.3683.75-1~deb9u1
webdriver-manager start --detach
protractor test/cucumber/config/cucumberConfig.ts
I expect all the tests to run in the Docker environment the same way they ran in the Mac terminal, but I get the errors below:
And Log out application # test/cucumber/stepDefinitions/common-step-def.ts:64
✖ After # test/cucumber/stepDefinitions/hooks.ts:31
WebDriverError: invalid session id
(Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Linux 4.9.125-linuxkit x86_64)
at Object.checkLegacyResponse (/node_modules/selenium-webdriver/lib/error.js:546:15)
at parseHttpResponse (/node_modules/selenium-webdriver/lib/http.js:509:13)
at doSend.then.response (/node_modules/selenium-webdriver/lib/http.js:441:30)
at
at process._tickCallback (internal/process/next_tick.js:189:7)
From: Task: WebDriver.takeScreenshot()
at thenableWebDriverProxy.schedule (/node_modules/selenium-webdriver/lib/webdriver.js:807:17)
at thenableWebDriverProxy.takeScreenshot (/node_modules/selenium-webdriver/lib/webdriver.js:1085:17)
at run (/node_modules/protractor/built/browser.js:59:33)
at ProtractorBrowser.to.(anonymous function) [as takeScreenshot] (/node_modules/protractor/built/browser.js:67:16)
at World. (/test/cucumber/stepDefinitions/hooks.ts:36:17)
Any thoughts?
I ran into the same problem recently. It looks like the browser instance can't start for some reason. In my case, adding --disable-dev-shm-usage to the Chrome options solved the issue.
ChromeOptions options = new ChromeOptions();
options.addArguments("--disable-dev-shm-usage");
ChromeDriver driver = new ChromeDriver(options);
Why this helps:
By default, Docker runs a container with a /dev/shm shared memory space 64MB. This is typically too small for Chrome and will cause Chrome to crash when rendering large pages. To fix, run the container with docker run --shm-size=1gb to increase the size of /dev/shm. Since Chrome 65, this is no longer necessary. Instead, launch the browser with the --disable-dev-shm-usage flag:
~ Google troubleshooting guide
According to that, another idea would be to try using --shm-size=1gb when running the container if you really want to use /dev/shm.
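Since the question uses Protractor rather than Java, the equivalent there is to pass the flag through the Chrome args in the Protractor config, roughly like this (an excerpt-style sketch; the surrounding config is omitted):
// Excerpt from a Protractor config (e.g. cucumberConfig.ts); illustrative only
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    // '--headless' matches the asker's setup; '--disable-dev-shm-usage' avoids the 64MB /dev/shm limit
    args: ['--headless', '--disable-dev-shm-usage']
  }
},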
Maybe check that the Chrome version is compatible with the OS version you are using in Docker.
From the logs it seems the page is not even loaded, or crashed on loading. Either it requires more memory to load the page, or Chrome extensions might have been enabled.
Try adding these options to the config:
chromeOptions: {
args: [
'incognito',
'disable-extensions',
'disable-infobars',
]
}
I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1; however, it seems not very practical for daily development use, I mean the develop, run, debug cycle, just before deployment to a test server.
Am I missing something that would make this workflow possible? Or is Docker not suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches Rails (the guard-rails gem) and also runs my tests whenever I make changes (the guard-minitest gem). That's important for fast turnaround time in development.
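That Guardfile is roughly of this shape (a sketch; the exact watch patterns vary per project):
# Guardfile (illustrative sketch)
guard 'rails' do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

guard :minitest do
  watch(%r{^test/(.*)_test\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
  watch('test/test_helper.rb') { 'test' }
end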
I launch Docker daemonized, mounting a local directory into the Docker image, with port 3000 exposed, running a never-ending command:
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only when I've made Gemfile changes or the first time.
bundle exec rake db:migrate is only when I've made DB changes or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process (in the docker container) to 'reach out' to the host's port to connect and send debugging information. Unfortunately binding ports puts them 'in use', and so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present, and a change in either the way Docker handles networking, or in the way the ruby-debug-ide command handles transmitting debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
Intro
Hi, I'm developing a Rails application which uses capybara, selenium-webdriver and rspec for tests.
Problem
Now I have functional tests which run in Firefox (the default Selenium browser) and work with redirects to other hosts, for example an RSpec before hook that gets a fresh Google access token.
Locally on my laptop all the tests run successfully:
bundle exec rspec
But Codeship's builds fail.
Questions
Do I need to set up Codeship to support Firefox tests? If so, how can I do it?
Does Codeship support redirects to other hosts?
Thanks!
Codeship supports running Selenium tests in CI; you can find more info here: https://documentation.codeship.com/continuous-integration/browser-testing/
However, when I tried to run the Selenium tests in CI, Chrome failed to start 90% of the time, so I am planning to spin up a Selenium Grid elsewhere and run the tests from Codeship.