Websocket-Rails gives "eventmachine not initialized" error in test environment

I have recently started using the websocket-rails gem and all is fine in development and even in production, but when I run rspec tests I get the following error:
Failure/Error: Unable to find matching line from backtrace
RuntimeError:
eventmachine not initialized: evma_install_oneshot_timer
The message seems to appear on the second test I run. When I run a previously failing test on its own, by adding :focus => true to the test and then running rspec --tag focus, it passes OK. If I add the focus to more than one test, usually the first one passes and the second test gives the error.
I am using RSpec and Capybara with the Selenium web driver. I think this may be to do with the test web server not being an "EventMachine based web server", but I am not 100% sure. I have tried setting up the WebSocket server as standalone by adding config.standalone = true in my websocket initializer and then starting the server with rake websocket_rails:start_server RAILS_ENV=test.
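For context, the standalone setting lives in the websocket-rails initializer, roughly like this (a sketch based on the gem's generated config/initializers/websocket_rails.rb):
# config/initializers/websocket_rails.rb
WebsocketRails.setup do |config|
  # Run the WebSocket server as its own process instead of inside the
  # Rails web server; it is then started via rake websocket_rails:start_server.
  config.standalone = true
end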

I solved this by configuring Capybara, inside my RSpec configuration, to use the Thin server (which is EventMachine based), adding the following to my spec_helper.rb file:
RSpec.configure do |config|
  Capybara.server do |app, port|
    require 'rack/handler/thin'
    Rack::Handler::Thin.run(app, :Port => port)
  end
  # other configuration here...
end
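Note that require 'rack/handler/thin' only works if the thin gem is available in the test environment; if it is not already in the bundle, something like the following is needed (an assumption, not shown in the original answer):
# Gemfile
group :test do
  # Thin is EventMachine based, which is what websocket-rails needs
  gem 'thin'
end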

Related

Capybara error on HTTP.get: HTTP::ConnectionError Exception: failed to connect: Connection refused - connect(2) for "localhost" port 3000

My app is set up to use Capybara and minitest with the RackTest driver. This is the main config in test_helper.rb:
require 'capybara/rails'
require 'capybara/minitest'

class ActionDispatch::IntegrationTest
  include Capybara::DSL
  include Capybara::Minitest::Assertions
  fixtures :all
  ...
  Capybara.app_host = "http://localhost:3000"
  Capybara.run_server = true
  Capybara.server_port = 3000
  Capybara.register_driver :rack_test do |app|
    Capybara::RackTest::Driver.new app, follow_redirects: false
  end
  ...
end
Now, when I perform requests directly in my tests, they work fine, such as:
post '/api/v4/login', params: {"email": u.email, "password": u.password }
But in one test I'm calling a class (inside /app) that makes the following call:
HTTP.get(url,params).body
That call appears to have no server to talk to, and I get the following error message in response:
HTTP::ConnectionError Exception: failed to connect: Connection refused - connect(2) for "localhost" port 3000
First, you should not be using post or get in tests that use Capybara (feature/system tests). They should only be used in request/raw integration tests, which don't use Capybara or the server it starts (lazily, when the need is detected through a visit call) and which are generally used for API tests.
Second, you should not be setting the port (to 3000), or generally app_host, when having Capybara run the app under test. Port 3000 is generally the port your development server runs on (rails s), so having Capybara run on the same port in test mode would conflict. If you don't have a really specific need for Capybara to be on a specific port (firewall forwarding, etc.), just let it pick a random port.
Capybara.run_server = true

Capybara.register_driver :rack_test do |app|
  Capybara::RackTest::Driver.new app, follow_redirects: false
end
That will have Capybara start the app on 127.0.0.1:<random_port>. If you want it specifically on localhost (due to special networking needs, IPv6, etc.), you can set Capybara.server_host = 'localhost'. Also, the use of follow_redirects: false is questionable, since tests that use Capybara really shouldn't be checking status codes but rather what the user sees.
Beyond all that, if you are running a request test that ends up calling app code that does an HTTP.get, you'll either need to change that test to a feature/system test (one that uses Capybara, starts its own server, uses visit, etc.) or mock/stub the request.
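If you go the stubbing route, a minimal sketch using WebMock (WebMock is not mentioned in the thread, it is just one common choice, and the URL here is hypothetical):
# test_helper.rb
require 'webmock/minitest'

# inside the test that exercises the HTTP.get call
stub_request(:get, "http://example.com/api/resource")
  .to_return(status: 200, body: '{"ok":true}', headers: { 'Content-Type' => 'application/json' })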

Capybara: First test times out with 'failed to reach server, check DNS and/or server status', all other tests work fine

I maintain several extensions for the Spree/Solidus Rails platform(s), and for some reason on one extension in particular I'm having a Capybara issue I can't seem to track down.
Once I build a test environment, the first test of the first run through the specs always fails, with the following:
Capybara::Poltergeist::StatusFailError:
Request to 'http://127.0.0.1:52234/products' failed to reach server,
check DNS and/or server status - Timed out with no open resource requests
All subsequent specs pass. If I run bundle exec rspec spec again, then all specs pass.
I have tried increasing the Capybara timeout values to super high numbers:
RSpec.configure do |config|
  config.include Spree::TestingSupport::CapybaraHelpers, type: :feature

  Capybara.register_driver(:poltergeist) do |app|
    Capybara::Poltergeist::Driver.new app, timeout: 90
  end

  Capybara.javascript_driver = :poltergeist
  Capybara.default_max_wait_time = 90
end
But it seems to have no effect.
All my Travis builds fail on the first spec of the first run (and pass on everything else), which is making it hard to maintain the project, as all PRs look red.
Any ideas what might be going on here?
Most likely this is failing while Rails processes the asset pipeline on the first request. Try precompiling the test-mode assets before running the tests, or increase the timeout in the driver registration even more. Capybara.default_max_wait_time shouldn't affect this at all.
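One way to do the precompile (a sketch; the exact hook is not given in the answer) is to build the assets once before the suite runs:
# spec/spec_helper.rb
RSpec.configure do |config|
  config.before(:suite) do
    # Compile assets up front so the first request does not time out
    # while Sprockets builds them on demand.
    system('RAILS_ENV=test bundle exec rake assets:precompile') ||
      raise('asset precompilation failed')
  end
end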

selenium rspec features not running on linux

I'm a rails dev working on a Rails 4.0.4 app. As of yesterday (possibly before; yesterday was the first time I noticed it, because I usually use CI), my Arch Linux dev machine doesn't run RSpec features marked with js: true metadata tags; it just returns passes for all of them, e.g.:
$ be rspec spec/features/activity_spec.rb
..........
Finished in 0.38366 seconds
10 examples, 0 failures
There are no such problems with any other specs as far as I can tell, just those that use Selenium. It does not spawn a browser (we use Firefox, though I tried Chrome with chromedriver). It does not even seem to call the procs registered in spec_helper, since raising an exception inside them has no effect:
Capybara.server do |app, port|
  raise "Hell"
  require 'rack/handler/thin'
  Rack::Handler::Thin.run(app, :Port => port)
end

# use BROWSER=safari,chrome,etc
browser = (ENV["BROWSER"] || "firefox").to_sym

Capybara.register_driver :selenium do |app|
  raise "Hell"
  Capybara::Selenium::Driver.new(app, browser: browser)
end
Updating to the most recent capybara/selenium-webdriver/rspec does not change anything, nor does checking out old tags from my repo, which used previous versions of gems/ruby.
The rest of my team (all running OSX) have no problems with the exact same branch/set of gems/ruby version (these same specs ran previously on 2.1.1 and 1.9.3 on the same machine).
All of this screams "OS specific problem" at me. Any suggestions on what to try (other than switching to OSX - have had enough of that from my colleagues) would be appreciated. Cheers.
This was due to setting RETRIES=0 when using rspec-retry. Oops.
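For anyone hitting the same symptom: with a retry count of 0, rspec-retry never actually runs the wrapped examples, so they all silently pass. The configuration probably looked something like this (a reconstruction; the original spec_helper is not shown):
# spec/spec_helper.rb
require 'rspec/retry'

RSpec.configure do |config|
  config.verbose_retry = true
  # Retry flaky js specs; RETRIES=0 means the example never runs at all,
  # so every js spec reports as passing.
  config.around :each, js: true do |example|
    example.run_with_retry retry: ENV.fetch('RETRIES', 3).to_i
  end
end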

delayed_job tasks failing in daemon mode in production environment

I'm trying to use delayed_job. It works perfectly locally when using the rake jobs:work task. However, when I try to use it as a daemon, the tasks fail (I've stopped it from deleting failed tasks). I'm starting the daemon with RAILS_ENV=production bin/delayed_jobs start, and when I check its status it shows the worker is active.
They seem to be failing with this error 'Job failed to load: undefined class/module ProductAquisition.'
This is a snippet from the module I'm trying to use (lib/product_aquisition.rb).
module ProductAquisition
  require 'net/http'

  def ProductAquisition.get_updates
    .....
  end
end
This is from the Rakefile I'm using to trigger it:
require 'debugger'
require 'date'
require 'thread'
# require 'active_support'
require "#{Rails.root}/lib/product_aquisition.rb"
require "#{Rails.root}/app/helpers/soap_helper"

include SoapHelper
include ProductAquisition

task :get_updates => :environment do
  ProductAquisition.delay.get_updates
end
I don't understand how it's unable to load and use the module 'ProductAquisition' when running as a daemon. Can anyone shed some light on this?
Edit: Full error message
Full error output can be seen here: https://gist.github.com/smithmr8/70f1f736fe1c679ceff1/raw/9586759c868adce3797ad9e7697789ef417821a6/gistfile1.rb
I tried posting it here but it wouldn't wrap correctly.
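The usual cause of "Job failed to load: undefined class/module" (not confirmed in this thread) is that the daemonized worker boots Rails without ever loading the file under lib/, so the constant is missing when the serialized job is deserialized. One way to rule that out is to load the module at boot (a sketch; the initializer name is hypothetical):
# config/initializers/product_aquisition.rb
# Ensure the module is loaded in every process, including the
# daemonized delayed_job workers, so queued jobs can be deserialized.
require Rails.root.join('lib', 'product_aquisition').to_s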

change capybara driver without restarting spork

I am running integration tests for my Rails app using Cucumber with Capybara. I also use Spork for faster test runs. Capybara supports Selenium and the headless Poltergeist driver. A headless browser is good for faster test runs, but sometimes I also need to see what the browser is showing, which leads me to change the Capybara driver and restart Spork.
Is there a way to change the Capybara driver without restarting spork?
Things that I have tried:
[1] Added a module containing the driver code in features/support and referenced it in Spork's each_run. It does not work and throws an error.
[2] Using each_run to change the driver does not register the change until Spork is restarted.
Spork.each_run do
  Capybara.javascript_driver = :poltergeist
end
Figured it out. Capybara's GitHub page had some information on this. Here's what I did:
Added the following code to features/support/hooks.rb:
Before('@javascript') do
  Capybara.current_driver = :poltergeist
end
This can be changed at any point without needing a Spork restart. I was unable to figure out a way to place it in the each_run block.
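Because the Before hook runs on every execution, the driver can even be chosen per run with an environment variable; a small sketch (the HEADLESS variable name is just an assumption):
# features/support/hooks.rb
Before('@javascript') do
  # HEADLESS=false bundle exec cucumber ... opens a real browser;
  # anything else keeps the faster headless Poltergeist driver.
  Capybara.current_driver = ENV['HEADLESS'] == 'false' ? :selenium : :poltergeist
end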
