Continuous Integration - Running parallel test suites that require xvfb - ruby-on-rails

I'm having issues with parallel builds that require an xvfb server. I was previously using the headless Ruby gem, but saw sporadic failures when two test suites that both require capybara-webkit and an xvfb server ran in parallel.
My guess was that they were both trying to use the same DISPLAY, so I set a different DISPLAY value for each suite and ran them in parallel, but there was still a failure.
I then tried removing the headless gem and running my test suite with:
DISPLAY=localhost:$display_num.0 xvfb-run bundle exec rake
where $display_num is a previously set bash variable that differs between the two test suites.
When they run in parallel I then get the error: xvfb-run: error: Xvfb failed to start.
Any assistance in deciphering this would be great!

Here is the gist, but ultimately you need to start one Headless instance per process.
This is effectively done with the features/support/javascript.rb file referenced in the gist, the relevant section being:
# Unnecessary on mac
if !OS.mac? && !$headless_started
  require 'headless'
  # allow display autopick (by default)
  # allow each headless to destroy_at_exit (by default)
  # allow each process to have its own headless by setting reuse: false
  headless_server = Headless.new(:reuse => false)
  headless_server.start
  $headless_started = true
  puts "Process[#{Process.pid}] started headless server display: #{headless_server.display}"
end
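With each process starting its own Headless and letting it autopick a free display, the suites can simply be spawned side by side; no DISPLAY or xvfb-run juggling is needed. A rough sketch, not from the gist, with hypothetical task names:

# Rough sketch: the rake task names are made up for illustration.
# Each child process starts its own Headless via features/support/javascript.rb,
# so no DISPLAY or xvfb-run handling is needed here.
pids = ['rake suite:one', 'rake suite:two'].map { |cmd| Process.spawn("bundle exec #{cmd}") }
pids.each { |pid| Process.wait(pid) }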

Related

Capybara: First test times out with 'failed to reach server, check DNS and/or server status', all other tests work fine

I maintain several extensions for the Spree/Solidus Rails platform(s), and for some reason on one extension in particular I'm having a Capybara issue I can't seem to track down.
Once I build a test environment, the first test of the first run through the specs always fails, with the following:
Capybara::Poltergeist::StatusFailError:
  Request to 'http://127.0.0.1:52234/products' failed to reach server,
  check DNS and/or server status - Timed out with no open resource requests
All subsequent specs pass. If I run bundle exec rspec spec again, then all specs pass.
I have tried increasing the Capybara timeout values to super high numbers:
RSpec.configure do |config|
  config.include Spree::TestingSupport::CapybaraHelpers, type: :feature

  Capybara.register_driver(:poltergeist) do |app|
    Capybara::Poltergeist::Driver.new app, timeout: 90
  end

  Capybara.javascript_driver = :poltergeist
  Capybara.default_max_wait_time = 90
end
But it seems to have no effect.
All my Travis builds fail on the first spec of the first run (and pass on everything else), which is making it hard to maintain the project, as all PRs look red.
Any ideas what might be going on here?
Most likely this is failing because Rails compiles the asset pipeline on the first request. Try precompiling the test-mode assets before running the tests, or increase the timeout in the driver registration even more. Capybara.default_max_wait_time shouldn't affect this at all.
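As a rough sketch of the precompile approach (assuming a standard Rails app; the exact command and working directory for a Spree/Solidus extension's dummy app may differ):

RSpec.configure do |config|
  config.before(:suite) do
    # Compile assets up front so the very first Poltergeist request doesn't
    # time out while Sprockets builds them on demand.
    system('bundle exec rake assets:precompile RAILS_ENV=test') ||
      warn('asset precompile failed; the first spec may still be slow')
  end
end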

Getting Capybara::DriverNotFoundError when trying to run Cucumber tests

I'm getting this error when I run my Cucumber tests. Everything seemed to be working fine the previous day, and I can't figure out why it stopped working. I was trying to get capybara-webkit working and I had changed a couple of files, but I don't see why that should affect my tests. Any idea how to fix this?
Capybara::DriverNotFoundError: no driver called :rack was found, available drivers: :rack_test, :selenium, :webkit, :webkit_debug
You mentioned that you edited several files. Could it be that you didn't revert all the changes you made? Capybara picks the :rack_test driver by default, and your system could not find a driver called :rack.
Since you're doing Cucumber testing, you must have a file called env.rb under the features/support folder. Make sure you aren't forcing :rack as your Capybara driver there, and your tests should run fine.
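For example, a minimal env.rb sketch (the driver names are the ones listed in your error message; adjust to your setup):

# features/support/env.rb
require 'capybara/cucumber'

# :rack_test is Capybara's built-in default; there is no driver called :rack.
Capybara.default_driver = :rack_test
# capybara-webkit provides :webkit for @javascript scenarios.
Capybara.javascript_driver = :webkit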

Running bundle from rails shell command

I have a Rails 4 application that executes several shell commands, and everything works fine. Now I am trying to execute a shell command from it that checks the bundle of a different app, or even an engine not used by this app, but all I get is the result as if it were checking its own bundle.
This is probably confusing so let me try to make it clearer:
Rails app
|-- operations folder
    |-- app1
    |-- engine1
    |-- app2
Now the Rails app executes a shell command to check the bundle of any of those apps/engines in the operations folder, like this:
out = %x[cd operations/app1 && bundle list 2>&1]
but the result is the list of gems used by the executing Rails app, not the list of gems from app1 that I want to check.
Why is that happening? I've also tried specifying the Gemfile using the --gemfile= option, to no avail. How can I execute bundle operations on the target app?
The reason I need this is that I have built a continuous integration application that tests and builds packages from our other apps and engines. Sometimes those apps/engines require gems that the CI doesn't have, so running their tests fails, and I want the CI to install any missing gems before running the tests.
By default, child processes inherit the environment set up by Bundler.
To suppress this behaviour, call Bundler.with_clean_env with a block that contains the commands you want to run in the clean environment:
out = Bundler.with_clean_env { %x[cd operations/app1 && bundle list 2>&1] }
For more on this, see http://bundler.io/man/bundle-exec.1.html#Shelling-out
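Since the goal is to have the CI install any missing gems before running the target app's tests, the same pattern works for bundle check / bundle install; a sketch using the paths from the example above:

Bundler.with_clean_env do
  Dir.chdir('operations/app1') do
    # Install the target app's gems only if its bundle is incomplete.
    system('bundle check') || system('bundle install')
  end
end

Note that newer Bundler versions rename with_clean_env to with_unbundled_env.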

Rails + Capybara-webkit – javascript code coverage?

I am looking into using capybara-webkit to do somewhat close-to-reality tests of the app. This is absolutely necessary, as the app features a very rich JS-based UI and the Rails part is mostly API calls.
The question is: are there any tools that integrate into the testing pipeline and can instrument JavaScript code and report its coverage? The key here is the ability to integrate into the testing workflow easily (just like rcov/simplecov) – I don't like the idea of doing it myself with jscoverage or an analogue :)
Many thanks in advance.
This has now been added to JSCover (in trunk) - the related thread at JSCover is here.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take quite a bit of hacking to get it to work.
These changes are now in JSCover's trunk and will be part of version 1.0.5. There are working examples (including a Selenium IDE recorded example) and documentation in there too.
There is some additional work needed to get the branch detection to work, since that uses objects that cannot be easily serialized to JSON. There is a function to do this, which is used in the new code.
Anyway, the end result works nicely.
I agree. This makes JSCover usable by higher-level tools that don't work well with iFrames or multiple windows, which are avoided by this approach. It also means code coverage can be added to existing Selenium tests with two adjustments:
Make the tests run through the JSCover proxy
Save the coverage report at the end of the test suite
See JSCover's documentation for more information. Version 1.0.5 containing these changes should be released in a few days.
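For the first adjustment, a rough sketch of pointing a Capybara Selenium driver at the JSCover proxy (3128 is JSCover's documented default proxy port, but check your setup; the driver name is arbitrary and the option passing may need adapting to your Selenium/Capybara versions):

require 'selenium-webdriver'

Capybara.register_driver :selenium_jscover do |app|
  profile = Selenium::WebDriver::Firefox::Profile.new
  # Route all traffic through the JSCover proxy so scripts are instrumented
  # on the fly and coverage is collected.
  profile.proxy = Selenium::WebDriver::Proxy.new(http: 'localhost:3128')
  Capybara::Selenium::Driver.new(app, browser: :firefox, profile: profile)
end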
Update: Starting from JSCover version 1.0.5, the hacks I outlined in my previous answer are no longer needed. I've updated my answer to reflect this.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take some hacking to get it to work. I built a little rake task that:
uses the rails asset pipeline to generate the scripts
calls the java jar to instrument all the files and generate an empty report into a temp dir
patches the jscover.js script to operate in "report mode" (simply add jscoverage_isReport=true at the end)
copies the result to /public/assets so the tests pick it up without needing any changes and so the coverage report can be opened automatically in the browser
Then I added a setup task to clear out the browser's localStorage at the start of the tests and a teardown task that writes out the completed report at the end.
def setup
  unless $startup_once
    $startup_once = true
    puts 'Clearing localStorage'
    visit('/')
    page.execute_script('localStorage.removeItem("jscover");')
  end
end

def teardown
  out = page.evaluate_script("typeof(_$jscoverage)!='undefined' && jscoverage_serializeCoverageToJSON()")
  unless out.blank?
    File.open(File.join(Rails.root, "public/assets/jscoverage.json"), 'w') { |f| f.write(out) }
  end
end
Anyway, the end result works nicely. The advantage of doing it this way is that it also works with headless browsers, so it can be included in CI as well.
Update 2: Here is a rake task that automates the steps; drop it in /lib/tasks.
# Coverage testing for JavaScript
#
# Usage:
#   Download JSCover from http://tntim96.github.io/JSCover/ and move it to
#   ~/Applications/JSCover-1
#   First instrument the javascript files:
#     rake assets:coverage
#   Then run the browser tests:
#     rake test
#   See the results in the browser:
#     http://localhost:3000/assets/jscoverage.html
#   Don't forget to clean up the instrumented assets afterwards:
#     rake assets:clobber
#   Also don't forget to re-instrument after changing a JS file
namespace :assets do
  desc 'Instrument all the assets named in config.assets.precompile'
  task :coverage do
    Rake::Task["assets:coverage:primary"].execute
  end

  namespace :coverage do
    def jscoverage_loc
      Dir.home + '/Applications/JSCover-1/'
    end

    def internal_instrumentalize
      config = Rails.application.config
      target = File.join(Rails.public_path, config.assets.prefix)
      environment = Sprockets::Environment.new
      environment.append_path 'app/assets/javascripts'
      `rm -rf #{tmp = File.join(Rails.root, 'tmp', 'jscover')}`
      `mkdir #{tmp}`
      `rm -rf #{target}`
      `mkdir #{target}`
      print 'Generating assets'
      require File.join(Rails.root, 'config', 'initializers', 'assets.rb')
      (%w{application.js} + config.assets.precompile.select { |f| f.is_a?(String) && f =~ /\.js$/ }).each do |f|
        print '.'
        File.open(File.join(target, f), 'w') { |ff| ff.write(environment[f].to_s) }
      end
      puts "\nInstrumenting…"
      `java -Dfile.encoding=UTF-8 -jar #{jscoverage_loc}target/dist/JSCover-all.jar -fs #{target} #{tmp} #{'--no-branch' unless ENV['C1']} --local-storage`
      puts 'Copying into place…'
      `cp -R #{tmp}/ #{target}`
      `rm -rf #{tmp}`
      File.open("#{target}/jscoverage.js", 'a') { |f| f.puts 'jscoverage_isReport = true' }
    end

    task :primary => %w(assets:environment) do
      unless Dir.exist?(jscoverage_loc)
        abort "Cannot find JSCover! Download it from http://tntim96.github.io/JSCover/ and put it in #{jscoverage_loc}"
      end
      internal_instrumentalize
    end
  end
end

Why are tests running in production mode and causing my script/runners to fail?

So I have some script/runners set up in a cron job, but according to the logs I'm getting the error below. First, I'm not sure why the Test::Unit automatic runner is kicking in in production to begin with; I don't have autospec or autotest going on. Second, I'm not sure how to resolve this pesky invalid option error. I'm using the javan-whenever gem to handle the cron schedule. Any help out there?
0 tests, 0 assertions, 0 failures, 0 errors
invalid option: -e
Test::Unit automatic runner.
Usage: /apps/ion/releases/20091210210633/script/runner [options] [-- untouched arguments]
  -r, --runner=RUNNER            Use the given RUNNER.
                                 (c[onsole], f[ox], g[tk], g[tk]2, t[k])
  -n, --name=NAME                Runs tests matching NAME.
                                 (patterns may be used).
  -t, --testcase=TESTCASE        Runs tests in TestCases matching TESTCASE.
                                 (patterns may be used).
  -I, --load-path=DIR[:DIR...]   Appends directory list to $LOAD_PATH.
  -v, --verbose=[LEVEL]          Set the output level (default is verbose).
                                 (s[ilent], p[rogress], n[ormal], v[erbose])
  --                             Stop processing options so that the
                                 remaining options will be passed to the
                                 test.
  -h, --help                     Display this help.
Deprecated options:
  --console                      Console runner (use --runner).
  --gtk                          GTK runner (use --runner).
  --fox                          Fox runner (use --runner).
This is because something in your environment is requiring Test::Unit. I was having the same problem running rails runner in a Rails 3.1.0 application, and it was because we had gem test-unit in our Gemfile for all groups, rather than just for the test group where it was actually needed. Once I moved it into the test group, my runners ran as expected.
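For example, a sketch of the relevant Gemfile change (the group layout will vary per app):

# Gemfile
# Before (loaded in every environment, so the at_exit AutoRunner fires in production):
#   gem 'test-unit'

# After: only the test environment loads it.
group :test do
  gem 'test-unit'
end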
This code seems to be the culprit:
# test_unit/lib/test/unit.rb
at_exit do
  unless $! || Test::Unit.run?
    exit Test::Unit::AutoRunner.run
  end
end
If you can't remove the requirement on Test::Unit, you can add this hack somewhere in your environment to prevent AutoRunner from auto-running:
Test::Unit.run = true
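For instance, a sketch of where that hack could live (the initializer name is arbitrary, and the defined? guard is just defensive):

# config/initializers/disable_test_unit_autorunner.rb (hypothetical file name)
# Marks the suite as already run, so Test::Unit's at_exit AutoRunner becomes a no-op.
Test::Unit.run = true if defined?(Test::Unit) && Test::Unit.respond_to?(:run=)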
