Rails + Capybara-webkit – javascript code coverage? - ruby-on-rails

I am looking into using capybara-webkit to do somewhat close-to-reality tests of the app. This is absolutely necessary, as the app features a very rich JS-based UI and the Rails part is mostly API calls.
The question is: are there any tools that integrate into the testing pipeline and can instrument JavaScript code and report its coverage? The key here is the ability to integrate into the testing workflow easily (just like rcov/simplecov) – I don't like the idea of doing it myself with jscoverage or an analogue :)
Many thanks in advance.

This has now been added to JSCover (in trunk) - the related thread at JSCover is here.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take quite a bit of hacking to get it to work.
These changes are now in JSCover's trunk and will be part of version 1.0.5. There are working examples (including a Selenium IDE recorded example) and documentation in there too.
There is some additional work needed to get branch detection to work, since that uses objects which cannot easily be serialized to JSON. There is a function to do this in the new code.
Anyway, the end result works nicely
I agree. This makes JSCover usable by higher-level tools that don't work well with iFrames or multiple windows, both of which are avoided by this approach. It also means code coverage can be added to existing Selenium tests with two adjustments:
Make the tests run through the JSCover proxy
Save the coverage report at the end of the test suite
See JSCover's documentation for more information; a rough sketch of the two adjustments follows below. Version 1.0.5 containing these changes should be released in a few days.
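For illustration, here is a minimal sketch of those two adjustments for a Capybara + Selenium setup (not from the original answer; it assumes JSCover is running as a proxy on localhost:3128, and the driver options may differ between Capybara/Selenium versions):

# spec_helper.rb (sketch): point the Selenium driver at the JSCover proxy...
require 'capybara/rspec'
require 'selenium-webdriver'

Capybara.register_driver :selenium_via_jscover do |app|
  profile = Selenium::WebDriver::Firefox::Profile.new
  profile.proxy = Selenium::WebDriver::Proxy.new(http: 'localhost:3128')
  Capybara::Selenium::Driver.new(app, browser: :firefox, profile: profile)
end
Capybara.javascript_driver = :selenium_via_jscover

# ...and save the coverage report once the suite finishes.
RSpec.configure do |config|
  config.after(:suite) do
    json = Capybara.current_session.evaluate_script(
      "typeof(_$jscoverage) != 'undefined' && jscoverage_serializeCoverageToJSON()"
    )
    File.write('tmp/jscoverage.json', json) if json.is_a?(String)
  end
end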

Update: Starting from JSCover version 1.0.5 the hacks I outlined in my previous answer are no longer needed. I've updated my answer to reflect this.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take some hacking to get it to work. I built a little rake task that:
uses the rails asset pipeline to generate the scripts
calls the java jar to instrument all the files and generate an empty report into a temp dir
patches the jscoverage.js script to operate in "report mode" (simply append jscoverage_isReport=true at the end)
copies the result to /public/assets so the tests pick it up without needing any changes and so the coverage report can be opened automatically in the browser
Then I added a setup task to clear out the browser's localStorage at the start of the tests and a teardown task that writes out the completed report at the end.
def setup
  unless $startup_once
    $startup_once = true
    puts 'Clearing localStorage'
    visit('/')
    page.execute_script('localStorage.removeItem("jscover");')
  end
end

def teardown
  out = page.evaluate_script("typeof(_$jscoverage)!='undefined' && jscoverage_serializeCoverageToJSON()")
  unless out.blank?
    File.open(File.join(Rails.root, "public/assets/jscoverage.json"), 'w') { |f| f.write(out) }
  end
end
Anyway, the end result works nicely. The advantage of doing it this way is that it also works with headless browsers, so it can be included in CI.
Update 2: Here is a rake task that automates the steps; drop it in /lib/tasks:
# Coverage testing for JavaScript
#
# Usage:
#   Download JSCover from http://tntim96.github.io/JSCover/ and move it to
#   ~/Applications/JSCover-1
#   First, instrument the JavaScript files:
#     rake assets:coverage
#   Then run the browser tests:
#     rake test
#   See the results in the browser:
#     http://localhost:3000/assets/jscoverage.html
#   Don't forget to clean up the instrumented assets afterwards:
#     rake assets:clobber
#   Also don't forget to re-instrument after changing a JS file
namespace :assets do
  desc 'Instrument all the assets named in config.assets.precompile'
  task :coverage do
    Rake::Task["assets:coverage:primary"].execute
  end

  namespace :coverage do
    def jscoverage_loc; Dir.home + '/Applications/JSCover-1/'; end

    def internal_instrumentalize
      config      = Rails.application.config
      target      = File.join(Rails.public_path, config.assets.prefix)
      environment = Sprockets::Environment.new
      environment.append_path 'app/assets/javascripts'

      `rm -rf #{tmp = File.join(Rails.root, 'tmp', 'jscover')}`
      `mkdir #{tmp}`
      `rm -rf #{target}`
      `mkdir #{target}`

      print 'Generating assets'
      require File.join(Rails.root, 'config', 'initializers', 'assets.rb')
      (%w{application.js} + config.assets.precompile.select { |f| f.is_a?(String) && f =~ /\.js$/ }).each do |f|
        print '.'
        File.open(File.join(target, f), 'w') { |ff| ff.write(environment[f].to_s) }
      end

      puts "\nInstrumenting…"
      `java -Dfile.encoding=UTF-8 -jar #{jscoverage_loc}target/dist/JSCover-all.jar -fs #{target} #{tmp} #{'--no-branch' unless ENV['C1']} --local-storage`

      puts 'Copying into place…'
      `cp -R #{tmp}/ #{target}`
      `rm -rf #{tmp}`
      File.open("#{target}/jscoverage.js", 'a') { |f| f.puts 'jscoverage_isReport = true' }
    end

    task :primary => %w(assets:environment) do
      unless Dir.exist?(jscoverage_loc)
        abort "Cannot find JSCover! Download from http://tntim96.github.io/JSCover/ and put it in #{jscoverage_loc}"
      end
      internal_instrumentalize
    end
  end
end

Related

How do I ensure assets are present with Rails 7, cssbundling-rails, jsbundling-rails in test mode (RSpec)?

I'm upgrading a large, commercial (proprietary) Rails 6 application to Rails 7. We never used Webpacker, and are instead going directly from bundled gems for things like Bootstrap to the "Rails 7 way".
It turns out that the "no Node" workflow for Rails 7 has no good answer for components that consist of both a CSS and a JS part. In our case, the most obvious offender is Bootstrap. Faced with maintaining the JS "half" of Bootstrap through import maps and the CSS "half" through something like the old Bootstrap gem or manual vendoring (and yes, there really is no other solution without Node here), we end up back at a full Node workflow.
This is coming together. All front-end components that provide CSS and/or JS were already happily available in NPM, so now that's all managed via package.json & Yarn, with bin/dev driving Sass & esbuild compilation of the SCSS and JS components pulled from either app/assets, app/javascript or node_modules/...; the asset pipeline manifest.js contains only references to the build and images folders inside app/assets as a result.
It feels like a bit of a backwards step, with all the heavyweight manual maintenance of lists of filenames (wildcard imports are no longer supported) along with the complexity of the multiple processes now running under Foreman versus just having things synchronously processed by Sprockets on a per-request basis, but with all that stuff being deprecated/abandonware it was clearly time to update.
This all works fine in dev & production mode, but what about test? We use RSpec; in CI there are no built assets, and developers don't want to have to remember to run esbuild or assets:precompile or whatever every time they're about to run rspec. Apart from anything else, it's quite slow.
What's the official, idiomatic Rails 7 solution in a Yarn/Node-based workflow, specifically using cssbundling-rails and jsbundling-rails, when you want to run tests with up-to-date assets?
This is pretty ropey but better than nothing for now; it'll ensure CI always builds assets and also ensure that local development always has up-to-date assets, even if things have been modified when e.g. bin/dev isn't running.
# Under Rails 7 with 'cssbundling-rails' and/or the 'jsbundling-rails' gems,
# entirely external systems are used for asset management. With Sprockets no
# longer synchronously building assets on-demand and only when the source files
# changed, compiled assets might be (during local development) or will almost
# always be (CI systems) either out of date or missing when tests are run.
#
# People are used to "bundle exec rspec" and things working. The out-of-box gem
# 'cssbundling-rails' hooks into a vanilla Rails "prepare" task, running a full
# "css:build" task in response. This is quite slow and generates console spam
# on every test run, but points to a slightly better solution for RSpec.
#
# This class is a way of packaging that solution. The class wrapper is really
# just a namespace / container for the code.
#
# First, if you aren't already doing this, add the following lines to
# "spec_helper.rb" somewhere *after* the "require 'rspec/rails'" line:
#
#   require 'rake'
#   YourAppName::Application.load_tasks
#
# ...and call MaintainTestAssets::maintain! (see that method's documentation
# for details). See also constants MaintainTestAssets::ASSET_SOURCE_FOLDERS and
# MaintainTestAssets::EXPECTED_ASSETS for things you may want to customise.
#
class MaintainTestAssets

  # All the places where you have asset files of any kind that you expect to be
  # dynamically compiled/transpiled/etc. via external tooling. The given arrays
  # are passed to "Rails.root.join..." to generate full pathnames.
  #
  # Folders are checked recursively. If any file timestamp therein is greater
  # than (newer than) any of EXPECTED_ASSETS, a rebuild is triggered.
  #
  ASSET_SOURCE_FOLDERS = [
    ['app', 'assets', 'stylesheets'],
    ['app', 'javascript'],
    ['vendor']
  ]

  # The leaf files that ASSET_SOURCE_FOLDERS will build. These are all checked
  # for in "File.join(Rails.root, 'app', 'assets', 'builds')". Where files are
  # written together - e.g. a ".js" and ".js.map" file - you only need to list
  # any one of the group of concurrently generated files.
  #
  # In a standard JS / CSS combination this would just be 'application.css' and
  # 'application.js', but more complex applications might have added or changed
  # entries in the "scripts" section of 'package.json'.
  #
  EXPECTED_ASSETS = %w{
    application.js
    application.css
  }

  # Call this method somewhere at test startup, e.g. in "spec_helper.rb" before
  # tests are actually run (just above "RSpec.configure..." works reasonably).
  #
  def self.maintain!
    run_build    = false
    newest_mtime = Time.now - 100.years

    # Find the newest modification time across all source files of any type -
    # for simplicity, timestamps of JS vs CSS aren't considered
    #
    ASSET_SOURCE_FOLDERS.each do | relative_array |
      glob_path = Rails.root.join(*relative_array, '**', '*')

      Dir[glob_path].each do | filename |
        next if File.directory?(filename) # NOTE EARLY LOOP RESTART

        source_mtime = File.mtime(filename)
        newest_mtime = source_mtime if source_mtime > newest_mtime
      end
    end

    # Compile the built asset leaf names into full file names for convenience.
    #
    built_assets = EXPECTED_ASSETS.map do | leaf |
      Rails.root.join('app', 'assets', 'builds', leaf)
    end

    # If any of the source files are newer than expected built assets, or if
    # any of those assets are missing, trigger a rebuild task *and* force a new
    # timestamp on all output assets (just in case build script optimisations
    # result in a file being skipped as "already up to date", which would cause
    # the code here to otherwise keep trying to rebuild it on every run).
    #
    run_build = built_assets.any? do | filename |
      File.exist?(filename) == false || File.mtime(filename) < newest_mtime
    end

    if run_build
      Rake::Task['javascript:build'].invoke()
      Rake::Task['css:build'].invoke()

      built_assets.each { | filename | FileUtils.touch(filename, nocreate: true) }
    end
  end
end
(EDIT) As a commenter below points out, you'll need to make sure Rake tasks are loaded in your spec_helper.rb, e.g.:
require 'rake'
Rails.application.load_tasks
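For completeness, here is a minimal sketch (not part of the original answer) of how the class above might be wired into spec_helper.rb, following its own comments; the require path is an assumption:

# spec_helper.rb (after "require 'rspec/rails'")
require 'rake'
Rails.application.load_tasks

# Path is an assumption; adjust to wherever you keep the class.
require_relative '../lib/maintain_test_assets'
MaintainTestAssets.maintain!

RSpec.configure do |config|
  # ...
end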
Both jsbundling-rails and cssbundling-rails append themselves into a rake task called test:prepare.
There are a few ways to cause test:prepare to run, depending on your overall build process.
Call it directly:
bundle exec rails test:prepare test
Or, if running rspec outside of the rails command:
bundle exec rails test:prepare && bundle exec rspec
Use a test task that already calls test:prepare.
Curiously, only some test tasks call (depend on) test:prepare, while others (including the default test task) don't. Example:
bundle exec rails test:all
Make test:prepare a dependency of your preferred test task.
For example, if you normally use the spec task by running bundle exec rails spec, add this to a new or existing task file (such as lib/tasks/tests.rake):
task spec: ['css:build', 'javascript:build']
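Alternatively (not part of the original answer), since both gems enhance test:prepare, depending on that aggregate task directly should be equivalent:

# lib/tasks/tests.rake (sketch)
task spec: ['test:prepare']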
Background
test:prepare is an empty task defined by Rails. Both cssbundling-rails and jsbundling-rails add themselves as dependencies of that task.
In general, test:prepare is a useful place to add any kind of dependency needed to run your tests, with the caveat that only some of Rails' default test tasks depend on it. But as mentioned above, you can always call it directly or add your own dependencies.
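For example, a minimal sketch of adding your own prerequisite to test:prepare (the db:seed prerequisite here is purely illustrative):

# lib/tasks/test_prepare.rake (sketch)
# Rake merges prerequisite lists, so this just adds one more thing that runs
# whenever test:prepare is invoked.
task 'test:prepare' => 'db:seed'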
In most cases, calling test:prepare is going to be equivalent to calling both css:build and javascript:build, which is why I showed test:prepare in most of the above examples. On occasion, other gems or your app may have extended test:prepare with additional commands, in which case those will run as well (and would likely be wanted).
Also note that assets:precompile also depends on css:build and javascript:build. In my experience, test:prepare (or css:build and javascript:build separately) runs faster than assets:precompile, likely because we're running a lightweight configuration of sprockets-rails (as opposed to propshaft) and assets:precompile runs the entire sprockets compilation process.

Can rspec be configured to only run tests what have been modified within a single spec file?

I'm working with Rails 5 and rspec (gem version 4). I was wondering if RSpec can be configured to only run tests that have been modified within a single file when only running that file, i.e.
bundle exec rspec spec/my_spec.rb
If my file is like this:
RSpec.describe MyClass do
  context "context 1" do
    it "tests condition 1" do
    end
    it "tests condition 2" do
    end
    ...
  end
  context "context 2" do
    ...
  end
  ...
end
and I only update tests in "context 1", is it possible to run that single file and have only the modified tests from within it run? With respect to this answer -- Can I get RSpec to only run changed specs? -- it appears that one only relates to files that have changed when running the complete RSpec suite.
I think you are looking for guard.
Check out this nicely written article: https://collectiveidea.com/blog/archives/2017/02/09/guard-is-your-friend
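For illustration, a minimal Guardfile sketch (not from the original answer; assumes the guard and guard-rspec gems). Note that Guard reruns the changed spec file rather than individual modified examples:

# Guardfile
guard :rspec, cmd: 'bundle exec rspec' do
  # Rerun a spec file whenever it changes...
  watch(%r{^spec/.+_spec\.rb$})
  # ...or whenever the matching application file changes.
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end

Run bundle exec guard and leave it watching while you edit.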
There is a Ruby Gem called retest designed to do exactly that. Just run retest, and it will watch for changes in the code or the specs themselves, and rerun just the respective spec file.

rake task not running sub-tasks in order specified

In a Rails 4.2 app, in the Rakefile, I have this:
task(:default).clear
task :default => [:test, 'bundle:audit']
The output always has bundle:audit running first. Why is that?
I read in some places that rake executes tasks as dependencies arise, but bundle:audit, as far as I can tell, does not depend on test. It is defined here:
https://github.com/rubysec/bundler-audit/blob/master/lib/bundler/audit/task.rb
To quote a comment discussing the same problem in Rake's GitHub repository:
It turns out that your problem is due to the way rails creates its test tasks:
desc "Run tests quickly by merging all types and not resetting db"
Rails::TestTask.new(:all) do |t|
  t.pattern = "test/**/*_test.rb"
end
https://github.com/rails/rails/blob/v4.2.7.1/railties/lib/rails/test_unit/testing.rake#L24-L27
Here Rails uses Rails::TestTask for the test:all target which will load all test files.
def define
  task @name do
    if ENV['TESTOPTS']
      ARGV.replace Shellwords.split ENV['TESTOPTS']
    end
    libs = @libs - $LOAD_PATH
    $LOAD_PATH.unshift(*libs)
    file_list.each { |fl|
      FileList[fl].to_a.each { |f| require File.expand_path f }
    }
  end
end
https://github.com/rails/rails/blob/v4.2.7.1/railties/lib/rails/test_unit/sub_test_task.rb#L106-L118
But unlike Rake::TestTask, which immediately runs the tests, Rails::TestTask only requires the files necessary to run the tests then relies on the at_exit handler in Minitest to run the tests. This means rake dependencies are completely ignored for running tests.
I updated the links to the source code, because the discussion was about Rails 4.1.8, but the problem still exists in the source code of Rails 4.2.7.1.
This problem was reported as an issue to the Rails team on GitHub and it was fixed in this PR.
That said, this problem should be fixed as of Rails 5.0.0.
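If you are stuck on Rails 4.2, one workaround (my suggestion, not from the linked discussion) is to shell out so each task runs in its own rake process, letting Minitest's at_exit hook fire before the next task starts:

# Rakefile (sketch)
task(:default).clear
task :default do
  sh 'bin/rake test'
  sh 'bin/rake bundle:audit'
end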

Capistrano: Check if a folder in Git has changed?

Our app (1 repo) has a Rails backend and an Angular frontend. As such, the deployment process has an npm install, bower install, grunt build --force at some point. The problem is that it takes a long time to deploy, since these commands are still executed even though we are just updating Rails-related things.
Is there some kind of hook so that I can run npm install only when the folder containing the frontend code has changed? Or should we just split the repo into two repos, each with its own deploy process?
The capistrano-faster-assets plugin enables such functionality for plain Rails assets.
You might want to check the core task to see how that's done and adapt or copy-paste the code for your use.
Here's my attempt to extract only the relevant steps and provide some more comments:
class PrecompileRequired < StandardError; end

begin
  # get the previous release
  latest_release = capture(:ls, '-xr', releases_path).split[1]

  # precompile if this is the first deploy
  raise PrecompileRequired unless latest_release

  # create a 'Pathname' object for latest_release
  latest_release_path = releases_path.join(latest_release)

  # execute raises if there is a diff
  execute(:diff, '-Naur',
          release_path.join('path/to/frontend/code'),
          latest_release_path.join('path/to/frontend/code')) rescue raise(PrecompileRequired)

  info("Skipping asset precompile, no asset diff found")

  # copy over all of the assets from the last release
  execute(:cp, '-r',
          latest_release_path.join('public', fetch(:assets_prefix)),
          release_path.join('public', fetch(:assets_prefix)))
rescue PrecompileRequired
  # execute the compile command here
end
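To give an idea of how that snippet could be wired into a deploy flow, here is a rough Capistrano 3 sketch (my illustration, not from the answer; the task name, hook, and build commands are assumptions based on the npm/bower/grunt setup described in the question):

# lib/capistrano/tasks/frontend.rake (sketch)
namespace :deploy do
  desc 'Build the Angular frontend only when its source has changed'
  task :maybe_build_frontend do
    on roles(:web) do
      within release_path do
        # The begin/rescue comparison from the answer goes here; when it
        # raises PrecompileRequired, run the real build:
        execute :npm, 'install'
        execute :bower, 'install'
        execute :grunt, 'build', '--force'
      end
    end
  end
  after 'deploy:updated', 'deploy:maybe_build_frontend'
end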

Capybara specs failing on different servers

I've recently moved my CI server (TeamCity) to another powerful machine with the same configuration and a pretty similar OS.
Since then, some of my integration specs have started to fail. My setup is pretty standard: Rails 3 + Capybara + Poltergeist + PhantomJS.
Failures are deterministic; they always happen and they are always related to some missing nodes in the DOM. Also, the failures happen across different projects with a similar setup, so it's not something related to project configuration. This is happening with both Capybara 1.x and Capybara 2.
This is the simplest failing spec. Note that this spec runs without needing JavaScript, so the issue is also present in rack-only specs.
scenario 'require an unsubscription' do
  visit unsubscribe_index_path

  within main_content do
    choose list.description
    fill_in 'Email', :with => subscriber.email
    click_button 'Unsubscribe'
  end

  save_page # <--- Added to debug output

  # !!! HERE is the first failing assertion
  page.should have_content('You should have received a confirmation message')

  # Analytics event recorded
  # !!! this also is failing
  page.should have_event('Unsubscription', 'Sent', list_name)

  # If I comment out the previous two lines the spec passes on the CI machine;
  # this means that the form is submitted successfully, since the email is
  # triggered from controller code
  last_email_sent.should have_subject 'Unsubscribe request received'
  last_email_sent.should deliver_to subscriber.email
end
What I've tried:
ran the specs on different machines; they work on every dev machine and also on a staging server. I can only reproduce the failure on the CI machine, even outside of the CI environment (i.e. by running the specs via the command line)
increased Capybara.default_wait_time to a ridiculous 20
tried a brutal sleep before the page.should have_content line
upgraded RVM, Ruby, Capybara and Poltergeist to their latest versions on the CI machine
upgraded TeamCity to its latest version
The strangest thing I found came when I added a save_page call just before the failing line. If I run the spec on my machine and then on the CI machine where it fails, and compare the two saved files, the result is this:
$ diff capybara-201309071*.html
26a27,29
> <script type='text/javascript'>
> _gaq.push(["_trackEvent","Unsubscription","Sent","listname"]);
> </script>
90a94,96
> <div class="alert-message message notice">
> <p>You should have received a confirmation message</p>
> </div>
These are the two missing pieces that make the spec fail. So the form is submitted and the controller action runs successfully, but two pieces of the DOM are missing. How is that possible? And why is it happening only on one machine?
For the record, those two pieces of DOM are added with standard Rails tools, one with
redirect_to unsubscribe_index_path, notice: ...
and the other with the analytical gem
I've found the issue: in the two failing projects I'm using dalli_store as the session store, and I had put the config.cache_store = :dalli_store line in config/application.rb instead of config/environments/production.rb.
On the old CI server there was a memcached daemon running, hence all the specs were passing.
On the new server, since it's just a CI server and doesn't run any production or staging code, memcached is not present, so any session write (such as flash messages) was silently discarded, and this is why all those specs were failing.
I solved it by putting the config.cache line in the appropriate environment file, but I'm still wondering why the dalli gem doesn't raise any warning when no memcached is available. While the choice of not failing on a missing cache daemon is reasonable, since the application should work with no cached data, it could be a performance killer in production and might go unnoticed if no warning is given.
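A minimal sketch of the fix described above (my illustration; the application class name and memcached address are placeholders):

# config/environments/production.rb
YourApp::Application.configure do
  # Keep the memcached-backed store out of application.rb so that test/CI
  # environments fall back to the default cache/session stores.
  config.cache_store = :dalli_store, 'localhost:11211'
end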
