minitest with simplecov - ruby-on-rails

I'm trying to test minitest files like this:
COVERAGE=true ruby -Itest test/views/info_pages_test.rb
COVERAGE=true ruby -Itest test/views/errors_test.rb
Now my info_pages_test has 97% coverage and my errors_test has 75% coverage. Together they should cover 100%. But each time I run one of the above commands, I get one result or the other: 75% or 97%. Is there a way to combine the results of the two test files into one coverage report?
Help would be greatly appreciated!
Here is the top of my minitest_helper.rb file
## SimpleCov
require 'simplecov'
if ENV["COVERAGE"]
  SimpleCov.start('rails') do
    add_filter "/test/"
  end
  puts "Started SimpleCov"
end
I also have a .simplecov file in the application root, but using it gives me unpredictable results: only once in a while do I get 100% coverage.
.simplecov file
SimpleCov.use_merging true
SimpleCov.merge_timeout 3600

The problem you are bumping into is that each of those "test suites" overwrites the other's results, because the suite name (configurable via SimpleCov.command_name 'xyz') is the same for both runs, as opposed to when merging, for example, Cucumber and RSpec results, where the names differ.
Preferred solution: generate the coverage report by running the whole test suite at once, using rake test or some other, similar facility.
If you insist on running individual test files, you can trick SimpleCov into merging those results instead of overwriting them by supplying a pseudo-unique command name, e.g. SimpleCov.command_name "Minitest #{Time.now}", or (depending on your setup) one derived from ARGV, e.g. SimpleCov.command_name "Minitest #{File.basename(ARGV[1])}". The latter has the advantage of not duplicating results on re-runs of the same test file, since those are overwritten on merge, but it may fail when you run all your tests and do not check for the presence of ARGV correctly, or when your test framework tampers with ARGV before you can grab it.
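Here is a minimal sketch of that trick applied to the helper above. It uses $0 (the path of the script being executed) rather than ARGV, which may be empty or altered depending on how the tests are launched; adjust to your setup.
require 'simplecov'
if ENV["COVERAGE"]
  SimpleCov.start('rails') do
    add_filter "/test/"
  end
  # Unique-ish name per test file, so results merge instead of overwriting:
  SimpleCov.command_name "Minitest #{File.basename($0)}"
  puts "Started SimpleCov"
end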
Although you can make this work for individual test runs, in general I'd recommend basing coverage reports on full test suite runs only, as the other approaches leave room for error.

Related

Execute an rspec test in Rails Console

Say I have a user_spec.rb for my User model, and I want to run that test inside the rails console.
My first thought is to execute the usual shell command:
exec("./spec/user_spec.rb")
But is there a simpler way to run the spec? I'm trying to automate some of the tests (and reinvent the wheel a little, yes), so being able to trigger an rspec test inside of another Ruby class seems ideal.
Edit:
output = `./spec/user_spec.rb`
This will provide the rspec output, and $?.success? will then provide a pass/fail value. Is this the best solution here? Or is there a way to call an RSpec class itself?
As pointed out by Anthony in his comment, you can use RSpec::Core::Runner to basically invoke the command line behavior from code or an interactive console. However, if you use something like Rails, consider that your environment is likely going to be set to development (or even production, if this is where you'll execute the code). So make sure that whatever you do doesn't have any unwanted side-effects.
Another thing to consider is that RSpec stores its configuration globally, including all example groups that were registered with it before. That's why you'll need to reset RSpec between subsequent runs. This can be done via RSpec.reset.
So putting it all together, you'll get:
require 'rspec/core'
RSpec::Core::Runner.run(['spec/path/to_spec_1.rb', 'spec/path/to_spec_2.rb'])
RSpec.reset
The call to RSpec::Core::Runner.run will output to standard out and return the exit code as a result (0 meaning no errors, a non-zero exit code means a test failed).
..
Finished in 0.01791 seconds (files took 17.25 seconds to load)
2 examples, 0 failures
=> 0
You can pass other IO objects to RSpec::Core::Runner.run to specify where it should write its output. You can also pass other command-line parameters in the first array argument, e.g. '--format=json' to output the results in JSON format.
So if you, for example, want to capture the output in JSON format to then further do something with it, you could do the following:
require 'rspec/core'
require 'json'
require 'stringio'

error_stream = StringIO.new
output_stream = StringIO.new

RSpec::Core::Runner.run(
  [
    'spec/path/to_spec_1.rb',
    'spec/path/to_spec_2.rb',
    '--format=json'
  ],
  error_stream,
  output_stream
)
RSpec.reset

# The error stream receives plain text (warnings, load errors), not JSON.
errors = error_stream.string
# StringIO#string is never nil, so check for emptiness rather than truthiness.
results = JSON.parse(output_stream.string) unless output_stream.string.empty?
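If the run produced output, the parsed hash includes a human-readable summary among the other result data; a small usage sketch (the key name follows RSpec's JSON formatter):
puts results["summary_line"] if results # e.g. "2 examples, 0 failures"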
Run bundle exec rspec to run all tests, or bundle exec rspec ./spec/user_spec.rb to run a specific test file.

minitest: early look at failing tests

When running minitest tests, is it possible to peek at the information about the errors that have occurred?
For example, this test suite takes ten minutes to complete, but I would like some more info about the letter E appearing in the test output.
I don't want to wait ten minutes.
*** Running FRONTEND component engine specs
Run options: --seed 29704
# Running:
......................................................................................................................................................................................E...........
That's E for "error", so one of your tests is failing. Normally you get output that explains more once the run finishes. Once you identify which test is failing, you can run that test in a more focused way, like:
ruby test/unit/broken_test.rb --name=test_that_is_broken
where those are the path to your test script and the name of the test method that's failing.
You may need to make your tests self-contained, able to be run this way, by using:
require_relative '../test_helper'
or whatever the helper stub is that kicks off the testing framework. Some skeleton files contain things like require 'test_helper', which won't be found unless the test directory is on your $LOAD_PATH (e.g. via ruby -Itest).
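For illustration, a hypothetical self-contained test file (the class, method, and model names are placeholders for your own):
# test/unit/broken_test.rb
require_relative '../test_helper'

class BrokenTest < ActiveSupport::TestCase
  def test_that_is_broken
    assert_equal "expected", Widget.new.label # Widget is a stand-in model
  end
end
With that in place, the focused invocation shown above works without any -I flags.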

How do I use the --order option in RSpec to order files

I want to run feature specs written in rspec/capybara in a fixed sequence as follows:
signup_spec.rb
login_spec.rb
project_creation_spec.rb
project_migration_spec.rb
The documentation for the --order option says:
Use the --order option to tell RSpec how to order the files, groups, and
examples
How would I use the .rspec file to specify this required order?
I have a shell script with test cases running in a sequence like:
rspec spec/features/signup_spec.rb
rspec spec/features/login_spec.rb
rspec spec/features/project_creation_spec.rb
rspec spec/features/project_migration_spec.rb
The --order option does not allow this. It only lets you switch between the default ordering (essentially the order in which tests are defined, which in turn depends on file system ordering) and random ordering (optionally with a seed).
I would consider any order dependence to be a bug - the random ordering option is there to flush out such bugs.
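For reference, a minimal .rspec showing the documented usage (the seed value here is arbitrary):
--order random
--seed 1234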

Showing only specific test case in test.log

I am writing Rails tests using the standard Test::Unit/TestCase.
Is there any way to somehow filter what gets printed to the log, so that you only print the stack for specific test cases?
I have a functional test file with many test cases in it, and I'm really only interested in debugging one test case. Printing my own log statements still requires searching through a few thousand lines of generated log. Or something similar to the RSpec 'pending' functionality.
Run ruby test/unit/my_model.rb from the command line to run one test suite. You can also use a debugger, such as one wrapped by RubyMine, or pry, to stop on a specific test case and look at the log.
But if a sledge-hammer does not solve the problem, you can use tweezers: set config.logger.level = Logger::WARN in your test.rb (see "Set logging levels in Ruby on Rails").
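Combining the two, here is a sketch that keeps the log quiet by default but restores full detail for the one test you are debugging (the class and method names are placeholders):
class MyModelTest < ActiveSupport::TestCase
  def test_the_interesting_case
    old_level = Rails.logger.level
    Rails.logger.level = Logger::DEBUG # full detail for this test only
    # ... exercise the code under test ...
  ensure
    Rails.logger.level = old_level
  end
end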
It is probably better if, instead of strangling the output to log/test.log, you become familiar with a command such as grep. grep allows you to run very advanced search queries through files or directories, as long as you're running on some flavor of *nix. The simplest use would be:
grep search_term file_name
The reason I say you shouldn't constrict the log output is because someday that could bite you in the **s. Hope this helps.
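For example, to print each match with its line number and a few lines of surrounding context (the search term is a placeholder):
grep -n -B 2 -A 5 "MyModelTest" log/test.log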

SimpleCov with multiple apps - or in short, how does Simplecov work?

I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) from a local gem, but the specs for the code that each app uses are inside each app's ./spec, not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models, inside a namespace, that belong exclusively to app_2, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper.
# These lines are needed to generate reports for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear
SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end

if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, until some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2 (and I'm running the specs inside app_1)?
The first time that was a problem was when I tried to improve that model: I clicked on its report, saw which lines were uncovered, and tried to cover them by writing tests in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. As a more aggressive test I erased all the content of the spec file, but somehow I still got 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter to the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_namespace/"
I then moved to the app_2 folder and started setting up SimpleCov to see what results I would get there. They turned out weirder.
For the same model I got 100% coverage; I did the same test of emptying the my_model_spec.rb file and still got 100%. So either this is really messed up, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say; well, when I run the example from the official documentation locally, I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
I hope the reports don't show positive results for lines that get evaluated somewhere in the app code, no matter where.
I think the expected behavior is that the results for, say, a model are related to its spec, and the same for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be messing with SimpleCov and Coverage?
Thank you for taking the time to read this, if you need more detailed info, just ask.
Regarding your confusion with the model being 100% covered (as I'm not sure I understand correctly): there's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or from somewhere else. Say I have a method foo and a method bar that calls foo. If I invoke bar in my specs, of course foo will also be shown as covered.
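A tiny illustration of that point (hypothetical methods):
def foo
  42
end

def bar
  foo + 1
end

# A spec that only asserts on bar, e.g. expect(bar).to eq(43), still executes
# the body of foo, so foo shows up as covered.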
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code lives somewhere other than your project root should not lead to the loss of coverage reporting.
Two things in your base config: removing the base adapter (line 2) is unnecessary, as adapters basically are glorified config chunks, and at this point it will already have been executed (it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is never used. Please refer to the README for how to properly set up adapters, but I think you'd be fine with simply using this in the SimpleCov config block when you start the coverage run for now:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want though is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage_path, which will store away your coverage reports across your app suites. Say you have ~/projects/my_project/app[1-3], then putting that into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec for example. Merging has a default timeout of ~10 minutes, so you might need to set this to a higher value using merge_timeout 3600 in your config (those are seconds). For the specifics of these configuration options please again check out the README and the SimpleCov::Configuration documentation. These things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'
SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Specs'
  coverage_path File.dirname(__FILE__) + '/../../coverage' # Assuming this is in my_project/app1/spec/spec_helper.rb
  merge_timeout 3600
end
Next, you might want to add filters to reject all non-project gems by path, and you should be up and running.
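A minimal sketch of such a filter, inside the same SimpleCov.start block (the directory name is an assumption about where your bundled gems live):
add_filter "/vendor/bundle/" # reject bundled gem sources by path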
