When running minitest tests, is it possible to peek at information about the errors that have happened?
For example, this test suite takes ten minutes to complete, but I would like some more info about the letter E appearing in the test results.
I don't want to wait ten minutes.
*** Running FRONTEND component engine specs
Run options: --seed 29704
# Running:
......................................................................................................................................................................................E...........
That's E for "error", so one of your tests is failing. Normally you get output that explains more. Once you identify which test is failing, you can run just that test in a more focused way, like:
ruby test/unit/broken_test.rb --name=test_that_is_broken
Where that is the path to your test script and the name of the testing method that's failing.
You may need to make your tests self-contained, able to be run this way, by using:
require_relative '../test_helper'
Or whatever the helper stub is that kicks off the testing framework. Some skeleton files contain things like require 'test_helper' which won't be found in your current $LOAD_PATH.
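For example, a minimal self-contained test file might look like this (a sketch; the helper path, class name, and test name are placeholders matching the command above):
# test/unit/broken_test.rb
require_relative '../test_helper'

class BrokenTest < Minitest::Test
  # Run just this method with:
  #   ruby test/unit/broken_test.rb --name=test_that_is_broken
  def test_that_is_broken
    assert_equal 4, 2 + 2
  end
end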
Say I have a user_spec.rb for my User model, and I want to run that test inside the rails console.
My first thought is to execute the usual shell command:
exec("./spec/user_spec.rb")
But is there a simpler way to run the spec? I'm trying to automate some of the tests (and reinvent the wheel a little, yes), so being able to trigger an rspec test inside of another Ruby class seems ideal.
Edit:
output = `./spec/user_spec.rb`
This will provide the rspec output, and $?.success? will then provide a pass/fail value. Is this the best solution here? Or is there a way to call an RSpec class itself?
As pointed out by Anthony in his comment, you can use RSpec::Core::Runner to invoke the command-line behavior from code or an interactive console. However, if you use something like Rails, consider that your environment is likely going to be set to development (or even production, if that is where you'll execute the code), so make sure that whatever you do doesn't have any unwanted side effects.
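A hedged example of such a guard, assuming a Rails console session (the exact check is up to your own safety requirements):
# Refuse to run specs outside of development, so a stray console
# session can't touch production data.
raise "refusing to run specs in #{Rails.env}" unless Rails.env.development?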
Another thing to consider is that RSpec stores its configuration globally, including all example groups that were registered with it before. That's why you'll need to reset RSpec between subsequent runs, which can be done via RSpec.reset.
So putting it all together, you'll get:
require 'rspec/core'
RSpec::Core::Runner.run(['spec/path/to_spec_1.rb', 'spec/path/to_spec_2.rb'])
RSpec.reset
The call to RSpec::Core::Runner.run will output to standard out and return the exit code as a result (0 meaning no errors, a non-zero exit code means a test failed).
..
Finished in 0.01791 seconds (files took 17.25 seconds to load)
2 examples, 0 failures
=> 0
You can pass other IO objects to RSpec::Core::Runner.run to specify where it should write its output. And you can also pass other command-line parameters in the first array argument of RSpec::Core::Runner.run, e.g. '--format=json' to output the results in JSON format.
So if you, for example, want to capture the output in JSON format to then further do something with it, you could do the following:
require 'json'
require 'rspec/core'
require 'stringio'

error_stream = StringIO.new
output_stream = StringIO.new

RSpec::Core::Runner.run(
  [
    'spec/path/to_spec_1.rb',
    'spec/path/to_spec_2.rb',
    '--format=json'
  ],
  error_stream,
  output_stream
)
RSpec.reset

# StringIO#string returns "" (never nil) when nothing was written,
# so check for emptiness rather than truthiness. Note that the error
# stream carries plain text, not JSON.
errors = error_stream.string unless error_stream.string.empty?
results = JSON.parse(output_stream.string) unless output_stream.string.empty?
Run bundle exec rspec to run all tests, or bundle exec rspec ./spec/user_spec.rb to run a specific spec file.
I am using Redis + Resque in production and want to test that jobs are getting queued and run properly. I am looking for something like this:
Resque.jobs(:queue_name).size.should == 0
post :some_action # This action causes a Resque job to be enqueued
# Test Enqueuing
Resque.jobs(:queue_name).size.should == 1
Resque.jobs(:queue_name).last.klass.should == "MyJob"
Resque.jobs(:queue_name).last.args.should == [1, "Arg_2"]
# Test performing
Resque.jobs(:queue_name).perform_all
# test the effect of running the job
How do I start Redis + Resque in the test environment? I don't want people to have to run a redis server manually all the time. I have tried the solution where you start the redis server in config.before(:suite), but redis-server never starts up in time and Resque complains that it can't connect to Redis.
I have tried using Resque.inline, but 1) it doesn't let me test that the job was enqueued, and 2) it always enqueues the job in the :inline queue (I want to test that the job ends up in the correct queue).
Personally, I rely on the gems I include in my project, Resque and Redis among them, to be tested by the developers who write them. As a result, I do not include testing them in my test suite. For example, when choosing a gem for my application, I look at the gem's documentation to see if TravisCI / Code Climate / etc. statistics are included and if the project is "green." If it is, I use it. If it's not, I look for an earlier (i.e. more stable) version, or look for alternatives. In the case of Resque and Redis for Rails, both of these are well maintained and popular, and thus extremely stable.
For my apps, I simply write tests where I present expectations of messages being called to Resque / Redis. For example:
it "should make a call to Resque for #my_job" do
expect(Resque).to_receive(:enqueue).with(SomeJob, args)
my_method_which_calls_resque
end
Then, assuming the method you are testing, my_method_which_calls_resque, looks something like:
def my_method_which_calls_resque
  ...
  Resque.enqueue(SomeJob, args)
  ...
end
This test should be successful.
For additional documentation on messages and setting RSpec expectations, see RelishApp's docs on message expectations.
Then, if you wish to test your code within the Resque job itself, you can create an RSpec test for the job. Example:
# spec/lib/jobs/some_job_spec.rb
describe Jobs::SomeJob do
  describe "#perform" do
    it "should update someone's account" do
      ...
    end
  end
end
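Since a Resque job's perform is just a class method, the spec body can call it directly. A hedged sketch of what such a test might look like (the Account model, its balance column, and the job arguments are all hypothetical):
describe Jobs::SomeJob do
  describe "#perform" do
    it "should update someone's account" do
      account = Account.create!(balance: 0)
      # Resque invokes perform with the arguments that were enqueued
      Jobs::SomeJob.perform(account.id, 100)
      expect(account.reload.balance).to eq(100)
    end
  end
end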
I'm trying to test minitest files like this:
COVERAGE=true ruby -Itest test/views/info_pages_test.rb
COVERAGE=true ruby -Itest test/views/errors_test.rb
Now my info_pages_test has 97% coverage and my errors_test has 75% coverage. Together they should cover 100%, but each time I run the above commands I get one result or the other: 75% or 97%. Is there a way to combine the results of the two test files into one coverage report?
Help would be greatly appreciated!
Here is the top of my minitest_helper.rb file:
## SimpleCov
require 'simplecov'
if ENV["COVERAGE"]
  SimpleCov.start('rails') do
    add_filter "/test/"
  end
  puts "Started SimpleCov"
end
I also have a .simplecov file in the application root, but using it gives me unpredictable results: I only get 100% coverage once in a while.
.simplecov file
SimpleCov.use_merging true
SimpleCov.merge_timeout 3600
The problem you are bumping into is that each of those "test suites" will overwrite the other's results, because the suite name (configurable via SimpleCov.command_name 'xyz') is the same for both, as opposed to when merging, say, Cucumber and RSpec results.
Preferred solution: Generate the coverage report by running the whole test suite at once, using rake test or some other, similar facility.
If you insist on running individual test files, you can trick SimpleCov into merging those results instead of overwriting them by supplying a pseudo-random command name, e.g. SimpleCov.command_name "MiniTest #{Time.now}", or (depending on your setup) one derived from ARGV, e.g. SimpleCov.command_name "Minitest #{File.basename(ARGV[1])}". The latter has the advantage of not duplicating results on re-runs of the same test file, since those will be overwritten on merge, but it may fail when you run all your tests and do not check for the presence of ARGV correctly, or when your test framework tampers with ARGV before you can grab it. A sketch of this follows.
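For instance, the helper from the question could be adapted like this; this is a sketch, and it uses $PROGRAM_NAME (which holds the test file's path when a file is run directly via ruby -Itest ...) rather than ARGV, since ARGV may well be empty in that invocation:
require 'simplecov'

if ENV["COVERAGE"]
  # Give each individually-run test file its own suite name so that
  # results merge instead of overwriting one another.
  SimpleCov.command_name "Minitest #{File.basename($PROGRAM_NAME)}"
  SimpleCov.start('rails') do
    add_filter "/test/"
  end
end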
Although you can make this work for individual test runs, in general I'd recommend to base coverage reports off full test suite runs only, as the other approaches leave room for error.
I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) through a local gem, but the specs for the code that each app uses live inside each app's ./spec, not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models that belong exclusively to app_2, inside a namespace, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper:
# These couple of lines are needed to generate reports for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear
SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end
if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, until some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2 (and I'm running the specs inside app_1)?
The first time this was a problem was when I tried to improve that model: I clicked on its report, saw which lines were uncovered, and tried to cover them by writing tests in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. As a more aggressive test I erased all the content of the spec file, but somehow I still got 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter to the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_name_space/"
I then moved to the app_2 folder and started setting up SimpleCov to see what results I would get there. And they turned out weirder.
For the same model I got 100% coverage; I did the same test of emptying the my_model_spec.rb file and still got 100%. So either this is really f**ed up, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say; well, when I run the example from the official documentation locally, I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
I hope the reports don't show positive results for lines that merely get evaluated somewhere in the app code, no matter where.
I think the expected behavior is that the results for, say, a model are related to its spec, and the same for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be interfering with SimpleCov and Coverage?
Thank you for taking the time to read this; if you need more detailed info, just ask.
Regarding your confusion about the model being 100% covered (if I understand it correctly): there's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or from somewhere else. Say I have a method foo and a method bar that calls foo. If I invoke bar in my specs, foo will of course also be shown as covered.
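A minimal illustration of that point (all names hypothetical):
# foo is never called directly by any spec...
def foo
  42
end

def bar
  foo + 1
end

# ...yet a spec that only exercises bar still marks foo's body as covered:
describe 'bar' do
  it 'returns 43' do
    expect(bar).to eq(43)
  end
end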
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code lives somewhere other than your project root should not lead to the loss of coverage reporting.
Two things in your base config: removing the base adapter (the SimpleCov.adapters.delete(:root_filter) line) is unnecessary, as adapters are basically glorified config chunks, and at that point the adapter will already have been executed (it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is never actually used. Please refer to the README for how to properly set up adapters, but I think for now you'd be fine with simply putting your custom config into the block when you start the coverage run:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want though is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage_path, which will store your coverage reports across the app suites. Say you have ~/projects/my_project/app[1-3]; then putting it into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec, for example. Merging has a default timeout of about 10 minutes, so you might need to set it to a higher value using merge_timeout 3600 in your config (the value is in seconds). For the specifics of these configuration options, please check out the README and the SimpleCov::Configuration documentation; these things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'
SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Spec'
  coverage_path File.dirname(__FILE__) + '/../../coverage' # Assuming this is in my_project/app1/spec/spec_helper.rb
  merge_timeout 3600
end
As the next step you might want to add filters to reject all non-project gems by path (a sketch follows), and you should be up and running.
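A hedged sketch of such a path filter, assuming the my_project/app1/spec/spec_helper.rb layout from the comment above:
SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Spec'
  coverage_path File.dirname(__FILE__) + '/../../coverage'
  merge_timeout 3600

  # Reject anything living outside the shared project tree, so gems
  # installed elsewhere don't show up in the report.
  project_root = File.expand_path('../../..', __FILE__)
  add_filter do |src|
    !src.filename.start_with?(project_root)
  end
end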
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, so I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, in each test I would like to skip. But sometimes I would like to skip all the tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way, but it works.
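To skip every test in a file without touching each method, as the original question asks, one option is to call omit from setup. A sketch, assuming omit is callable from setup in your test-unit version, and reusing the BACKEND_TEST variable from the question:
require 'test/unit'

class BackendTest < Test::Unit::TestCase
  def setup
    # Omit every test in this case unless the opt-in variable is set.
    omit('backend tests disabled; set BACKEND_TEST=1 to run them') unless ENV['BACKEND_TEST']
  end

  def test_talks_to_backend
    # Only runs when BACKEND_TEST is set.
  end
end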