I'm trying to run rspec (bin/rspec) from my project root, and suddenly it is not running the full test suite, which is usually over 1,000 examples.
Finished in 1.88 seconds (files took 5.17 seconds to load)
1 example, 0 failures
I can run individual files or sub-directories (e.g. spec/system/*) without problems, and rspec runs as normal in my other projects.
The only thing I can think of is that I recently entered the test console (bin/rails c test), created a user, browsed the test server on localhost, and then destroyed the user when I realised it wasn't what I needed. I've since reset the test database and cleared my tmp cache, just in case.
Any ideas on how to troubleshoot the cause would be appreciated!
Have you changed any file names? For rspec to pick up files by default, their names should end with _spec.rb.
Problem now fixed - solution:
I tried running each folder, and I noticed a difference in how focus: true was handled:
bin/rspec spec/controllers:
Run options: include {:focus=>true}
All examples were filtered out; ignoring {:focus=>true}
bin/rspec:
Run options: include {:focus=>true}
I commented out these lines in spec_helper to check, and it then ran everything correctly:
# config.filter_run_including focus: true
# config.run_all_when_everything_filtered = true
Searching for focus in the specs showed nothing, so I added the following to my spec_helper to print each example's metadata:
config.before do |x|
  p x.metadata
end
That revealed the one example it was focusing on, and lo and behold, the example had accidentally been written with 'fit' instead of 'it' in a previously pulled commit, which was causing the focus filter.
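For anyone else tracking down the same thing: fit is RSpec's shorthand for a focused example, so a single stray fit will silently filter out everything else while focus filtering is enabled. A minimal sketch (the example itself is made up):

fit 'runs alone while focus filtering is enabled' do
  expect(1 + 1).to eq(2)
end

# equivalent to:
it 'runs alone while focus filtering is enabled', focus: true do
  expect(1 + 1).to eq(2)
end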
Thanks for the help :-)
Related
Say I have a user_spec.rb for my User model, and I want to run that test inside the rails console.
My first thought is to execute the usual shell command:
exec("./spec/user_spec.rb")
But is there a simpler way to run the spec? I'm trying to automate some of the tests (and reinvent the wheel a little, yes), so being able to trigger an rspec test inside of another Ruby class seems ideal.
Edit:
output = `./spec/user_spec.rb`
This captures the rspec output, and $?.success? then provides a pass/fail value. Is this the best solution here, or is there a way to invoke an RSpec class itself?
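In other words, something like the following rough sketch (the spec path is just an example, and I'm assuming rspec is run through Bundler):

output = `bundle exec rspec spec/user_spec.rb`
puts output                                   # the full rspec report
puts $?.success? ? 'passed' : 'failed'        # exit status of the rspec run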
As pointed out by Anthony in his comment, you can use RSpec::Core::Runner to basically invoke the command line behavior from code or an interactive console. However, if you use something like Rails, consider that your environment is likely going to be set to development (or even production, if this is where you'll execute the code). So make sure that whatever you do doesn't have any unwanted side-effects.
Another thing to consider is that RSpec stores its configuration globally, including all example groups that were registered with it before. That's why you'll need to reset RSpec between subsequent runs, which can be done via RSpec.reset.
So putting it all together, you'll get:
require 'rspec/core'
RSpec::Core::Runner.run(['spec/path/to_spec_1.rb', 'spec/path/to_spec_2.rb'])
RSpec.reset
The call to RSpec::Core::Runner.run will output to standard out and return the exit code as a result (0 meaning no errors, a non-zero exit code means a test failed).
..
Finished in 0.01791 seconds (files took 17.25 seconds to load)
2 examples, 0 failures
=> 0
You can pass other IO objects to RSpec::Core::Runner.run to specify where it should write its output. You can also pass additional command line parameters in the first array argument of RSpec::Core::Runner.run, e.g. '--format=json' to output the results in JSON format.
So if you, for example, want to capture the output in JSON format to then further do something with it, you could do the following:
require 'rspec/core'
require 'stringio'
require 'json'

error_stream  = StringIO.new
output_stream = StringIO.new

RSpec::Core::Runner.run(
  [
    'spec/path/to_spec_1.rb',
    'spec/path/to_spec_2.rb',
    '--format=json'
  ],
  error_stream,
  output_stream
)
RSpec.reset

# StringIO#string is never nil, so check for content before parsing
errors =
  unless error_stream.string.empty?
    JSON.parse(error_stream.string)
  end

results =
  unless output_stream.string.empty?
    JSON.parse(output_stream.string)
  end
Run bundle exec rspec to run all tests, or bundle exec rspec ./spec/user_spec.rb to run a specific spec file.
I've been struggling with this for quite a while now: I'm trying to upgrade an app from Rails 3.2 to Rails 4. While on Rails 3.2 all specs are passing, they fail under certain conditions in Rails 4.
Some specs are passing in isolation while failing when run together with other specs.
Example Video
https://www.wingolf.org/ak-internet-files/Spec_Behaviour.mp4 (4 mins)
This example video shows:
Running 3 specs using :focus: all green.
Running them together with another spec: two specs that passed before now fail.
Running the 3 specs, but inserting two empty lines: one spec fails.
Undo does not help when using guard.
focus/unfocus does not help.
Restarting guard does not help.
Running all specs and then running the 3 specs again does help and makes them green again. But adding the other task makes two specs fail again.
As one can see, some specs are red when run together with other specs. Even entering blank lines can make a difference.
More Observations
For some specs, passing or failing occurs randomly when run several times.
The behavior is not specific to one development machine but can be reproduced on travis.
Deleting the database completely between specs using database_cleaner does not help.
Calling Rails.cache.clear between specs does not help.
Wrapping each spec in an ActiveRecord::Base.transaction does not help.
This occurs in Rails 4.0.0 as well as in Rails 4.1.1.
Using this minimal spec_helper.rb without spring or anything does not help.
Using guard vs. using bundle exec rspec some_spec.rb:123 directly doesn't make a difference.
This behavior also affects model specs, so it has nothing to do with parallel database connections for feature specs.
I've already tried keeping as many gems as possible at the same versions as in the (green) Rails 3.2 branch, including guard, rspec, factory_girl, etc. This does not help either.
Update: Observations Based on Comments & Answers
Thanks to engineerDave, I've inserted require 'pry'; binding.pry into one of the affected specs. Using pry's cd and show-source, it was surprisingly easy and fun to narrow down the problem: apparently, this has_many :through association does not return any objects when run together with other specs, even when called with (true) to force a reload.
has_many(:groups,
  -> { where('dag_links.descendant_type' => 'User').uniq },
  through: :memberships,
  source: :ancestor, source_type: 'Group'
)
If I call groups directly, I get an empty result. But if I go through the memberships, the correct groups are returned:
#user.groups # => []
#user.groups(true) # => []
#user.memberships.collect { |m| m.group } # returns the correct groups
Has the has_many :through behavior changed in Rails 4 in a way that could be responsible? (Remember: the spec works in isolation.)
Any help, insights and experiences are appreciated. Thanks very much in advance!
Code
Current master branch on Rails 3.2: all green.
Rails 4 branch: strange behavior.
The file/commit seen in the video: strange behavior.
All specs passing on Travis for Rails 3.2.
Diff of the Gemfile.lock (or use git diff master..sf/rails4-minimal-update Gemfile.lock | grep rspec)
How to Reproduce
This is how one can check if the issue still exists:
Preparation
git clone git@github.com:fiedl/wingolfsplattform.git
cd wingolfsplattform
git checkout sf/rails4-minimal-update
bundle install
# please create `config/database.yml` according to your needs.
bundle exec rake db:create db:migrate db:test:prepare
Run the specs
bundle exec rspec ./vendor/engines/your_platform/spec/models/user_group_membership_spec.rb
bundle exec rspec ./vendor/engines/your_platform/spec/models/user_group_membership_spec.rb:213
The problem still exists if the spec at :213 is green in the second call but red when run together with the other specs in the first call.
Based on the facts that:
you're using the should syntax,
you indicate you've upgraded recently (perhaps a bundle update?),
and your failure messages indicate an error on a nil object,
is something like this perhaps what is causing it?
https://stackoverflow.com/a/16427072/793330
Are you calling an object in your test which hasn't been instantiated yet?
I think this might be an rspec 3 upgrade issue where should is deprecated.
Have you ruled out an rspec gem upgrade to the new rspec 3 syntax (2.99 or 3.0.0+) as the culprit?
"Remove support for deprecated expect(...).should. (Myron Marston)"
IMO this behavior would not be caused by a Rails 4 update, as it's centered around your test suite.
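For illustration, the syntax change in question looks roughly like this (a made-up example, not taken from your suite):

# RSpec 2 "should" syntax (deprecated in 2.99, disabled by default in RSpec 3)
user.name.should == 'Alice'

# RSpec 3 "expect" syntax
expect(user.name).to eq('Alice')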
Update (with pry debug):
Also you could use the pry gem to get a window into what is going on in your specs.
Essentially you can put a big "stop" sign (similar to a debug break) right before the spec executes to get a handle on the environment at that point.
it {
  require 'pry'; binding.pry
  should == something
}
Be aware, though, that sometimes these pry calls wreak havoc on guard's threading, and you have to suspend it with CTRL+Z and then kill -9 the PID that shows.
Update #2: Looking at the update above.
You might be running up against FactoryGirl issues, given your has_many problem.
You may need to trigger a before/after action in your factory to pre-populate the associated record. Although this could get messy, i.e. here be monsters, you can trigger after and before callbacks in your factory that will bring these objects into being:
after(:create) do |instance|
  # do stuff here
end
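As a rough sketch of that idea (the factory and model names here are invented, assuming FactoryGirl callbacks), it could look like:

FactoryGirl.define do
  factory :user do
    first_name 'John'

    after(:create) do |user|
      # pre-populate the has_many :through association so it has something to return
      group = FactoryGirl.create(:group)
      FactoryGirl.create(:membership, user: user, group: group)
    end
  end
end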
I started along the process of trying to speed up my tests, mostly by following this railscast's suggestions. So you don't have to watch it, those are:
Replaced "bundle exec spec" with "bin/rspec spec"
Got rid of excess before filters
Tagged slow tests as such for separate running
Replaced unnecessary Factory Creates with Builds
Used Zeus to speed up start-up time
Applied VCR to examples that reached out to external APIs
Deferred Garbage Collection
(I skipped parallel testing for now, because I want to get to the bottom of this issue, and parallel testing seems like it would just minimize/mask it.)
At any rate, I can't figure out whether these changes have accomplished much, because the runtimes for my tests are highly variable. (They are variable even when I go outside of the branch I've been working on for these tests -- it seems like something preexisting is causing the problem.) I've run the exact same suite of tests and gotten a wide range of runtimes, from 52 seconds to almost 1 minute 45! Again, the EXACT same tests.
Moreover, the general trend seems to be that if I run a test a few times in a row, the runtimes go up by ~20 second intervals each time. Then if I change something small in spec_helper or wait for a while (I think the latter -- I know this has happened when I've made small changes to garbage collection), the times go back down.
My guess is that this problem is related to garbage collection -- but ideally, I'm handling that more efficiently now. I first had it collect garbage every ~15 seconds:
spec_helper.rb
config.before(:all) { DeferredGarbageCollection.start }
config.after(:all) { DeferredGarbageCollection.reconsider }
support/deferred_garbage_collection.rb
class DeferredGarbageCollection
  DEFERRED_GC_THRESHOLD = (ENV['DEFER_GC'] || 15.0).to_f

  @@last_gc_run = Time.now

  def self.start
    GC.disable if DEFERRED_GC_THRESHOLD > 0
  end

  def self.reconsider
    if DEFERRED_GC_THRESHOLD > 0 && Time.now - @@last_gc_run >= DEFERRED_GC_THRESHOLD
      GC.enable
      GC.start
      GC.disable
      @@last_gc_run = Time.now
    end
  end
end
and then, when that was causing the aforementioned problems, I switched to a more straightforward every-ten-test collection system:
spec_helper.rb
# counter is defined outside the blocks so it persists across examples
counter = 0

config.after(:each) do
  counter += 1
  if counter > 9
    GC.enable
    GC.start
    GC.disable
    counter = 0
  end
end

config.after(:suite) do
  counter = 0
end
I've tried to isolate this to one group of tests (models/requests/controllers), but they all display some amount of variability in how much time they eat up.
Any ideas what's going wrong here?
EDIT -- proof/example of what's going on here:
Finished in 22.88 seconds
48 examples, 0 failures
➜ my_app git:(faster-tests) ✗>zeus test spec/models
................................................
Finished in 34.89 seconds
48 examples, 0 failures
➜ my_app git:(faster-tests) ✗>zeus test spec/models
................................................
Finished in 44.68 seconds
48 examples, 0 failures
➜ my_app git:(faster-tests) ✗>zeus test spec/models
................................................
Finished in 14.36 seconds
48 examples, 0 failures
➜ my_app git:(faster-tests) ✗>zeus test spec/models
................................................
Finished in 18.74 seconds
48 examples, 0 failures
➜ my_app git:(faster-tests) ✗>
Note that it eventually seems to reset.
I'm not sure about all the other things you have done, but I can comment on zeus. I did not alter anything in my environment except adding zeus, and I can run 50 examples in a couple of seconds. For tests to take as long as yours when running zeus seems wrong, and I'm guessing that's your problem. After you run zeus start, are all the zeus commands green? I also use zeus rspec spec/.... to run my specs instead of zeus test.
I used this railscast and this thoughtbot article along with the zeus README. Remember to use the appropriate Ruby version mentioned in the railscast, and make sure that you don't have spork anymore if you were using it. Also remember to install zeus outside of your Gemfile; zeus should not use Bundler. Anyway, my suggestion would be to focus on zeus first and get that working correctly. It has really helped for me.
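For reference, a typical zeus workflow looks roughly like this (the spec path is just an example):

gem install zeus          # installed globally, outside the Gemfile, not via Bundler
zeus start                # boots the Rails environment once; wait for the listed commands to turn green
zeus rspec spec/models    # in another terminal, run specs against the preloaded environment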
I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) from a local gem, but the specs for the code that each app uses live inside each app's ./spec directory and not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models that belong exclusively to app_2, inside a namespace, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper:
# These couple of lines are needed to generate reports for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear

SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end

if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, but then some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2? (I'm running the specs inside app_1.)
The first time that was a problem was when I tried to improve that model: I clicked on its report, saw which lines were uncovered, and tried to fix them by writing tests for them in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. I tried a more aggressive test and erased all the content in the spec file, but somehow I was still getting the 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter to the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_name_space/"
I then moved to the app_2 folder and started setting up SimpleCov there to see what results I would get. And they turned out weirder.
For the same model I got 100% coverage. I did the same test of emptying the my_model_spec.rb file and still got 100%. So this is really f**ed up, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say? Well, when I run the example from the official documentation locally, I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
I hope the reports don't show lines as covered just because they get evaluated somewhere in the app code, no matter where.
I think the expected behavior is that the results for a model, for example, are related to its spec; same thing for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be messing with SimpleCov and Coverage?
Thank you for taking the time to read this. If you need more detailed info, just ask.
Regarding your confusion with the model being 100% covered, as I'm not sure that I understand correctly: There's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or "somewhere else". Say I have a method "foo" and a method "bar" that calls foo. If I invoke bar in my specs, of course foo will also be shown as covered.
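As a tiny illustration of that point (made-up code, not from your apps):

# example.rb
def foo
  'result'
end

def bar
  foo # bar calls foo, so executing bar also executes foo
end

# example_spec.rb -- never references foo directly,
# yet Coverage/SimpleCov will report foo's lines as covered
require_relative 'example'

describe 'bar' do
  it "returns foo's result" do
    expect(bar).to eq('result')
  end
end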
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code is at some different point than your project root should not lead to the loss of coverage reporting.
Two things in your base config: removing the base adapter (line 2) is unnecessary, as adapters are basically glorified config chunks, and at this point it will already have been executed (since it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is not used. Please refer to the README as to how to properly set up adapters, but I think you'd be fine with simply using this in the SimpleCov config block when you start the coverage run for now:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want though is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage_path, which will store away your coverage reports across your app suites. Say you have ~/projects/my_project/app[1-3], then putting that into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec for example. Merging has a default timeout of ~10 minutes, so you might need to set this to a higher value using merge_timeout 3600 in your config (those are seconds). For the specifics of these configuration options please again check out the README and the SimpleCov::Configuration documentation. These things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'

SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Spec'
  coverage_path File.dirname(__FILE__) + '/../../coverage' # Assuming this is in my_project/app1/spec/spec_helper.rb
  merge_timeout 3600
end
Next, you might want to add filters to reject all non-project gems by path, and you should be up & running.
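A rough sketch of such path-based filters (the directory names are assumptions about your setup):

SimpleCov.start 'rails' do
  # anything whose path contains these strings is excluded from the report
  add_filter '/vendor/bundle/'
  add_filter '/gems/'
end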
>rails -v
Rails 1.2.6
>ruby -v
ruby 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]
When I run a test case (one that tests a Rails model class) like this, it takes 20-30 seconds before it starts executing the tests (before it shows "Loaded suite..."). What gives?
>ruby test\unit\category_test.rb
require File.dirname(__FILE__) + '/../test_helper'

class CategoryTest < Test::Unit::TestCase
  def setup
    Category.delete_all
  end

  def test_create
    obCategoryEntry = Category.new({:name=>'Apparel'})
    assert obCategoryEntry.save, obCategoryEntry.errors.full_messages.join(', ')
    assert_equal 1, Category.count
    assert_not_nil Category.find(:all, :conditions=>"name='Apparel'")
  end

  # .. 1 more test here
end
This one is Rails using a MySQL DB with no fixtures. This time it clocked 30+ seconds to start up.
Take a look at this Rails Test Server.
A quote from the author:
"Every time you run a test in a Rails
application, the whole environment is
loaded, including libraries that don’t
change between two consecutive runs.
That can take a considerable amount of
time. What if we could load the
environment once, and only reload the
changing parts before each run?
Introducing RailsTestServing.
With RailsTestServing, the run time of
a single test file has gone from 8
seconds down to .2 of a second on my
computer. That’s a x40 speed
improvement. Now, I don’t think twice
before hitting ⌘R in TextMate. It
feels liberating!"
(This was featured on the Rails Envy Podcast this past week which is where I found this.)
When starting any tests, Rails first loads any fixtures you have (in test/fixtures) and recreates the database with them.
20-30 seconds sounds very slow though. Do you have a lot of fixtures that need to be loaded before your tests run, or is your database running slow?
Ruby's gem tool follows a path discovery algorithm which, apparently, is not friendly to Windows (which, as I can see from your ruby -v, is what you're running).
You can get a clear picture if you trace, for example, a Rails application loading with ProcMon. Every (and I really mean every) require starts a scan over all directories in Ruby's path plus all gem directories. A typical require takes 20 ms on an average machine. Since Rails makes hundreds of requires, those 20 ms easily add up to seconds every time you launch the Rails environment. Add in the time to initialize the fixtures in the database and you get a better idea of why it takes so long just to begin running the test cases.
Perhaps because of each file system's architecture and implementation (path caching etc.), this is less of a problem on Linux than on Windows. I don't know whom you should blame, though. It looks like the NTFS file system could be improved with a better path caching implementation, but clearly the gem tool could implement the caching itself and make its performance less dependent on the platform.
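If you want to see how much of that delay is plain environment loading, one quick check (a sketch, not from the original answers) is to time the environment require by itself from the application root:

require 'benchmark'

seconds = Benchmark.realtime do
  require File.expand_path('config/environment', File.dirname(__FILE__))
end
puts 'Rails environment loaded in %.2f seconds' % seconds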
It seems like Test::Unit is the simplest, but also one of the slowest, ways to do unit testing with Ruby. One of the alternatives is ZenTest.
Test unit startup isn't particularly slow, and nowhere near 20 seconds.
(11:39) ~/tmp $ cat test_unit.rb
require 'test/unit'
class MyTest < Test::Unit::TestCase
def test_test
assert_equal("this", "that")
end
end
(11:39) ~/tmp $ time ruby test_unit.rb
Loaded suite test_unit
Started
F
Finished in 0.007338 seconds.
1) Failure:
test_test(MyTest) [test_unit.rb:4]:
<"this"> expected but was
<"that">.
1 tests, 1 assertions, 1 failures, 0 errors
real 0m0.041s
user 0m0.027s
sys 0m0.012s
It's probably something you're doing in your tests. Are you doing anything complicated? Setting up a database? Retrieving something from the internet?
Complete shot in the dark, but the majority of the time I see long startup times, it's due to some sort of reverse DNS lookup happening during TCP socket communication somewhere along the way.
Try adding:
require 'socket'
Socket.do_not_reverse_lookup = true
at the top of your test file after your other require line.
What does your test_helper.rb look like? Are you using instantiated fixtures?
self.use_instantiated_fixtures = true
[edit]
If this is set to true, try setting it to false.