How to unstub a Mocha mock?

I have the following Mocha mock, which works great.
In a test.rb file:
setup do
  Date.stubs(:today).returns(Date.new(2011, 7, 19))
  Time.stubs(:now).returns(Time.new(2011, 1, 1, 9, 0))
end
The problem is that timing is broken for the tests: after the tests run, the Date and Time objects are still mocked(!):
Finished in -21949774.01594216 seconds.
I added the following:
teardown do
  Date.unstubs(:today)
  Time.unstubs(:now)
end
This throws the following error for each test:
WARNING: there is already a transaction in progress
Is this the proper way to unstub? Is it better to unstub at the end of the test file or even at the end of unit test suite?
Working in Rails 3.0.7 and Mocha 0.9.12.
Thanks.

I don't know if this is fully your problem, but the method is just unstub, not pluralized.
Other than that, there should be no issue. You definitely want to unstub after each test (or set of tests, if a bunch of tests need the stubbing) because once stubbed, it will stay stubbed, and that can screw up other tests.
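For reference, the corrected teardown from the question looks like this:
teardown do
  Date.unstub(:today)
  Time.unstub(:now)
end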

The accepted answer is spreading misinformation and should be considered harmful.
One of the main purposes of a mocking library like Mocha is to provide automatic mock/stub teardown as part of its integration with the various testing libraries. In fact, if you look at the GitHub repo for Mocha, you will see that significant maintenance effort goes into making Mocha work smoothly with multiple versions of several different testing frameworks.
If this isn't working properly, then you need to figure out why Mocha's built-in teardown isn't working. Unstubbing manually in your own teardown is just papering over the problem, and could hide subtler issues with stub leakage or Mocha otherwise misbehaving.
If I had to take a wild guess, my money would be on your stub somehow being set up outside of an actual test, because that's the most common cause I've seen for this kind of thing in the wild, but there isn't enough information in the question to say for sure.
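For what it's worth, here is a sketch of what that built-in teardown looks like when it works: with Mocha's test-framework integration loaded, every stub is reverted automatically at the end of each test. The require lines below assume a present-day Minitest setup (they have changed across Mocha versions):
require 'minitest/autorun'
require 'mocha/minitest' # wires Mocha into Minitest, including automatic stub teardown
require 'date'

class TodayTest < Minitest::Test
  def test_today_is_stubbed
    Date.stubs(:today).returns(Date.new(2011, 7, 19))
    assert_equal Date.new(2011, 7, 19), Date.today
  end
  # no manual unstub needed: Mocha restores Date.today when the test finishes
end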

Related

RSpec "retest broken tests from last time" flag

Is there any way to retest previously broken tests?
So, say, I run rspec and several tests in different files and directories fail.
I fix something, and now I have to manually specify all the files and folders I want to retest, or run the tests for the whole project again (which takes a considerable amount of time for big projects).
What I was looking for is something like a flag
rspec --prev-failed-only
I realize that such a flag would require a considerable amount of additional work from rspec, like storing the results of previous runs and so on. But I think it would be super convenient.
Is there any such (or similar) tool/gem?
The rspec-rerun gem does what you want: https://github.com/dblock/rspec-rerun
https://github.com/rspec/rspec-core/issues/456 has a good discussion on the topic of making rspec itself be able to rerun failed tests.
On the Giant Robots podcast, Sam Phippen of the core team mentioned this feature is due to be added to RSpec soon.
In case anyone else finds this: a month after the question was asked (open source <3), RSpec 3.3 introduced the --only-failures option, along with the delightfully handy --next-failure (-n) option. See rspec --help for more info.
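One detail worth adding: both flags rely on RSpec persisting example statuses between runs, so you first need to point RSpec at a status file in your spec_helper.rb (the path is up to you):
# spec/spec_helper.rb
RSpec.configure do |config|
  # --only-failures and --next-failure read and write this file
  config.example_status_persistence_file_path = "spec/examples.txt"
end
After one full run has populated the file, rspec --only-failures will rerun just the failures.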

(Rails) PaperTrail and RSpec

I'm having trouble with PaperTrail (auto-versioning of objects for Rails) being used with RSpec tests. Normally I want my tests to run without PaperTrail versioning, but there are a handful of tests for which I want PaperTrail turned on. I typically run my tests with Guard and Spork, and I can use things like PaperTrail.enabled = true and PaperTrail.enabled = false around a given test and everything works fine.
However, when I run the tests with RSpec, the tests requiring PaperTrail fail. To be more specific, it appears that while code in before filters can produce version objects, code in the tests cannot. After a considerable amount of digging and tinkering and trying code snippets (I've tried this and this), it looks like the best solution is to use the require "paper_trail/frameworks/rspec" line mentioned in the PaperTrail README.
Unfortunately, each of these keeps me right where I started: tests pass with Guard/Spork but not with vanilla RSpec. This is in particular an issue because while I use Spork locally, our continuous integration server runs RSpec directly.
Does anyone have any insight?
PaperTrail now has documentation on testing with vanilla RSpec:
https://github.com/paper-trail-gem/paper_trail#7b-rspec
After including require 'paper_trail/frameworks/rspec' in your spec/rails_helper.rb:
... PaperTrail will be turned off for all tests by default. To enable PaperTrail for a test you can either wrap the test in a with_versioning block, or pass the versioning: true option to a spec block.
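Putting the README's two options together, a spec might look like this. This is only a sketch: Widget is a made-up versioned model, and the expectations assume a fresh database per example.
# enabled for a whole example group via RSpec metadata
RSpec.describe Widget, versioning: true do
  it "records a version on create" do
    widget = Widget.create!(name: "wotsit")
    expect(widget.versions.size).to eq(1)
  end
end

# or enabled only inside a block
RSpec.describe Widget do
  it "records versions only inside the block" do
    with_versioning do
      widget = Widget.create!(name: "wotsit")
      expect(widget.versions.size).to eq(1)
    end
  end
end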
Somehow my issue was fixed by changing before(:all) and after(:all) hooks to before(:each) and after(:each).

What testing tools and methods did Corey Haines use at GoGaRuCo 2011?

In this video from GoGaRuCo 2011, Corey Haines shows some techniques for making Rails test suites much faster. I would summarize it as follows:
Put as much of your code as possible outside the Rails app, into other modules and classes
Test those separately, without the overhead of loading up Rails
Use them from within your Rails app
There were a couple of things I didn't understand, though.
He alternates between running tests with rspec and with spn or spna (for example, at about 3:50). Is spn a commonly known tool?
In his tests for non-Rails classes and modules, he requires the module or class being tested, but I don't see him requiring anything like spec_helper. How does he have RSpec available?
Sorry about the confusion. spn and spna are aliases I have that add my non-Rails code to RSpec's load path. There isn't anything special about them, other than adding a -I path_to_code on the command line.
These days, I add something like this to my .rspec file:
-I app/mercury_app
Then I can do simple require 'object_name' at the top of my specs.
As for not including spec_helper: that is true, I don't. When you execute your spec file with rspec <path_to_spec_file>, it gets interpreted, so you don't need to require rspec explicitly.
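Putting those two points together, a spec for a plain Ruby object can be this small (InvoiceCalculator is a made-up example, not from the talk):
# app/mercury_app/invoice_calculator.rb
class InvoiceCalculator
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.inject(0, :+)
  end
end

# spec/invoice_calculator_spec.rb
require 'invoice_calculator' # found via the -I app/mercury_app load path

RSpec.describe InvoiceCalculator do
  it "sums its line items" do
    expect(InvoiceCalculator.new([3, 4]).total).to eq(7)
  end
end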
For my db specs these days, I also have built an active_record_spec_helper which requires active_record, establishes a connection to the test database, and sets up database_cleaner; this allows me to simply require my model at the top of my spec file. This way, I can test the AR code against the db without having to load up my whole app.
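A rough sketch of such a helper, with the connection details as placeholder assumptions (this is not Corey's actual file):
# spec/active_record_spec_helper.rb
require 'active_record'
require 'database_cleaner'

# assumption: a throwaway SQLite test database; real config would vary
ActiveRecord::Base.establish_connection(
  adapter:  'sqlite3',
  database: 'db/test.sqlite3'
)

DatabaseCleaner.strategy = :transaction

RSpec.configure do |config|
  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end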
A client I am working at where we are using these techniques is interested in supporting some blog posts about this, so hopefully they will start coming out towards the middle of June.

How can I decrease my Rails test overhead?

I'm using Test::Unit on a large app with a large number of gem dependencies (>75). I'm trying to develop using BDD, but it takes minutes for the app to load its dependencies before it can run the tests. Is there a way to preload the dependencies and just auto-run the tests on changes, or a similar solution?
I would look into Spork. It works wonders.
https://github.com/sporkrb/spork
https://github.com/sporkrb/spork-testunit
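If I remember the spork-testunit workflow correctly, it is roughly: start the preloading DRb server once, then send individual test files to it (the test path below is just an example):
spork testunit
testdrb -Itest test/unit/user_test.rb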
I am using RSpec, and there's a great tool for it called Spork. It basically loads your app once and then just reloads the modified parts. If you combine it with Guard, you get "continuous testing": you hit 'Save' in your editor and the tests start executing, giving you instant feedback. This still amazes me after some months :)
Edit
As #THEM points out, there's a plugin for Spork to support TestUnit. You should look into it.
There was also an interesting article about test speed on the 37Signals blog a while back. Might be of interest even if you end up going with Spork or another solution.
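For the Guard half, a minimal Guardfile might look something like this (assuming the guard-spork and guard-rspec gems; the watch patterns will vary per app):
guard 'spork' do
  # restart the Spork server when boot files change
  watch('config/application.rb')
  watch('spec/spec_helper.rb')
end

guard 'rspec', cli: '--drb' do
  # --drb sends specs to the running Spork server instead of booting Rails
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end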

How can I have autospec/test not run the full test suite after everything goes green?

Same question as waloeiii on Twitter:
How can I have autospec/test not run the full test suite after everything goes green? The next test I write will be red!
I'd rather run the full test suite manually.
By the way, I tried adding a failing spec:
it "should flunk" do
flunk
end
but autospec seems to ignore it when it feels like it.
Bit late, but I was looking for this as well, so I thought I'd post my solution.
Add the following to ~/.autotest:
class Autotest
  # redefine the hook as a no-op so the full suite is never rerun automatically
  def rerun_all_tests
  end
end
Are you sure you are not confused about the intended behaviour of autotest's heuristics?
My understanding is that it runs tests for whatever has changed, keeps running failed tests until they pass, and then, once they all pass, runs the whole test suite to make sure nothing else broke.
In effect, it is being conservative, making sure you haven't introduced side effects that break other, unrelated tests, which is probably a good thing. The problem, of course, is that if you are doing fast red-green cycles, you are going to be running your full suite a lot.
If you want to change these behaviours, you need to edit the heuristics in the rails_autotest.rb file for ZenTest.
You can use the following option to avoid this behavior:
autospec --no-full-after-failed
I think this is by design: if you fix a failing spec and all the other specs in the section are green, then autospec will rerun the entire suite. This tells you whether the fix you applied to one area of your project has b0rked another or not.
If you just want to run the specs you are working on at any one time, then you can do it from the command line:
ruby spec/controllers/my_spec.rb
or from within TextMate by pressing Cmd+R from your spec file. You should rerun your entire suite as you go anyway; otherwise you might miss failing specs.
