I am making a concerted effort to wrap my head around RSpec in order to move towards more of a TDD/BDD development pattern. However, I'm a long way off and struggling with some of the fundamentals:
Like, when exactly should I be using mocks/stubs and when shouldn't I?
Take for example this scenario: I have a Site model that has_many :blogs and the Blog model has_many :articles. In my Site model I have a callback filter that creates a default set of blogs and articles for every new site. I want to test that code, so here goes:
describe Site, "when created" do
  include SiteSpecHelper

  before(:each) do
    @site = Site.create valid_site_attributes
  end

  it "should have 2 blogs" do
    @site.should have(2).blogs
  end

  it "should have 1 main blog article" do
    @site.blogs.find_by_slug("main").should have(1).articles
  end

  it "should have 2 secondary blog articles" do
    @site.blogs.find_by_slug("secondary").should have(2).articles
  end
end
Now, if I run that test, everything passes. However, it's also pretty slow as it's creating a new Site, two new Blogs and three new Articles - for every single test! So I wonder, is this a good candidate for using stubs? Let's give it a go:
describe Site, "when created" do
  include SiteSpecHelper

  before(:each) do
    site = Site.new
    @blog = Blog.new
    @article = Article.new
    Site.stub!(:create).and_return(site)
    Blog.stub!(:create).and_return(@blog)
    Article.stub!(:create).and_return(@article)
    @site = Site.create valid_site_attributes
  end

  it "should have 2 blogs" do
    @site.stub!(:blogs).and_return([@blog, @blog])
    @site.should have(2).blogs
  end

  it "should have 1 main blog article" do
    @blog.stub!(:articles).and_return([@article])
    @site.stub_chain(:blogs, :find_by_slug).with("main").and_return(@blog)
    @site.blogs.find_by_slug("main").should have(1).articles
  end

  it "should have 2 secondary blog articles" do
    @blog.stub!(:articles).and_return([@article, @article])
    @site.stub_chain(:blogs, :find_by_slug).with("secondary").and_return(@blog)
    @site.blogs.find_by_slug("secondary").should have(2).articles
  end
end
Now all the tests still pass, and things are a bit speedier too. But, I've doubled the length of my tests and the whole exercise just strikes me as utterly pointless, because I'm no longer testing my code, I'm just testing my tests.
Now, either I've completely missed the point of mocks/stubs, or I'm approaching it fundamentally wrong, but I'm hoping someone might be able to either:
Improve my tests above so they use stubs or mocks in a way that actually tests my code, rather than my tests.
Or, tell me if I should even be using stubs here - or whether in fact this is completely unnecessary and I should be writing these models to the test database.
But, I've doubled the length of my tests and the whole exercise just strikes me as utterly pointless, because I'm no longer testing my code, I'm just testing my tests.
This is the key right here. Tests that don't test your code aren't useful. If you can negatively change the code that your tests are supposed to be testing, and the tests don't fail, they're not worth having.
As a rule of thumb, I don't like to mock/stub anything unless I have to. For example, when I'm writing a controller test, and I want to make sure that the appropriate action happens when a record fails to save, I find it easier to stub the object's save method to return false, rather than carefully crafting parameters just so in order to make sure a model fails to save.
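A minimal sketch of that idea (the PostsController and Post names here are made up, the old stub!/should syntax matches the rest of this post, and it assumes the usual create action that re-renders new on failure):

# Hypothetical controller spec - force the failure branch without crafting bad params
describe PostsController, "POST #create when the record fails to save" do
  before(:each) do
    @post = Post.new
    Post.stub!(:new).and_return(@post)
    @post.stub!(:save).and_return(false)
  end

  it "re-renders the new template" do
    post :create, :post => { :title => "anything" }
    response.should render_template(:new)
  end
end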
Another example is for a helper called admin? that just returns true or false based on whether the currently logged-in user is an admin. I didn't want to go through faking a user login, so I did this:
# helper
def admin?
  unless current_user.nil?
    return current_user.is_admin?
  else
    return false
  end
end
# spec
describe "#admin?" do
  it "should return false if no user is logged in" do
    stubs(:current_user).returns(nil)
    admin?.should be_false
  end

  it "should return false if the current user is not an admin" do
    stubs(:current_user).returns(mock(:is_admin? => false))
    admin?.should be_false
  end

  it "should return true if the current user is an admin" do
    stubs(:current_user).returns(mock(:is_admin? => true))
    admin?.should be_true
  end
end
As a middle ground, you might want to look into Shoulda. This way you can just make sure your models have an association defined, and trust that Rails is well-tested enough that the association will "just work" without you having to create an associated model and then count it.
I've got a model called Member that basically everything in my app is related to. It has 10 associations defined. I could test each of those associations, or I could just do this:
it { should have_many(:achievements).through(:completed_achievements) }
it { should have_many(:attendees).dependent(:destroy) }
it { should have_many(:completed_achievements).dependent(:destroy) }
it { should have_many(:loots).dependent(:nullify) }
it { should have_one(:last_loot) }
it { should have_many(:punishments).dependent(:destroy) }
it { should have_many(:raids).through(:attendees) }
it { should belong_to(:rank) }
it { should belong_to(:user) }
it { should have_many(:wishlists).dependent(:destroy) }
This is exactly why I use stubs/mocks very rarely (really only when I'm going to be hitting an external webservice). The time saved just isn't worth the added complexity.
There are better ways to speed up your testing time, and Nick Gauthier gives a good talk covering a bunch of them - see the video and the slides.
Also, I think a good option is to try out an in-memory sqlite database for your test runs. This should cut down on your database time by quite a bit by not having to hit the disk for everything. I haven't tried this myself, though (I primarily use MongoDB, which has the same benefit), so tread carefully. Here's a fairly recent blog post on it.
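If you want to experiment with that, one possible sketch (untested here, and assuming a plain ActiveRecord/SQLite3 setup) is to point the test connection at :memory: and load the schema in spec_helper.rb:

# spec/spec_helper.rb - sketch only
ActiveRecord::Base.establish_connection(
  :adapter  => "sqlite3",
  :database => ":memory:"
)
# the in-memory database starts empty, so the schema has to be loaded on every run
load "#{Rails.root}/db/schema.rb"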
I'm not so sure I agree with the others. The real problem here (as I see it) is that you're testing multiple pieces of interesting behaviour with the same tests (the finding behaviour and the creation). For reasons why this is bad, see this talk: http://www.infoq.com/presentations/integration-tests-scam. I'm assuming for the rest of this answer that the creation behaviour is what you want to test.
Isolationist tests often seem unwieldy, but that's often because they have design lessons to teach you. Below are some basic things I can see out of this (though without seeing the production code, I can't do too much good).
For starters, to question the design: does it make sense for the Site to add articles to a blog? What about a class method on Blog called something like Blog.with_one_article? Then all you have to test is that that class method has been called twice (assuming, as I understand it for now, that you have a "primary" and "secondary" Blog for each Site) and that the associations are set up (I haven't found a great way to do the latter in Rails yet, so I usually don't test it).
Furthermore, are you overriding ActiveRecord's create method when you call Site.create? If so, I'd suggest making a new class method on Site named something else (Site.with_default_blogs, possibly?). This is just a general habit of mine; overriding framework methods tends to cause problems later on in projects. A rough sketch of that idea follows.
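Something like this (the method names are just the ones floated above, the attributes and article counts are guesses at the intended behaviour, and it assumes Site and Blog can be created without further attributes):

class Blog < ActiveRecord::Base
  has_many :articles

  def self.with_one_article(attributes = {})
    blog = create(attributes)
    blog.articles.create
    blog
  end
end

class Site < ActiveRecord::Base
  has_many :blogs

  def self.with_default_blogs(attributes = {})
    site = create(attributes)
    Blog.with_one_article(:site => site, :slug => "main")
    Blog.with_one_article(:site => site, :slug => "secondary")
    site
  end
end

# The creation spec then only checks the collaboration instead of counting rows:
describe Site, ".with_default_blogs" do
  it "asks Blog for the two default blogs" do
    Blog.should_receive(:with_one_article).twice
    Site.with_default_blogs
  end
end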
Related
How should Rails named scopes be tested? Do you test the results returned from a scope, or that your query is configured correctly?
If I have a User class with an .admins method like:
class User < ActiveRecord::Base
  def self.admins
    where(admin: true)
  end
end
I would probably spec to ensure I get the results I expect:
describe '.admins' do
  let(:admin) { create(:user, admin: true) }
  let(:non_admin) { create(:user, admin: false) }
  let(:admins) { User.admins }

  it 'returns admin users' do
    expect(admins).to include(admin)
    expect(admins).to_not include(non_admin)
  end
end
I know that this incurs hits to the database, but I didn't really see any other choice if I wanted to test the scope's behaviour.
However, recently I've seen scopes being specced by confirming that they're configured correctly, rather than on the result set returned. For this example, something like:
describe '.admins' do
  let(:query) { User.admins }
  let(:filter) { query.where_values_hash.symbolize_keys }
  let(:admin_filter) { { admin: true } }

  it 'filters for admin users' do
    expect(filter).to eq(admin_filter) # or some other similar assertion
  end
end
Testing the direct innards of a query like this hadn't really occurred to me before, and on face value it is appealing to me since it doesn't touch the database, so no speed hit incurred.
However, it makes me uneasy because:
it's making a black-box test grey(er)
I have to make the assumption that because something is configured a certain way, I'll get the results that my business logic requires
The example I've used is so trivial that perhaps I'd be okay with just testing the configuration, but:
where do you draw the line and say 'the content of this named scope is too complex and requires result confirmation tests over and above just scope configuration testing'? Does that line even exist or should it?
Is there a legitimate/well-accepted/'best practice' (sorry) way to test named scopes without touching the database, or at least touching it minimally, or is it just unavoidable?
Do you use either of the above ways to test your scopes, or some other method entirely?
This question(s) is a bit similar to Testing named scopes with RSpec, but I couldn't seem to find answers/opinions about testing scope results vs scope configuration.
I think you have described the problem very well, and that the best answer, in my opinion, is: it depends.
If your scope is a trivial, run-of-the-mill where with some ordering, there is no real need to test ActiveRecord or the database to make sure they work properly - you can safely assume they have been correctly implemented, and simply test the structure you expect.
If, on the other hand, your scope (or any query) is compound, or uses advanced features in a complex configuration, I believe that setting up tests that assert its behaviour, using a real live database (installed locally, with a small custom-tailored data set), can go a long way towards assuring you that your code works.
It will also help you, if and when you decide to change strategies (use that cool new MySQL feature, or port to PostgreSQL), to refactor safely, by checking that the functionality remains intact.
This is a much better way than simply verifying that the SQL string is what you typed there...
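For instance, a compound scope like the hypothetical one below (the scope, the posts association and the factories are all made up for illustration) is where a behavioural spec against a real, small, local database earns its keep:

class User < ActiveRecord::Base
  has_many :posts

  def self.recently_active_admins
    where(admin: true)
      .joins(:posts)
      .where("posts.created_at > ?", 1.week.ago)
      .distinct
  end
end

describe '.recently_active_admins' do
  let!(:active_admin) { create(:user, admin: true) }
  let!(:idle_admin)   { create(:user, admin: true) }
  let!(:regular_user) { create(:user, admin: false) }

  before do
    create(:post, user: active_admin, created_at: 1.day.ago)
    create(:post, user: regular_user, created_at: 1.day.ago)
  end

  it 'returns only admins with recent posts' do
    expect(User.recently_active_admins).to eq [active_admin]
  end
end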
I am writing some tests in RSpec and am trying to push a carrier onto a user via a has_and_belongs_to_many association. Below is the test I have written; however, the line I have indicated with an arrow does not seem to pass. I realize I have mocked the carrier but not the user, and I'm wondering if this is causing an issue with the HABTM association. Is this the issue, or is there something else I'm missing? I am new to mocking and stubbing, but trying my best!
describe UsersController do
  describe 'get #add_carrier' do
    let(:user) { build(:approved_user) }
    let(:carrier) { mock_model(Carrier).as_null_object }

    before { Carrier.stub(:find).and_return(carrier) }

    it 'associates the Carrier to the User' do
      expect(user.carriers).to eq []
      user.should_receive(:carriers).and_return([])
-->   (user.carriers).should_receive(:push).with(carrier).and_return([carrier])
      (user.carriers).push(carrier)
      (user.carriers).should include carrier
    end
  end
end
Stubs are generally used when you want to do a proper unit test and stub out anything but the method under test. Mocks (stubs with expectations) are usually used when you are testing a method that calls a command method (i.e. a method that has some impact, e.g. altering some data or saving a record) and you want to ensure it is called.
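To make that distinction concrete, here is a toy example (PaymentProcessor and the mailer are invented purely for illustration):

class PaymentProcessor
  def initialize(mailer)
    @mailer = mailer
  end

  def charge(invoice)
    amount = invoice.total        # pretend we charge this amount somewhere
    @mailer.deliver_receipt(invoice)
    amount
  end
end

describe PaymentProcessor do
  it "sends a receipt after charging" do
    invoice = double("invoice", :total => 100)             # stub: fakes a query
    mailer  = double("mailer")
    mailer.should_receive(:deliver_receipt).with(invoice)  # mock: expects a command
    PaymentProcessor.new(mailer).charge(invoice)
  end
end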
This particular test, given it's in a controller spec, seems to be testing things at the wrong level - it's testing stuff inside the method, not the method itself. Take a look at the RSpec docs.
Not knowing the code you're testing, it's a bit tricky to identify exactly how to test it. #add_carrier sounds like an action that should simply add a carrier, so presumably we could just set a message expectation for that. This test also appears to be testing the getter method #carriers, which seems to be a bit much for one unit test (but I completely understand the desire to have it there).
Also note that sharing the error you're getting would definitely be helpful.
Anyway, try something like the following:
describe UsersController do
  describe 'get #add_carrier' do # Should this really be a GET?
    subject { get :add_carrier }

    let(:user) { build(:approved_user) }
    let(:carrier) { mock_model(Carrier).as_null_object }

    before do
      controller.stub(:user) { user }
      Carrier.stub(:find) { carrier }
    end

    it "associates the Carrier to the User" do
      user.carriers.should_receive(:push).with(carrier).and_call_original
      subject
      user.carriers.should include carrier
    end
  end
end
No expectations on the original value of user.carriers (that should be tested in User model). No expectations on the details of how push works - again, should be tested elsewhere. Rather, just confirming that the important command message is called. I'm not 100% sure we should even be doing #and_call_original and confirming the results, as those are things we can also test in model unit tests (results of Carrier#push), but for peace of mind I included here.
Note this was all written from memory, so please let me know if any of it doesn't work.
I'm very new to TDD and unit testing, and I have quite a lot of doubts about the correct approach to take when testing custom model validations.
Suppose I have a custom validation:
class User < ActiveRecord::Base
  validate :weird_validation

  def weird_validation
    # Validate the weird attribute
  end
end
Should I take this approach:
context "validation"
it "pass the validation with weird stuff" do
user = User.new weird: "something weird"
user.should be_valid
end
it "should't pass the validation with normal stuff" do
user = User.new weird: "something"
user.should_not be_valid
user.errors[:weird].size.should eq 1
end
end
Or this one:
context "#weird_validation" do
it "should not add an error if weird is weird" do
user = User.new
user.stub(:weird){"something weird"}
user.errors.should_not_receive :add
user.weird_validation.should eq true
end
it "should add an error if weird is not weird" do
user = User.new
user.stub(:weird){"something"}
user.errors.should_receive(:add).with(:weird, anything())
user.weird_validation.should eq false
end
end
So IMHO
The first approach
Pros
It tests behaviour
Easy refactoring
Cons
Dependent on other methods
Something unrelated could make the test fail
The second approach
Pros
It doesn't rely on anything else, since everything else is stubbed
It's very specific of all the things the code should do
Cons
It's very specific of all the things the code should do
Refactoring the validations could potentially break the test
I personally think the correct approach should be the first one, but I can't help thinking that I'm relying too much on methods other than the one I want to test, and if the test fails it may be because of any method within the model. For example, it would not validate if the validation of another attribute failed.
Using the second approach I'm practically writing the code twice, which seems like a waste of time and effort. But I'm unit-testing the isolated method about what it should do. (and I'm personally doing this for every single method, which is very bad and time consuming I think)
Is there any midway when it comes to using stubs and mocks? Because I've taken the second approach and I think I'm abusing it.
IMO the second approach is the best one because you test your model properties and validations one at a time (the "unit" part).
To avoid overhead, you may consider using shoulda. It is really efficient for model unit testing. We're usually using a factory_girl/mocha/shoulda combination for functional testing (factory_girl and mocha are also very helpful for testing queries and named scopes). Tests are easy to write, read and maintain:
# class UserTest < ActiveSupport::TestCase
# or
# describe 'User' do
should have_db_column(:weird).of_type(:string).with_options(:limit=>255)
should allow_value("something weird").for(:weird)
should_not allow_value("something").for(:weird)
should ensure_length_of(:weird).is_at_least(1).is_at_most(255)
# end
Shoulda generates positive/negative matchers and therefore avoids a lot of code duplication.
When I see in my test code specs like this for every controller:
it "#new displays input controls for topic title and keywords" do
ensure_im_signed_in
get :new
assert_response :success
assert_select "input#topic_title"
assert_select "input#topic_keywords_input"
assert assigns :topic
end
I want to refactor it and replace with some one-liner like this:
its_new_action_displays_input_form :topic, %w(input#topic_title input#topic_keywords_input)
and implementation:
def its_new_action_displays_input_form field, inputs
  it "#new displays input controls for #{inputs.join ", "}" do
    ensure_im_signed_in
    get :new
    assert_response :success
    for css in inputs
      assert_select css
    end
    assert assigns field
  end
end
What are the advantages of keeping the verbose version versus refactoring to the terser version?
The only problem I see with the refactored version is that RSpec does not show a backtrace for the failed test.
The biggest question we always ask is "who is testing the tests?" You should definitely go with refactoring your unit tests wherever you are and at whatever stage you are at, as it helps reduce complexity. Reducing the complexity within your unit tests will yield the same benefits as reducing the complexity within your code base. I can see very little reason not to do it and many reasons to do it.
My very first job out of college was updating a set of unit tests. They were about 7 years old, with many deprecated methods being used. It took me eight months (there were a lot of them!) and I still didn't finish the job. However, I did manage to reduce the file size to about 1/3rd the original size and greatly simplify the work by refactoring much of it, and that helped speed up the work.
So, I would definitely encourage refactoring!
You refactor specs for the same reasons you refactor everything else: it makes your application, be it your production app or your test suite, run faster.
Also, refactoring a piece of code makes you think about the intention you had when writing it. This increases your comprehension of the problem and will make you more capable of solving other problems in the same application.
Yet I would strongly disagree with your refactoring. When you are reading a spec you want to see quickly and clearly what you are testing. If you move the asserts somewhere else, you are hurting yourself, because you have to go and find where you put them.
I suggest you never move the asserts out of your spec, even if they are repeated. That's fine; it highlights the intention, so you never forget what you want to test.
Instead, focus your refactoring on boilerplate code.
For example, you are testing a form and to get to that page you need to click many links. A first approach would be to put everything in the spec. A lot of click_link.
A refactoring of such code would be to move the whole bunch of click_link calls into a before(:each). Use contexts as well, to clarify.
Like this:
feature "Course" do
context "Loged in" do
before(:each) do
school = School.make!
switch_to_subdomain(school)
end
context "In the new course form" do
before(:each) do
click_link("Asignaturas")
click_link("Nueva asignatura")
end
scenario "New course" do
fill_in(:name, :with => "Matematicas")
click_button("Create Course")
page.has_content?("Asignatura creada").should == true
dbSchool = School.find(school.id)
dbSchool.courses.count.should == 1
end
scenario "New course" do
fill_in(:name, :with => "")
click_button("Create Course")
page.has_content?("Asignatura creada").should == false
dbSchool = School.find(school.id)
dbSchool.courses.count.should == 0
end
end
end
end
You see? The boilerplate code is out of the specific spec, but enough is left in the spec so that seven months later you can still understand what you are testing. Never take away the asserts.
So my answer: refactoring specs is not only a good idea, but a necessity. Just don't confuse refactoring with hiding code.
You should indeed refactor your tests to remove duplication. However, there are likely ways to do it that keep the failure backtrace.
Rather than pulling everything out into a single method, you might do well to have a shared before(:each) section for the common setup, and write your own matchers and/or assertions to combine the checks.
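One possible shape (the helper name is invented; TopicsController and ensure_im_signed_in are taken from your example) is a plain helper method defined inside the describe block, which keeps the shared setup in one place while a failure still points at the example that triggered it:

describe TopicsController do
  before(:each) do
    ensure_im_signed_in
    get :new
  end

  def assert_new_form_for(field, selectors)
    assert_response :success
    selectors.each { |css| assert_select css }
    assert assigns(field)
  end

  it "#new displays input controls for topic title and keywords" do
    assert_new_form_for :topic, %w(input#topic_title input#topic_keywords_input)
  end
end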
What's the best practices way to test that a model is valid in rails?
For example, if I have a User model that validates the uniqueness of an email_address property, how do I check that posting the form returned an error (or better yet, specifically returned an error for that field).
I feel like this should be something obvious, but as I'm quickly finding out, I still don't quite have the vocabulary required to effectively google ruby questions.
The easiest way would probably be:
class UserEmailAddressDuplicateTest < ActiveSupport::TestCase
  def setup
    @email = "test@example.org"
    @user1, @user2 = User.create(:email => @email), User.new(:email => @email)
  end

  def test_user_should_not_be_valid_given_duplicate_email_addresses
    assert !@user2.valid?
  end

  def test_user_should_produce_error_for_duplicate_email_address
    # Test for the default error message.
    assert_equal "has already been taken", @user2.errors.on(:email)
  end
end
Of course it's possible that you don't want to create a separate test case for this behaviour, in which case you could duplicate the logic in the setup method and include it in both tests (or put it in a private method).
Alternatively you could store the first (reference) user in a fixture such as fixtures/users.yml, and simply instantiate a new user with a duplicate address in each test.
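For example, the fixture-based variant could look something like this (the fixture key :existing is made up; it assumes a matching entry exists in fixtures/users.yml):

class UserEmailAddressDuplicateTest < ActiveSupport::TestCase
  fixtures :users

  def test_user_should_not_be_valid_given_duplicate_email_address
    # build an in-memory duplicate of the fixture user's address
    duplicate = User.new(:email => users(:existing).email)
    assert !duplicate.valid?
    assert_equal "has already been taken", duplicate.errors.on(:email)
  end
end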
Refactor as you see fit!
http://thoughtbot.com/projects/shoulda/
Shoulda includes macros for testing things like validators along with many other things. Worth checking out for TDD.
errors.on is what you want
http://api.rubyonrails.org/classes/ActiveRecord/Errors.html#M002496
@obj.errors.on(:email) will return nil if the field is valid, and the error messages (either a String or an Array of Strings) if there are one or more errors.
Testing the model via unit tests is, of course, step one. However, that doesn't necessarily guarantee that the user will get the feedback they need.
Section 4 of the Rails Guide on Testing has a lot of good information on functional testing (i.e. testing controllers and views). You have a couple of basic options here: check that the flash has a message in it about the error, or use assert_select to find the actual HTML elements that should have been generated in case of an error. The latter is really the only way to test that the user will actually get the message.
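A sketch of that functional-test side, assuming a conventional UsersController#create, that email is the only attribute User needs, and the default error_messages_for markup:

class UsersControllerTest < ActionController::TestCase
  def test_create_with_duplicate_email_renders_errors
    existing = User.create!(:email => "taken@example.org")
    post :create, :user => { :email => existing.email }

    assert_template "new"
    # error_messages_for renders into div#errorExplanation by default
    assert_select "div#errorExplanation", /has already been taken/
  end
end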