What are some good approaches for stubbing dependencies in TDD?

Let's say I'm writing a spec in RSpec for a Rails app, and I'm stubbing out methods to reduce the number of dependencies in the spec:
# specs/account_statistics_spec.rb
describe AccountStatistics do
  it "gets the percentage of users that are active in an account" do
    account = Account.new
    account.stub_chain(:users, :size).and_return(80)
    account.stub_chain(:users, :active, :size).and_return(20)

    stats = AccountStatistics.new(account)
    stats.percentage_active.should == 25
  end
end
It's now possible for the AccountStatistics spec to pass even if the Account#users and User#active methods are not defined in their respective classes.
What are some good approaches to catch the fact that the stubbed methods may not be implemented? Should it be left up to integration tests to catch the undefined methods? Or should the spec also check that the methods are defined before stubbing them?
It would also be great if someone could link to any good books or presentations which discuss stubbing and mocking in depth :)

To address your specific concern, take a look at https://github.com/xaviershay/rspec-fire, which guards against stubbing non-existent methods.
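For illustration, here is roughly what the spec might look like with a verifying double (rspec-fire's approach, which RSpec 3 later absorbed as instance_double). Note that it can only verify methods on Account itself, not the chained relation calls, which hints at the deeper design feedback below:

describe AccountStatistics do
  it "gets the percentage of users that are active in an account" do
    # A plain double stands in for the users relation; instance_double
    # verifies that Account actually defines #users, so the spec fails
    # if that method is renamed or removed.
    users = double("users", size: 80, active: double("active users", size: 20))
    account = instance_double("Account", users: users)

    stats = AccountStatistics.new(account)
    expect(stats.percentage_active).to eq(25)
  end
end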
I think the broader problem here is that you are not listening to the feedback that trying to write this test is giving you. A test which is difficult to write is a good sign that either the test subject is poorly designed or the testing technique you are using is not a good fit.
What would this class look like if it followed the Law of Demeter (hard with ActiveModel relations)?
What would your test look like if you supplied a test double object instead of attempting to mock every method?
What would your test look like as an integration test?
I think the best resource for writing better tests is to look at the design of the code being tested instead. http://www.poodr.com/ might be a good resource. http://www.martinfowler.com/bliki/TestDouble.html is a good overview of test doubles you might not be considering while http://blakesmith.me/2012/02/29/test-stubbing-as-an-antipattern.html makes an argument for why mocks might be the wrong tool entirely.
Specific to RSpec, http://betterspecs.org gives some hints on what a good spec might look like. If those are hard to write, that's a good hint that there's a broader problem.

Related

Automatically Testing Models in Rails

I just manually discovered a migration error. I added a new field to a model and forgot to add it to the model_params method of the controller. As a result, the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want. Iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests on your controllers. The problem is that such a test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller, outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application would only ever be written in a certain way, or this test would become more and more complex to handle additional edge cases.
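For illustration, a rough sketch of that approach (ArticlesController, Article, and the placeholder string values are assumptions; real columns would need type-appropriate data, which is exactly where the brittleness starts):

class GeneratedArticleParamsTest < ActionController::TestCase
  tests ArticlesController

  test "every non-timestamp column round-trips through params" do
    # Reflect on the schema to build a params hash covering every column.
    attributes = Article.column_names.
      reject { |name| %w[id created_at updated_at].include?(name) }.
      map { |name| [name, "value for #{name}"] }.to_h

    post :create, article: attributes

    article = Article.last
    attributes.each { |key, value| assert_equal value, article[key] }
  end
end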
I think the solution in testing would be to try to keep things simple: realize that you've made a change to the system and, as a result of that change, the corresponding tests need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test-Driven Development, you would actually update the tests first to produce a failing test and then implement the change. As a result, the updated functional test would hopefully have failed in this case.
Outside of testing, you may want to look into a linter. In essence, you're asking if you can catch an error where the parameters passed to an object's method don't match its signature. This is more catchable when the code is parsed completely (i.e., compilation in a statically typed environment).
EDIT - I skipped a step on the linting: you would also have to write your code in a certain way for a linter to catch this, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns of a model is pretty simple with the reflection methods that Active Record gives you. And yeah, you could theoretically use that to automagically run a bunch of tests in a loop.
But in reality it's just not going to cut it. In real life you don't want every column to be assignable. That's why you are using mass assignment protection in the first place.
Add to that the complexity of the different kinds of constraints and data types your models have, and you'll end up with something extremely complex which just adds a bunch of tests of limited value.
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test.
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last
    valid_attributes.keys.each do |key|
      assert_equal article[key], valid_attributes[key]
    end
  end
end

How to test a Rails Model Class method which calls a method on all members?

In my Rails 4 project I've got a model, EventGenerator, with an instance method generate (which creates some records in the database) and a class method generate_all, which is intended to be called by a Rake task and looks something like this:
def self.generate_all
  all.each(&:generate)
end
I can think of several approaches to testing this (I'm using RSpec 3 and Fabrication):
1. Don't bother - it's too simple to require tests. DHH says "Don't aim for 100% coverage". On the other hand, this is going to be called by a rake task, so won't be regularly exercised: I feel like that's a good reason to have tests.
2. Create a couple of EventGenerator instances in the database and use any_instance.should_receive(:generate) as the assertion - but RSpec 3 now recommends against this and requires a fudge to make it work. This is a personal 'showcase project' so if possible I'd like everything to be best-practice. Also (DHH aside) shouldn't it still be possible to create fast model specs which don't touch the database?
3. Like 2, but stub out EventGenerator.all to return some instances without touching the database. But stubbing the class under test is bad, right? And fragile.
4. Don't worry about unit testing it and instead cover it with an integration test: create a couple of generators in the database, run the method, check what gets changed/created in the database as a result.
5. Change the implementation: pass in an array of instances. But that's just pushing the problem back by a layer, and a test-driven change which I can't see benefitting the design.
Since I can't really think of a downside for option 4, perhaps that's the answer, but I feel like I need some science to back that up.
I would actually not bother to test it (so your option 1) as the method is really trivial.
If you would like to have it under test coverage though, I'd suggest you use your option 3 (a sketch follows below). My reasons are as follows:
Your test for .generate_all just needs to assert that the method #generate gets called on every instance returned by .all. In this context the actual implementation of .all is irrelevant and can be stubbed.
Your tests for #generate should assert that the method does the right thing. If these tests assert the proper functioning of this method, there's no need for the tests for .generate_all to duplicate any assertion.
Testing the proper functioning of #generate in the tests for .generate_all leads to unnecessary dependencies between the tests.
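As promised above, a minimal sketch of what option 3 might look like, assuming EventGenerator defines both methods as described in the question:

describe EventGenerator do
  describe ".generate_all" do
    it "calls #generate on every instance returned by .all" do
      # Verifying doubles stand in for persisted records, so the
      # database is never touched, and they fail if EventGenerator
      # stops defining #generate.
      generators = Array.new(2) { instance_double(EventGenerator) }
      allow(EventGenerator).to receive(:all).and_return(generators)

      generators.each { |generator| expect(generator).to receive(:generate) }

      EventGenerator.generate_all
    end
  end
end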

Is it worth testing low-level code such as scopes?

Is it actually profitable to test "core" features such as scopes and sorting? I have several tests that test functionality similar to what's below. It seems to me that it's just testing Rails core features (scopes and sorting), which are probably very well-tested already.
I know this may seem like an "opinion" question, but what I'm trying to find out is if someone knows anything I would miss if I assume that scopes/sorting are already tested and not valuable for developers to test.
My thoughts are that even if scopes/sorting are "broken", there's nothing I can really do if the Rails core code is broken as I refuse to touch core code without a very, very good reason (makes updating later on a nightmare...).
I feel "safe" with more tests but if these tests aren't actually providing value, then they are just taking up space and time on the test suite.
# user.rb model
class User < ActiveRecord::Base
  scope :alphabetical, -> { order("UPPER(first_name), UPPER(last_name)") }
end

# user_spec.rb spec
context "scopes" do
  describe ".alphabetical" do
    it "sorts by first_name, last_name (A to Z)" do
      user_one = create(:user, first_name: "Albert", last_name: "Johnson")
      user_two = create(:user, first_name: "Bob", last_name: "Smith")
      # ...
      expect(User.all.alphabetical.first.first_name).to eq("Albert")
    end
  end
end
Yes, you should test the scope. It's not all built-in functionality; you had to write it. It sorts on two criteria, so it needs two tests, one to prove that it sorts by first name (which your test above shows), and another to prove that it falls back on last name when the first names are the same.
If the scope were simpler, you could skip unit-testing it if it was fully tested in an acceptance or higher-level unit test. But as soon as the scope itself needs more than one test, you should do that in tests of the scope itself, not in tests of higher-level code. If you tried to do it in tests of higher-level code it would be hard for readers to find where the details of the scope are tested, and it could lead to duplication if the tests of each caller tested all of the functionality of the scope.
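For instance, the fallback test might look something like this (a sketch, reusing the factory from the question):

it "falls back on last_name when the first names are the same" do
  create(:user, first_name: "Albert", last_name: "Smith")
  create(:user, first_name: "Albert", last_name: "Johnson")

  expect(User.alphabetical.map(&:last_name)).to eq(["Johnson", "Smith"])
end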

How can I automatically generate test cases for Ruby on Rails?

Is there any way to auto-generate simple test cases? I found myself spending time writing very simple tests that make sure all controllers and models are working fine. Here is an example of a controller test case written with RSpec:
machine = FactoryGirl.create(:machine, type: 1)
mac = FactoryGirl.create(:mac, machine_id: m1.id)
win = FactoryGirl.create(:win, machine_id: m4.id)
sign_in user
get :index
get :show, id: machine.id
get :report
I cannot find any tool today that can auto-generate such tests based on newly written code. If nothing really exists, I may consider building a solution to this problem.
If a test were predictable enough to generate, it wouldn't be worth writing. In your example, you don't assert anything. That's a very weak test, good only for raising code coverage. It would be much stronger if it asserted what should be on the page. You can't generate that. You also can't generate a scenario that traverses multiple pages in a meaningful way. (I think your example wants to be an RSpec feature spec or a Cucumber scenario.)
It would make sense to write a generator that creates a skeleton that the developer could fill in with the meaningful parts that can't be generated, however.
To cover basic functionality you could write a specific generator.
You can also redefine standard scaffold templates (for example, by adding your own template to lib/templates/rspec/model/model_spec.rb - this will redefine model scaffolding).
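As a sketch, an overriding template might look like this; class_name and singular_name are standard Rails generator attributes, while the factory-based expectation is just an assumption about what you might want every generated model spec to start with:

# lib/templates/rspec/model/model_spec.rb
# ERB evaluated by the rspec:model generator in place of its default template.
require 'rails_helper'

RSpec.describe <%= class_name %>, type: :model do
  it "has a valid factory" do
    expect(build(:<%= singular_name %>)).to be_valid
  end
end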
But the real question is why you would do so. Following TDD, you should write the test first and then create your code, not vice versa.

Setting up a test in rspec with multiple "it" blocks

Say I have an instance method that does many different things that I need to test, something like Store#process_order. I'd like to test that it sends an email to the customer, adds an entry to the orders table, charges a credit card, etc. What's the best way to set this up in RSpec? Currently, using RSpec and factory_girl, I do something like this:
describe Store do
  describe "#process_order" do
    before do
      @store = Factory(:store)
      @order = Factory(:order)
      # call the process method
      @store.process_order(@order)
    end

    it 'sends customer an email' do
      # ...
    end

    it 'inserts order to db' do
      # ...
    end

    it 'charges credit card' do
      # ...
    end
  end
end
But it feels really tedious. Is this really the right way to write a spec for a method that I need to make sure does several different things?
Note: I'm not interested in answers about whether or not this is good design. It's just an example I made up to help w/ my question - how to write these types of specs.
This is a good method because you can identify which element is broken if something breaks in the future. I am all for testing things individually. I tend not to check that things get inserted into the database, as you are then testing Rails functionality. I simply check the validity of the object instead.
This is the method that is used in the RSpec book too. I would certainly recommend reading it if you are unsure about anything related to RSpec.
I think what you are doing is fine; it's the way RSpec is intended to be used. Every statement (specification) about your app gets its own block.
You might consider using before(:all) so that the order only has to be processed once, but this can introduce dependencies on the order in which the specs are run.
You could combine all the code inside describe "#process_order" into one big it block if you wanted to, but then it would be less readable and RSpec would give you less useful error messages when a spec fails. Go ahead and add a raise to one of your tests and see what a nice error message you can get from RSpec if you do it the way you are currently doing it.
If you want to test the entire process then we're talking about an integration test, not a unit test. If you want to test the #process_order method, which does several things, then I'd expect those things to mean calling other methods. So I would add #should_receive expectations and make sure that all paths are covered. Then I would spec all those methods separately so I have a nice unit spec suite for everything. In the end I would definitely write an integration/acceptance spec which checks that all those pieces work together.
Also, I would use #let to set up test objects, which removes dependencies between spec examples (it blocks). Otherwise a failure in one example may cause a failure in another, giving you incorrect feedback.
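A sketch of that setup, keeping the question's factory helpers:

describe Store do
  describe "#process_order" do
    # let is lazy and memoized per example, so every it block gets
    # its own fresh store and order with no shared state.
    let(:store) { Factory(:store) }
    let(:order) { Factory(:order) }

    before { store.process_order(order) }

    it 'sends customer an email' do
      # ...
    end

    it 'charges credit card' do
      # ...
    end
  end
end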
