RSpec: Shared examples for Validators? - ruby-on-rails

I have several models that have start_time and end_time, and I have a custom validator with special rules for these fields. For testing, I feel like I've got three options:
1. Have a validator and validator_spec. Then re-test the entire implementation of the validator in every model to ensure that the validator works with that model.
2. Have a validator and validator_spec. In each model, somehow check that the already-tested validator is included in the model. (Maybe this means testing one condition that would only arise from the validator being included; see the sketch after this list.)
3. Create a shared example as the validator test and include it in each model's test (although it_behaves_like SomeValidator looks kind of weird).
Any thoughts? The validator has several conditions, so I'd find it taxing and not DRY to implement #1.
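For option 2, the closest concrete check I can think of is asserting against the model's validator list, something like this sketch (TimeRangeValidator and Event stand in for my actual validator and model):
describe Event do
  it "includes the time range validator" do
    expect(Event.validators.map(&:class)).to include(TimeRangeValidator)
  end
end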

I would propose another option:
4.) Implement a validator_spec and build a custom matcher which can be reused in every model using this validator.
Maybe you can find some inspiration at https://github.com/thoughtbot/shoulda-matchers
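A rough sketch of such a matcher (the matcher name, the attribute handling and the error key are assumptions for illustration, not part of the question):
# spec/support/matchers/validate_start_before_end.rb
RSpec::Matchers.define :validate_start_before_end do
  match do |record|
    # Set an end_time earlier than start_time and expect an error on it.
    record.start_time = Time.current
    record.end_time   = record.start_time - 1.hour
    record.valid?
    record.errors[:end_time].present?
  end

  failure_message do |record|
    "expected #{record.class} to reject an end_time earlier than its start_time"
  end
end
Each model spec using the validator can then just declare (Event is a placeholder model):
describe Event do
  it { is_expected.to validate_start_before_end }
end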

The 'mockist' approach would be to write a spec for the validator class, so that it can be tested in isolation. Then in your model specs, you can stub the validator to return a pre-determined value.
There is a trade-off here: you're avoiding a combinatorial increase in the number of tests, so your test suite will be smaller, faster, and easier to maintain. But you're introducing a small risk that in some situations a model may not work in combination with the real validator once integrated in the production code.
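For example, a minimal sketch of that model spec, with TimeRangeValidator and Booking standing in for the real validator and model:
describe Booking do
  it "is valid when the time range validator adds no errors" do
    # Neutralize the validator so this spec exercises only Booking's own rules
    # (assumes Booking has no other failing validations here).
    allow_any_instance_of(TimeRangeValidator).to receive(:validate)
    expect(Booking.new).to be_valid
  end
end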

Related

Automatically Testing Models in Rails

I just manually discovered a migration error. I added a new field to a model and forgot to add it to the model_params method of the controller. As a result, the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want: iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests on your controllers. The problem is that such a test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application is only written in a certain way, or the test would become more and more complex to handle additional edge cases.
I think the solution in testing would be to try to keep things simple; realize that you've made a change to the system and as a result of that change, corresponding tests would need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test Driven Design, you would actually update the tests first to produce a failing test and then implement the change. As a result, hopefully the updated functional test would have failed in this case.
Outside of testing, you may want to look into a linter. In essence, you're asking whether you can catch an error where the parameters passed to an object's method don't match its signature. This is more catchable when the code is parsed completely (i.e. compilation in a statically typed environment).
EDIT - I skipped a step on the linting: you would also have to write your code in a certain way for a linter to catch this, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns off a model is pretty simple with the reflection methods that Active Record gives you, and yeah, you could theoretically use that to automagically run a bunch of tests in a loop.
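A naive sketch of what that loop could look like (Article is a hypothetical model; it sticks to string columns, because generating type-appropriate data that also passes validations is exactly where the complexity described below creeps in):
class ArticleColumnsTest < ActiveSupport::TestCase
  # String columns only; handling every type and every validation is the hard part.
  Article.columns.select { |c| c.type == :string }.each do |column|
    test "#{column.name} can be written and read back" do
      article = Article.new
      article[column.name] = "sample value"
      # A real version would also need to save and reload, which drags in validations.
      assert_equal "sample value", article[column.name]
    end
  end
end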
But in reality it's just not going to cut it. In real life you don't want every column to be assignable; that's why you are using mass assignment protection in the first place.
Add to that the complexity of the different kinds of constraints and data types your models have, and you'll end up with something extremely complex which just adds a bunch of tests of limited value.
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test.
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last
    valid_attributes.keys.each do |key|
      assert_equal valid_attributes[key], article[key]
    end
  end
end

How to test a Rails Model Class method which calls a method on all members?

In my Rails 4 project I've got a model, EventGenerator with an instance method generate (which creates some records in the database), and a class method generate_all which is intended to be called by a Rake task and looks something like this:
def self.generate_all
  all.each(&:generate)
end
I can think of several approaches to testing this (I'm using RSpec 3 and Fabrication):
1. Don't bother - it's too simple to require tests. DHH says "Don't aim for 100% coverage". On the other hand, this is going to be called by a rake task, so won't be regularly exercised: I feel like that's a good reason to have tests.
2. Create a couple of EventGenerator instances in the database and use any_instance.should_receive(:generate) as the assertion - but RSpec 3 now recommends against this and requires a fudge to make it work. This is a personal 'showcase project' so if possible I'd like everything to be best-practice. Also (DHH aside) shouldn't it still be possible to create fast model specs which don't touch the database?
3. Like 2, but stub out EventGenerator.all to return some instances without touching the database. But stubbing the class under test is bad, right? And fragile.
4. Don't worry about unit testing it and instead cover it with an integration test: create a couple of generators in the database, run the method, check what gets changed/created in the database as a result.
5. Change the implementation: pass in an array of instances. But that's just pushing the problem back by a layer, and a test-driven change which I can't see benefitting the design.
Since I can't really think of a downside for option 4, perhaps that's the answer, but I feel like I need some science to back that up.
I would actually not bother to test it (your option 1), as the method is really trivial.
If you would like to have it under test coverage though, I'd suggest your option 3. My reasons are as follows:
Your test for .generate_all just needs to assert that #generate gets called on every instance returned by .all (see the sketch after this list). In this context the actual implementation of .all is irrelevant and can be stubbed.
Your tests for #generate should assert that the method does the right thing. If these tests assert the proper functioning of this method, there's no need for the tests for .generate_all to duplicate any assertion.
Testing the proper functioning of #generate in the tests for .generate_all leads to unnecessary dependencies between the tests.
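A minimal sketch of that spec with RSpec 3 verifying doubles (the doubles only need to respond to #generate, and .all is stubbed so no database is touched):
describe EventGenerator, ".generate_all" do
  it "calls #generate on every instance returned by .all" do
    generators = [instance_double(EventGenerator), instance_double(EventGenerator)]
    allow(EventGenerator).to receive(:all).and_return(generators)
    generators.each { |generator| expect(generator).to receive(:generate) }

    EventGenerator.generate_all
  end
end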

Do shoulda-matchers' ActiveRecord matchers violate the "test behavior not implementation" rule?

For example, if I am using should validate_presence_of in my spec, that's only testing that I have the validate_presence_of piece of code inside my model, and that's testing implementation. More importantly, isn't that spec totally useless for testing the real problem, which is "if I don't fill out a certain field, will the model be saved successfully?"
Some of shoulda-matchers' matchers don't test implementation, they test behavior. For example, look at the source for allow_value (which validate_presence_of uses): #matches? actually sets the instance's attribute to the value and checks to see if that causes an error. All of shoulda-matchers' matchers that test validations (its ActiveModel matchers) that I've looked at work the same way; they actually test that the model rejects bad values.
Note that if you trust ActiveModel and ActiveRecord to be fully tested, it shouldn't matter whether the matchers test behavior or just test that the macros are used.
Unit-testing a model's validations is definitely useful. Suppose you're doing BDD and implementing a simple form that creates an instance of a model. You would first write an acceptance test (a Cucumber or rspec scenario) that tests the happy path of filling out the form correctly and successfully creating the instance. You would then write a second acceptance test with an error in the form that demonstrates that when there's an error in the form no instance is saved and the form is redisplayed with the appropriate error message.
Once you've got that error-path scenario for one of the errors it is possible to make in the form, you will find that if you write more error-path scenarios for the other errors they will be very repetitive -- the only things that will be different are the erroneous field value and the error message. And then you'll have a lot of full-stack scenarios, which take a long time to run. So don't write more than that first error-path scenario. Instead, just write unit tests for the validations that would catch each error. Now most of your tests are simple and fast. (This is a specific example of the general BDD technique of dropping down from acceptance tests to unit tests to handle details.)
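Such a unit test can be as small as this (User and :email are placeholder names; like shoulda-matchers' allow_value, it exercises the behavior rather than the macro):
describe User do
  it "is invalid without an email" do
    user = User.new(email: nil)
    user.valid?
    expect(user.errors[:email]).to be_present
  end
end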
However, I don't find shoulda-matchers' ActiveRecord matchers very useful. Considering the matchers that test associations, I find that my acceptance tests always force me to add all the associations to my models that I need, and there's nothing left to do in unit tests. The ActiveRecord matchers that test database features that are invisible to the app (e.g. have_db_index) are useful if you're strictly test-driven, but I tend to slack off there. Also, for what it's worth, the ActiveRecord matchers don't test behavior (which would be hard to implement), only that the corresponding macros are used.
One exception where I do find a shoulda-matchers ActiveRecord matcher useful is deletion of dependent objects. I sometimes find that no acceptance spec has already forced me to handle what happens to associated objects when an object is deleted. The ActiveRecord way to make that happen is to add the :dependent option to the belongs_to, has_many or has_one association. Writing an example that uses shoulda-matchers' belong_to or have_many or have_one matcher with the .dependent option is the most convenient way I know to test that.
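For example (Post and Comment are placeholder models), that one-liner could be:
describe Post do
  # shoulda-matchers association matcher with the dependent qualifier
  it { is_expected.to have_many(:comments).dependent(:destroy) }
end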

What are some good approaches for stubbing dependancies in TDD?

Let's say I'm writing a spec in Rspec for a Rails app and I'm stubbing out methods to reduce the number of dependancies in the spec:
# specs/account_statistics_spec.rb
describe AccountStatistics do
  it "gets the percentage of users that are active in an account" do
    account = Account.new
    account.stub_chain(:users, :size).and_return(80)
    account.stub_chain(:users, :active, :size).and_return(20)

    stats = AccountStatistics.new(account)
    stats.percentage_active.should == 25
  end
end
It's now possible for the AccountStatistics spec to pass even if the Account#users and User#active methods are not defined in their respective classes.
What are some good approaches to catch the fact that the stubbed methods may not be implemented? Should it be left up to integration tests to catch the undefined methods? Or should the spec also check that the methods are defined before stubbing them?
It would also be great if someone can link to any good books / presentations which discuss stubbing and mocking in-depth :)
To address your specific concern take a look at https://github.com/xaviershay/rspec-fire to guard against stubbing non-existent methods.
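For illustration, a sketch of the same example with verifying doubles (rspec-fire's idea, which RSpec 3 later built in as instance_double); stubbing a method that Account does not actually define would then raise instead of silently passing:
describe AccountStatistics do
  it "gets the percentage of users that are active in an account" do
    # Only the Account stub is verified; the nested relation doubles are plain doubles.
    users = double(size: 80, active: double(size: 20))
    account = instance_double(Account, users: users)

    stats = AccountStatistics.new(account)
    expect(stats.percentage_active).to eq 25
  end
end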
I think the broader problem here is that you are not listening to the feedback that trying to write this test is giving you. A test which is difficult to write is a good sign that either the test subject is poorly designed or the testing technique you are using is not a good fit.
What would this class look like if it followed the Law of Demeter (hard with ActiveModel relations)?
What would your test look like if you supplied a test double object instead of attempting to mock every method?
What would your test look like as an integration test?
I think the best resource for writing better tests is to look at the design of the code being tested instead. http://www.poodr.com/ might be a good resource. http://www.martinfowler.com/bliki/TestDouble.html is a good overview of test doubles you might not be considering while http://blakesmith.me/2012/02/29/test-stubbing-as-an-antipattern.html makes an argument for why mocks might be the wrong tool entirely.
Specific to RSpec, http://betterspecs.org gives some hints on what a good spec might look like. If those are hard to write, that's a good sign that there's a broader problem.

Testing rails STI subclasses with rspec

Given a class that inherits from ActiveRecord::Base, let's call it Task, I have two subclasses that specialize some aspects of a task, Activity and Training, using standard Rails single table inheritance.
I'm confident in that choice after looking at the other available options, since the actual data for the models is the same; it's just the behavior that differs. A perfect fit for STI.
A task can be created, started, progressed and finished. There is some logic involved in these transitions, especially start(), that calls for the specialization of the base class.
Since I'm doing this TDD and started out with a working Task class with full test coverage, I'm now wondering how to proceed. I have a few scenarios that I have thought about:
Duplicate the tests for Task and test both Activity and Training end to end with some small modifications to test their specialization. Pro: it's quick and easy. Con: it duplicates code and while that might not be a big problem here it will be when the number of specializations grows.
Split tests and keep most of the testing code in a task_spec.rb while moving specialization testing into new specs for the respective subclasses. Pro: keeps the tests DRY. Con: what class do I instantiate in the base test?
That last question is what is nagging at me. Right now I have the base class test set up to randomly create a class from one of the concrete subclasses, but is this good form? It almost makes me want to go with approach 1 just to keep the test runs consistent, or I'll have to find a way to base the randomness of my class selection on the random seed for the test suite so that I at least have a repeatable random selection.
I'm guessing this must be a common problem people run into, but I can't find any good information on the subject. Do you have any resources or thoughts on the matter?
Using an rspec shared example (mentioned by Renato Zannon) would look like this:
Create a file in spec/support/. I'll call it shared_examples_for_sti.rb.
require 'spec_helper'

shared_examples "an STI class" do
  it "should have attribute type" do
    expect(subject).to have_attribute :type
  end

  it "should initialize successfully as an instance of the described class" do
    expect(subject).to be_a_kind_of described_class
  end
end
In your _spec.rb file for each STI class and its subclasses, add this line:
it_behaves_like "an STI class"
If you have any other tests that you want to be shared across STI classes and subclasses just add them to the shared_examples.
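In context, one of the subclass specs might look like this (Activity is one of the example subclasses; the explicit subject is optional, since RSpec builds described_class.new by default):
# spec/models/activity_spec.rb
require 'spec_helper'

describe Activity do
  subject { Activity.new }

  it_behaves_like "an STI class"
end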
You could use rspec shared examples to test the behavior that is shared among all of them (basically, the inherited behavior, or places where you want to guard against LSP violations).
