How to test a skinny controller without retesting the fat model? - ruby-on-rails

Before I start, I'm using rails with rspec, shoulda-matchers, and factorygirl to ease testing, so if those libraries contain something helpful to solve the problem below, please let me know.
I was designing my controller and model, and wasn't sure how to go about testing this situation.
I have a game model, which can contain a bunch of players. Players can only be removed if the game hasn't started yet, so I added a remove_player method to my game model and tested it thoroughly.
In my player controller, I have the delete action calling the remove_player method to actually do the work, and then the action causes a redirect.
How can I write a test that the controller is actually doing what it's supposed to (calling remove_player) without retesting the method? I can test that it redirects, but that won't fail if, in the future, the call to remove_player gets deleted.
I know shoulda-matchers has "should_validate_presence_of" and things like that, which is the same idea behind this test, but it doesn't work with my own methods.

I tried out mocks and stubs last night, and they did exactly what I wanted. RSpec's mocks were pretty easy to pick up and are included with the rspec gem.
The basic idea of a stub is that it's a replacement for a real object that you're using to test another object. In the example in my question, I could make a stub of the game model when testing my player controller, and say gamestub.remove_player always returns true. In RSpec you could do that like this:
@game = double("game") # a double is rspec's mock/stub object
@game.stub(:remove_player).and_return(true)
A mock is a little different in that it's not just a replacement for the object you're mocking, but also part of your test conditions. You can say not only what your fake object will return when called, but also that it should be called n times (or the test will fail), and what parameters you expect it to be called with. In the example above, I could mock a game and say gamemock.remove_player should be called once, be passed the current player, and return true (or false if I wanted to test what happens when the removal fails). Here's an example kind of like what I did for the players controller.
@game = double("game")
Game.should_receive(:find_by_id).with("123").and_return(@game)
@game.should_receive(:remove_player).with(current_user).and_return(true)
delete :destroy, {:id => 123}
response.should do_whatever
Mocks and stubs are pretty similar. This article tries to highlight the differences. To me it seems it mostly comes down to how you use them, since mocks have all of the capabilities of stubs and can be used as one. That's probably why RSpec treats them as one type (double).
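For reference, newer versions of rspec-mocks express the same thing with the expect syntax and verifying doubles. A rough sketch of the controller example above, assuming rspec-mocks 3+ and a Rails 5-style request format (not tested against your app):
@game = instance_double("Game")  # verifying double: stubbed methods must exist on Game
expect(Game).to receive(:find_by_id).with("123").and_return(@game)
expect(@game).to receive(:remove_player).with(current_user).and_return(true)
delete :destroy, params: { id: 123 }  # drop the params: keyword on Rails < 5
expect(response).to have_http_status(:redirect)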

Related

Maintaining ActiveRecord associations created using FactoryBot in controller tests

I'm attempting to speed up some tests for a Rails controller and a bottleneck has to do with large numbers of objects being created and persisted to the database. I'm attempting to replace most of those create calls with build calls to address this.
Running Rails 5.1, and using MiniTest 5.10.3, with FactoryBot 5.0.2.
I'm attempting to go from this
@user = create(:user)
@item1 = create(:item)
@item2 = create(:item)
@transaction1 = create(:transaction, buyer: @buyer, item: @item1)
@transaction2 = create(:transaction, buyer: @buyer, item: @item2)
In this application item represents a sellable object, user represents a purchaser and transaction is the object that creates the two. The User class also has an association added to it transaction_checkout_items which returns all Transaction items which are in a state where the purchase can be completed.
So, with each test we're creating a myriad of objects and saving them all to the database. It's slow but it works. Still, I want it to be faster, so I've tried replacing the existing setup with something like this:
@user = create(:user)
def build_transaction_checkout_items(user, item)
  user.transaction_checkout_items.build(attributes_for(:transaction,
                                                        buyer: user,
                                                        sale_price: item.sale_price,
                                                        item: item))
end
@item1 = build_stubbed(:item)
@item2 = build_stubbed(:item)
@transaction1 = build_transaction_checkout_items(@buyer, @item1)
@transaction2 = build_transaction_checkout_items(@buyer, @item2)
This seems to work as long as I'm in the test. If I drop a binding into my test and check the objects, @user returns the user object, @user.transaction_checkout_items returns a Transaction::ActiveRecord_Associations_CollectionProxy containing all my associated transactions, and the individual transactions have their associated items attached. However, if I put a binding.pry into the controller method that actually does the work and look at the User object, I see the correct one, but user.transaction_checkout_items now returns an empty Transaction::ActiveRecord_Associations_CollectionProxy. Essentially the associations vanish, which makes sense to me: the controller pulls the User object from the database and goes to work on it, and that freshly loaded object is missing the built associations. I've considered stubbing an any_instance method on the User class so that whenever transaction_checkout_items is called it returns the collection of Transaction objects, but I don't see any way to create a new ::ActiveRecord_Associations_CollectionProxy object. I can't simply use an array or other collection for this, as there are methods on the ::ActiveRecord_Associations_CollectionProxy that need to be called for the controller logic to work.
So here I am on a Friday blocked. Is my idea of stubbing transaction_checkout_items on any User instance a good one, and if so how do I do it? Or is there an alternate strategy anyone can suggest that will allow the MiniTest stubbed associations to persist and be available when the controller code runs?
Is my idea of stubbing transaction_checkout_items on any User instance a good one, and if so how do I do it?
Stubbing out ActiveRecord methods is almost always a bad idea. It couples your tests heavily to the implementation and can make it difficult to upgrade Rails / ActiveRecord, because as soon as anything changes in the framework your tests start breaking. There might also be plenty of funny side effects you haven't thought about.
The question is also what you actually want to test. If you start stubbing out these methods, what are you testing? In a controller / integration test, I would expect you want to verify that the correct records are fetched from the database.
Using build instead of create to improve test performance is a good trick but, as you already discovered, it only helps if the test works with those same in-memory objects. That is unfortunately not possible for integration tests, where you need to (and should) persist the records.
Or is there an alternate strategy anyone can suggest that will allow the MiniTest stubbed associations to persist and be available when the controller code runs?
I would think about why this is slow and whether it is actually a problem. How long do the test and the whole test suite take to run, or is this just a premature optimisation?
If the reason it's slow is solely that you need to create a lot of test data, you could use fixtures or seed your database instead. Both would be faster than using FactoryBot, although they bring their own issues (e.g. the Mystery Guest problem).
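For illustration only, the fixtures route in MiniTest looks roughly like this (the fixture, helper, and route names are made up, not taken from your app):
# test/fixtures/users.yml, items.yml and transactions.yml hold the shared records;
# Rails loads them once and rolls each test back in a transaction.
class CheckoutsControllerTest < ActionDispatch::IntegrationTest
  test "lists the user's checkout items" do
    user = users(:buyer)          # fixture accessor, assumed fixture name
    sign_in_as(user)              # assumed auth helper
    get checkout_items_path       # assumed route
    assert_response :success
  end
end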

Automatically Testing Models in Rails

I just, manually, discovered a migration error. I added a new field to a model, and forgot to add it into the model_params method of the controller. As a result the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want: iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests on your controllers. The problem is that the test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application is only ever written in a certain way, or this test would become more and more complex to handle additional edge cases.
I think the solution in testing would be to try to keep things simple; realize that you've made a change to the system and as a result of that change, corresponding tests would need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test Driven Design, you would actually update the tests first to produce a failing test and then implement the change. As a result, hopefully the updated functional test would have failed in this case.
Outside of testing, you may want to look into a linter. In essence, you're asking if you can catch an error where the parameters passed to an object's method don't match its signature. This is more catchable when parsing the code completely (i.e. compilation in a static type environment).
EDIT - I skipped a step on the linting, as you would also have to write your code in a certain way for a linter to catch it, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns off a model is pretty simple with the reflection methods that Active Record gives you. And yes, theoretically you could use that to automagically run a bunch of tests in a loop.
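To make that concrete, here is a rough sketch of such a loop, reusing the Article model from the example further down (purely illustrative, and it already glosses over validations and columns you don't want assignable):
# Build a params hash straight from the table's columns and round-trip it.
skipped = %w[id created_at updated_at]
attrs = Article.columns.reject { |c| skipped.include?(c.name) }.each_with_object({}) do |column, hash|
  hash[column.name] = case column.type
                      when :string, :text then "test value"
                      when :integer then 1
                      when :boolean then true
                      end # generating data that passes real validations gets hairy fast
end
post :create, article: attrs
attrs.each { |key, value| assert_equal value, Article.last[key] }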
But in reality it's just not going to cut it. In real life you don't want every column to be assignable. That's why you are using mass assignment protection in the first place.
And add to that the complexity of the different kinds of constraints and data types your models have. You'll end up with something extremely complex which just adds a bunch of tests with limited value.
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test.
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last
    valid_attributes.keys.each do |key|
      assert_equal article[key], valid_attributes[key]
    end
  end
end

How to test a Rails Model Class method which calls a method on all members?

In my Rails 4 project I've got a model, EventGenerator with an instance method generate (which creates some records in the database), and a class method generate_all which is intended to be called by a Rake task and looks something like this:
def self.generate_all
  all.each(&:generate)
end
I can think of several approaches to testing this (I'm using RSpec 3 and Fabrication):
1. Don't bother - it's too simple to require tests. DHH says "Don't aim for 100% coverage". On the other hand, this is going to be called by a rake task, so won't be regularly exercised: I feel like that's a good reason to have tests.
2. Create a couple of EventGenerator instances in the database and use any_instance.should_receive(:generate) as the assertion - but RSpec 3 now recommends against this and requires a fudge to make it work. This is a personal 'showcase project' so if possible I'd like everything to be best-practice. Also (DHH aside) shouldn't it still be possible to create fast model specs which don't touch the database?
3. Like 2, but stub out EventGenerator.all to return some instances without touching the database. But stubbing the class under test is bad, right? And fragile.
4. Don't worry about unit testing it and instead cover it with an integration test: create a couple of generators in the database, run the method, check what gets changed/created in the database as a result.
5. Change the implementation: pass in an array of instances. But that's just pushing the problem back by a layer, and a test-driven change which I can't see benefitting the design.
Since I can't really think of a downside for option 4, perhaps that's the answer, but I feel like I need some science to back that up.
I would actually not bother to test it (so your 1.) as the method is really trivial.
If you would like to have it under test coverage though, I'd suggest you use your option 3. My reasons are as follows:
Your test for .generate_all just needs to assert that #generate gets called on every instance returned by .all. In this context the actual implementation of .all is irrelevant and can be stubbed.
Your tests for #generate should assert that the method does the right thing. If these tests assert the proper functioning of this method, there's no need for the tests for .generate_all to duplicate any assertion.
Testing the proper functioning of #generate in the tests for .generate_all leads to unnecessary dependencies between the tests.
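A minimal sketch of that spec in RSpec 3 (the doubles are verifying doubles, so they also guard against #generate being renamed):
describe EventGenerator, '.generate_all' do
  it 'calls #generate on every generator returned by .all' do
    generators = [instance_double(EventGenerator), instance_double(EventGenerator)]
    allow(EventGenerator).to receive(:all).and_return(generators)
    generators.each { |generator| expect(generator).to receive(:generate) }

    EventGenerator.generate_all
  end
end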

Testing service classes - Rails

I was wondering how you guys would test a Service Object class in Rails? Let's say a User signs up. A user is created in the database, is added to the email list, and other stuff happens. How do you test this?
class UserRegistrar
  def sign_up(user)
    User.create(user) # or something to this effect
    EmailMarketing.add_to_email_list(user)
    SuperSecretClass.do_secret_stuff(user)
    LoggingThing.new.log_stuff_about(user)
  end
end
(Controller action)
def create
  UserRegistrar.sign_up(params)
  # stuff for the strong params, etc...
end
What I do is to just make sure that the methods are called, with the correct arguments. The results of the methods (like making sure that a user is really added to the list) are tested in their respective classes. Am I doing it right?
Yes, if I needed to write a unit test for a class like the one you show, I'd do it the way you say, with mocks. In your example, all of the work is delegated to high-level model methods which will need their own tests and might be used in more than one place, so it doesn't make sense to test the functionality of those methods in tests of the service. And there are not that many method calls to mock, so it won't be too painful.
However,
if any of those model methods were used only in one service, I'd consider moving those to the service to slim down the model and make the service more coherent. If I ended up with methods on the service that did a lot of work themselves, rather than just delegating, I'd test their functionality in tests of the service, by creating database objects and asserting how the service changes them.
on the other hand, if I had a service that only delegated, it might already be fully tested by my acceptance test (since I'm doing BDD and write acceptance tests first), and there would be no need to unit-test the service at all.
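A rough sketch of the mock-based unit test described in the first paragraph, using RSpec's verifying doubles (the params shape is an assumption, and adjust the call depending on whether sign_up is exposed as a class or instance method):
describe UserRegistrar do
  it 'creates the user and notifies each collaborator' do
    params = { email: 'jane@example.com' }   # assumed attribute shape
    logger = instance_double(LoggingThing)

    expect(User).to receive(:create).with(params)
    expect(EmailMarketing).to receive(:add_to_email_list).with(params)
    expect(SuperSecretClass).to receive(:do_secret_stuff).with(params)
    allow(LoggingThing).to receive(:new).and_return(logger)
    expect(logger).to receive(:log_stuff_about).with(params)

    UserRegistrar.sign_up(params)            # or UserRegistrar.new.sign_up(params)
  end
end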
There's a danger to stubbing everything in your test, because then you're only testing that the class fits the test, instead of fitting in with the rest of your code. If EmailMarketing.add_to_email_list one day becomes EmailMarketing.add_to(:email_list ...) in a refactoring, your test wouldn't pick it up.
You can test the effects of the code like this, using User as an example:
expect {
UserRegistrar.sign_up(user)
}.to change{
user.persisted?
}.from(false).to(true)
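If you do stub the collaborators, RSpec's verifying doubles soften that risk: class_double and instance_double fail as soon as the stubbed method no longer exists on the real class, so the rename of add_to_email_list to add_to mentioned above would be caught. A small sketch:
# as_stubbed_const swaps the real constant for the verified double during the example
email_marketing = class_double(EmailMarketing).as_stubbed_const
expect(email_marketing).to receive(:add_to_email_list).with(user)

UserRegistrar.sign_up(user)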

Setting up a test in rspec with multiple "it" blocks

Say I have an instance method that does many different things that I need to test, something like store#process_order. I'd like to test that it sends an email to the customer, adds an entry in the orders table, charges a credit card, etc. What's the best way to set this up in rspec? Currently, using rspec and factory girl, I do something like this:
describe Store do
  describe "#process_order" do
    before do
      @store = Factory(:store)
      @order = Factory(:order)
      # call the process method
      @store.process_order(@order)
    end

    it 'sends customer an email' do
      ...
    end

    it 'inserts order to db' do
      ...
    end

    it 'charges credit card' do
      ...
    end
  end
end
But it feels really tedious. Is this really the right way to write a spec for a method that I need to make sure does several different things?
Note: I'm not interested in answers about whether or not this is good design. It's just an example I made up to help w/ my question - how to write these types of specs.
This is a good method because you can identify which element is broken if something breaks in the future. I am all for testing things individually. I tend not to check that things get inserted into the database, as you are then testing Rails functionality. I simply check the validity of the object instead.
This is the method that is used in the RSpec book too. I would certainly recommend reading it if you are unsure about anything related to RSpec.
I think what you are doing is fine and I think it's the way rspec is intended to be used. Every statement (specification) about your app gets its own block.
You might consider using before(:all) so that the order only has to get processed once, but this can introduce dependencies on the order in which the specs are run.
You could combine all the code inside describe "#process_order" into one big it block if you wanted to, but then it would be less readable and rspec would give you less useful error messages when a spec fails. Go ahead and add raise to one of your tests and see what a nice error message you get from rspec if you do it the way you are currently doing it.
If you want to test the entire process then we're talking about an integration test, not a unit test. If you want to test the #process_order method, which does several things, then I'd expect those things to mean calling other methods. So I would add #should_receive expectations and make sure that all paths are covered. Then I would spec all those methods separately so I have a nice unit spec suite for everything. In the end I would definitely write an integration/acceptance spec which checks that all those pieces work together.
Also, I would use #let to set up test objects, which removes dependencies between spec examples (it blocks). Otherwise a failure in one of the examples may cause a failure in another example, giving you incorrect feedback.
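A sketch of that approach for the example in the question, using #let and message expectations (OrderMailer and PaymentGateway are made-up collaborators, since the question doesn't show what process_order actually calls):
describe Store, '#process_order' do
  let(:store) { Factory(:store) }
  let(:order) { Factory(:order) }

  it 'sends customer an email' do
    OrderMailer.should_receive(:confirmation).with(order)   # hypothetical collaborator
    store.process_order(order)
  end

  it 'charges credit card' do
    PaymentGateway.should_receive(:charge).with(order)      # hypothetical collaborator
    store.process_order(order)
  end
end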
