Why does Rubocop prefer `have_received` to `receive`?

I have tests of the form:
expect(ClassA).to receive(:method)
ClassB.perform
Rubocop would prefer if I refactored this to use have_received, which requires ClassA to be mocked. In other words, I need to set up:
allow(ClassA).to receive(:method)
ClassB.perform
expect(ClassA).to have_received(:method)
What's the point? Just following the Arrange Act Assert format?

Refactoring to use have_received allowed me to move a lot of the set-up into a before block and put the assertions after the action, following the Arrange Act Assert format.
The code noticeably reads better.
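The ordering benefit can be sketched without RSpec at all: `allow` plus `have_received` works because the stub records calls as they happen, so the assertion can run after the action. A minimal hand-rolled spy (all names here are illustrative, not from the question):

```ruby
# A call-recording spy in plain Ruby, showing the mechanics behind
# allow(...).to receive + have_received: record calls during the action,
# then assert afterwards (Arrange, Act, Assert).
class CallSpy
  attr_reader :calls

  def initialize
    @calls = []               # Arrange: start with an empty call log
  end

  def ping(*args)             # stands in for the stubbed :method
    @calls << args
  end
end

spy = CallSpy.new                   # Arrange
spy.ping("hello")                   # Act: code under test sends the message
puts spy.calls.include?(["hello"])  # Assert afterwards => prints true
```

This is exactly why the `expect(...).to receive` form has to come first: it has no call log to inspect later, so the expectation must be declared before the action.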

Related

RSpec + Rubocop - why receive_message_chain is a code smell?

I am about to write specs for my custom validator, which uses this chain to check that a file attached with ActiveStorage is a txt:
return if blob.filename.extension.match?('txt')
Normally, I would be able to stub it with this call:
allow(attached_file).to receive_message_chain(:blob, :byte_size) { file_size }
Rubocop says it is an offence and points me to docs: https://www.rubydoc.info/gems/rubocop-rspec/1.7.0/RuboCop/Cop/RSpec/MessageChain
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1. Am I missing something here?
Why should I avoid stubbing message chains?
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1.
This is, in fact, the point. Having those 5 lines there will likely make you feel slightly uneasy. This can be thought of as positive design pressure: your test setup being complex is telling you to have a look at the implementation. Using #receive_message_chain allows us to feel good about designs that expose complex interactions up front.
One of the authors of RSpec explains some of this in a GitHub issue.
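That design pressure usually points toward collapsing the chain on the owning object, so callers (and tests) touch only one hop. A plain-Ruby sketch of the idea, with illustrative names (`AttachedFile`, `Blob` are stand-ins, not ActiveStorage classes):

```ruby
require "forwardable"

# Collapsing a chain (attached_file.blob.byte_size) into a single message
# on the owning object, so a test can stub one method instead of a chain.
class AttachedFile
  extend Forwardable

  def initialize(blob)
    @blob = blob
  end

  def_delegator :@blob, :byte_size   # attached_file.byte_size, one hop
end

Blob = Struct.new(:byte_size)

file = AttachedFile.new(Blob.new(1024))
puts file.byte_size  # => 1024
```

With the delegation in place, the spec only needs `allow(attached_file).to receive(:byte_size)`, and the message chain (and the cop offence) disappears.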
What can I do instead?
One option is to attach a fixture file to the record in the setup phase of your test:
before do
file_path = Rails.root.join("spec", "fixtures", "files", "text.txt")
record.attribute.attach(io: File.open(file_path), filename: "text.txt")
end
This will test the validator end-to-end, without any stubbing.
Another option is to extract a named method, and then stub that instead.
In your validator:
def allowed_file_extension?
blob.filename.extension.match?("txt")
end
In your test:
before do
allow(validator).to receive(:allowed_file_extension?).and_return(true)
end
This has the added benefit of making the code a little clearer by naming a concept. (There's nothing preventing you from adding this method even if you use a test fixture.)
Just as a counterpoint, I frequently get this rubocop violation with tests around logging like:
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
crank_it_up(max_allowed + 1)
I could mock Rails to return a double for logger, then check that the double receives :error. But that's a bit silly, IMO. Rails.logger.error is more of an idiom than a message chain.
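One middle ground, since `Rails.logger` is a single call that returns an object: stub the logger object itself (`allow(Rails.logger).to receive(:error)` in RSpec) rather than chaining through `Rails`. The idea in plain, runnable Ruby, where the `Rails` module and `crank_it_up` are stand-ins for illustration only:

```ruby
require "logger"
require "stringio"

# Stand-in for the Rails module; only the logger accessor is sketched.
module Rails
  class << self
    attr_accessor :logger
  end
end

log_output = StringIO.new
Rails.logger = Logger.new(log_output)   # swap in an inspectable logger

# Hypothetical code under test, matching the shape of the example above.
def crank_it_up(amount)
  Rails.logger.error("limit exceeded by #{amount}") if amount > 0
end

crank_it_up(1)
puts log_output.string.include?("limit exceeded by 1")  # => prints true
```

Because only one object is replaced, this avoids both the message-chain offence and the double-of-a-double setup.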
I could create a log_error method in my model or a helper (and sometimes I do), but often that's just a pointless wrapper for Rails.logger.error
So I either end up disabling RSpec/MessageChain for that line, or perhaps for the entire project (since I would never abuse it for real...right?). It would be nice if there were a way to be more selective about disabling/muting this cop across the project, but I'm not sure how that could work, in any case.

What's the "right" way to test functions that call methods on new instances?

I seem to have encountered literature alluding to it being bad practice to use RSpec's any_instance_of methods (e.g. expect_any_instance_of). The relish docs even list these methods under the "Working with legacy code" section (http://www.relishapp.com/rspec/rspec-mocks/v/3-4/docs/working-with-legacy-code/any-instance) which implies I shouldn't be writing new code leveraging this.
I feel that I am routinely writing new specs that rely on this capability. A great example is any method that creates a new instance and then calls a method on it. (In Rails where MyModel is an ActiveRecord) I routinely write methods that do something like the following:
def my_method
my_active_record_model = MyModel.create(my_param: my_val)
my_active_record_model.do_something_productive
end
I generally write my specs looking for the do_something_productive being called with use of the expect_any_instance_of. e.g.:
expect_any_instance_of(MyModel).to receive(:do_something_productive)
subject.my_method
The only other way I can see to spec this would be with a bunch of stubs like this:
my_double = double('my_model')
expect(MyModel).to receive(:create).and_return(my_double)
expect(my_double).to receive(:do_something_productive)
subject.my_method
However, I consider this worse because a) it's longer and slower to write, and b) it's much more brittle and white box than the first way. To illustrate the second point, if I change my_method to the following:
def my_method
my_active_record_model = MyModel.new(my_param: my_val)
my_active_record_model.save
my_active_record_model.do_something_productive
end
then the double version of the spec breaks, but the any_instance_of version works just fine.
So my questions are, how are other developers doing this? Is my approach of using any_instance_of frowned upon? And if so, why?
This is kind of a rant, but here are my thoughts:
The relish docs even list these methods under the "Working with legacy code" section (http://www.relishapp.com/rspec/rspec-mocks/v/3-4/docs/working-with-legacy-code/any-instance) which implies I shouldn't be writing new code leveraging this.
I don't agree with this. Mocking/stubbing is a valuable tool when used effectively and should be used in tandem with assertion style testing. The reason for this is that mocking/stubbing enables an "outside-in" testing approach where you can minimize coupling and test high level functionality without needing to call every little db transaction, API call, or helper method in your stack.
The question really is do you want to test state or behavior? Obviously, your app involves both so it doesn't make sense to tether yourself to a single paradigm of testing. Traditional testing via assertions/expectations is effective for testing state and is seldom concerned with how state is changed. On the other hand, mocking forces you to think about interfaces and interactions between objects, with less burden on the mutation of state itself since you can stub and shim return values, etc.
I would, however, urge caution when using *_any_instance_of and avoid it if possible. It's a very blunt instrument that is easy to abuse, especially when a project is small, only to become a liability when the project is larger. I usually take *_any_instance_of as a smell that either my code or my tests can be improved, but there are times when it's necessary to use it.
That being said, between the two approaches you propose, I prefer this one:
my_double = double('my_model')
expect(MyModel).to receive(:create).and_return(my_double)
expect(my_double).to receive(:do_something_productive)
subject.my_method
It's explicit, well-isolated, and doesn't incur overhead with database calls. It will likely require a rewrite if the implementation of my_method changes, but that's OK. Since it's well-isolated it probably won't need to be rewritten if any code outside of my_method changes. Contrast this with assertions where dropping a column in a database can break almost the entire test suite.
I don't have a better solution to testing code like that than either of the two you gave. In the stubbing/mocking solution I'd use allow rather than expect for the create call, since the create call isn't the point of the spec, but that's a side issue. I agree that the stubbing and mocking is painful, but that's usually what I do.
However, that code has just a bit of Feature Envy. Extracting a method onto MyModel clears up the smell and eliminates the testing issue:
class MyModel < ActiveRecord::Base
def self.create_productively(attrs)
create(attrs).do_something_productive
end
end
def my_method
MyModel.create_productively(attrs)
end
# in the spec
expect(MyModel).to receive(:create_productively)
subject.my_method
create_productively is a model method, so it can and should be tested with real instances and there's no need to stub or mock.
I often notice that the need to use less-commonly-used features of RSpec means that my code could use a little refactoring.
def self.my_method(attrs)
create(attrs).tap {|m| m.do_something_productive}
end
# Spec
let(:attrs) { { my_param: my_val } } # a valid hash
describe "when calling my_method with valid attributes" do
it "does something productive" do
expect(MyModel.my_method(attrs)).to have_done_something_productive
end
end
Naturally, you will have other tests for #do_something_productive itself.
The trade-off is always the same: mocks and stubs are fast, but brittle. Real objects are slower but less brittle, and generally require less test maintenance.
I tend to reserve mocks/stubs for external dependencies (e.g. API calls), or when using interfaces that have been defined but not implemented.
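That policy, doubling the boundary rather than your own objects, usually means injecting the external dependency so the test can swap in a fake. A small sketch with entirely hypothetical names:

```ruby
# Inject the HTTP client at the boundary: production passes a real client,
# the test passes a canned fake. Nothing on the object under test itself
# needs stubbing.
class WeatherReport
  def initialize(client)
    @client = client
  end

  def summary(city)
    "#{city}: #{@client.get("/weather/#{city}")}"
  end
end

# Test-side fake standing in for the real API client.
class FakeClient
  def get(_path)
    "sunny"                  # canned response instead of a network call
  end
end

puts WeatherReport.new(FakeClient.new).summary("Oslo")  # => "Oslo: sunny"
```

The fake keeps the spec fast and deterministic, while `WeatherReport` itself is still exercised with real method calls.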

Grails splitting geb tests

I am new to Selenium tests in Grails. I am using Geb and Spock for testing.
I want to split my test plans into some smaller test plans. Is it possible to make a spec which calls other specs?
What you should do is have a 'Spec' for each area of the application. If that area has more than one scenario, include them all in the 'Spec' that matches that area.
For Example.
LogInSpec
Has a log in scenario.
Form validation scenario.
Log out scenario.
This way things stay organized and it's easy to see which sections of your application are failing.
If your goal is to run these in parallel then I recommend that you try and keep tests even across the different test classes. This way they all take around the same amount of time.
You can create traits for each of the modules.
For example, consider validating the entry of details in a form:
name, contact details, and other comments.
Create a trait which has methods to fill in these details and verify them after saving.
Use this trait in your spec.
This will make your code more readable and clearer.
I have now found another solution.
I make a simple Groovy class:
class ReUseTest {
    def myTest(def spec) {
        when:
        spec.at ConnectorsPage

        then:
        spec.btnNewConnector.click()
    }
}
In my spec I can call this like:
def "MyTest"() {
    specs.reuseable.ReUseTest myTest = new specs.reuseable.ReUseTest()
    myTest.myTest(this)
}
I can now use this part in every spec.

Understanding Mutant Failures

I have the following ActiveRecord model class method:
def self.find_by_shortlink(shortlink)
find_by!(shortlink: shortlink)
end
When I run Mutant against this method, I'm told there were 17 mutations and 16 are still "alive" after the test has run.
Here's one of the "live" mutations:
-----------------------
evil:Message.find_by_shortlink:/home/peter/projects/kaboom/app/models/message.rb:29:3f9f2
@@ -1,4 +1,4 @@
def self.find_by_shortlink(shortlink)
- find_by!(shortlink: shortlink)
+ find_by!(shortlink: self)
end
If I manually make this same change, my tests fail - as expected.
So my question is: how do I write a unit test that "kills" this mutation?
Disclaimer, mutant author speaking.
Mini cheat sheet for such situations:
1. Make sure your specs are green right now.
2. Change the code as the diff shows.
3. Try to observe an unwanted behavior change.
   - Impossible? (Likely) Take the mutation as better code, or (unlikely) report a bug to mutant.
   - Found a behavior change? Encode it as a test, or change a test to cover that behavior.
4. Rerun mutant to verify the death of the mutation.
5. Make sure mutant actually lists the tests you added as used for that mutation. If not, restructure the tests so that the selected tests cover the subject of the mutation.
Now to your case: if you apply the mutation to your code, the argument gets ignored and essentially hardcoded (the value for the key :shortlink used in your finder no longer changes depending on the argument shortlink). So the only thing you need to do in your test is add a case where the argument shortlink matters to the expectation you place in the test.
If passing self as the value for the :shortlink finder has the same effect as passing in the current argument, try a different argument. Coercion of values in finders can be tricky in AR; there is a chance your model coerces self to the same value you are testing with.
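Concretely, "make the argument matter" means the spec needs at least two records whose shortlinks differ, so a finder that ignores its argument can no longer pass. Sketched in plain Ruby, with a Struct standing in for the ActiveRecord model:

```ruby
# Struct stands in for the ActiveRecord model; the point is only that the
# assertion distinguishes the real finder from the mutant that ignores
# its argument.
Message = Struct.new(:shortlink)

MESSAGES = [Message.new("abc"), Message.new("xyz")]

def find_by_shortlink(shortlink)
  MESSAGES.find { |m| m.shortlink == shortlink } or raise "not found"
end

# With two different shortlinks present, only the argument-respecting
# version can return the right record for each query:
puts find_by_shortlink("xyz").shortlink  # => "xyz"
```

A spec asserting both `find_by_shortlink("abc")` and `find_by_shortlink("xyz")` return their own records kills any mutation that hardcodes the finder's value.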

How to test a random uniq values with rspec

I have this code:
def self.generate_random_uniq_code
code = sprintf("%06d", SecureRandom.random_number(999999))
code = self.generate_random_uniq_code if self.where(code: code).count > 0
code
end
The goal is to create random codes for a new register; the code can't already exist in the registers.
I'm trying to test it this way, but when I mock SecureRandom it always returns the same value:
it "code is unique" do
old_code = Code.new
old_code.code = 111111
new_code = Code.new
expect(SecureRandom).to receive(:random_number) {old_code.code}
new_code.code = Code.generate_random_uniq_code
expect(new_code.code).to_not eq old_code.code
end
I was trying to find out if there is a way to enable and disable the mock behavior, but I could not find one. I'm not sure I'm doing the test the right way; the code itself seems to work fine to me.
Any help is welcome, thanks!
TL;DR
Generally, unless you are actually testing a PRNG that you wrote, you're probably testing the wrong behavior. Consider what behavior you're actually trying to test, and examine your alternatives. In addition, a six-digit number doesn't really have enough of a key space to ensure real randomness for most purposes, so you may want to consider something more robust.
Some Alternatives
One should always test behavior, rather than implementation. Here are some alternatives to consider:
Use a UUID instead of a six-digit number. The UUID is statistically less likely to encounter collisions than your current solution.
Enforce uniqueness in your database column by adjusting the schema.
Use a Rails uniqueness validator in your model.
Use FactoryGirl sequences or lambdas to return values for your test.
Fix Your Spec
If you really insist on testing this piece of code, you should at least use the correct expectations. For example:
# This won't do anything useful, if it even runs.
expect(new_code.code).to_not old_code.code
Instead, you should check for equality, with something like this:
old_code = 111111
new_code = Code.generate_random_uniq_code
new_code.should_not eq old_code
Your code may be broken in other ways (e.g. the code variable in your method doesn't seem to be an instance or class variable) so I won't guarantee that the above will work, but it should at least point you in the right direction.
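On the narrower question of making the stub return different values on successive calls: RSpec's `and_return` accepts a list (`allow(SecureRandom).to receive(:random_number).and_return(111111, 222222)`), returning each value once in order, which is enough to drive the retry branch. The retry logic itself can be exercised in plain Ruby; the in-memory registry and queued fake RNG below are stand-ins for the model and for the RSpec stub:

```ruby
require "securerandom"

# Stand-in for the model: existing codes live in an in-memory array
# instead of a where(code: ...) query.
class CodeRegistry
  def initialize(existing, rng: SecureRandom)
    @existing = existing
    @rng = rng
  end

  def generate_random_uniq_code
    code = format("%06d", @rng.random_number(999_999))
    code = generate_random_uniq_code if @existing.include?(code)
    code
  end
end

# Fake RNG returning queued values, like and_return(111111, 222222).
class QueuedRng
  def initialize(*values)
    @values = values
  end

  def random_number(_max)
    @values.shift
  end
end

registry = CodeRegistry.new(["111111"], rng: QueuedRng.new(111_111, 222_222))
puts registry.generate_random_uniq_code  # => "222222"
```

Because the first queued value collides with an existing code, the method is forced through its retry branch and returns the second value, which is exactly the behavior the original spec was trying to pin down.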
