I have the following ActiveRecord model class method:
def self.find_by_shortlink(shortlink)
  find_by!(shortlink: shortlink)
end
When I run Mutant against this method, I'm told there were 17 mutations and 16 are still "alive" after the test has run.
Here's one of the "live" mutations:
-----------------------
evil:Message.find_by_shortlink:/home/peter/projects/kaboom/app/models/message.rb:29:3f9f2
@@ -1,4 +1,4 @@
def self.find_by_shortlink(shortlink)
- find_by!(shortlink: shortlink)
+ find_by!(shortlink: self)
end
If I manually make this same change, my tests fail - as expected.
So my question is: how do I write a unit test that "kills" this mutation?
Disclaimer, mutant author speaking.
Mini cheat sheet for such situations:
Make sure your specs are green right now.
Change the code as the diff shows
Try to observe an unwanted behavior change.
Impossible?
(likely) Take the mutation as better code.
(unlikely) Report a bug to mutant
Found a behavior change: Encode it as a test, or change a test to cover that behavior.
Rerun mutant to verify the death of the mutation.
Make sure mutant actually lists the tests you added as used for that mutation. If not, restructure the tests so that the ones covering the mutation's subject are among the tests selected for it.
Now to your case: if you apply the mutation to your code, the argument gets ignored and essentially hardcoded (the value for the :shortlink key used in your finder no longer changes with the shortlink argument). So the only thing you need to do in your test is add a case where the shortlink argument matters to the expectation you place in the test.
If passing self as the value for the :shortlink finder has the same effect as passing the argument you currently test with, try a different argument. Coercion of values in finders can be tricky in AR; there is a chance your model coerces to the same value you are testing with as an argument.
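For illustration, a spec along these lines kills the mutation, because the expected result changes with the argument (I'm assuming RSpec here, and that Message records can be created with just a shortlink - adjust to your own factories and attributes):

describe '.find_by_shortlink' do
  it 'returns the message with the given shortlink' do
    matching = Message.create!(shortlink: 'abc123')
    Message.create!(shortlink: 'xyz789') # a second record, so the argument actually selects between rows

    expect(Message.find_by_shortlink('abc123')).to eq(matching)
  end

  it 'raises when no message has the given shortlink' do
    expect { Message.find_by_shortlink('missing') }.to raise_error(ActiveRecord::RecordNotFound)
  end
end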
Related
I have an existing application (Rails 6) with a set of tests (minitest). I've just converted my tests to use factory_bot instead of fixtures but I'm having a strange problem with records created and confirmed in the test and controller being unavailable in a PORO that does the actual work. This problem occurs inconsistently and never seems to happen when I run an individual test, only when tests are run in bulk (e.g. a whole file or directory). The more tests are run, the more likely the failure, it seems.
(NB I've never seen this code fail in actual use - it only happens during tests.)
Summary
Previously, when using fixtures, every test ran successfully both individually and when run all together with rails t. Now, with factory_bot, a few of my tests often (but not always) fail, all related to the use of the same object that is defined as a PORO.
Drilling down, I have found that there's an issue with records sometimes mysteriously going missing or being unavailable within the PORO during the test, even though they're confirmed as present in the test and in the controller that calls the PORO!
Details
In my application, I have a RichText object that receives some text and processes it, highlighting words in the text that match those stored in a Dictionary table. In my tests, I create several test Dictionary records, and expect the RichText object to perform appropriately when passed test data. And it does, when the individual test files are run (and always did when I used fixtures).
However, now, the records are created and available in the test and in the controller it calls, but then are not available within the RichText object created by the controller. With no Dictionary records available in the RichText object, the test naturally fails because no words are highlighted in the text. And, again, this only seems to happen when I run the tests as a group rather than running just a single test file (e.g. rails t test/objects/rich_text.rb passes, but rails t test/objects will fail within the same rich_text.rb test file).
It doesn't seem to matter whether I create the records with factory_bot#create or directly with Dictionary#create, which suggests it's nothing to do with factory_bot - but then why has this just started happening?
I do have parallelisation enabled in minitest but disabling it makes no difference - the tests still fail the same way.
Code
Example test code that runs and passes, up to the last assertion here, which sometimes fails as described above:
test 'can create new content' do
  create(:dictionary, word_root: 'word_1')
  create(:dictionary, word_root: 'word_2')
  create(:dictionary, word_root: 'word_3')
  assert_equal 3, Dictionary.all.count
  ...
  # This next line is the one that calls the relevant controller code below
  post '/api/v0/content', headers: @auth_headers, params: @new_content_params
  ...
  # This assertion passes, as it did above, even though the error's already happened after the post above
  assert_equal 3, Dictionary.all.count
  # This assertion checks the response from the above post and fails under certain circumstances, as described above
  assert_equal @new_content_output, response.body
  ...
end
I've added checks to the controller as below and, again, everything's fine through this code, which is called by the post line in the test above (i.e. the database records are present and correct just before the RichText object is called):
def create
  ...
  byebug unless Dictionary.all.count == 3
  rich_text = RichText::Basic.new(@organisation, new_version[:content])
  ...
end
However, the RichText object's initialize method immediately fails the same check for these records - but only if the test is being run in bulk rather than individually:
class RichText::Basic
  def initialize(organisation, text)
    byebug unless Dictionary.all.count == 3
    ...
  end
end
Rails 6.1.4, ruby 2.7.1
Having tried various things (like disabling transactions in the affected tests), I found that the root cause was a constant defined in the RichText class (a line I didn't include in the question!). It looks like there was a race condition or similar that meant the RichText class was sometimes loaded - and its constant evaluated - before the database was populated, leaving it with an empty constant.
Replacing the constant with a direct database call resolved the problem. It does mean slightly more database calls but, on the flip side, it also makes it slightly easier to update the Dictionary table. (This happens rarely - on the order of once a month - which is why I'd put it into a constant.)
From:
class RichText::Basic
  WORDS = Dictionary.standard

  def some_method
    WORDS.each do...
  end
end
to
class RichText::Basic
  def some_method
    Dictionary.standard.each do...
  end
end
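If the extra queries ever become a concern, a middle ground (just a sketch, not what I ended up doing) is to memoize lazily per instance, so the lookup happens on first use rather than when the class file is loaded:

class RichText::Basic
  private

  # Evaluated on the first call for each instance, not at class-load time,
  # so records created before the object is built are visible.
  # Callers inside the class use `words` wherever they used WORDS.
  def words
    @words ||= Dictionary.standard
  end
end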
I am about to write specs for my custom validator, which uses this chain to check whether a file attached with ActiveStorage is a txt:
return if blob.filename.extension.match?('txt')
Normally, I would be able to stub it with this call:
allow(attached_file).to receive_message_chain(:blob, :byte_size) { file_size }
Rubocop says it is an offence and points me to docs: https://www.rubydoc.info/gems/rubocop-rspec/1.7.0/RuboCop/Cop/RSpec/MessageChain
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1. Am I missing something here?
Why should I avoid stubbing message chains?
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1.
This is, in fact, the point. Having those 5 lines there will likely make you feel slightly uneasy. This can be thought of as positive design pressure. Your test setup being complex is telling you to have a look at the implementation. Using #receive_message_chain allows us to feel good about designs that expose complex interactions up front.
One of the authors of RSpec explains some of this in a GitHub issue.
What can I do instead?
One option is to attach a fixture file to the record in the setup phase of your test:
before do
  file_path = Rails.root.join("spec", "fixtures", "files", "text.txt")
  record.attribute.attach(io: File.open(file_path), filename: "text.txt")
end
This will test the validator end-to-end, without any stubbing.
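With that attachment in place, the example itself can assert on the real validation result, along these lines (record here stands for whatever subject your spec already builds):

it "accepts a plain text attachment" do
  expect(record).to be_valid
end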
Another option is to extract a named method, and then stub that instead.
In your validator:
def allowed_file_extension?
  blob.filename.extension.match?("txt")
end
In your test:
before do
  allow(validator).to receive(:allowed_file_extension?).and_return(true)
end
This has the added benefit of making the code a little clearer by naming a concept. (There's nothing preventing you from adding this method even if you use a test fixture.)
Just as a counterpoint, I frequently get this rubocop violation with tests around logging like:
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
crank_it_up(max_allowed + 1)
I could mock Rails to return a double for logger, then check that the double receives :error. But that's a bit silly, IMO. Rails.logger.error is more of an idiom than a message chain.
I could create a log_error method in my model or a helper (and sometimes I do), but often that's just a pointless wrapper for Rails.logger.error
So, I either end up disabling RSpec/MessageChain for that line, or perhaps for the entire project (since I would never abuse it for real...right?) It would be nice if there was a way to be more selective about disabling/muting this cop across the project...but I'm not sure how that could work, in any case.
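That said, one middle ground that satisfies the cop without adding a wrapper is to stub the logger object itself (a partial double) and spy on it - roughly:

allow(Rails.logger).to receive(:error)

crank_it_up(max_allowed + 1)

expect(Rails.logger).to have_received(:error).with('limit exceeded by 1')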
The docs at https://www.relishapp.com/rspec/rspec-core/v/3-5/docs/helper-methods/let-and-let say that a variable defined by let is not cached across examples, i.e. it changes between them.
I've made the same simple test as in the docs, but with an AR model:
RSpec.describe Contact, type: :model do
  let(:contact) { FactoryGirl.create(:contact) }

  it "cached in the same example" do
    a = contact
    b = contact
    expect(a).to eq(b)
    expect(Contact.count).to eq(1)
  end

  it "not cached across examples" do
    a = contact
    expect(Contact.count).to eq(2)
  end
end
The first example passed, but the second failed (expected 2, got 1). So the contacts table is empty again before the second example, in spite of what the docs say.
I had been using let and was sure it has the same value in each it block, and my test seems to prove it. So I suppose I've misunderstood the docs. Please explain.
P.S. I use DatabaseCleaner
P.P.S. I turned it off. Nothing changed.
EDIT
I turned off DatabaseCleaner and transactional fixtures and the tests pass.
As far as I can understand (I'm new to programming), let is evaluated once for each it block. If I have three examples, each calling the contact variable, my test db will have grown to three records by the end (I've tested this, and so it does).
And for the right test behaviour I should use DatabaseCleaner.
P.S. I use DatabaseCleaner
That's why your database is empty in the second example. It has nothing to do with let.
The behaviour you have shown is the correct behaviour. No example should be dependent on another example to set up the correct environment! If you did rely on caching then you are just asking for trouble later down the line.
The example in that document is just trying to prove a point about caching using global variables - it's a completely different scenario from unit testing a Rails application - it is not good practice to be reliant on previous examples having set something up.
Let's, for example, assume you then write 10 other tests that follow on from this, all of which rely on the fact that the previous examples have created objects. Then at some point in the future you delete one of those examples ... BOOM! Every test after that will suddenly fail.
Each test should be able to be tested in isolation from any other test!
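Concretely, assuming your data is rolled back between examples (DatabaseCleaner or transactional fixtures), each example that touches contact gets its own fresh record and should only ever rely on a count of 1:

it "creates its own contact" do
  contact # first reference in this example runs FactoryGirl.create
  expect(Contact.count).to eq(1)
end

it "also creates its own contact" do
  contact # evaluated again for this example: a brand new record
  expect(Contact.count).to eq(1)
end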
In RSpec (specifically rspec-mocks), what is the difference between Message Expectations and Test Spies? They seem similar, and appear right next to each other as separate sections in the readme.
i.e. what is the difference between:
expect(validator).to receive(:validate) # message expectation
and
expect(validator).to have_received(:validate) # test spy
Message expectations can be set on any object and represent a declaration that something is going to happen (or not happen) to that object in the future. If the expectation is violated during subsequent test execution, the test will fail at the time the violation occurs. If the expectation has not been met by the end of the test, the test will fail as well.
The have_received family of methods only works on test doubles and examines what has happened to the double in the past, from the time of the double's creation up through the current method call. It succeeds or fails at that point in time. The term "test spy" is a little misleading, because the support for this backward-looking mechanism is a standard part of rspec-mocks at this point. You don't do anything "special" to create a "test spy".
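A minimal side-by-side illustration with a plain double:

# Message expectation: declared up front, about the future.
validator = double("validator")
expect(validator).to receive(:validate)
validator.validate # if this call never happens, the example fails at verification time

# Test spy: allow first, act, then check what already happened.
validator = double("validator")
allow(validator).to receive(:validate)
validator.validate
expect(validator).to have_received(:validate)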
You can't always use spies when you test; a typical case is setting expectations on classes.
Example:
expect(User).to receive(:new)
there is no way to do this with a spy (unless you do dependency injection).
Now, you could do the following:
user = double('user', save: true)
expect(User).to receive(:new).and_return user
User.new.save
expect(user).to have_received(:save)
You see clearly that:
you have to set expectations on the real object before you run the real code (it looks kind of weird to set expectations before triggering the code)
you can set expectations on spies after the real code has run, which is more natural
I have this code:
def self.generate_random_uniq_code
  code = sprintf("%06d", SecureRandom.random_number(999999))
  code = self.generate_random_uniq_code if self.where(code: code).count > 0
  code
end
The goal is to create random codes for a new register; the code can't already exist in the registers.
I'm trying to test it this way, but when I mock SecureRandom it always returns the same value:
it "code is unique" do
old_code = Code.new
old_code.code = 111111
new_code = Code.new
expect(SecureRandom).to receive(:random_number) {old_code.code}
new_code.code = Code.generate_random_uniq_code
expect(new_code.code).to_not eq old_code.code
end
I was trying to find out whether there is a way to enable and disable the mock behavior, but I could not find one. I'm not sure I'm doing the test the right way; the code doesn't seem quite right to me.
Any help is welcome, thanks!
TL;DR
Generally, unless you are actually testing a PRNG that you wrote, you're probably testing the wrong behavior. Consider what behavior you're actually trying to test, and examine your alternatives. In addition, a six-digit number doesn't really have enough of a key space to ensure real randomness for most purposes, so you may want to consider something more robust.
Some Alternatives
One should always test behavior, rather than implementation. Here are some alternatives to consider:
Use a UUID instead of a six-digit number. The UUID is statistically less likely to encounter collisions than your current solution.
Enforce uniqueness in your database column by adjusting the schema.
Use a Rails uniqueness validator in your model.
Use FactoryGirl sequences or lambdas to return values for your test (see the sketches after this list).
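For instance, the validator and sequence options might look roughly like this (sketches only - the class and factory names are taken from your question, everything else is an assumption):

# In the model: let the framework and database guarantee uniqueness.
class Code < ActiveRecord::Base
  validates :code, presence: true, uniqueness: true
end

# In the factory: deterministic, collision-free values for tests.
FactoryGirl.define do
  factory :code do
    sequence(:code) { |n| format("%06d", n) }
  end
end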
Fix Your Spec
If you really insist on testing this piece of code, you should at least use the correct expectations. For example:
# This won't do anything useful, if it even runs.
expect(new_code.code).to_not old_code.code
Instead, you should check for equality, with something like this:
old_code = 111111
new_code = Code.generate_random_uniq_code
new_code.should_not eq old_code
Your code may be broken in other ways (e.g. the code variable in your method doesn't seem to be an instance or class variable) so I won't guarantee that the above will work, but it should at least point you in the right direction.
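As a final note on the stubbing problem itself: a stub can return successive values on successive calls, which avoids the "it always returns the same value" issue you ran into. A sketch, assuming code is stored as a string and your retry logic stays as posted:

it "retries until the code is unique" do
  Code.create!(code: "111111") # an existing record to collide with

  # First call collides with the existing code, second call does not.
  allow(SecureRandom).to receive(:random_number).and_return(111111, 222222)

  expect(Code.generate_random_uniq_code).to eq("222222")
end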