How do I test `rand()` with RSpec? - ruby-on-rails

I have a method that does something like this:
def some_method
  chance = rand(4)
  if chance == 1
    # logic here
  else
    # another logic here
  end
end
When I use RSpec to test this method, rand(4) inside it always returns 0. I am not testing Ruby's rand method itself, I am testing my method.
What is a common practice to test my method?

There are two approaches I would consider:
Approach 1:
Use a known seed value with srand(seed) in a before :each block:
before :each do
  srand(67809)
end
This works across Ruby versions, and gives you control in case you want to cover particular combinations. I use this approach a lot - thinking about it, that's because the code I was testing uses rand() primarily as a data source, and only secondarily (if at all) for branching. Also, it gets called a lot, so exerting call-by-call control over returned values would be counter-productive: I would end up shovelling in lots of test data that "looked random", probably generated in the first place by calling rand()!
You may wish to call your method multiple times in at least one test scenario to ensure you have reasonable coverage of combinations.
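A minimal sketch of what Approach 1 can look like (assuming some_method lives on the object under test and returns something observable; the final assertion is only a placeholder):
describe '#some_method' do
  before :each do
    # fixed seed => rand(4) produces the same sequence on every run
    srand(67809)
  end

  it 'copes with a repeatable mix of chance values' do
    results = Array.new(50) { subject.some_method }
    # replace with whatever invariant should hold for every sampled result
    expect(results.length).to eq(50)
  end
end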
Approach 2:
If you have branch points due to values output from rand() and your assertions are of the type "if it chooses X, then Y should happen", then it is also reasonable in the same test suite to mock out rand( n ) with something that returns the values you want to make assertions about:
require 'mocha/setup'
Kernel.expects(:rand).with(4).returns(1)
# Now run your test of specific branch
In essence these are both "white box" test approaches: they both require you to know that your routine uses rand() internally.
A "black box" test is much harder - you would need to assert that behaviour is statistically OK, and you would also need to accept a very wide range of possibilities since valid random behaviour could cause phantom test failures.

I'd extract the random number generation:
def chance
  rand(4)
end

def some_method
  if chance == 1
    # logic here
  else
    # another logic here
  end
end
And stub it:
your_instance.stub(:chance) { 1 }
This doesn't tie your test to the implementation details of rand and if you ever decide to use another random number generator, your test doesn't break.
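For reference, your_instance.stub(:chance) { 1 } is the older rspec-mocks syntax; with current RSpec the equivalent stub would be:
allow(your_instance).to receive(:chance).and_return(1)
# or, to exercise the other branch:
allow(your_instance).to receive(:chance).and_return(0)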

It seems that the best idea is to use a stub instead of the real rand. This way you will be able to test all the values that you are interested in. As rand is defined in the Kernel module, you should stub it using:
Kernel.stub(:rand).with(anything) { randomized_value }
In particular contexts you can define randomized_value with the let method, for example:
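Something along these lines (a sketch; note the caveat in the next answer about the stub only being hit when rand is called via Kernel):
let(:randomized_value) { 1 }

before do
  # drive the stub from the let-defined value for this context
  Kernel.stub(:rand).with(anything) { randomized_value }
end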

I found that just stubbing rand, i.e. using Kernel.stub(:rand) as in Samuil's answer, did not initially work. My code under test called rand directly, e.g.
random_number = rand
However, if I changed the code to
random_number = Kernel.rand
then the stubbing worked.

This works in RSpec:
allow_any_instance_of(Object).to receive(:rand).and_return(1)

Related

RSpec + Rubocop - why receive_message_chain is a code smell?

I am about to write specs for my custom validator, which uses this chain to check if a file attached with ActiveStorage is a txt:
return if blob.filename.extension.match?('txt')
Normally, I would be able to stub it with this call:
allow(attached_file).to receive_message_chain(:blob, :byte_size) { file_size }
Rubocop says it is an offence and points me to docs: https://www.rubydoc.info/gems/rubocop-rspec/1.7.0/RuboCop/Cop/RSpec/MessageChain
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1. Am I missing something here?
Why should I avoid stubbing message chains?
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1.
This is, in fact, the point. Having those 5 lines there will likely make you feel slightly uneasy. This can be thought of as positive design pressure. Your test setup being complex is telling you to have a look at the implementation. Using #receive_message_chain allows us to feel good about designs that expose complex interactions up front.
One of the authors of RSpec explains some of this in a GitHub issue.
What can I do instead?
One option is to attach a fixture file to the record in the setup phase of your test:
before do
  file_path = Rails.root.join("spec", "fixtures", "files", "text.txt")
  record.attribute.attach(io: File.open(file_path), filename: "text.txt")
end
This will test the validator end-to-end, without any stubbing.
Another option is to extract a named method, and then stub that instead.
In your validator:
def allowed_file_extension?
  blob.filename.extension.match?("txt")
end
In your test:
before do
  allow(validator).to receive(:allowed_file_extension?).and_return(true)
end
This has the added benefit of making the code a little clearer by naming a concept. (There's nothing preventing you from adding this method even if you use a test fixture.)
Just as a counterpoint, I frequently get this rubocop violation with tests around logging like:
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
crank_it_up(max_allowed + 1)
I could mock Rails to return a double for logger, then check that the double receives :error. But that's a bit silly, IMO. Rails.logger.error is more of an idiom than a message chain.
I could create a log_error method in my model or a helper (and sometimes I do), but often that's just a pointless wrapper for Rails.logger.error
So, I either end up disabling RSpec/MessageChain for that line, or perhaps for the entire project (since I would never abuse it for real...right?) It would be nice if there was a way to be more selective about disabling/muting this cop across the project...but I'm not sure how that could work, in any case.
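For the single-offence case, an inline disable keeps the cop active everywhere else, e.g.:
# rubocop:disable RSpec/MessageChain
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
# rubocop:enable RSpec/MessageChain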

How does Rspec 'let' helper work with ActiveRecord?

It says here https://www.relishapp.com/rspec/rspec-core/v/3-5/docs/helper-methods/let-and-let that a variable defined by let changes across examples.
I've made the same simple test as in the docs, but with an AR model:
RSpec.describe Contact, type: :model do
  let(:contact) { FactoryGirl.create(:contact) }

  it "cached in the same example" do
    a = contact
    b = contact
    expect(a).to eq(b)
    expect(Contact.count).to eq(1)
  end

  it "not cached across examples" do
    a = contact
    expect(Contact.count).to eq(2)
  end
end
The first example passed, but the second failed (expected 2, got 1). So the contacts table is empty again before the second example, in spite of the docs.
I was using let and was sure it had the same value in each it block, and my test proves it. So I suppose I misunderstand the docs. Please explain.
P.S. I use DatabaseCleaner
P.P.S. I turned it off. Nothing changed.
EDIT
I turned off DatabaseCleaner and transactional fixtures and the test passes.
As far as I can understand (I'm new to programming), let is evaluated once for each it block. If I have three examples that each call the contact variable, my test db will grow to three records by the end (I've tested, and so it does).
And for the right test behavior I should use DatabaseCleaner.
P.S. I use DatabaseCleaner
That's why your database is empty in the second example. Has nothing to do with let.
The behaviour you have shown is the correct behaviour. No example should be dependent on another example to set up the correct environment! If you did rely on caching then you are just asking for trouble later down the line.
The example in that document is just trying to prove a point about caching using global variables - it's a completely different scenario to unit testing a Rails application - it is not good practice to be reliant on previous examples having set something up.
Let's, for example, assume you then write 10 other tests that follow on from this, all of which rely on the fact that the previous examples have created objects. Then at some point in the future you delete one of those examples ... BOOM! every test after that will suddenly fail.
Each test should be able to be tested in isolation from any other test!
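For reference, a typical DatabaseCleaner setup that produces exactly this per-example cleanup looks something like the following (a sketch; the strategy you pick depends on your suite):
# spec/rails_helper.rb (or spec_helper.rb)
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end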

How to test random unique values with RSpec

I have this code:
def self.generate_random_uniq_code
  code = sprintf("%06d", SecureRandom.random_number(999999))
  code = self.generate_random_uniq_code if self.where(code: code).count > 0
  code
end
The goal is to create random codes for a new record; the code can't already exist on any of the existing records.
I'm trying to test it this way, but when I mock SecureRandom it always returns the same value:
it "code is unique" do
old_code = Code.new
old_code.code = 111111
new_code = Code.new
expect(SecureRandom).to receive(:random_number) {old_code.code}
new_code.code = Code.generate_random_uniq_code
expect(new_code.code).to_not eq old_code.code
end
I was trying to find out whether there is a way to enable and disable the mock behaviour, but I could not find one. I'm not sure I'm doing the test the right way; the code doesn't seem to work quite right to me.
Any help is welcome, thanks!
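As an aside, an RSpec stub can return different values on successive calls via and_return(a, b), which is one way to simulate a collision followed by a fresh code (a sketch; 222222 is an arbitrary second value, and old_code would need to be saved for the collision check in the method to fire):
old_code.save!
allow(SecureRandom).to receive(:random_number).and_return(old_code.code, 222222)
# first call collides with old_code, so the method recurses and gets 222222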
TL;DR
Generally, unless you are actually testing a PRNG that you wrote, you're probably testing the wrong behavior. Consider what behavior you're actually trying to test, and examine your alternatives. In addition, a six-digit number doesn't really have enough of a key space to ensure real randomness for most purposes, so you may want to consider something more robust.
Some Alternatives
One should always test behavior, rather than implementation. Here are some alternatives to consider (a brief sketch combining the first few follows the list):
Use a UUID instead of a six-digit number. The UUID is statistically less likely to encounter collisions than your current solution.
Enforce uniqueness in your database column by adjusting the schema.
Use a Rails uniqueness validator in your model.
Use FactoryGirl sequences or lambdas to return values for your test.
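A minimal sketch combining the first three suggestions (the Code class name comes from your snippet; everything else is illustrative):
class Code < ApplicationRecord # or ActiveRecord::Base on older Rails
  # pair the validator with a unique index in the migration, e.g.
  #   add_index :codes, :code, unique: true
  validates :code, presence: true, uniqueness: true

  before_validation on: :create do
    self.code ||= SecureRandom.uuid
  end
end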
Fix Your Spec
If you really insist on testing this piece of code, you should at least use the correct expectations. For example:
# This won't do anything useful, if it even runs.
expect(new_code.code).to_not old_code.code
Instead, you should check for equality, with something like this:
old_code = 111111
new_code = Code.generate_random_uniq_code
new_code.should_not eq old_code
Your code may be broken in other ways (e.g. the code variable in your method doesn't seem to be an instance or class variable) so I won't guarantee that the above will work, but it should at least point you in the right direction.

Multiple should statements in one rspec it clause - bad idea?

Here's my rspec test:
it "can release an idea" do
james.claim(si_title)
james.release(si_title)
james.ideas.size.should eq 0
si_title.status.should eq "available"
end
Are the two should lines at the end a really bad idea? I read somewhere that you should only test for one thing per it block, but it seems silly to make a whole test just to make sure that the title status changes (the same function does both things in my code).
My interpretation of this isn't so much that there should be precisely one assertion / call to should per spec, but that there should only be one bit of behaviour tested per spec, so for example
it 'should do foo and bar' do
  subject.do_foo.should be_true
  subject.do_bar.should be_true
end
is bad - you're speccing 2 different behaviours at the same time.
On the other hand, if your 2 assertions are just verifying different aspects of one thing then I'm ok with that, for example:
it 'should return a prime integer' do
  result = subject.do_x
  result.should be_a(Integer)
  result.foo.should be_prime
end
For me it wouldn't make too much sense to have one spec that checks that it returns an integer and a separate one that it returns a prime.
Of course in this case, the be_prime matcher could easily do both of these checks - perhaps a good rule of thumb is that multiple assertions are ok if you could sensibly reduce them to 1 with a custom matcher (whether doing this is actually worthwhile probably depends on your situation).
In your particular case, it could be argued that there are 2 behaviours at play - one is changing the status and the other is mutating the ideas collection. I would reword your specs to say what the release method should do -
it 'should change the status to available'
it 'should remove the idea from the claimants ideas'
At the moment those things always happen simultaneously but I would argue they are separate behaviours - you could easily imagine a system where multiple people can claim/release an idea and the status only change when the last person releases the idea.
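Spelled out, that might look something like this (a sketch assuming james and si_title are set up, e.g. with let, as in your example):
describe '#release' do
  before do
    james.claim(si_title)
    james.release(si_title)
  end

  it 'changes the status to available' do
    si_title.status.should eq 'available'
  end

  it "removes the idea from the claimant's ideas" do
    james.ideas.size.should eq 0
  end
end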
I have the same problem...
It ought to be one should per it (my boss says) but it makes testing time-consuming and, as you said, silly.
Tests require good sense and flexibility or they might end up enslaving you.
Anyway, I agree with your test.
My policy is to always consider multiple assertions to be a sign of a potential problem and worth a second thought, but not necessarily wrong. Probably 1/3 of the specs I write end up having multiple assertions in them for one reason or another.
One of the problems with having multiple assertions is that when one fails, you can't see if the other one passed or not. This can sometimes be worked around by building an array of results and asserting the value of the array.
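For example, a sketch of that workaround using the objects from the question:
it 'can release an idea' do
  james.claim(si_title)
  james.release(si_title)
  # collect both outcomes, then assert once so a failure shows both values
  results = [james.ideas.size, si_title.status]
  results.should eq [0, 'available']
end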
In the case you are asking about, I don't think that multiple assertions are a problem, but I do see something else that seems an issue to me. It looks like your spec may be overly coupled.
I think that you're trying to assert the behavior of whatever kind of a thing james is, but the test is also depending on behavior of si_title in order to tell us what was done to it (by way of its eventual #status value). What I would usually do instead is make si_title a test-double and use #should_receive to directly specify the messages that it should expect.
I think Frederick Cheung gave a very good answer (+1), especially the why, but I'd also like to give a bit of comparison code for you to look at, which uses the its, let, before and context syntax:
context "Given a si_title" do
let(:james) { James.new } # or whatever
before do
james.claim(si_title)
james.release(si_title)
end
context "That is valid" do
let(:si_title) { Si_title.new } # or whatever
describe "James' ideas" do
subject { james.ideas }
its(:size) { should == 0 }
its(:whatever) { should == "Whatever" }
end
describe "Si title" do
subject { si_title }
its(:status) { should == "available" }
end
end
context "That is invalid" do
# stuff here
end
end
I would even go further and make the expected values lets and then make the examples shared_examples, so that they could then be used to check the different aspects (null arguments, invalid arguments, an incorrect object… ), but I believe that this is a far better way to spec your intentions and still cut down on any repetition. Taking the example from Frederick's answer:
it 'should return a prime integer' do
  result = subject.do_x
  result.should be_a(Integer)
  result.foo.should be_prime
end
Using RSpec's syntax to full effect you get this:
let(:result) { 1 }
subject{ result }
its(:do_x) { should be_a(Integer) }
its(:foo) { should be_prime }
Which means you can check multiple aspects of a subject.

How can I check Rails record caching in unit tests?

I'm trying to make a unit test to ensure that certain operations do / do not query the database. Is there some way I can watch for queries, or some counter I can check at the very worst?
If your intent is to discern whether or not Rails (ActiveRecord) actually caches queries, you don't have to write a unit test for those - they already exist and are part of Rails itself.
Edit:
In that case, I would probably see if I could adapt one of the strategies the Rails team uses to test ActiveRecord itself. Check the following test from the ActiveRecord query cache tests:
def test_middleware_caches
  mw = ActiveRecord::QueryCache.new lambda { |env|
    Task.find 1
    Task.find 1
    assert_equal 1, ActiveRecord::Base.connection.query_cache.length
  }
  mw.call({})
end
You may be able to do something like the following:
def check_number_of_queries
  mw = ActiveRecord::QueryCache.new lambda { |env|
    # Assuming this object is set up to perform all its operations already
    MyObject.first.do_something_and_perform_side_operations
    puts ActiveRecord::Base.connection.query_cache.length.to_s
  }
  mw.call({}) # actually invoke the middleware so the block runs
end
I haven't tried such a thing, but it might be worth investigating further. If the above actually does return the number of cached queries waiting to happen, it should be trivial to change the puts to an assert for your test case.
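Making that last step explicit, the adapted version with an assertion in place of puts might look like this (untested, per the caveat above; expected_count is a placeholder for whatever your case predicts):
def test_expected_number_of_cached_queries
  expected_count = 1 # placeholder: whatever your scenario should produce
  mw = ActiveRecord::QueryCache.new lambda { |env|
    MyObject.first.do_something_and_perform_side_operations
    assert_equal expected_count, ActiveRecord::Base.connection.query_cache.length
  }
  mw.call({})
end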
