When to use RSpec's let! instead of before?

I have been told that it makes sense to use let!() instead of a before(:each) block. However, I can't see much logic in doing that. Does it actually make sense to do something like the example below:
context 'my super context' do
  let!(:something) do
    Model.create(subject: "Hello", body: "World")
  end

  it '...' do
    # We never call something
    # All we want is just to evaluate and create the record
  end
end
Keep in mind that something is never called anywhere in the example at all. Technically I don't see much difference between that and the following, unless the former is somehow stricter and easier to understand without documentation, just like plain English:
context 'my super context' do
  before(:each) do
    Model.create(subject: "Hello", body: "World")
  end

  it '...' do
    # We never call something
    # All we want is just to evaluate and create the record
  end
end
Could someone help me to understand the idea behind the rule I was given?
I can understand why you would use let instead of before(:all) or before(:each) with an instance variable inside. But here it just feels like people are blindly following the phrase "use a let helper instead of a before block" found somewhere in a blog post.
Thank you very much!

No, it is not a best practice to always use let! rather than a before block. You're correct:
If you want to run a block of code before every example, and you need that code to return a value, use let!.
If you want to run a block of code before every example, but you don't need to pass a value from that code directly to your example, use before. (Using let! would mislead the reader into thinking that its value was being used.)
Note that both let! and before blocks should be used sparingly in any case. They create the risk of later test writers adding tests in the same block that don't need the result of the let! or before, making those tests harder to understand and slower.
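To make the distinction concrete, here is a minimal sketch (Model and its attributes come from the question; the example bodies are made up):

context 'when the example uses the record' do
  # let! creates the record before each example AND gives the value a name
  let!(:something) { Model.create(subject: "Hello", body: "World") }

  it 'reads the record through the helper' do
    expect(something.subject).to eq("Hello")
  end
end

context 'when only the side effect matters' do
  # before makes it obvious that no example refers to the record by name
  before { Model.create(subject: "Hello", body: "World") }

  it 'sees the seeded record' do
    expect(Model.count).to eq(1)
  end
end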

Related

How do RSpec's let and let! replace the original parameters?

When writing tests in RSpec, if you keep repeating the same required parameters {...}, you can use let to define them once. This avoids writing a pile of parameter setup ahead of every example.
However, I don't quite understand the paradigm from Better Specs. Its original code is this:
describe '#type_id' do
  before { @resource = FactoryBot.create :device }
  before { @type = Type.find @resource.type_id }

  it 'sets the type_id field' do
    expect(@resource.type_id).to eq(@type.id)
  end
end
After switching to let it becomes the following:
describe '#type_id' do
  let(:resource) { FactoryBot.create :device }
  let(:type) { Type.find resource.type_id }

  it 'sets the type_id field' do
    expect(resource.type_id).to eq(type.id)
  end
end
It looks like the way the resource is called is pretty much the same, so what's the benefit of using let? What does FactoryBot.create :device do? And I can't see where type is being called.
The difference is that lets are lazily evaluated and then memoized for the rest of the spec.
So in the first example, first the before blocks run and set the values of @resource and @type, and then the spec runs.
In the second example, the spec runs, and when it references resource the let block runs and returns a value; then when type is referenced, it runs the let block for that. The let block for type itself references resource, so it gets the value for resource that was memoized from the first time resource was referenced.
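A small sketch of that behaviour (the puts call is only there to show when the block runs):

describe 'let laziness' do
  let(:resource) do
    puts "building resource" # printed at first reference, not at setup time
    FactoryBot.create :device
  end

  it 'never runs the block if resource is never referenced' do
    expect(1 + 1).to eq(2)
  end

  it 'runs the block once, then memoizes' do
    resource # block runs here
    resource # memoized value; no second create
  end
end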
For what it's worth, I disagree that lets are 'better'. My team and I have found that all they do is make specs much harder to understand for very little benefit, and we have removed all use of them in all our projects.
In fact, I consider that most of 'better specs' is actually poor advice, so if you are struggling to understand why something is 'better', you are very much not alone :)

Why isn't the args parameter used in ActionController::Instrumentation::render?

I am new to Ruby and to Rails, and am trying to understand fully what I'm reading.
I am looking at some of the Rails source code, in this case action_controller/metal/instrumentation.rb.
def render(*args)
  render_output = nil
  self.view_runtime = cleanup_view_runtime do
    Benchmark.ms { render_output = super }
  end
  render_output
end
I understand that *args is using the splat operator to collect the arguments together into an array. But after that, it stops making much sense to me.
I can't fathom why render_output is set to nil before being reassigned to the result of super, which appears to be called with no arguments. I gather that some speed test is being done, but coming from other languages I'd expect this to be something more like Benchmark.ms(render_output), or perhaps Benchmark.start followed by render_output followed by Benchmark.end. I'm having a hard time following the way it works here.
But more importantly, I don't really follow why args isn't used again. Why bother defining a param that isn't used? And I mean, clearly it is getting used, I just don't see how. There's some hidden mechanism here that I haven't learned about yet.
In this context, it is important to note how super works, because in some cases it implicitly passes arguments, and you might not expect that.
When you have method like
def method(argument)
  super
end
then super calls the overridden implementation of method implicitly, with the exact same arguments that the current method was called with. That means in this example super will actually call super(argument).
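A minimal sketch of that implicit forwarding (class names made up):

class Base
  def greet(name)
    "hello, #{name}"
  end
end

class Child < Base
  def greet(name)
    super.upcase # bare super forwards name, i.e. super(name)
  end
end

Child.new.greet("ruby") # => "HELLO, RUBY"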
Of course, you can still define a method call that explicitly sends other arguments to the original implementation, like in this example:
def method(argument)
  super(argument + 1)
end
Another important edge case is when you want to call super without any arguments even though the current method was called with arguments. Then you need to be very explicit, like this:
def method(argument)
  super() # note the empty parentheses
end
Let me try to describe what I think this code does.
"*args is using the splat operator to collect the arguments together into an array"
That is totally correct; however, the arguments are never actually used, and if you go to the master branch you'll see the parameter has since been changed to a bare *. As to why it is defined and not used: I think that's just a question of bad design. It should have been named _args, or written as a single splat *, as it is now.
render_output is set to nil because of scoping: the variable has to be defined outside the block so that a value assigned inside the block is still visible after the block finishes; otherwise the assignment would only create a variable local to that block, lambda, or proc. Refer to this article.
As for Benchmark.start: blocks are a great Ruby construction that makes a start/stop API unnecessary. You are totally correct that a speed test is being done; we can see that Benchmark.ms is just a thin decorator over the benchmark library (source).
You may wonder why we can't just pass the output, as in Benchmark.ms(render_output). But what would the ms function receive then? The finished result, something like <div>my html</div>, and there is no way to measure how long a string took to produce after the fact. That's why super is called inside the block: constructing the block doesn't run it; the benchmark library calls it and measures its execution, like this:
module Benchmark
  ...
  def realtime # :yield:
    r0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - r0
  end
  ...
end
So realtime measures the wall-clock time of the block's execution; this is the code from the original library.
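To mirror the render pattern using only the standard library (the summing loop is a made-up stand-in for the real rendering work):

require "benchmark"

result = nil
elapsed = Benchmark.realtime do
  result = (1..50_000).reduce(:+) # the work happens inside the block so it can be timed
end
puts "computed #{result} in #{(elapsed * 1000).round(2)}ms"

Note that result plays the same role as render_output: it is defined outside the block so the value survives after the block returns.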

RSpec + RuboCop - why is receive_message_chain a code smell?

I am about to write specs for my custom validator, which uses this chain to check whether a file attached with ActiveStorage is a txt:
return if blob.filename.extension.match?('txt')
Normally, I would be able to stub it with this call:
allow(attached_file).to receive_message_chain(:blob, :byte_size) { file_size }
Rubocop says it is an offence and points me to docs: https://www.rubydoc.info/gems/rubocop-rspec/1.7.0/RuboCop/Cop/RSpec/MessageChain
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1. Am I missing something here?
Why should I avoid stubbing message chains?
I would have to declare double for blob and byte_size and stub them in separate lines, ending up with 5 lines of code instead of 1.
This is, in fact, the point. Having those 5 lines there will likely make you feel slightly uneasy. This can be thought of as positive design pressure: your test setup being complex is telling you to take a look at the implementation. Using #receive_message_chain lets us feel good about designs that expose complex interactions up front.
One of the authors of RSpec explains some of this in a GitHub issue.
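For reference, the un-chained setup would look something like this (attached_file and file_size come from the question; ActiveStorage::Blob is an assumption about the blob's class):

blob = instance_double(ActiveStorage::Blob, byte_size: file_size)
allow(attached_file).to receive(:blob).and_return(blob)

Each stubbed hop is now spelled out, which is exactly the friction the cop wants you to feel.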
What can I do instead?
One option is to attach a fixture file to the record in the setup phase of your test:
before do
  file_path = Rails.root.join("spec", "fixtures", "files", "text.txt")
  record.attribute.attach(io: File.open(file_path), filename: "text.txt")
end
This will test the validator end-to-end, without any stubbing.
Another option is to extract a named method, and then stub that instead.
In your validator:
def allowed_file_extension?
  blob.filename.extension.match?("txt")
end
In your test:
before do
  allow(validator).to receive(:allowed_file_extension?).and_return(true)
end
This has the added benefit of making the code a little clearer by naming a concept. (There's nothing preventing you from adding this method even if you use a test fixture.)
Just as a counterpoint, I frequently get this rubocop violation with tests around logging like:
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
crank_it_up(max_allowed + 1)
I could mock Rails to return a double for logger, then check that the double receives :error. But that's a bit silly, IMO. Rails.logger.error is more of an idiom than a message chain.
I could create a log_error method in my model or a helper (and sometimes I do), but often that's just a pointless wrapper for Rails.logger.error
So I either end up disabling RSpec/MessageChain for that line, or perhaps for the entire project (since I would never abuse it for real...right?). It would be nice if there were a way to be more selective about disabling/muting this cop across the project, but I'm not sure how that could work, in any case.
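For the per-line option, RuboCop's standard inline directives do the job; a sketch:

# rubocop:disable RSpec/MessageChain
expect(Rails).to receive_message_chain(:logger, :error).with('limit exceeded by 1')
# rubocop:enable RSpec/MessageChain

And for something between one line and the whole project, the cop also accepts an Exclude list of paths in .rubocop.yml.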

Set an expectation without mocking anything

Using MiniTest::Spec and Mocha:
describe "#devices" do
it "scopes the devices by the provided :ip_list" do
ips = 'fast tests ftw!'
ds = DeviceSearch.new ip_list: ips
Device.expects(:scope_by_ip_list).once.with(ips)
ds.devices
end
end
When I make the code work correctly, this test will fail, because calling Device.expects(:scope_by_ip_list) also stubs Device.scope_by_ip_list, and since I don't specify a .returns(Devices.scoped) or some such, it stubs out the method with nil. So, in my code which properly scopes a list of devices and then does further operations, the further operations blow up.
I don't want to have to specify a .returns parameter, though, because I totally don't care what it returns. I don't want to stub the method at all! I just want to set up an expectation on it, and leave it functioning just the way it is.
Is there a way to do that?
(To me, it seems very awkward to say something like Device.expects(:foo).returns('bar')—when I say that Model expects method, I'm not saying to stub that method! We can say Device.stubs(:foo), if we want to stub it.)
The behavior is intended and can't be changed. Look at the following post to see how it can be circumvented:
rspec 2: detect call to method but still have it perform its function
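For comparison, modern RSpec expresses this directly with and_call_original; a rough equivalent of the Mocha expectation above would be:

expect(Device).to receive(:scope_by_ip_list).once.with(ips).and_call_original

Here and_call_original verifies the call while leaving the real method's behaviour intact, which is exactly what the question is asking for.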

How does instance_eval work and why does DHH hate it?

At about the 19:00 mark in his RailsConf presentation, David Heinemeier Hansson talks about the downsides of instance_eval:
For a long time I ranted and raved against instance_eval, which is the concept of not using a yielded parameter (like do |people|) and just straight do something and then evaluate what's in that block within the scope of where you came from (I don't even know if that's a coherent explanation).
For a long time I didn't like that because it felt more complex in some sense. If you wanted to put your own code in there, were you going to trigger something that was already there? Were you going to override something? When you're yielding a specific variable you can chain everything off that and you can know [you're] not messing with anybody else's stuff.
This sounded interesting, but a) I don't know how instance_eval works in the first place, and b) I don't understand why it can be bad / increase complexity.
Can someone explain?
The thing that instance_eval does is that it runs the block in the context of a different instance. In other words, it changes the meaning of self which means it changes the meaning of instance methods and instance variables.
This creates a cognitive disconnect: the context in which the block runs is not the context in which it appears on the screen.
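A tiny demonstration of that switch (the class and variable names are made up):

class Builder
  def initialize
    @greeting = "builder's ivar"
  end
end

@greeting = "caller's ivar"
Builder.new.instance_eval { @greeting } # => "builder's ivar", not yours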
Let me demonstrate that with a slight variation of @Matt Briggs's example. Let's say we're building an email instead of a form:
def mail
  builder = MailBuilder.new
  yield builder
  # executed after the block
  # do stuff with builder
end

mail do |f|
  f.subject @subject
  f.name name
end
In this case, @subject is an instance variable of your object and name is a method of your class. You can use nice object-oriented decomposition and store your subject in a variable.
def mail &block
builder = MailBuilder.new
builder.instance_eval &block
# do stuff with builder
end
mail do
subject #subject
name name # Huh?!?
end
In this case, @subject is an instance variable of the mail builder object! It might not even exist! (Or even worse, it might exist and contain some completely stupid value.) There is no way for you to get access to your own object's instance variables. And how do you even call the name method of your object? Every time you try to call it, you get the mail builder's method.
Basically, instance_eval makes it hard to use your own code inside the DSL code. So, it should really only be used in cases where there is very little chance that this might be needed.
Ok, so the idea here is instead of something like this

form_for @obj do |f|
  f.text_field :field
end

you get something like this

form_for @obj do
  text_field :field
end
the first way is pretty straightforward, you end up with a pattern that looks like this

def form_for
  b = FormBuilder.new
  yield b
  b.fields.each do |f|
    # do stuff
  end
end
you yield out a builder object that the consumer calls methods on, and afterwards you call methods on the builder object to actually build the form (or whatever)
the second one is a bit more magical
def form_for &block
b = FormBuilder.new
b.instance_eval &block
b.fields.each |f|
#do stuff
end
end
in this one, instead of yielding the builder to the block, we take the block and evaluate it in the context of the builder
The second one increases complexity because you are sort of playing games with scope: you need to understand that, the consumer needs to understand that, and whoever wrote your builder needs to understand that. If everyone is on the same page, I don't know that it is necessarily a bad thing, but I do question the benefits versus the costs. I mean, how hard is it to just tack an f. in front of your methods?
The idea is that it's a little dangerous, in that you can never be quite sure you're not going to break something without reading all the code that deals with the object you're using instance_eval on.
Also if you, say, updated a library that didn't change the interface much but changed a lot of the object's internals, you could really do some damage.
