How can I make my specs run faster? - ruby-on-rails

I have several spec files that look like the following:
describe "My DSL" do
before :each do
#object = prepare_my_object
end
describe "foo" do
before :each do
#result = #object.read_my_dsl_and_store_stuff_in_database__this_is_expensive
end
it "should do this" do
#result.should be_this
end
it "should not do that" do
#result.should_not be_that
end
# ... several more tests about the state of #result
end
# ...
end
These tests take a long time, essentially because the second before :each block runs every time. Using before :all instead does not really help, because it gets called before the outer before :each. Putting all expectations in one single it block would help, but this is considered bad style.
What is best practice to have my expensive method being executed only once?

The fastest way to speed up RSpec is to completely decouple the database. Parsing the DSL and getting data into and out of a database are two different problems. If you have one method doing both, is it possible to break the method into pieces?
Ideally, your DSL would be cached locally, so it wouldn't have to be pulled from the db on every request anyway. It could get loaded once in memory and held there before refreshing.
If you run against a local, in-memory cache, and decouple the db, does that speed things up? If yes, then it's the db call that's slow. If your DSL is completely loaded up in memory and the tests are still slow, then the problem is your DSL itself.
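For example, if the expensive method can be split into a pure parsing step and a cheap persistence step, the parse only has to run once per group. A rough sketch (parse_my_dsl and persist_parsed_dsl are hypothetical names for the two halves, not methods from your code):
describe "My DSL" do
  before :all do
    # Pure, database-free half: runs once for the whole group.
    @parsed = parse_my_dsl
  end

  before :each do
    @object = prepare_my_object
  end

  describe "foo" do
    before :each do
      # Cheap database half: still runs per example, but no longer re-parses the DSL.
      @result = persist_parsed_dsl(@object, @parsed)
    end

    it "should do this" do
      @result.should be_this
    end
  end
end
Because the before :all block no longer depends on @object, the ordering problem you describe goes away: only the cheap half stays inside before :each.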

Related

Rspec tests failing when using Rails.cache, but pass if I do a binding.pry

I'm having a weird issue where I'm testing a controller that has a procedure that uses caching. The test is failing, but if I do a binding.pry inside the method that does the caching, the test passes.
example of the method containing the caching and the binding.pry:
def method_example
  data = Rails.cache.fetch(cache_key) do
    ProcedureService.new(params).generate
  end
  binding.pry
  data
end
Example of the test:
it "reverts record amount" do
expect(record.amount).to eq((original_amount + other_amount).to_d)
end
The caching is done via redis_store.
In the development environment it works fine. What I don't understand is why the test fails, yet passes when I add a stopper. It seems it could be something about the time it takes to fetch from the cache.
UPDATE
Using sleep instead of binding.pry also makes the test pass, so I assume this is a timing issue. What exactly is the problem, and how could I manage it?
I think this has to do with caching being enabled or not in your tests.
You can set an expectation like this with the current implementation of your example method:
expect(Rails).to receive_message_chain(:cache, :fetch).and_return(expected_value)
You can also inject the ProcedureService instance into the method and set an expectation on it, like this:
procedure_service_instance = instance_double('ProcedureService', generate: some_value_you_want_to_be_returned)
expect(procedure_service_instance).to receive(:generate)
If you make your example method look like this:
def method_example
  data = Constant.fetch_from_cache(cache_key)
  procedure_service.generate
  data
end
then you could get rid of the receive_message_chain expectation and use:
expect(Constant).to receive(:fetch_from_cache).with(cache_key).and_return(expected_value)
expect_any_instance_of(ProcedureService).to receive(:generate) { some_fake_return_value }
You can also enable caching in your tests; check these links: link1, link2, link3
I do not know exactly where and how your original code is written, but based on the example method you provided, I think setting expectations on the messages that get sent would do the trick. Note that your goal is not to test Rails caching itself, but to test that your code uses it the way you want.
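If you do want real caching behaviour in the spec rather than stubbing it out, one common approach is to swap in a memory store for tagged examples. A sketch (the file location and the :caching tag are just conventions I'm assuming, not part of your setup):
# spec/support/caching.rb (hypothetical location)
RSpec.configure do |config|
  config.around(:each, :caching) do |example|
    original_store = Rails.cache
    # Use an in-memory store so cached values actually persist during the example.
    Rails.cache = ActiveSupport::Cache::MemoryStore.new
    example.run
    Rails.cache = original_store
  end
end

# usage in a spec:
it "reverts record amount", :caching do
  expect(record.amount).to eq((original_amount + other_amount).to_d)
end
This keeps most of the suite running with caching disabled while letting the examples that exercise Rails.cache.fetch behave as they do in development.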

Load file in rspec before block VS load them once

Today I'm trying to speed up my test suite. My application is basically a big integrator between systems, so most of my tests use Savon mocks like this:
RSpec.describe MyClass do
  describe 'a function which sends a SOAP request' do
    before do
      savon.mock!
      savon.expects(action).returns(File.read("spec/fixtures/somefile.xml"))
    end

    after { savon.unmock! }

    it 'checks something'
    it 'checks something else'
    it 'checks something more'
    it 'checks something different'
  end
end
Obviously most of those tests are quite slow, as they are loading a file. Moreover, sometimes these mocks sit inside nested contexts in order to combine multiple shared examples, which increases the number of loads.
Hoping to speed up some of these tests, I tried to reduce the number of file loads by moving them outside the before block, like this:
RSpec.describe MyClass do
  describe 'a function which sends a SOAP request' do
    the_file = File.read("spec/fixtures/somefile.xml")

    before do
      savon.mock!
      savon.expects(action).returns(the_file)
    end

    after { savon.unmock! }

    it 'checks something'
    it 'checks something else'
    it 'checks something more'
    it 'checks something different'
  end
end
However, the speed does not change; I have blocks of 96 tests with multiple nested contexts and checks, and I haven't gained even 0.01 seconds. So my questions are:
I suppose the before block loads the file for each it; am I right?
Does RSpec or Savon have some kind of cache?
How can I track the number of times I'm really loading my example file?
Thank you!
Perhaps you should look into the hook scopes and pick one that suits you better, like before(:suite) or before(:context). Depending on which one you use, the block will be executed once per suite or once per example group instead of once per example:
https://relishapp.com/rspec/rspec-core/docs/hooks/before-and-after-hooks
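A sketch of the before(:context) variant applied to your example; instance variables set in before(:context) are shared by all the examples in the group, so the file is read once per group rather than once per example:
RSpec.describe MyClass do
  describe 'a function which sends a SOAP request' do
    before(:context) do
      # Read the fixture once for the whole group.
      @fixture_xml = File.read("spec/fixtures/somefile.xml")
    end

    before(:example) do
      savon.mock!
      savon.expects(action).returns(@fixture_xml)
    end

    after { savon.unmock! }

    it 'checks something'
  end
end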
Using let, i.e. let(:the_file) { File.read("spec/fixtures/somefile.xml") }, will solve your problem, as let is lazily evaluated and memoized within each example.

Use RSpec let(:foo) across examples

I'm using let(:foo) { create_foo() } inside my tests. create_foo is a test helper that does some fairly time expensive setup.
So every time a test is run, foo is created, and that takes some time. However, the foo object itself does not change; I just want to unit test methods on that object, one by one, separated into single tests.
So, is there an RSpec equivalent of let to share the variable across multiple examples, but keep the nice things like lazy loading (if foo isn't needed) and also the automatic method definition of foo so that it can be used in shared examples without referencing it as @foo?
Or do I have to simply define a
def foo
  create_foo()
end
Can you just put it in shared examples but use memoization?
def foo
  @foo ||= create_foo()
end
Using let in this way goes against what it was designed for. You should consider using before :all, which runs once per test group:
before :all do
  @instancevar = create_object()
end
Keep in mind that this may not be wise if create_object() hits a database since it may introduce coupling and maintain state between your tests.
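If you do need both laziness and sharing, one sketch (not a built-in RSpec feature) is to memoize on the example group class itself, which persists across the examples in the group:
def foo
  # create_foo is your existing helper; the ivar name is arbitrary.
  # The example group class outlives individual examples, so this is built at most once per group.
  self.class.instance_variable_get(:@__shared_foo) ||
    self.class.instance_variable_set(:@__shared_foo, create_foo())
end
Note that nested contexts are subclasses of the group and will build their own copy, and that, as with before :all, the object is never reset between examples, so it must not carry mutable state your tests rely on being fresh.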

Ruby Stubbing Best Practices for Time Consuming Operation

Curious about best practices for testing a particular situation.
I have a model that requires some time-consuming operations to set up: reaching out to external services, parsing, kernel stuff, etc. One particular part of the setup is basically optional: I'd like to check that it's been run, but the result won't matter for almost all tests.
This model is used as input to many other classes, so I want to avoid a lengthy test suite and overbearing setup for a relatively unimportant step.
I'd like to know if this covers my bases, or if I'm going about this all wrong.
Currently, I am:
Stubbing out the operation globally
config.before(:each) do
  LongOperation.any_instance.stub(:the_operation)
end
Testing that it gets called in my background job
code:
class BackgroundSetupWorker
  def perform
    LongOperation.the_operation
  end
end
and test:
LongOperation.should_receive(:the_operation)
Unit testing the long-running operation
before(:each) do
  LongOperation.unstub(:the_operation)
end

it "works properly" do
  expect(LongOperation.the_operation).to ...
end
I think the ideal thing would be to take the LongOperation class as a param so you can switch it out in the tests however you like.
class BackgroundSetupWorker
  def initialize(op_provider = LongOperation)
    @op_provider = op_provider
  end

  def perform
    @op_provider.the_operation
  end
end
# in spec
describe BackgroundSetupWorker do
  let(:op_provider) { double(the_operation: nil) }
  subject(:worker) { BackgroundSetupWorker.new(op_provider) }

  it 'should call op_provider' do
    worker.perform
    expect(op_provider).to have_received(:the_operation)
  end
end

What is the best practice when it comes to testing "infinite loops"?

My basic logic is to have an infinite loop running somewhere and test it as best as possible. The reason for having an infinite loop is not important (main loop for games, daemon-like logic...) and I'm more asking about best practices regarding a situation like that.
Let's take this code for example:
module Blah
  extend self

  def run
    some_initializer_method
    loop do
      some_other_method
      yet_another_method
    end
  end
end
I want to test the method Blah.run using RSpec (I also use RR, but a plain RSpec answer would be acceptable).
I figure the best way to do it would be to decompose a bit more, like separating the loop into another method or something:
module Blah
  extend self

  def run
    some_initializer_method
    do_some_looping
  end

  def do_some_looping
    loop do
      some_other_method
      yet_another_method
    end
  end
end
... this allows us to test run and mock the loop... but at some point the code inside the loop needs to be tested.
So what would you do in such a situation?
Simply not testing this logic, meaning test some_other_method and yet_another_method but not do_some_looping?
Have the loop break at some point via a mock?
... something else?
What might be more practical is to execute the loop in a separate thread, assert that everything is working correctly, and then terminate the thread when it is no longer required.
thread = Thread.new do
  Blah.run
end

assert_equal 0, Blah.foo

thread.kill
In RSpec 3.3, adding this line
allow(subject).to receive(:loop).and_yield
to your before hook will simply yield to the block once, without any looping.
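In context, that might look something like this (a sketch against the question's Blah module, stubbing the module directly rather than subject):
describe Blah do
  before do
    allow(Blah).to receive(:some_initializer_method)
    # Yield the loop body exactly once instead of looping forever.
    allow(Blah).to receive(:loop).and_yield
  end

  it "runs one iteration of the loop body" do
    expect(Blah).to receive(:some_other_method)
    expect(Blah).to receive(:yet_another_method)
    Blah.run
  end
end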
How about having the body of the loop in a separate method, like calculateOneWorldIteration? That way you can spin the loop in the test as needed. And it doesn’t hurt the API, it’s quite a natural method to have in the public interface.
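Applied to the question's Blah module, that refactor might look like this (a sketch; the method name is arbitrary):
module Blah
  extend self

  def run
    some_initializer_method
    loop { run_one_iteration }
  end

  # The loop body lives in a public method, so a spec can call it directly
  # without ever entering the loop.
  def run_one_iteration
    some_other_method
    yet_another_method
  end
end
The specs then exercise run_one_iteration directly, and run only needs a check that it wires the pieces together.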
You cannot test something that runs forever.
When faced with a section of code that is difficult (or impossible) to test, you should:
Refactor to isolate the difficult-to-test part of the code. Make the untestable parts tiny and trivial, and comment them to ensure they are not later expanded into something non-trivial.
Unit test the other parts, which are now separated from the difficult-to-test section.
Cover the difficult-to-test part with an integration or acceptance test.
If the main loop in your game only goes around once, this will be immediately obvious when you run it.
What about mocking the loop so that it gets executed only the number of times you specify?
module Object
  private

  def loop
    3.times { yield }
  end
end
Of course, you mock this only in your specs.
I know this is a little old, but you can also use the yields method to fake a block and pass a single iteration to a loop method. This should allow you to test the methods you're calling within your loop without actually putting it into an infinite loop.
require 'test/unit'
require 'mocha'

class Something
  def test_method
    puts "test_method"
    loop do
      puts String.new("frederick")
    end
  end
end

class LoopTest < Test::Unit::TestCase
  def test_loop_yields
    something = Something.new
    something.expects(:loop).yields.with() do
      String.expects(:new).returns("samantha")
    end
    something.test_method
  end
end
# Started
# test_method
# samantha
# .
# Finished in 0.005 seconds.
#
# 1 tests, 2 assertions, 0 failures, 0 errors
I almost always use a catch/throw construct to test infinite loops.
Raising an error may also work, but that's not ideal, especially if your loop's block rescues all errors, including Exceptions. If your block doesn't rescue Exception (or some other error class), then you can subclass Exception (or another non-rescued class) and rescue your subclass:
Exception example
Setup
class RspecLoopStop < Exception; end
Test
blah.stub!(:some_initializer_method)
blah.should_receive(:some_other_method)
blah.should_receive(:yet_another_method)

# make sure it repeats
blah.should_receive(:some_other_method).and_raise RspecLoopStop

begin
  blah.run
rescue RspecLoopStop
  # all done
end
Catch/throw example:
blah.stub!(:some_initializer_method)
blah.should_receive(:some_other_method)
blah.should_receive(:yet_another_method)
blah.should_receive(:some_other_method).and_throw :rspec_loop_stop

catch :rspec_loop_stop do
  blah.run
end
When I first tried this, I was concerned that using should_receive a second time on :some_other_method would "overwrite" the first one, but this is not the case. If you want to see for yourself, add blocks to should_receive to see if it's called the expected number of times:
blah.should_receive(:some_other_method) { puts 'received some_other_method' }
Our solution to testing a loop that only exits on signals was to stub the exit condition method to return false the first time but true the second time, ensuring the loop is only executed once.
Class with infinite loop:
class Scheduling::Daemon
  def self.run
    loop do
      if daemon_received_stop_signal?
        break
      end
      # do stuff
    end
  end
end
spec testing the behaviour of the loop:
describe Scheduling::Daemon do
  describe "#run" do
    before do
      Scheduling::Daemon.should_receive(:daemon_received_stop_signal?).
        and_return(false, true) # execute the loop once, then exit
    end

    it "does stuff" do
      Scheduling::Daemon.run
      # assert stuff was done
    end
  end
end
:) I had this query a few months ago.
The short answer is there is no easy way to test that. You test drive the internals of the loop. Then you plonk it into a loop method & do a manual test that the loop works till the terminating condition occurs.
The easiest solution I found is to yield the loop one time and then return. I've used Mocha here.
require 'spec_helper'
require 'blah'

describe Blah do
  it 'loops' do
    Blah.stubs(:some_initializer_method)
    Blah.stubs(:some_other_method)
    Blah.stubs(:yet_another_method)
    Blah.expects(:loop).yields().then().returns()
    Blah.run
  end
end
We're expecting that the loop is actually executed, and we ensure it exits after one iteration.
Nevertheless, as stated above, it's good practice to keep the looping method as small and simple as possible.
Hope this helps!
