Background
I have a Rails model that contains an ActiveRecord::Enum. I have a view helper that takes a value of this enum and returns one of several possible responses. Suppose the cases were called enum_cases, for example:
enum_cases = [:a, :b, :c]
def foo(input)
case input
when :a then 1
when :b then 2
when :c then 3
else raise NotImplementedError, "Unhandled new case: #{input}"
end
end
I want to unit-test this code. Checking the happy paths is trivial:
class FooHelperTests < ActionView::TestCase
test "foo handles all enum cases" do
assert_equal foo(:a), 1
assert_equal foo(:b), 2
assert_equal foo(:c), 3
assert_raises NotImplementedError do
foo(:d)
end
end
end
However, this has a flaw. If new cases are added (e.g. :z), foo will raise an error to bring our attention to it, prompting us to add the new case. But nothing stops you from forgetting to update the test to cover the new behaviour for :z. Now I know that's mainly the job of code coverage tools, and we do use one, but just not at such a strict level that single-line gaps will blow up. Plus this is kind of a learning exercise, anyway.
So I amended my test:
test "foo handles all enum cases" do
remaining_cases = enum_cases.to_set
tester = -> (arg) do
remaining_cases.delete(arg)
foo(arg)
end
assert_equal tester.call(:a), 1
assert_equal tester.call(:b), 2
assert_equal tester.call(:c), 3
assert_raises NotImplementedError do
tester.call(:d)
end
assert_empty remaining_cases, "Not all cases were tested! Remaining: #{remaining_cases}"
end
This works great; however, it has two responsibilities, and it's a pattern I end up copy/pasting (I have multiple functions to test like this):
Perform the actual testing of foo
Do bookkeeping to ensure all params were exhaustively checked.
I would like to make this test more focused by removing as much boilerplate as possible, and extracting it out to a place where it can easily be reused.
Attempted solution
In another language, I would just extract a simple test helper:
class ExhaustivityChecker
  def initialize(all_values, proc)
    @remaining_values = all_values.to_set
    @proc = proc
  end

  def run(arg, allow_invalid_args: false)
    assert @remaining_values.include?(arg) unless allow_invalid_args
    @remaining_values.delete(arg)
    @proc.call(arg)
  end

  def assert_all_values_checked
    assert_empty @remaining_values, "Not all values were tested! Remaining: #{@remaining_values}"
  end
end
Which I could easily use like:
test "foo handles all enum cases" do
tester = ExhaustivityChecker.new(enum_cases, -> (arg) { foo(arg) })
assert_equal tester.run(:a), 1
assert_equal tester.run(:b), 2
assert_equal tester.run(:c), 3
assert_raises NotImplementedError do
tester.run(:d, allow_invalid_args: true)
end
tester.assert_all_values_checked
end
I could then reuse this class in other tests, just by passing it different all_values and proc arguments, and remembering to call assert_all_values_checked.
Issue
However, this breaks because I can't call assert and assert_empty from a class that isn't a subclass of ActionView::TestCase. Is it possible to subclass/include some class/module to gain access to these methods?
enum_cases must be kept up to date when the production logic changes, violating the DRY principle. This makes it more likely for there to be a mistake. Furthermore, it is test code living in production code, another red flag.
We can solve this by refactoring the case into a Hash lookup, making it data driven, and by giving it a name that describes what it's associated with and what it does: these are "handlers". I've also turned it into a method, which makes it easier to access and will bear fruit later.
def foo_handlers
{
a: 1,
b: 2,
c: 3
}.freeze
end
def foo(input)
foo_handlers.fetch(input)
rescue KeyError
raise NotImplementedError, "Unhandled new case: #{input}"
end
Hash#fetch is used to raise a KeyError if the input is not found.
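As a quick illustration of the difference between [] and fetch (a throwaway snippet, separate from the helper itself):

handlers = { a: 1, b: 2, c: 3 }.freeze
handlers[:z]        # => nil, which would silently flow onward
handlers.fetch(:a)  # => 1
handlers.fetch(:z)  # raises KeyError (key not found: :z)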
Then we can write a data driven test by looping through, not foo_handlers, but a seemingly redundant expected Hash defined in the tests.
class FooHelperTests < ActionView::TestCase
  test "foo handles all expected inputs" do
    expected = {
      a: 1,
      b: 2,
      c: 3
    }.freeze

    # Verify expected has all the cases.
    assert_equal expected.keys.sort, foo_handlers.keys.sort

    # Drive the test with the expected results, not with the production data.
    expected.keys.each do |key|
      # Again, using `fetch` to get a clear KeyError rather than nil.
      assert_equal foo(key), expected.fetch(key)
    end
  end

  # Simplify the tests by separating the happy path from the error path.
  test "foo raises NotImplementedError if the input is not handled" do
    assert_raises NotImplementedError do
      # Use something that obviously does not exist to future-proof the test.
      foo(:does_not_exist)
    end
  end
end
The redundancy between expected and foo_handlers is by design. You still need to change the pairs in both places, there's no way around that, but now you'll always get a failure when foo_handlers changes but the tests do not.
When a new key/value pair is added to foo_handlers the test will fail.
If a key is missing from expected the test will fail.
If someone accidentally wipes out foo_handlers the test will fail.
If the values in foo_handlers are wrong, the test will fail.
If the logic of foo is broken, the test will fail.
Initially you're just going to copy foo_handlers into expected. After that it becomes a regression test testing that the code still works even after refactoring. Future changes will incrementally change foo_handlers and expected.
But wait, there's more! Code which is hard to test is probably hard to use. Conversely, code which is easy to test is easy to use. With a few more tweaks we can use this data-driven approach to make production code more flexible.
If we make foo_handlers an accessor whose default comes from a method, not a constant, we can change how foo behaves for individual objects. This may or may not be desirable for your particular implementation, but it's in your toolbox.
class Thing
  attr_accessor :foo_handlers

  # This can use a constant, as long as the method call is canonical.
  def default_foo_handlers
    {
      a: 1,
      b: 2,
      c: 3
    }.freeze
  end

  def initialize
    @foo_handlers = default_foo_handlers
  end

  def foo(input)
    foo_handlers.fetch(input)
  rescue KeyError
    raise NotImplementedError, "Unhandled new case: #{input}"
  end
end
Now individual objects can define their own handlers or change the values.
thing = Thing.new
puts thing.foo(:a) # 1
puts thing.foo(:b) # 2
thing.foo_handlers = { a: 23 }
puts thing.foo(:a) # 23
puts thing.foo(:b) # NotImplementedError
And, more importantly, a subclass can change its handlers. Here we add to the handlers using Hash#merge.
class Thing::More < Thing
def default_foo_handlers
super.merge(
d: 4,
e: 5
)
end
end
thing = Thing.new
more = Thing::More.new
puts more.foo(:d) # 4
puts thing.foo(:d) # NotImplementedError
If a key requires more than a simple value, use method names and call them with Object#public_send. Those methods can then be unit tested.
def foo_handlers
{
a: :handle_a,
b: :handle_b,
c: :handle_c
}.freeze
end
def foo(input)
public_send(foo_handlers.fetch(input), input)
rescue KeyError
raise NotImplementedError, "Unhandled new case: #{input}"
end
def handle_a(input)
...
end
def handle_b(input)
...
end
def handle_c(input)
...
end
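Each handler can then get its own focused unit test, and foo only needs a dispatch test. A minimal sketch, assuming foo and the handlers are helper methods available in the test case (the expected values here are placeholders):

class FooHelperTests < ActionView::TestCase
  test "handle_a handles :a" do
    # Placeholder expectation; assert whatever handle_a is actually supposed to return.
    assert_equal handle_a(:a), 1
  end

  test "foo dispatches to the handler registered for the input" do
    assert_equal foo(:a), handle_a(:a)
  end
end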
Related
I'm in Rails 6 with RSpec 3.8.0.
I have a model A which belongs_to B, and I'm trying to write a unit test with A as the subject:
expect(subject.b).to receive(:to_s)
subject.my_fn
Yet this spec always fails, saying that the instance of B did not receive the message, even though I have put binding.pry in the actual code and verified that a.b.to_s gets called:
class A
def my_fn
binding.pry
b.to_s
end
end
I have even tried:
expect(a).to receive(:b).and_return(b)
expect(b).to receive(:to_s)
And:
expect_any_instance_of(b.class).to receive(:to_s)
Yet all expectations for to_s fail. Why is this?
It's not shown in your code, but I have a feeling that you are calling the code before you set up your "receive" expectations. Simply put, the code execution should be like below:
it 'something' do
expect(subject.b).to receive(:to_s)
# write code here that would eventually call `a.b.to_s` (as you have said)
# i.e.
# `subject.some_method` (assuming `some_method` is your method that calls `a.b.to_s`)
# don't call `subject.some_method` before the `expect` block above.
end
Also, just in case you don't know yet, make sure that the object you pass to expect(THE_ARG) ... receive() is the same object instance as the one the code under test actually calls. You can verify that they are the same by checking that they have the same object_id:
it 'something' do
puts subject.b.object_id
# => 123456789
subject.some_method
end
# the class/method you're unit-testing:
class Foo
  def some_method
    # ...
    puts b.object_id
    # => 123456789
    # ^ should also be the same
  end
end
Otherwise, if it's not the same object (the object_id does not match), you would have to either use expect_any_instance_of (which I only use as a last resort, since expecting on "any instance" is potentially dangerous)... or stub the chain of a.b.to_s objects in your spec file.
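Stubbing the chain could look roughly like this (a sketch; b_double is just an illustrative name, and my_fn is the method from the question):

it 'calls to_s on the associated b' do
  b_double = instance_double(B, to_s: 'stubbed')
  allow(subject).to receive(:b).and_return(b_double)

  subject.my_fn

  expect(b_double).to have_received(:to_s)
end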
If it's hard to stub the whole chain, but you still want to avoid the pitfalls of expect_any_instance_of, there's another variant I use to balance convenience and spec accuracy:
it 'something' do
expect_any_instance_of(subject.b.class).to receive(:to_s).once do |b|
expect(b.id).to eq(subject.b.id)
# the above just compares the `id` of the records (even if they are different objects in different memory-space)
# to demonstrate, say I do puts here:
puts b
# => #<SomeRecord:0x00005600e7a6f3b8 id:1 ...>
puts subject.b
# => #<SomeRecord:0x00005600e4f04138 id:1 ...>
puts b.id
# => 1
puts subject.b.id
# => 1
# notice that they are different objects (0x00005600e7a6f3b8 vs 0x00005600e4f04138)
# but the attribute id is the same (1 == 1)
end
subject.some_method
end
It seems it makes more sense to stub the b relation. It will look like:
expect(a).to receive(:b).and_return(stub(:b, to_s: 'foo_bar'))
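Note that stub here is the older syntax; with current rspec-mocks the same idea is usually written with double (or instance_double for a verifying double), roughly:

expect(a).to receive(:b).and_return(double('b', to_s: 'foo_bar'))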
I've realized that the way I've been writing tests is producing false positives.
Say I have this source code
class MyClass
def foo
end
def bar
1
end
end
The foo method does nothing, but say I want to write a test that makes sure it calls bar under the hood (even though it doesn't). Furthermore, I want to ensure that the result of calling bar directly is 1.
it "test" do
inst = MyClass.new
expect(inst).to receive(:bar).and_call_original
inst.foo
expect(inst.bar).to eq(1)
end
So this test is currently passing, but I want it to fail.
I want this line:
expect(inst).to receive(:bar).and_call_original
to not take into account the fact that in my test case I'm calling inst.bar directly. I want it to only look at the internal of the foo method.
You're defining 2 separate test cases within one test case. You should change it to 2 separate tests.
describe MyClass do
  it "foo uses #bar under the hood" do
    inst = MyClass.new
    allow(inst).to receive(:bar).and_call_original
    inst.foo
    expect(inst).to have_received(:bar)
  end

  it "bar returns 1" do
    inst = MyClass.new
    # if you don't need to mock it, don't do it
    # allow(inst).to receive(:bar).and_call_original
    expect(inst.bar).to eq(1)
  end

  # if you really, really want to do it your way, you can specify the amount of calls
  it "test" do
    inst = MyClass.new
    allow(inst).to receive(:bar).and_call_original
    inst.foo
    expect(inst.bar).to eq(1)
    expect(inst).to have_received(:bar).twice # or replace .twice with .at_least(2).times
  end
end
Stubs are typically used in two ways:
Check if a method was called, e.g. expect_any_instance_of(MyClass).to receive(:foo); in this case what it returns is not really important.
To simulate behaviour: allow_any_instance_of(MyClass).to receive(:method).and_return(fake_response). This is a great way to avoid database interactions and/or isolate other dependencies in tests.
For example, in a test that requires data setup for a Rails ActiveRecord model Product that has a has_many association comments:
let(:product) { Product.new }
let(:comments) { [Comment.new(text: "Foo"), Comment.new(text: "Bar")] }
before :each do
  allow_any_instance_of(Product).to receive(:comments).and_return(comments)
end
Now in any of your it blocks, when you call product.comments you will get back an array of comments you can use in the tests without having gone near your database, which makes the tests orders of magnitude faster.
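A sketch of an it block that relies on that stub (nothing here touches the database; it only sees the comments defined in the let above):

it "returns the stubbed comments" do
  expect(product.comments.map(&:text)).to eq(["Foo", "Bar"])
end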
When you are using the stub to check if the method was called, the key is to declare the expectation before you perform the operation that calls the method. For example:
expect_any_instance_of(Foo).to receive(:bar).exactly(1).times.with('hello')
Foo.new.bar('hello') # satisfies the expectation
The puts statement must be having some kind of weird effect that I'm not seeing here...
I have an Order model. There's a callback on the model that requires the model to be fully committed; i.e., I need to use an after_commit. However, determining whether the callback should run requires ActiveRecord::Dirty and therefore a before_save (or after_save, but I use before_save based on some other non-essential info).
I have combined the two thusly:
class Order
  # not stored in DB, used solely to help the before_save to after_commit transition
  attr_accessor :calendar_alert_type, :twilio_alerter

  before_save do
    if self.calendar_alert_type.nil?
      if self.new_record?
        self.calendar_alert_type = "create, both"
      elsif self.email_changed?
        self.calendar_alert_type = "update, both"
      elsif self.delivery_start_changed? || self.delivery_end_changed? || (type_logistics_attributes_modified.include? "delivery")
        self.calendar_alert_type = "update, start"
      elsif self.pickup_start_changed? || self.pickup_end_changed? || (type_logistics_attributes_modified.include? "pickup")
        self.calendar_alert_type = "update, end"
      end
    end
    puts "whatever"
  end

  after_commit do
    if self.calendar_alert_type.present?
      calendar_alert(self.calendar_alert_type)
    end
  end

  def calendar_alert(alert_info)
    puts "whatever"
    alert_type = alert_info.split(",")[0].strip
    start_or_end = alert_info.split(",")[1].strip
    if start_or_end == "both"
      ["start","end"].each do |which_end|
        Calendar.send(alert_type, which_end, self.id)
      end
    else
      Calendar.send(alert_type, start_or_end, self.id)
    end
  end
end
All of the private methods and the ActiveRecord::Dirty statements are working appropriately. This is an example of a spec:
it "email is updated" do
Calendar.should_receive(:send).with("update", "start", #order.id).ordered
Calendar.should_receive(:send).with("update", "end", #order.id).ordered
find("[name='email']").set("nes#example.com")
find(".submit-changes").click
sleep(1)
end
it "phone is updated" do
... #same format as above
end
Literally all the specs like the above pass ONLY when EITHER puts statement is present. I feel like I'm missing something very basic here, just can't put my finger on it. It's super weird because the puts statement is spitting out random text...
*Note: I'm totally aware that should_receive should be expect(...).to receive, that I shouldn't use sleep, and that expectation mocks in feature tests aren't good. I'm working on updating the specs separately from the bad code, but these issues shouldn't be causing this behavior... (feel free to correct me)
This behavior depends on your Rails version. Before Rails 5, you can return anything except a false value to keep the chain running; a false return aborts the before_* callback chain. puts 'whatever' returns nil, so everything works. Your if block seems to return false (a custom implementation for calendar_alert_type?). In that case the chain is halted.
With Rails 5 you have to throw(:abort) to stop callback handling.
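A sketch of both behaviours side by side (the helper and guard method names are made up for illustration):

# Rails 4.x: the chain halts if the callback's last expression is false,
# which is why a trailing puts (returning nil) appears to "fix" things.
before_save do
  self.calendar_alert_type = compute_alert_type # hypothetical helper
  true                                          # explicit truthy return so the save never silently aborts
end

# Rails 5+: a false return value is ignored; only throw(:abort) halts the chain.
before_save do
  throw(:abort) if order_locked? # hypothetical guard
end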
So based on my understanding, I believe that when you do
Resque.inline = Rails.env.test?
your Resque tasks will run synchronously. I am writing a test for a Resque task that gets enqueued during an after_commit callback.
after_commit :enqueue_several_jobs

# class PingsEvent < ActiveRecord::Base
...

def enqueue_several_jobs
  Resque.enqueue(PingFacebook, self.id)
  Resque.enqueue(PingTwitter, self.id)
  Resque.enqueue(PingPinterest, self.id)
end
In the .perform method of my Resque task class, I am doing a Rails.logger.info, and in my test I am doing something like
..
Rails.logger.should_receive(:info).with("PingFacebook sent with id #{dummy_event.id}")
PingsEvent.create(params)
And I have the same test for PingTwitter and PingPinterest.
I am getting failures on the second and third expectations because it seems like the tests actually finish before all the Resque jobs get run. Only the first test actually passes. RSpec then throws a MockExpectationError telling me that Rails.logger did not receive .info for the other two tests. Has anyone had experience with this before?
EDIT
Someone mentioned that should_receive acts like a mock and that I should do .exactly(n).times instead. Sorry for not making this clear earlier, but I have my expectations in different it blocks, and I don't think a should_receive in one it block will mock it for the next it block? Let me know if I'm wrong about this.
class A
def bar(arg)
end
def foo
bar("baz")
bar("quux")
end
end
describe "A" do
let(:a) { A.new }
it "Example 1" do
a.should_receive(:bar).with("baz")
a.foo # fails 'undefined method bar'
end
it "Example 2" do
a.should_receive(:bar).with("quux")
a.foo # fails 'received :bar with unexpected arguments
end
it "Example 3" do
a.should_receive(:bar).with("baz")
a.should_receive(:bar).with("quux")
a.foo # passes
end
it "Example 4" do
a.should_receive(:bar).with(any_args()).once
a.should_receive(:bar).with("quux")
a.foo # passes
end
end
Like a stub, a message expectation replaces the implementation of the method. After the expectation is fulfilled, the object will not respond to the method call again -- this results in 'undefined method' (as in Example 1).
Example 2 shows what happens when the expectation fails because the argument is incorrect.
Example 3 shows how to stub multiple invocations of the same method -- stub out each call with the correct arguments in the order they are received.
Example 4 shows that you can reduce this coupling somewhat with the any_args() helper.
Using should_receive behaves like a mock. Having multiple expectations on the same object with different arguments won't work. If you change the expectation to Rails.logger.should_receive(:info).exactly(3).times your spec will probably pass.
All that said, you may want to assert something more pertinent than what is being logged for these specs, and then you could have multiple targeted expectations.
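For example, a sketch that targets the enqueueing itself rather than the logging (class names taken from the question; kind_of is a built-in rspec-mocks argument matcher):

it "enqueues all three ping jobs" do
  expect(Resque).to receive(:enqueue).with(PingFacebook, kind_of(Integer))
  expect(Resque).to receive(:enqueue).with(PingTwitter, kind_of(Integer))
  expect(Resque).to receive(:enqueue).with(PingPinterest, kind_of(Integer))

  PingsEvent.create(params)
end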
The Rails.logger does not get torn down between specs, so it doesn't matter if the expectations are in different examples. Spitting out the logger's object id for two separate examples illustrates this:
it 'does not tear down rails logger' do
puts Rails.logger.object_id # 70362221063740
end
it 'really does not' do
puts Rails.logger.object_id # 70362221063740
end
I have a test more or less like this:
class FormDefinitionTest < ActiveSupport::TestCase
context "a form_definition" do
setup do
@definition = SeedData.form_definition
# ...
I've purposely added a
raise "blah"
somewhere down the road and I get this error:
RuntimeError: blah
test/unit/form_definition_test.rb:79:in `__bind_1290079321_362430'
when I should be getting something along:
/Users/pupeno/projectx/db/seed/sheet_definitions.rb:17:in `sheet_definition': blah (RuntimeError)
from /Users/pupeno/projectx/db/seed/form_definitions.rb:4:in `form_definition'
from /Users/pupeno/projectx/test/unit/form_definition_test.rb:79
Any ideas what is sanitizing/destroying my backtraces? My suspicion is shoulda, because it happens when the exception is raised inside a setup or should block.
This is a Rails 3 project, in case that's important.
That is because the shoulda method #context is generating code for you. For each #should block it generates a completely separate test, so e.g.
class FormDefinitionTest < ActiveSupport::TestCase
context "a form_definition" do
setup do
@definition = SeedData.form_definition
end
should "verify some condition" do
assert something
end
should "verify some other condition" do
assert something_else
end
end
end
Then #should will generate two completely independent tests (for the two invocations of #should), one that executes
@definition = SeedData.form_definition
assert something
and another one that executes
@definition = SeedData.form_definition
assert something_else
It is worth noting that it does not generate one single test executing all three steps in a sequence.
These generated blocks of code have method names like _bind_ something, and each generated test has a name that is a concatenation of the names of all the contexts traversed to reach the should block, plus the string provided by the should block (prefixed with "should "). There is another example in the documentation for shoulda-context.
I think this will give you the backtrace that you want. I haven't tested it, but it should work:
def exclude_backtrace_from_location(location)
  begin
    yield
  rescue => e
    puts "Error of type #{e.class} with message: #{e.to_s}.\nBacktrace:"
    back = e.backtrace
    back.delete_if { |b| b =~ /\A#{location}.+/ }
    puts back
  end
end
exclude_backtrace_from_location("test/unit") do
  # some shoulda code that raises...
end
Have you checked config/initializers/backtrace_silencers.rb? That is the entry point to customize this behavior. With Rails.backtrace_cleaner.remove_silencers! you can clean up the silencers stack.
More information about ActiveSupport::BacktraceCleaner can be found here.
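For reference, that initializer is only a few lines; a typical tweak looks roughly like this:

# config/initializers/backtrace_silencers.rb

# Show the full backtrace, including framework and gem frames:
Rails.backtrace_cleaner.remove_silencers!

# Or, going the other way, silence frames you never want to see:
# Rails.backtrace_cleaner.add_silencer { |line| line =~ /gems/ }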