I'm pretty new to RSpec and the whole TDD methodology. Can someone please explain the difference between a mock and a stub? When do we use them, and when do we use Factory Girl to create objects in test cases?
You could think of a mock (or double) as a fake object. When you're testing and need to work with an object that isn't easily usable inside your test, you can use a mock as a stand-in that behaves the way you expect the real object to behave, letting you work around it. Stubs work in a similar manner, but at the level of an individual method on an object.
Here's a rather contrived example of using a lot of both:
class Client
  def connect_to_server
    if Server.connect.status == 'bad'
      show_an_error
    else
      do_something_else
    end
  end

  def do_something_else; end

  def show_an_error; end
end
context "failure" do
it "displays an error" do
bad_network_response = double("A bad response from some service", :status => 'bad')
Server.should_receive(:connect).and_return(bad_network_response)
client = Client.new
client.should_receive(:show_an_error)
client.connect_to_server
end
end
You can imagine that using a lot of mocks or stubbing is a bad idea; it basically masks out parts of your code in your test. But it's an easy solution for some difficult or rare testing scenarios.
Factory Girl is useful for generating data for tests. You would use factories as recipes for creating instances of your models. When you need to test something involving a lot of test data, this is a situation where fixtures fall short and creating complicated objects explicitly can be tedious.
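As a rough illustration (the User model and its attributes here are hypothetical, not from the question), a factory and its use in a spec might look like this:

FactoryGirl.define do
  factory :user do
    name  "Jane Doe"
    email "jane@example.com"
  end
end

# In a spec:
user  = FactoryGirl.build(:user)   # an unsaved instance
saved = FactoryGirl.create(:user)  # an instance persisted to the database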
Your first stop should be Martin Fowler's famous article: Mocks Aren't Stubs.
Edit
Mocks and stubs are two of the types of test doubles (Meszaros terminology). Test doubles are typically used to simulate the dependencies needed by a System Under Test (or Class Under Test), so that the SUT/CUT may be tested in isolation from its dependencies. (Caveat: precise terminology can be quite a touchy subject, e.g. as mentioned by Jeff here.)
From wikipedia:
"Test stubs provide canned answers"
"mock objects can simulate the behavior of complex, real objects"
Examples
A stub method may just return a constant value when called by the SUT, e.g. for conducting a specific test case of the SUT.
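Translated into rspec-mocks, the distinction looks roughly like this (a sketch; ServiceClient and fetch_status are made-up names):

service = double("ServiceClient")

# Stub: supplies a canned answer; the example doesn't care whether it is ever called
allow(service).to receive(:fetch_status).and_return("ok")

# Mock: sets a message expectation; the example fails unless fetch_status is called
expect(service).to receive(:fetch_status).and_return("ok")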
* Frameworks like Mockito (Java) and Moq (.NET) allow you to build mock classes against a dependency's interface on the fly with a minimum of code, and provide the ability to verify that the SUT interacted correctly with the mock, e.g. by checking that the SUT invoked the mock's methods the correct number of times, with the correct parameters.
* Disclaimer - I'm not a ruby dev
Related
I seem to have encountered literature alluding to it being bad practice to use RSpec's any_instance_of methods (e.g. expect_any_instance_of). The relish docs even list these methods under the "Working with legacy code" section (http://www.relishapp.com/rspec/rspec-mocks/v/3-4/docs/working-with-legacy-code/any-instance) which implies I shouldn't be writing new code leveraging this.
I feel that I am routinely writing new specs that rely on this capability. A great example is any method that creates a new instance and then calls a method on it. (In Rails where MyModel is an ActiveRecord) I routinely write methods that do something like the following:
def my_method
  my_active_record_model = MyModel.create(my_param: my_val)
  my_active_record_model.do_something_productive
end
I generally write my specs to check that do_something_productive is called, using expect_any_instance_of, e.g.:
expect_any_instance_of(MyModel).to receive(:do_something_productive)
subject.my_method
The only other way I can see to spec this would be with a bunch of stubs like this:
my_double = double('my_model')
expect(MyModel).to receive(:create).and_return(my_double)
expect(my_double).to receive(:do_something_productive)
subject.my_method
However, I consider this worse because a) it's longer and slower to write, and b) it's much more brittle and white box than the first way. To illustrate the second point, if I change my_method to the following:
def my_method
  my_active_record_model = MyModel.new(my_param: my_val)
  my_active_record_model.save
  my_active_record_model.do_something_productive
end
then the double version of the spec breaks, but the any_instance_of version works just fine.
So my questions are, how are other developers doing this? Is my approach of using any_instance_of frowned upon? And if so, why?
This is kind of a rant, but here are my thoughts:
The relish docs even list these methods under the "Working with legacy code" section (http://www.relishapp.com/rspec/rspec-mocks/v/3-4/docs/working-with-legacy-code/any-instance) which implies I shouldn't be writing new code leveraging this.
I don't agree with this. Mocking/stubbing is a valuable tool when used effectively and should be used in tandem with assertion style testing. The reason for this is that mocking/stubbing enables an "outside-in" testing approach where you can minimize coupling and test high level functionality without needing to call every little db transaction, API call, or helper method in your stack.
The question really is do you want to test state or behavior? Obviously, your app involves both so it doesn't make sense to tether yourself to a single paradigm of testing. Traditional testing via assertions/expectations is effective for testing state and is seldom concerned with how state is changed. On the other hand, mocking forces you to think about interfaces and interactions between objects, with less burden on the mutation of state itself since you can stub and shim return values, etc.
I would, however, urge caution when using *_any_instance_of and avoid it if possible. It's a very blunt instrument and can be easy to abuse, especially when a project is small, only to become a liability when the project is larger. I usually take *_any_instance_of as a smell that either my code or my tests can be improved, but there are times when it's necessary to use it.
That being said, between the two approaches you propose, I prefer this one:
my_double = double('my_model')
expect(MyModel).to receive(:create).and_return(my_double)
expect(my_double).to receive(:do_something_productive)
subject.my_method
It's explicit, well-isolated, and doesn't incur overhead with database calls. It will likely require a rewrite if the implementation of my_method changes, but that's OK. Since it's well-isolated it probably won't need to be rewritten if any code outside of my_method changes. Contrast this with assertions where dropping a column in a database can break almost the entire test suite.
I don't have a better solution to testing code like that than either of the two you gave. In the stubbing/mocking solution I'd use allow rather than expect for the create call, since the create call isn't the point of the spec, but that's a side issue. I agree that the stubbing and mocking is painful, but that's usually what I do.
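For what it's worth, that adjustment would look something like this (same hypothetical MyModel and my_method as in the question):

my_double = double('my_model')
allow(MyModel).to receive(:create).and_return(my_double)  # stubbed: creating isn't the point of this spec
expect(my_double).to receive(:do_something_productive)    # the behaviour we actually care about
subject.my_method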
However, that code has just a bit of Feature Envy. Extracting a method onto MyModel clears up the smell and eliminates the testing issue:
class MyModel < ActiveRecord::Base
  def self.create_productively(attrs)
    create(attrs).do_something_productive
  end
end

def my_method
  MyModel.create_productively(attrs)
end
# in the spec
expect(MyModel).to receive(:create_productively)
subject.my_method
create_productively is a model method, so it can and should be tested with real instances and there's no need to stub or mock.
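A spec for it against real records might look something like this sketch (the attributes and the exact assertions are placeholders, since they depend on what do_something_productive actually does):

describe MyModel do
  describe ".create_productively" do
    it "creates a record and does something productive with it" do
      expect {
        MyModel.create_productively(my_param: 'my_val')
      }.to change(MyModel, :count).by(1)
      # ...plus assertions on the observable effect of do_something_productive
    end
  end
end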
I often notice that the need to use less-commonly-used features of RSpec means that my code could use a little refactoring.
def self.my_method(attrs)
  create(attrs).tap { |m| m.do_something_productive }
end
# Spec
let(:attrs) { ... } # a valid attributes hash

describe "when calling my_method with valid attributes" do
  it "does something productive" do
    expect(MyModel.my_method(attrs)).to have_done_something_productive
  end
end
Naturally, you will have other tests for #do_something_productive itself.
The trade-off is always the same: mocks and stubs are fast, but brittle. Real objects are slower but less brittle, and generally require less test maintenance.
I tend to reserve mocks/stubs for external dependencies (e.g. API calls), or when using interfaces that have been defined but not implemented.
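For example, when the collaborator is an external payment API, I might mock just that boundary and exercise my own object for real (PaymentGateway and Checkout are made-up names for illustration):

it "charges the order total through the gateway" do
  gateway = double("PaymentGateway")
  expect(gateway).to receive(:charge).with(100).and_return(true)  # the external call is mocked

  checkout = Checkout.new(gateway)  # hypothetical object that wraps the external service
  checkout.complete(100)            # exercised for real against the mocked boundary
end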
I'm using let(:foo) { create_foo() } inside my tests. create_foo is a test helper that does some fairly expensive setup.
So every time a test is run, foo is created, and that takes some time. However, the foo object itself does not change; I just want to unit test methods on that object, one by one, separated into single tests.
So, is there an RSpec equivalent of let that shares the variable across multiple examples, but keeps the nice things like lazy loading (if foo isn't needed) and the automatic method definition of foo, so that it can be used in shared examples without referencing it as @foo?
Or do I have to simply define a
def foo
  create_foo()
end
Can you just put it in shared examples but use memoization?
def foo
  @foo ||= create_foo()
end
Using let in this way goes against what it was designed for. You should consider using before(:all), which runs once per example group:
before :all do
  @instancevar = create_object()
end
Keep in mind that this may not be wise if create_object() hits a database, since it can introduce coupling and leak state between your tests.
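If you do go down the before(:all) route and create_object() persists anything, pairing it with an after(:all) cleanup reduces the chance of state leaking between groups (a sketch reusing the names above):

before :all do
  @instancevar = create_object()
end

after :all do
  # clean up anything create_object() persisted, since data created here is typically not rolled back between examples
  @instancevar.destroy if @instancevar.respond_to?(:destroy)
end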
I am new to Selenium tests in Grails. I am using Geb and Spock for testing.
I want to split my test plans into some smaller test plans. Is it possible to make a spec which calls other specs?
What you should do is have a Spec for each area of the application. If that area has more than one scenario, then include them all in the Spec that matches that area.
For example:
LogInSpec
Has a log in scenario.
Form validation scenario.
Log out scenario.
This way things stay organized and it's easy to see which sections of your application are failing.
If your goal is to run these in parallel then I recommend that you try and keep tests even across the different test classes. This way they all take around the same amount of time.
You can create traits for each of the modules.
For example, consider validating the details entered in a form: name, contact details and other comments.
Create a trait which has methods to fill in these details and verify them after saving.
Use this trait in your spec.
This will make your code more readable and clear.
I have now found another solution.
I made a simple Groovy class:
class ReUseTest {
    def myTest(def spec) {
        when:
        spec.at ConnectorsPage

        then:
        spec.btnNewConnector.click()
    }
}
In my spec I can call it like this:
def "MyTest"() {
    specs.reuseable.ReUseTest myTest = new specs.reuseable.ReUseTest()
    myTest.myTest(this)
}
I can now use this part in every spec.
We're doing a Rails 3.2.1 project with RSpec 2. My question is: should I be testing the basic persistence of each of my ActiveRecord models? I used to do this in my C#/NHibernate days, to ensure the correct tables/mappings were there.
So if I have a Customer with name, address and phone fields, I might write a spec like this:
describe Customer do
  it "saves & retrieves its fields to and from the db" do
    c = Customer.new
    c.name = "Bob Smith"
    c.address = "123 some street"
    c.phone = "555-555-5555"

    order = Order.new
    c.orders << order
    c.save

    found = Customer.find(c.id)
    found.should_not be(c)
    found.name.should == c.name
    found.address.should == c.address
    found.phone.should == c.phone
    found.orders.count.should == 1
    found.orders[0].id.should == order.id
  end
end
Is this "best practice" or common in the ruby/rails/rspec world? I should also note that the point is not to test what rails is doing natively, but instead test that the correct fields and relationships are setup in the db and models.
No. You shouldn't be unit testing the persistence. A unit test validates that the unit works in isolation, and you should only test your code. The persistence functionality is part of Rails, and since it is not your code, you shouldn't write unit tests for it.
You could be interested in testing the mappings, but not in a unit test. You would write an integration test for that. An integration test would test your module, integrating with another part of the system, perhaps all the way to the database. Running those tests would validate that your module works with the database, i.e. the mappings are good.
In short - you don't test persistence in unit tests; you test them in integration tests.
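In Rails/RSpec terms, that integration test can look much like a slimmed-down version of the spec in the question, along these lines (reusing the question's Customer model and fields):

describe Customer do
  it "round-trips its fields through the database" do
    customer = Customer.create!(name: "Bob Smith", phone: "555-555-5555")

    reloaded = Customer.find(customer.id)
    reloaded.name.should  == "Bob Smith"
    reloaded.phone.should == "555-555-5555"
  end
end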
No, I don't believe it is a best practice to do this sort of lower-level testing, as the majority of these tests are already covered by the tests for Rails and the ORM you are using.
However, if you are overriding any methods or performing complex association logic in your models it'd be best to have your own tests.
I'm very new to TDD and unit testing, and I have quite a lot of doubts about the correct approach to take when testing custom model validations.
Suppose I have a custom validation:
class User < ActiveRecord::Base
  validate :weird_validation

  def weird_validation
    # Validate the weird attribute
  end
end
Should I take this approach:
context "validation"
it "pass the validation with weird stuff" do
user = User.new weird: "something weird"
user.should be_valid
end
it "should't pass the validation with normal stuff" do
user = User.new weird: "something"
user.should_not be_valid
user.errors[:weird].size.should eq 1
end
end
Or this one:
context "#weird_validation" do
it "should not add an error if weird is weird" do
user = User.new
user.stub(:weird){"something weird"}
user.errors.should_not_receive :add
user.weird_validation.should eq true
end
it "should add an error if weird is not weird" do
user = User.new
user.stub(:weird){"something"}
user.errors.should_receive(:add).with(:weird, anything())
user.weird_validation.should eq false
end
end
So IMHO
The first approach
Pros
It tests behaviour
Easy refactoring
Cons
Dependent on other methods
Something unrelated could make the test fail
The second approach
Pros
It doesn't rely on anything else, since everything else is stubbed
It's very specific about all the things the code should do
Cons
It's very specific about all the things the code should do
Refactoring the validations could potentially break the test
I personally think the correct approach should be the first one, but I can't help thinking that I'm relying too much on other methods rather than the one I want to test, and that if the test fails it may be because of any method within the model. For example, it would not validate if the validation of another attribute failed.
Using the second approach I'm practically writing the code twice, which seems like a waste of time and effort. But I'm unit testing the isolated method and what it should do. (And I'm personally doing this for every single method, which I think is very bad and time-consuming.)
Is there any middle ground when it comes to using stubs and mocks? Because I've taken the second approach and I think I'm abusing it.
IMO the second approach is the best one because you test your model properties and validations one at a time (the "unit" part).
To avoid overhead, you may consider using shoulda. It is really efficient for unit testing models. We usually use a factory_girl/mocha/shoulda combination for functional testing (factory_girl and mocha are also very helpful for testing queries and named scopes). Tests are easy to write, read and maintain:
# class UserTest < ActiveSupport::TestCase
# or
# describe 'User' do
should have_db_column(:weird).of_type(:string).with_options(:limit=>255)
should allow_value("something weird").for(:weird)
should_not allow_value("something").for(:weird)
should ensure_length_of(:weird).is_at_least(1).is_at_most(255)
# end
Shoulda generates positive/negative matchers and therefore avoids a lot of code duplication.