Rails 3 - Testing - what is the relationship between errors and invalid? - ruby-on-rails

In the following code, does invalid? depend on the errors?
In other words, is invalid? contingent upon the list of errors being populated, or does invalid? work on its own?
test "product attributes must not be empty" do product = Product.new
assert product.invalid? assert product.errors[:title].any?
assert product.errors[:description].any?
assert product.errors[:price].any?
assert product.errors[:image_url].any?
end
Also, may I assume that:
Functional testing (for controllers) is performed at run time for the user,
while unit testing (for models / the database) is for use during development?
Thanks!

The tests are run during development, to try to ensure the code you produce is as error-free as possible.
Unit tests check small units of code (e.g. a single model), while functional tests check "functions" that take several steps (such as the sign-up process).
The valid? method essentially makes the model run through the defined validators and checks whether there were errors. In other words, if the @user.errors collection (for example) contains entries, valid? will return false.
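A quick sketch of that relationship, using the Product model from the question (invalid? is simply the negation of valid?, so it re-runs the validations):
product = Product.new
product.valid?        # runs the validators and populates product.errors
product.errors.any?   # => true, because the required attributes are missing
product.invalid?      # => true; defined as !valid?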
But once again, tests are used to check you're developing your code properly, and will NOT run in production.

Related

In RSpec, what is the difference between a message expectation (receive) and a test spy (have_received)?

In RSpec (specifically rspec-mocks), what is the difference between Message Expectations and Test Spies? They seem similar, and appear right next to each other as separate sections in the readme.
i.e. what is the difference between:
expect(validator).to receive(:validate) # message expectation
and
expect(validator).to have_received(:validate) # test spy
Message expectations can be set on any object and represent a declaration that something is going to happen (or not happen) to that object in the future. If the expectation is violated during subsequent test execution, the test will fail at the time the violation occurs. If the expectation has not been met by the end of the test, the test will fail as well.
The have_received family of methods only works on test doubles and examines what has happened to the double in the past, from the time of the double's creation up through the current method call. It succeeds or fails at that point in time. The term "test spy" is a little misleading, because the support for this backward-looking mechanism is a standard part of rspec-mocks at this point. You don't do anything "special" to create a "test spy".
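A minimal sketch contrasting the two styles on a plain test double (the double and method names are just placeholders):
# Message expectation: declared BEFORE the code under test runs.
validator = double("validator")
expect(validator).to receive(:validate)
validator.validate

# Test spy: the stubbed double records calls; we assert AFTER the fact.
validator = double("validator", validate: true)
validator.validate
expect(validator).to have_received(:validate)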
You can't always use spies, though; the typical case is setting an expectation on a class itself.
Example:
expect(User).to receive(:new)
there is no way to do this with a spy unless you inject the dependency.
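As an aside, here is roughly what that dependency-injection route could look like (the SignupService class is hypothetical, and class_double assumes rspec-mocks 3):
class SignupService
  def initialize(user_class = User)
    @user_class = user_class   # the collaborator is injected, so a double can stand in
  end

  def call
    @user_class.new.save
  end
end

user = double("user", save: true)
user_class = class_double(User, new: user)

SignupService.new(user_class).call

expect(user_class).to have_received(:new)  # spy-style assertion, after the fact
expect(user).to have_received(:save)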
Without dependency injection, you could do the following:
user = double('user', save: true)
expect(User).to receive(:new).and_return user
User.new.save
expect(user).to have_received(:save)
You can see clearly that:
you have to set expectations on a real object before you run the real code (it looks a little odd to state the expectation before triggering the code),
while you can set expectations on spies after the real code has run, which reads more naturally.
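For completeness, a spy-friendly variant of the example above (a sketch: stub with allow up front, exercise the code, then verify afterwards with have_received):
user = double('user', save: true)
allow(User).to receive(:new).and_return(user)  # a stub, not an expectation

User.new.save                                  # run the real code

expect(User).to have_received(:new)            # verify afterwards
expect(user).to have_received(:save)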

Should one unit test persistence in rails?

We're doing a Rails 3.2.1 project with RSpec 2. My question is, should I be testing the basic persistence of each of my ActiveRecord models? I used to do this in my C# / NHibernate days, to ensure the correct tables/mappings were there.
So if I have a customer with name, address, and phone fields, I might write a spec like this:
describe Customer do
  it "saves & retrieves its fields to and from the db" do
    c = Customer.new
    c.name = "Bob Smith"
    c.address = "123 some street"
    c.phone = "555-555-5555"
    order = Order.new      # "or" is a Ruby keyword, so use a full variable name
    c.orders << order
    c.save
    found = Customer.find(c.id)
    found.should_not be(c)
    found.name.should == c.name
    found.address.should == c.address
    found.phone.should == c.phone
    found.orders.count.should == 1
    found.orders[0].id.should == order.id
  end
end
Is this "best practice" or common in the ruby/rails/rspec world? I should also note that the point is not to test what rails is doing natively, but instead test that the correct fields and relationships are setup in the db and models.
No. You shouldn't be unit testing the persistence. A unit test validates the unit works in isolation, and you should only test your code. The persistence functionality is a part of Rails, and since it is not your code, you shouldn't write unit tests for it.
You could be interested in testing the mappings, but not in a unit test. You would write an integration test for that. An integration test would test your module, integrating with another part of the system, perhaps all the way to the database. Running those tests would validate that your module works with the database, i.e. the mappings are good.
In short: you don't test persistence in unit tests; you test it in integration tests, along the lines of the sketch below.
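A rough sketch of such an integration-style check, hitting the real test database to confirm the columns and the orders association exist (the spec shape and attribute values are illustrative):
describe "Customer persistence" do
  it "round-trips a customer and its orders through the database" do
    customer = Customer.create!(:name => "Bob Smith", :phone => "555-555-5555")
    customer.orders.create!

    reloaded = Customer.find(customer.id)
    reloaded.name.should == "Bob Smith"
    reloaded.orders.count.should == 1
  end
end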
No, I don't believe it is a best practice to do this sort of lower-level testing, as the majority of these tests are already built into Rails and the ORM you are using.
However, if you are overriding any methods or performing complex association logic in your models, it'd be best to have your own tests for that logic.
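For instance, a model spec along these lines exercises your own logic rather than ActiveRecord's persistence (the formatted_phone method is hypothetical, purely for illustration):
describe Customer do
  describe "#formatted_phone" do
    it "strips the dashes from the stored phone number" do
      customer = Customer.new(:phone => "555-555-5555")
      customer.formatted_phone.should == "5555555555"
    end
  end
end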

During TDD, should I create tests for custom validations? Or should I test the validity of the entire object?

I'm very new to TDD and unit testing, and I have quite a lot of doubts about the correct approach to take when testing custom model validations.
Suppose I have a custom validation:
class User < ActiveRecord::Base
  validate :weird_validation

  def weird_validation
    # Validate the weird attribute
  end
end
Should I take this approach:
context "validation"
it "pass the validation with weird stuff" do
user = User.new weird: "something weird"
user.should be_valid
end
it "should't pass the validation with normal stuff" do
user = User.new weird: "something"
user.should_not be_valid
user.errors[:weird].size.should eq 1
end
end
Or this one:
context "#weird_validation" do
it "should not add an error if weird is weird" do
user = User.new
user.stub(:weird){"something weird"}
user.errors.should_not_receive :add
user.weird_validation.should eq true
end
it "should add an error if weird is not weird" do
user = User.new
user.stub(:weird){"something"}
user.errors.should_receive(:add).with(:weird, anything())
user.weird_validation.should eq false
end
end
So IMHO
The first approach
Pros
It tests behaviour
Easy refactoring
Cons
Dependent on other methods
Something unrelated could make the test fail
The second approach
Pros
It doesn't rely on anything else, since everything else is stubbed
It's very specific about all the things the code should do
Cons
It's very specific about all the things the code should do
Refactoring the validations could potentially break the test
I personally think the correct approach should be the first one, but I can't help thinking that I'm relying too much on methods other than the one I want to test, and if the test fails it may be because of any method within the model. For example, the object would not be valid if the validation of another attribute failed.
Using the second approach I'm practically writing the code twice, which seems like a waste of time and effort. But I am unit-testing the isolated method for what it should do (and I'm personally doing this for every single method, which I think is very bad and time-consuming).
Is there any middle ground when it comes to using stubs and mocks? I've taken the second approach and I think I'm abusing it.
IMO the second approach is the best one because you test your model properties and validations one at a time (the "unit" part).
To avoid overhead, you may consider using shoulda. It is really efficient for model unit testing. We usually use a factory_girl/mocha/shoulda combination for functional testing (factory_girl and mocha are also very helpful for testing queries and named scopes). Tests are easy to write, read and maintain:
# class UserTest < ActiveSupport::TestCase
# or
# describe 'User' do
should have_db_column(:weird).of_type(:string).with_options(:limit=>255)
should allow_value("something weird").for(:weird)
should_not allow_value("something").for(:weird)
should ensure_length_of(:weird).is_at_least(1).is_at_most(255)
# end
Shoulda generates positive/negative matchers and therefore avoids a lot of code duplication.

Railstutorial.org Validating unique email

In section 6.2.4 of Ruby on Rails 3 Tutorial, Michael Hartl describes a caveat about checking uniqueness for email addresses: if two identical requests come close together in time, request A can pass validation, then B can pass validation, then A gets saved, then B gets saved, and you end up with two records with the same value. Each was valid at the time it was checked.
My question is not about the solution (put a unique constraint on the database so B's save won't work). It's about writing a test to prove the solution works. I tried writing my own, but whatever I came up with only turned out to mimic the regular, simple uniqueness tests.
Being completely new to rspec, my naive approach was to just write the scenario:
it 'should reject duplicate email addresses with caveat' do
  a = User.new(@attr)
  a.should be_valid         # always valid
  b = User.new(@attr)
  b.should be_valid         # always valid, as expected
  a.save.should == true     # save always works fine
  b.save.should == false    # this is the problem case
  # b.should_not be_valid   # ...same results as "save.should"
end
but this test passes/fails in exactly the same cases as the regular uniqueness test; the b.save.should == false assertion passes when my code is written so that the regular uniqueness test passes, and fails when the regular test fails.
So my question is "how can I write an rspec test that will verify I'm solving this problem?" If the answer turns out to be "it's complicated", is there a different Rails testing framework I should look at?
It's complicated. Race conditions are so nasty precisely because they are so difficult to reproduce. Internally, save goes something like this:
Validate.
Write to database.
So, to reproduce the timing problem, you'd need to arrange the two save calls to overlap like this (pseudo-Rails):
a.validate # first half of a.save
b.validate # first half of b.save
a.write_to_db # second half of a.save
b.write_to_db # second half of b.save
but you can't open up the save method and fiddle with its internals quite so easily.
But (and this is a big but), you can skip the validations entirely:
Note that save also has the ability to skip validations if passed :validate => false as argument. This technique should be used with caution.
So if you use
b.save(:validate => false)
you should get just the "write to database" half of b's save and send your data to the database without validation. That should trigger a constraint violation in the database, and I'm pretty sure that will raise an ActiveRecord::StatementInvalid exception, so I think you'll need to look for an exception rather than just a false return from save:
lambda { b.save(:validate => false) }.should raise_error(ActiveRecord::StatementInvalid)
You can tighten that up to look for the specific exception message as well. I don't have anything handy to test this test with so try it out in the Rails console and adjust your spec appropriately.
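Putting it together, the spec could look something like this (a sketch, assuming @attr holds valid user attributes and a unique index exists on users.email; on newer Rails the database may raise the more specific ActiveRecord::RecordNotUnique, a subclass of StatementInvalid):
it 'rejects a duplicate email even when validations are skipped' do
  User.create!(@attr)
  duplicate = User.new(@attr)

  # Bypass the uniqueness validation to simulate request B winning the race.
  lambda {
    duplicate.save(:validate => false)
  }.should raise_error(ActiveRecord::StatementInvalid)
end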

Rails assert that form is valid

What's the best-practice way to test that a model is valid in Rails?
For example, if I have a User model that validates the uniqueness of an email_address property, how do I check that posting the form returned an error (or better yet, specifically returned an error for that field).
I feel like this should be something obvious, but as I'm quickly finding out, I still don't quite have the vocabulary required to effectively google ruby questions.
The easiest way would probably be:
class UserEmailAddressDuplicateTest < ActiveSupport::TestCase
  def setup
    @email = "test@example.org"
    @user1, @user2 = User.create(:email => @email), User.new(:email => @email)
  end

  def test_user_should_not_be_valid_given_duplicate_email_addresses
    assert !@user2.valid?
  end

  def test_user_should_produce_error_for_duplicate_email_address
    # Test for the default error message.
    assert_equal "has already been taken", @user2.errors.on(:email)
  end
end
Of course it's possible that you don't want to create a separate test case for this behaviour, in which case you could duplicate the logic in the setup method and include it in both tests (or put it in a private method).
Alternatively you could store the first (reference) user in a fixture such as fixtures/users.yml, and simply instantiate a new user with a duplicate address in each test.
Refactor as you see fit!
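A minimal sketch of that fixture-based variant (assuming a users fixture named bob with email test@example.org in test/fixtures/users.yml; the fixture name is hypothetical):
def test_user_should_not_be_valid_given_duplicate_email_address
  duplicate = User.new(:email => users(:bob).email)
  assert !duplicate.valid?
end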
http://thoughtbot.com/projects/shoulda/
Shoulda includes macros for testing things like validators along with many other things. Worth checking out for TDD.
errors.on is what you want
http://api.rubyonrails.org/classes/ActiveRecord/Errors.html#M002496
@obj.errors.on(:email) will return nil if the field is valid, and the error message(s), either as a String or an Array of Strings, if there are one or more errors.
Testing the model via unit tests is, of course, step one. However, that doesn't necessarily guarantee that the user will get the feedback they need.
Section 4 of the Rails Guide on Testing has a lot of good information on functional testing (i.e. testing controllers and views). You have a couple of basic options here: check that the flash has a message in it about the error, or use assert_select to find the actual HTML elements that should have been generated in case of an error. The latter is really the only way to test that the user will actually get the message.
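For example, a functional test along these lines checks the rendered error directly (the controller action, params, and error markup are assumptions about your app, not something Rails guarantees):
class UsersControllerTest < ActionController::TestCase
  test "re-renders the form with an error for a duplicate email" do
    User.create!(:email => "test@example.org")

    post :create, :user => { :email => "test@example.org" }

    assert_template :new
    # Assumes the form renders errors in a div with id "error_explanation", as scaffolding does.
    assert_select "#error_explanation", /has already been taken/
  end
end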
