Is there a smart way to test scopes without hitting the db?
I am asking because I am dealing with scopes on multiple levels of embedded documents, and actually creating the objects in the db would require a long chain of objects to be created first.
This seems inefficient performance-wise, and really cumbersome to write just to test a single scope.
I'm not sure of the implications of your multiple levels of embedded documents, but you can assert that a scope is a Mongoid::Criteria, and further assert that it is a criteria with certain properties.
RSpec examples:
expect(Charge.successful).to be_a Mongoid::Criteria
expect(Charge.successful).to eq Mongoid::Criteria.new(Charge).where(state: 'successful')
Note that this isn't testing behaviour, which would be best practice.
A better approach might simply be to bite the bullet and use something like FactoryGirl to create your objects and test behaviour. Presumably to fully acceptance test your system you'll need these objects anyway.
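As a sketch of what the behaviour-level assertion buys you (plain Ruby stands in for the database and factories here; the `Charge` model and its `state` field are carried over from the examples above, so a real spec would use FactoryGirl and the actual scope):

```ruby
# Plain-Ruby stand-in for persisted documents: a real spec would build
# these with FactoryGirl.create(:charge, state: ...) and then exercise
# the Charge.successful scope itself.
Charge = Struct.new(:state)

charges = [Charge.new('successful'), Charge.new('failed'), Charge.new('successful')]

# The behaviour the scope promises: only successful charges come back.
successful = charges.select { |c| c.state == 'successful' }
```

The point is that the assertion is about which records come back, not about the shape of the criteria object.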
If you'd like to avoid 'hitting the DB', you could use EmbeddedMongo. This would probably speed up your tests and mean you wouldn't need MongoDB on your CI server.
Related
I just manually discovered a migration error. I added a new field to a model, and forgot to add it into the model_params method of the controller. As a result the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want. Iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests on your controllers. The problem is that the test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application would only be written in a certain way, or this test would become more and more complex to handle additional edge cases.
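A rough sketch of that field-iteration idea (the column list and skip-list below are hypothetical; in a real app the names would come from ActiveRecord reflection such as `Article.column_names`):

```ruby
# Hypothetical column list; in Rails this would come from reflection,
# e.g. Article.column_names.
column_names = %w[id title content created_at updated_at]

# Columns you would never expect params to set.
skipped = %w[id created_at updated_at]

# Generate params that mirror the writable fields.
params = column_names
  .reject { |name| skipped.include?(name) }
  .each_with_object({}) { |name, h| h[name.to_sym] = "test-#{name}" }
# params could then be POSTed in a functional test, and the saved
# record compared back against it.
```

Even this tiny sketch already needs a hand-maintained skip-list, which is exactly where the brittleness creeps in.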
I think the solution in testing would be to try to keep things simple; realize that you've made a change to the system and as a result of that change, corresponding tests would need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test Driven Design, you would actually update the tests first to produce a failing test and then implement the change. As a result, hopefully the updated functional test would have failed in this case.
Outside of testing, you may want to look into a linter. In essence, you're asking if you can catch an error where the parameters passed to an object's method don't match the signature. This is more catchable when parsing the code completely (i.e. compilation in a statically typed environment).
EDIT - I skipped a step on the linting, as you would also have to write your code in a certain way for a linter to catch it, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns off a model is pretty simple with the reflection methods that Active Record gives you. And yeah, you could theoretically use that to automagically run a bunch of tests in a loop.
But in reality it's just not going to cut it. In real life you don't want every column to be assignable. That's why you are using mass assignment protection in the first place.
And add to that the complexity of the different kinds of constraints and data types your models have. You'll end up with something extremely complex which just adds a bunch of tests with limited value.
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test.
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last
    valid_attributes.keys.each do |key|
      assert_equal valid_attributes[key], article[key]
    end
  end
end
In my Rails 4 project I've got a model, EventGenerator with an instance method generate (which creates some records in the database), and a class method generate_all which is intended to be called by a Rake task and looks something like this:
def self.generate_all
  all.each(&:generate)
end
I can think of several approaches to testing this (I'm using RSpec 3 and Fabrication):
Don't bother - it's too simple to require tests. DHH says "Don't aim for 100% coverage". On the other hand, this is going to be called by a rake task, so won't be regularly exercised: I feel like that's a good reason to have tests.
Create a couple of EventGenerator instances in the database and use any_instance.should_receive(:generate) as the assertion - but RSpec 3 now recommends against this and requires a fudge to make it work. This is a personal 'showcase project' so if possible I'd like everything to be best-practice. Also (DHH aside) shouldn't it still be possible to create fast model specs which don't touch the database?
Like 2, but stub out EventGenerator.all to return some instances without touching the database. But stubbing the class under test is bad, right? And fragile.
Don't worry about unit testing it and instead cover it with an integration test: create a couple of generators in the database, run the method, check what gets changed/created in the database as a result.
Change the implementation: pass in an array of instances. But that's just pushing the problem back by a layer, and a test-driven change which I can't see benefitting the design.
Since I can't really think of a downside for option 4, perhaps that's the answer, but I feel like I need some science to back that up.
I would actually not bother to test it (your option 1), as the method is really trivial.
If you would like to have it under test coverage though, I'd suggest you use your option 3. My reasons are as follows:
Your test for .generate_all just needs to assert that the method #generate gets called on every instance returned by .all. In this context the actual implementation of .all is irrelevant and can be stubbed.
Your tests for #generate should assert that the method does the right thing. If these tests assert the proper functioning of this method, there's no need for the tests for .generate_all to duplicate any assertion.
Testing the proper functioning of #generate in the tests for .generate_all leads to unnecessary dependencies between the tests.
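A minimal sketch of that stubbing approach in plain Ruby (in RSpec this would be `allow(EventGenerator).to receive(:all).and_return(fakes)` with doubles; the hand-rolled fake below just makes the idea concrete):

```ruby
class EventGenerator
  def self.all
    raise 'would hit the database'
  end

  def self.generate_all
    all.each(&:generate)
  end
end

# Hand-rolled double standing in for an RSpec instance double.
class FakeGenerator
  attr_reader :generated

  def initialize
    @generated = false
  end

  def generate
    @generated = true
  end
end

fakes = [FakeGenerator.new, FakeGenerator.new]

# Stub out .all so the class method can be exercised without a database.
EventGenerator.define_singleton_method(:all) { fakes }

EventGenerator.generate_all
```

The assertion is only that #generate was called on every instance returned by .all; what #generate actually does belongs to its own tests.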
On several recent projects I've felt that model associations can get very complex very fast. It feels like testing is way harder than it needs to be because of this complexity. For example, I need to create an instance of model A for a test. Many times, it looks something like this (this is taken from the app I'm working on right now):
Create model A, but model A relies on B.
Model B relies on model C
Model C relies on D, E, and F. Specifically, it needs 6 F's to be attached for it to be considered valid.
Model D, E, and F may have a dependency or two.
Finally, A is created. I am using factories on this application and that helps a bit, but it still feels like too much when I need to satisfy so many validations in order to create a simple model that is not related to any of that.
Stubs might help, but I feel like that requirement represents something wrong with the modelling at its core.
Are there any patterns designed to help with dependencies like this? One thing I've been thinking about is to make most of the validation conditional based on context. The controllers will save the models in that validation context but that lets my test suite create objects that would otherwise be "invalid" in the live app or full integration test suite. The problem with that is that I feel like it alters my codebase for the sake of testing and I think that is generally a bad idea.
Like you I'd rather not have test-specific code in my application. I'd also rather have validations in place all the time, because if I didn't who knows what test might construct invalid data and thereby pass when it shouldn't?
But it's better when tests run fast. So:
I trust you've already seeded everything in the database that should be a seed, i.e. model instances that don't need to change more often than you deploy.
If you're using factory_girl, rather than just defining associations and letting factory_girl create them all every time, you could make your factory reuse dependencies:
factory :a do
  before :create do |a, _|
    a.b ||= B.first || FactoryGirl.create(:b)
  end
end
(Haven't tested that but I trust the intention is clear.)
If you need to reuse a B sometimes and make a new one sometimes, obviously you have to write some code sometimes. The most sensible thing to do is probably to create a B in the test and pass it in to the A's that should reuse it, and not to the ones that shouldn't.
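The reuse rule in that factory ("take an existing B, otherwise create one") boils down to the following; a plain-Ruby sketch, with an array standing in for the persisted B records:

```ruby
created = 0
persisted_bs = []  # stands in for the B table

# Mirrors `a.b ||= B.first || FactoryGirl.create(:b)` from the factory above.
find_or_create_b = lambda do
  persisted_bs.first || begin
    created += 1
    persisted_bs << "B##{created}"
    persisted_bs.last
  end
end

first_a_dep  = find_or_create_b.call  # creates a B
second_a_dep = find_or_create_b.call  # reuses the same B
```

However many A's the test creates, only one B (and its whole dependency chain) ever gets built.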
This might be a basic misunderstanding on my part. I have a bunch of logic in my app which collects sets of data from several tables and combines them into memory structures. I want to write tests for that logic.
It seems to me that fixtures, factory girl and similar tools build in-memory model instances. If I make ActiveRecord calls like Model.find(foo: 12), won't those apply only against records that were saved?
In most cases, I agree with #mrbrdo's opinion: prefer RSpec's stub method. But as a Rails programmer, I think you have to know both fixtures and stubs.
Fixtures, whether from a YAML file or factory girl data, are saved into the database. See your config/database.yml file for where that is. This is actually useful when you want to make sure that there is ALWAYS some data in the DB during your test, such as an "admin user" with a fixed ID.
Stubs are faster than fixtures since nothing is saved into the DB, and they can be very useful when you want to perform a test that can't be implemented with fixtures.
So, I suggest that you try both of them in your real coding life, and choose between them according to the actual context.
What you are saying is not true: fixtures and factory girl will use the database. I would avoid fixtures, though; people don't usually use them nowadays.
The proper way to write your tests would really be to stub out the ActiveRecord calls, because this will make your tests a lot faster. What you want to test is combining data into your structures, not pulling data out of the database - that part is already covered by ActiveRecord's own tests.
So stub out the finders like this (if you are using rspec):
Model.should_receive(:find).with(foo: 12) do
  Model.new(foo: 12, other_attribute: true)
end
So when the method you are testing calls Model.find(foo: 12) it will get
Model.new(foo: 12, other_attribute: true)
This is much faster than actually creating the record in the database and then pulling it out, and there is no point in doing that for what you are testing - it's not important. You can also stub save on the instance, and so on, depending on what your method is doing. Keep in mind that retrieving data from the DB and saving to the DB are already covered by ActiveRecord's own tests; there is no point in re-doing them - just focus on your specific logic.
FactoryGirl supports several build strategies, including one where records are saved to the database.
It's straightforward: FactoryGirl.create(:foo) will create a foo and save it to the database, whereas FactoryGirl.build(:foo) will only create the in-memory version of that object.
More information about build strategies is available here: https://github.com/thoughtbot/factory_girl/blob/master/GETTING_STARTED.md , in the "Using factories" section.
In Rails, fixture records seem to be inserted & deleted in isolation for a given model. I'd like to test having many objects/rows in one transaction, e.g. to check uniqueness. How can I do that?
Fixtures are not validated. When you set them up they can be totally wrong and Rails won't complain until something blows up. It's a good idea to make sure your initial test DB (that is seeded with your fixtures) is in a valid state before tests are run.
For checking things like uniqueness, I would create the records on the fly and not rely on fixtures. Either create them right in your test case, or use something like FactoryGirl (which by the way, is a great way to clean up your tests and stop using fixtures completely).
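A plain-Ruby sketch of creating records on the fly to exercise a uniqueness rule (an array stands in for the table; in a real spec you would FactoryGirl.create the first record and expect a duplicate built with the same attribute to be invalid):

```ruby
emails = []  # stands in for the persisted rows

# Mirrors a validates_uniqueness_of :email check at save time.
save_user = lambda do |email|
  return false if emails.include?(email)  # duplicate: validation fails
  emails << email
  true
end

first_saved     = save_user.call('admin@example.com')  # => true
duplicate_saved = save_user.call('admin@example.com')  # => false
```

Because both records are created inside the test itself, the setup is explicit and doesn't depend on fixture files being in any particular state.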
Are you saying you want to build a test to check Rails' validates_uniqueness_of validation, or that you want to test the logic of your own unique record? In the first case I wouldn't bother; the Rails tests cover that. In the second case, I would create a test that creates a record that is the same as one in the fixtures.
In the broader sense of putting multiple saves into a single transaction, you can create your objects and then:
MyModel.transaction do
  model1.save
  model2.save
end
but I don't think this is the way to accomplish either of the things it seems that you want to do.