I'm evaluating FitNesse as an acceptance test tool.
Is there any way to automatically generate fixture classes from FitNesse test pages?
No. In the FitNesse test pages you define the input data and the expected results, but you have to write the fixture classes yourself. If you are lucky, you may find something close to what you need on the internet (for example, an existing Oracle fixture you could reuse instead of writing your own).
I just discovered, by hand, an error after a migration: I added a new field to a model and forgot to add it to the model_params method of the controller. As a result, the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want: iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests against your controllers. The problem is that such a test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application is only ever written in a certain way, or the test would grow more and more complex to handle additional edge cases.
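A rough sketch of that idea might look like the following; the ArticlesController, the Article model and the naive value generation are assumptions for illustration only, and real data would still have to satisfy each model's validations:

describe ArticlesController do
  it "persists every column passed through params" do
    skipped = %w[id created_at updated_at]
    # Build a params hash by reflecting on the model's columns.
    params = Article.columns.reject { |column| skipped.include?(column.name) }
                    .each_with_object({}) do |column, attrs|
      # Naive value generation; a real version would need per-type, per-validation handling.
      attrs[column.name] = column.type == :integer ? 1 : "value"
    end

    post :create, article: params

    article = Article.last
    params.each { |name, value| expect(article[name].to_s).to eq(value.to_s) }
  end
end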
I think the solution in testing is to keep things simple: recognize that you've made a change to the system and that, as a result, the corresponding tests need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test-Driven Development, you would update the tests first to produce a failing test and then implement the change; hopefully the updated functional test would then have caught this case.
Outside of testing, you may want to look into a linter. In essence, you're asking whether you can catch an error where the parameters passed to an object's method don't match its signature. This is easier to catch when the code can be parsed completely (i.e., at compile time in a statically typed environment).
EDIT - I skipped a step on the linting: you would also have to write your code in a way that a linter could check, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns off a model is pretty simple with the reflection methods that Active Record gives you, and yes, you could theoretically use that to automagically run a bunch of tests in a loop.
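For instance, the reflection methods look like this (Article is just an example model):

Article.column_names          # => ["id", "title", "content", "created_at", ...]
Article.columns.map(&:type)   # => [:integer, :string, :text, :datetime, ...]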
But in reality it's just not going to cut it. In real life you don't want every column to be assignable. That's why you are using mass assignment protection in the first place.
And add to that the complexity of the different kinds of constraints and data types your models have. You'll end up with something extremely complex that just adds a bunch of tests with limited value.
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test:
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last
    valid_attributes.keys.each do |key|
      assert_equal article[key], valid_attributes[key]
    end
  end
end
I have several models that have start_time and end_time and I have a custom validator that has special rules for these fields. For testing, I feel like I've got 3 options:
Have a validator and validator_spec. Then re-test the entire implementation of the validators in every model to ensure that the validator is working with that model.
Have a validator and validator_spec. In each model, somehow check that the already-tested validator is being included in a model. (Maybe this means testing one condition that would only arise from the validator being included.)
Create a shared example as the validator test and include it in each model's test (although it_behaves_like SomeValidator looks kind of weird).
Any thoughts? The validator has several conditions, so I'd find it taxing and not DRY to implement #1.
I would propose another option:
4) Implement a validator_spec and build a custom matcher that can be reused in every model using this validator.
Maybe you can find some inspiration at https://github.com/thoughtbot/shoulda-matchers
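A custom matcher along those lines might look like this minimal sketch; the start_time/end_time rule comes from the question, while the file location, the error details and the usage example are assumptions:

# spec/support/matchers/validate_time_range.rb
RSpec::Matchers.define :validate_time_range do
  match do |record|
    # A record whose end_time precedes its start_time should be rejected.
    record.start_time = Time.now
    record.end_time   = record.start_time - 1.hour
    record.invalid? && record.errors[:end_time].any?
  end

  failure_message do |record|
    "expected #{record.class} to reject an end_time earlier than its start_time"
  end
end

# In each model spec that uses the validator:
# it { expect(subject).to validate_time_range }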
The 'mockist' approach would be to write a spec for the validator class, so that it can be tested in isolation. Then in your model specs, you can stub the validator to return a pre-determined value.
There is a trade-off here: you're avoiding a combinatorial increase in the number of tests, so your test suite will be smaller, faster, and easier to maintain. But you're introducing a small risk that, in some situations, a model may not work in combination with the real validator once integrated in the production code.
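As a sketch of that stubbing, assuming the model declares validates_with TimeRangeValidator (the Meeting model and the validator name are hypothetical):

describe Meeting do
  it "is invalid when the (stubbed) time-range validator reports an error" do
    # Stand in for the real validator, which has its own isolated spec.
    allow_any_instance_of(TimeRangeValidator).to receive(:validate) do |_validator, record|
      record.errors.add(:end_time, "must come after start_time")
    end
    expect(Meeting.new).not_to be_valid
  end
end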
For example, if I am using should validate_presence_of in my spec, that's only testing that I have a validates_presence_of call inside my model, and that's testing implementation. More importantly, isn't that spec totally useless for testing the real problem, which is "if I don't fill out a certain field, will the model be saved successfully?"
Some of shoulda-matchers' matchers don't test implementation, they test behavior. For example, look at the source for allow_value (which validate_presence_of uses): #matches? actually sets the instance's attribute to the value and checks to see if that causes an error. All of shoulda-matchers' matchers that test validations (its ActiveModel matchers) that I've looked at work the same way; they actually test that the model rejects bad values.
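Hand-rolled, the behaviour that allow_value exercises is roughly this (Article and title are only illustrative names):

it "rejects a blank title" do
  article = Article.new(title: "")
  article.valid?
  expect(article.errors[:title]).not_to be_empty  # the validation ran against a real value
end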
Note that if you trust ActiveModel and ActiveRecord to be fully tested, it shouldn't matter whether the matchers test behavior or just test that the macros are used.
Unit-testing a model's validations is definitely useful. Suppose you're doing BDD and implementing a simple form that creates an instance of a model. You would first write an acceptance test (a Cucumber or rspec scenario) that tests the happy path of filling out the form correctly and successfully creating the instance. You would then write a second acceptance test with an error in the form that demonstrates that when there's an error in the form no instance is saved and the form is redisplayed with the appropriate error message.
Once you've got that error-path scenario for one of the errors it is possible to make in the form, you will find that if you write more error-path scenarios for the other errors they will be very repetitive -- the only things that will be different are the erroneous field value and the error message. And then you'll have a lot of full-stack scenarios, which take a long time to run. So don't write more than that first error-path scenario. Instead, just write unit tests for the validations that would catch each error. Now most of your tests are simple and fast. (This is a specific example of the general BDD technique of dropping down from acceptance tests to unit tests to handle details.)
However, I don't find shoulda-matchers' ActiveRecord matchers very useful. Considering the matchers that test associations, I find that my acceptance tests always force me to add all the associations to my models that I need, and there's nothing left to do in unit tests. The ActiveRecord matchers that test database features that are invisible to the app (e.g. have_db_index) are useful if you're strictly test-driven, but I tend to slack off there. Also, for what it's worth, the ActiveRecord matchers don't test behavior (which would be hard to implement), only that the corresponding macros are used.
One exception where I do find a shoulda-matchers ActiveRecord matcher useful is deletion of dependent objects. I sometimes find that no acceptance spec has already forced me to handle what happens to associated objects when an object is deleted. The ActiveRecord way to make that happen is to add the :dependent option to the belongs_to, has_many or has_one association. Writing an example that uses shoulda-matchers' belong_to or have_many or have_one matcher with the .dependent option is the most convenient way I know to test that.
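For example (Article and Comment are just illustrative models):

describe Article do
  it { should have_many(:comments).dependent(:destroy) }
end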
Is there any way to auto-generate simple test cases? I find myself spending time writing very simple tests that make sure all controllers and models are working fine. Here is an example of a controller test case written with rspec:
machine = FactoryGirl.create(:machine, type: 1)
mac = FactoryGirl.create(:mac, machine_id: machine.id)
win = FactoryGirl.create(:win, machine_id: machine.id)
user = FactoryGirl.create(:user)
sign_in user
get :index
get :show, id: machine.id
get :report
I cannot find any tool today that can auto-generate such tests based on newly written code. If really nothing exists, I may consider building a solution to this problem.
If a test were predictable enough to generate it wouldn't be worth writing. In your example, you don't assert anything. That's a very weak test, good only for raising code coverage. It would be much stronger if it asserted what should be on the page. You can't generate that. You also can't generate a scenario that traverses multiple pages in a meaningful way. (I think your example wants to be an rspec feature spec or a Cucumber scenario.)
It would make sense to write a generator that creates a skeleton that the developer could fill in with the meaningful parts that can't be generated, however.
To cover basic functionality you could write a specific generator.
You can also redefine the standard scaffold templates (for example, by adding your own template at lib/templates/rspec/model/model_spec.rb - this redefines model scaffolding).
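As a sketch, such a custom template might look like the following; class_name and attributes are supplied by the model generator that renders the template, and the generated expectations here are only placeholders:

# lib/templates/rspec/model/model_spec.rb
require 'spec_helper'

describe <%= class_name %> do
<% attributes.each do |attribute| -%>
  it "has a <%= attribute.name %> attribute" do
    expect(subject).to respond_to(:<%= attribute.name %>)
  end
<% end -%>
end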
But the real question is why you would do so. Following TDD, you should write the test and then create your code, not vice versa.
Given a class that inherits from ActiveRecord::Base, let's call it Task, I have two subclasses that specialize some aspects of a task, Activity and Training, using standard Rails single table inheritance.
I'm confident in that choice after looking at the other options available, since the actual data for the models is the same; it's just the behavior that differs. A perfect fit for STI.
A task can be created, started, progressed and finished. There is some logic involved in these transitions, especially start(), that calls for the specialization of the base class.
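For concreteness, the hierarchy described above might be sketched like this (the transition bodies are placeholders):

class Task < ActiveRecord::Base
  def start
    # transition logic shared by every kind of task
  end
end

class Activity < Task
  def start
    super
    # activity-specific start behaviour
  end
end

class Training < Task
  def start
    super
    # training-specific start behaviour
  end
end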
Since I'm doing this TDD and started out with a working Task class with full test coverage, I'm now wondering how to proceed. I have a few scenarios that I have thought about:
Duplicate the tests for Task and test both Activity and Training end to end with some small modifications to test their specialization. Pro: it's quick and easy. Con: it duplicates code and while that might not be a big problem here it will be when the number of specializations grows.
Split tests and keep most of the testing code in a task_spec.rb while moving specialization testing into new specs for the respective subclasses. Pro: keeps the tests DRY. Con: what class do I instantiate in the base test?
That last question is what is nagging at me. Right now I have the base class test set up to randomly create a class from one of the concrete subclasses, but is this good form? It almost makes me want to go with approach 1 just to keep the test runs consistent, or I'll have to find a way to base the randomness of my class selection on the random seed for the test suite so that I at least have a repeatable random selection.
I'm guessing this must be a common problem people run in to but I can't find any good information on the subject. Do you have any resources or thoughts on the matter?
Using an rspec shared example (mentioned by Renato Zannon) would look like this:
Create a file in spec/support/. I'll call it shared_examples_for_sti.rb.
require 'spec_helper'

shared_examples "an STI class" do
  it "should have attribute type" do
    expect(subject).to have_attribute :type
  end

  it "should initialize successfully as an instance of the described class" do
    expect(subject).to be_a_kind_of described_class
  end
end
In your _spec.rb file for each STI class and its subclasses, add this line:
it_behaves_like "an STI class"
If you have any other tests that you want shared across the STI class and its subclasses, just add them to the shared_examples block.
You could use rspec shared examples to test the behavior that is shared among all of them (basically, the inherited behavior, or places where you want to guard against LSP violations).