Breaking down your RSpec tests

Some of my RSpec tests have gotten really big (2000-5000 lines). I am wondering if anyone has ever tried breaking these tests down into multiple files that meet the following conditions:
There is a systematic way of naming and placing your tests (e.g. methods A-L go to user_spec1.rb).
You can run a single file that will actually run the tests inside the other files.
You can still run a specific context within a file.
And, nice to have: RubyMine can run a specific test (and all tests) just fine.
For now, I have been successful in doing this:
# user_spec.rb
require 'spec_helper'
require File.expand_path("../user_spec1.rb", __FILE__)
include UserSpec

# user_spec1.rb
module UserSpec
  describe User do
    # ...
  end
end
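An alternative sketch of the same split uses RSpec's built-in shared_examples, which can be defined in one file and pulled into another with it_behaves_like (the file names and example content here are illustrative):
# user_shared_examples.rb
shared_examples "user validations" do
  it "requires a login" do
    subject.login = nil
    expect(subject).not_to be_valid
  end
end

# user_spec.rb
require 'spec_helper'
require File.expand_path("../user_shared_examples.rb", __FILE__)

describe User do
  subject { User.new }
  it_behaves_like "user validations"
end
Each file stays independently loadable, and specific contexts can still be run on their own.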

If your specs are getting too big, it's likely that your model is too big as well -- since you used "UserSpec" here, you could say your user class is a "God class". That is, it does too much.
So, I would break this up into much smaller classes, each of which has a single responsibility. Then, test these classes in isolation.
What you may find is that your User class knows how to execute most of the logic in your system -- this is an easy trap to fall into, but it can be avoided if you put your logic in classes that take a user as an argument, and if you steadfastly follow the Law of Demeter (your user class may only touch one level below it, not two).
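As a sketch of that idea (the class, method, and attribute names here are invented for illustration), the logic moves into a small object that takes a user as an argument and can be specced in a small, focused file:
# app/models/invoice_calculator.rb
class InvoiceCalculator
  def initialize(user)
    @user = user
  end

  # Reaches only one level into the user (Law of Demeter):
  # it asks the user for its invoice amounts rather than
  # digging through user.invoices.each { |i| i.amount }.
  def total
    @user.invoice_amounts.inject(0, :+)
  end
end

# spec/models/invoice_calculator_spec.rb
require 'spec_helper'

describe InvoiceCalculator do
  it "sums the user's invoice amounts" do
    user = double("user", :invoice_amounts => [5, 7])
    expect(InvoiceCalculator.new(user).total).to eq(12)
  end
end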
Further Reading: http://blog.rubybestpractices.com/posts/gregory/055-issue-23-solid-design.html

Related

Rails, Is there a way to generate Unit Tests from Existing Controllers and methods defined in them?

I was wondering if there is a script that can take an existing codebase and generate unit tests for each method in the controllers. By default all of them would pass, since they would be empty, and I could remove the tests for methods I don't feel are important.
This would save a huge amount of time and increase testing, since I'd only have to define what each method should output, not the boilerplate that needs to be written.
You really shouldn't be doing this. Creating pointless tests is technical debt that you don't want. Take some time, go through each controller and write a test (or preferably a few) for each method. You'll thank yourself in the long run.
You can then also use test coverage tools to see which bits still need testing.
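One such tool, as a concrete example, is the SimpleCov gem (this is a minimal sketch and assumes you have added simplecov to the test group of your Gemfile):
# At the very top of test/test_helper.rb (or spec/spec_helper.rb),
# before any application code is loaded:
require 'simplecov'
SimpleCov.start 'rails'

# After a test run, open coverage/index.html to see which
# controller methods are still untested.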
You can use shared tests to avoid repetition. For example, with RSpec you could add the following to your spec_helper/rails_helper:
def should_be_ok(action)
  it "should respond with ok" do
    get action.to_sym
    expect(response).to be_success
  end
end
Then in your controller_spec:
describe UserController do
  should_be_ok(:index)
  should_be_ok(:new)
end

Sharing code between multiple Kiwi test files

I have some Kiwi test helper code that is useful across most of my specs.
What's a nice way of sharing this code across multiple specs (i.e. multiple files)? A category on KiwiSpec might be one option. But that feels a bit off, since I'd be putting code in that category to make things work, rather than because it actually belongs in KiwiSpec.
The 'shared example' feature of Kiwi (since 4.2.0) seems to be better for DRY in a single spec/file, rather than across multiple specs.
The main reason I can't just call some external code from my test is that this external code isn't inside a test case/Kiwi spec, so its assertions either generate compile errors or warnings.
Update
I've tried injecting the assertion functionality needed by the external test helper code as blocks into the helper code. (This has the advantage that my test helper code is not then hard-coded to use any particular test framework.) This was only partially successful: for the test cases where I'm expecting an exception to be raised:
[[theBlock(...) should] raise];
none is raised. I suspect the problem is that I have another block being called inside the main block which has a raise against it.
Update 2
Another possible technique is suggested at https://github.com/kiwi-bdd/Kiwi/issues/138 by user gantaa, whereby we create a self variable pointing to the test suite object outside the context of the test suite.

Grails Test Suite - Specify test order

I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification name to indicate execution order but that didn't work. I found a solution where a Test Suite was used, and the tests were added to the suite in order, but I can't find how to make a test suite work in Grails.
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.
To run tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"]
if (modules.size() > 1) {
  for (j in 1..modules.size() - 1) {
    tasks[modules[j]].mustRunAfter(modules[j - 1])
  }
}
Hope that helps. Cheers!
Not really an answer to your question, but a piece of general advice - don't do this. Introducing data setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows, and the global state grows with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to setup the data needed by a test immediately before that test and tear it down afterwards. Grails Remote Control plugin and test data fixture builders are your friends here.
You should define your initialization code in a single place. If it's shared between both Specs, it may be a good idea to create a superclass with methods you can call from each Spec's setup method, or a whole class devoted to declaring testing methods to reuse.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.

What is happening during Rails Testing?

I'm new to Ruby on Rails. I love that it has testing capabilities built in, but I can't wrap my head around testing. Here is my first basic question about it.
What really happens during testing?
I understand development: we want some result, and we use the data we have, or get it from users, to achieve the end result we want. But the notion of testing sometimes confuses me. I have been testing applications in the browser for some time; are we replicating the same thing with code? Is that what testing is about, replicating browser testing with automated code? Enlighten me here.
Reading A Guide to Testing Rails Applications will be a good starting point.
Basically, you have three kinds of tests: unit, functional and integration.
Unit tests are testing your Models. In these tests you check whether a single method of your model works as expected; for example, you assign a login with surrounding spaces, and then you test whether the spaces were removed:
class UserTest < ActiveSupport::TestCase
  def test_login_cleaning
    u = User.new
    u.login = " login_with_spaces "
    assert_equal "login_with_spaces", u.login
  end

  # ... and other tests
end
Functional tests are testing your controllers (and views). In each test you simulate one request sent to one controller with given set of parameters, and then you ensure that the controller returned the proper response.
Note, however, that in this kind of test you cannot check the rendering of the page, so it's not strictly simulating a browser. To test whether your page looks nice, you need to do it manually (I am almost sure some techniques exist, but I do not know of them).
An example of functional test:
class UserControllerTest < ActionController::TestCase
  def test_show_renders_admin
    get :show, :id => 1
    assert_response :success
    assert_select "div.user" do
      assert_select "span.name", "Joe Admin"
    end
  end

  def test_show_handles_unknown_id
    get :show, :id => 9999
    assert_response 404
    assert_select "p.warning", "No such user"
  end
end
Integration tests are testing a sequence of requests - something like a scenario where a user logs in, gets the 'create user' page, creates a user, and so on. These tests check whether the single requests (tested in functional tests) are able to work together.
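A minimal sketch of such an integration test (the routes, parameters, and page content here are invented for illustration, and assume the session and user actions redirect on success):
require 'test_helper'

class CreateUserFlowTest < ActionDispatch::IntegrationTest
  def test_admin_logs_in_and_creates_a_user
    # Step 1: log in.
    post "/session", :login => "admin", :password => "secret"
    assert_response :redirect
    follow_redirect!

    # Step 2: fetch the 'create user' page.
    get "/users/new"
    assert_response :success

    # Step 3: create the user and check the page we land on.
    post "/users", :user => { :login => "joe" }
    follow_redirect!
    assert_select "p.notice", "User created"
  end
end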
I see that Simone has already pointed out the importance of automation in testing, so the link to the Guide is the only value in my answer ;-)
You may find it very helpful to apply some rules of Test Driven Development, especially when your project matures a little.
I know that it's not easy to start a project by writing tests, because often you do not yet know how everything will work. But later, when you find a bug, I strongly suggest starting every bug fix by writing a failing test case. It really, really helps, both in the bug-fixing phase and later, by ensuring that the bug does not reappear.
Well, I noticed that I did not directly answer your question ;-)
When you start the test procedure, Rails:
deletes the test database (so make sure you do not have any valuable data there),
recreates it using the structure of the development database (so make sure you have run all your migrations),
loads all the fixtures (from test/fixtures/*),
loads all the test classes from test/unit/* and the other test directories,
calls every method whose name starts with 'test_' or was created by the macro test "should something.." (alphabetically, but you may consider the order as being random),
before every call it executes a special setup procedure, and after every call it executes a teardown procedure,
before every call it may (depending on the configuration) recreate your database data, loading the fixtures again.
You will find more information in the Guide.
What happens during testing is that you run a set of specialized programs or routines (test code) that call routines in your application (the code under test) and verify that they produce the expected results. The testing framework usually has some mechanism to make sure that each test routine is independent of the others; in other words, the result of one test does not affect the result of the others.
In Rails specifically, you run the tests using the rake test command line tool. This will load and execute each test routine in a random order, and tell you whether each test was successful or not.
This answer doesn't necessarily apply to Rails itself. When you talk about testing in Rails, you usually mean automated testing.
The word automated is the essence here. It is, in fact, the biggest difference between unit testing and "browser" testing.
With unit testing you essentially write code, a routine, that stresses a specific portion of your application to make sure it works as expected. The main advantages of unit testing compared to "browser" testing are:
It's automatic and can be run programmatically.
Your test suite increases during the development lifecycle.
You reduce the risk of regression bugs, because when you modify a piece of code and you run the test suite, you are actually running all the tests, not just a random check.
Here's a basic, very simple example. Take a model, let's say the User model. You have the following attributes: first_name, last_name. You want a method called name to return the first and last name, if they exist.
Here's the method
class User
  def name
    [first_name, last_name].reject(&:blank?).join(" ")
  end
end
and here's the corresponding unit test.
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  def test_name
    assert_equal "John Doe", User.new(:first_name => "John", :last_name => "Doe").name
    assert_equal "John", User.new(:first_name => "John").name
    assert_equal "Doe", User.new(:last_name => "Doe").name
    assert_equal "", User.new.name
  end
end

How to skip certain tests with Test::Unit

In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, 'BACKEND_TEST', and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all the tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the answer below include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
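Tying this back to the BACKEND_TEST environment variable from the question: to skip every test in a file without adding a line to each test, one option is to call omit_if from setup. This is a sketch assuming the test-unit 2.x gem; the class and test names are invented:
require 'test/unit'

class BackendTest < Test::Unit::TestCase
  def setup
    # Runs before every test in this class, so a single line
    # omits them all unless BACKEND_TEST is set.
    omit_if(ENV['BACKEND_TEST'].nil?, "backend tests are disabled")
  end

  def test_slow_backend_call
    # ... talks to the slow test server ...
  end
end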
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.
