How to skip certain tests with Test::Unit - ruby-on-rails

In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, so I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, in each test I would like to skip (see the sketch below). But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test.
There are not many tests that have to interact with the test servers, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
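For reference, the per-test guard described above looks something like this (a minimal sketch; the test name is made up):
def test_backend_query
  return unless ENV['BACKEND_TEST']  # skipped (silently passes) unless explicitly enabled
  # ... slow interaction with the backend test server ...
end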

The features referred to in the previous answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And:
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method

New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the Ruby 1.8 standard library) allows you to omit tests.

I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.
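If you want to omit every test in a file without touching each test (as the question asks), another option is to call omit from setup; test-unit treats an omission raised there as omitting the test. A sketch, assuming test-unit 2.x and the BACKEND_TEST variable from the question:
class BackendTest < Test::Unit::TestCase
  def setup
    # Omits every test in this case unless backend testing is explicitly enabled.
    omit("Backend tests disabled; set BACKEND_TEST to run them") unless ENV["BACKEND_TEST"]
  end

  def test_roundtrip
    # ... talks to the slow backend server ...
  end
end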


How to test i18n in Rails with RSpec

Question context:
Let's say there is some really important line in config/locales/en.yml that is crucial to have:
en:
  foo:
    bar: "bubla!"
I don't want to test every single line, but
I don't want the test to be too brittle either (so no I18n.t('foo.bar').should =~ /bubla/),
so the way I'm testing currently is like this:
# spec/locales_spec.rb
require 'spec_helper'

describe I18n do
  it do
    I18n.t('date.datepicker').should be_kind_of(String)
  end
end
This way I'm just ensuring that the translation exists and that the key doesn't continue any deeper (e.g. 'foo.bar.car.lol'; a key with nested children would make I18n.t return a Hash rather than a String),
but I'm still not satisfied.
Question: What's the best practice for testing I18n translations with RSpec, and where in the spec folder should I place them?
Check this StackOverflow question for some ideas. My preferred way is this answer on the same question.
Update: These days I tend to use the i18n-tasks gem to handle testing related to i18n, and not what I wrote above or have answered on StackOverflow previously.
I wanted to use i18n in my RSpec tests primarily to make sure that I had translations for everything, i.e. that no translations were missed. i18n-tasks can do that and more through static analysis of my code, so I no longer need to run tests for all I18n.available_locales (apart from when testing very locale-specific functionality, for example switching from any locale to any other locale in the system).
Doing this has meant I can confirm that all i18n keys in the system actually have values (and that none are unused or obsolete), while keeping the number of repetitive tests, and consequently the suite running time, down.
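For reference, the spec that i18n-tasks suggests looks roughly like this (based on the template the gem ships with; check its README for the current version):
# spec/i18n_spec.rb
require 'i18n/tasks'

RSpec.describe 'I18n' do
  let(:i18n) { I18n::Tasks::BaseTask.new }

  it 'does not have missing keys' do
    expect(i18n.missing_keys).to be_empty,
      "Missing translations; run `i18n-tasks missing` to show them"
  end

  it 'does not have unused keys' do
    expect(i18n.unused_keys).to be_empty,
      "Unused keys found; run `i18n-tasks unused` to show them"
  end
end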
I think that I would write an acceptance test for such a "crucial" thing.
In most cases you need the translation in some specific context, e.g. displaying something in a datepicker. I would test that context using Capybara or whatever works with a JavaScript driver.
Just testing that this translation exists is useless if you don't have the context that it's used within.
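For illustration, such a context-level test could look like this (the route, selector and feature are all made up; only the pattern matters):
# A hypothetical feature spec checking the translation where it is displayed.
require 'spec_helper'

feature 'datepicker', :js => true do
  scenario 'shows the translated label' do
    visit new_booking_path  # made-up route with a datepicker on the page
    page.should have_css('.datepicker', :text => I18n.t('date.datepicker'))
  end
end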

SimpleCov with multiple apps - or in short, how does SimpleCov work?

I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) from a local gem, but the specs for the code that each app uses are inside each app's ./spec and not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models that belong exclusively to app_2, inside a namespace, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper.rb:
# These lines are needed to generate reports for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear

SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end

if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, until some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2 (and I'm running the specs inside app_1)?
The first time this was a problem was when I tried to improve that model: I clicked on its report, saw which lines were uncovered, and tried to cover them by writing tests in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. I tried a more aggressive test and erased all the content of the spec file, but somehow I was still getting 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter in the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_namespace/"
I then moved to the app_2 folder and started setting up SimpleCov to see what results I would get there. They turned out even weirder.
For the same model I got 100% coverage. I did the same test of emptying the my_model_spec.rb file and still got 100%. So either this is really messed up, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say; well, when I run the example from the official documentation locally, I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
I'd hope the reports don't show positive results for lines that merely get evaluated somewhere in the app code, no matter where.
I think the expected behavior is that the results for, say, a model are related to its spec, and the same for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be interfering with SimpleCov and Coverage?
Thank you for taking the time to read this; if you need more detailed info, just ask.
Regarding your confusion with the model being 100% covered, as I'm not sure that I understand correctly: There's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or "somewhere else". Say I have a method "foo" and a method "bar" that calls foo. If I invoke bar in my specs, of course foo will also be shown as covered.
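For illustration, this is roughly how the underlying Coverage module behaves: it counts per-line executions for the whole process and has no notion of which spec triggered them (the file here is hypothetical):
require 'coverage'

Coverage.start           # must run before the measured code is loaded
require_relative 'foo'   # hypothetical file defining foo and bar
bar                      # calling bar also executes foo, so both show up as covered
p Coverage.result        # => {"/path/to/foo.rb"=>[1, 1, nil, 0, ...]} (counts per line, nil = not executable)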
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code is at some different point than your project root should not lead to the loss of coverage reporting.
Two things in your base config: removing the base adapter (line 2) is unnecessary, as adapters basically are glorified config chunks, and at this point it will already have been executed (it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is not used. Please refer to the README as to how to properly set up adapters, but I think you'd be fine with simply using this in the SimpleCov config block when you start the coverage run for now:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want though is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage_path, which will store away your coverage reports across your app suites. Say you have ~/projects/my_project/app[1-3], then putting that into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec for example. Merging has a default timeout of ~10 minutes, so you might need to set this to a higher value using merge_timeout 3600 in your config (those are seconds). For the specifics of these configuration options please again check out the README and the SimpleCov::Configuration documentation. These things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'
SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Spec'
  coverage_path File.dirname(__FILE__) + '/../../coverage' # assuming this file is my_project/app1/spec/spec_helper.rb
  merge_timeout 3600
end
Next, you might want to add filters to reject all non-project gems by path, and you should be up and running.
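Such a path filter could look something like this (a sketch; the relative path is an assumption based on the layout above):
SimpleCov.start 'rails' do
  # Keep only files under my_project/ (which contains app1-3 and the local gem),
  # rejecting everything else, e.g. installed gems.
  project_root = File.expand_path('../../..', __FILE__) # from my_project/app1/spec/spec_helper.rb
  add_filter do |source_file|
    !source_file.filename.start_with?(project_root)
  end
end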

Breaking down your RSpec tests

Some of my RSpec tests have gotten really, really big (2000-5000 lines). I am just wondering if anyone has ever tried breaking these tests down into multiple files in a way that meets the following conditions:
There is a systematic way of naming and placing your tests (e.g. methods A-L go to user_spec1.rb).
You can run a single file that will actually run the other tests inside other files.
You can still run a specific context within a file
and, as a nice-to-have, RubyMine can run a specific test (and all tests) just fine.
For now, I have been successful in doing
# user_spec.rb
require 'spec_helper'
require File.expand_path("../user_spec1.rb", __FILE__)
include UserSpec

# user_spec1.rb
module UserSpec
  describe User do
    # ...
  end
end
If your specs are getting too big, it's likely that your model is too big as well -- since you used "UserSpec" here, you could say your user class is a "God class". That is, it does too much.
So, I would break this up into much smaller classes, each of which has one single responsibility. Then, test these classes in isolation.
What you may find is that your User class knows how to execute most of the logic in your system. This is an easy trap to fall into, but it can be avoided if you put your logic in a class that takes a user as an argument, and if you steadfastly follow the Law of Demeter (where your User class can only touch one level below it, but not two).
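For example, logic that would otherwise accumulate on User can move into a small object that takes the user as input (the names here are made up):
# A hypothetical extraction with a single responsibility.
class AccountDeactivator
  def initialize(user)
    @user = user
  end

  def run
    @user.update_attribute(:active, false)
    # notifications, auditing, etc. live here, not on User
  end
end
Its spec then stays small and focused (account_deactivator_spec.rb) instead of growing user_spec.rb further.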
Further Reading: http://blog.rubybestpractices.com/posts/gregory/055-issue-23-solid-design.html

What is happening during Rails Testing?

I'm new to Ruby on Rails. I love that it has testing capabilities built in, but I can't wrap my head around testing. Here is my first basic question about it.
What happens during testing really?
I understand development: we want some result, and we use the data we have, or get it from users, to achieve the end result we want. But the notion of testing sometimes seems confusing to me. I have been testing applications in the browser for some time; are we replicating the same thing with code? Is that what testing is about, replicating browser testing with automated code? Enlighten me here.
Reading A Guide to Testing Rails Applications will be a good starting point.
Basically, you have three kinds of tests: unit, functional and integration.
Unit tests test your models. In these tests you check whether a single method of your model works as expected; for example, you assign a login with spaces, and then you test whether the spaces were removed:
class UserTest < ActiveSupport::TestCase
  def test_login_cleaning
    u = User.new
    u.login = " login_with_spaces "
    assert_equal "login_with_spaces", u.login
  end

  # ... and other tests
end
Functional tests test your controllers (and views). In each test you simulate one request sent to one controller with a given set of parameters, and then you ensure that the controller returned the proper response.
Note, however, that in these tests you cannot check the actual rendering of the page, so it's not strictly simulating a browser. To test whether your page looks nice, you need to do it manually (I am almost sure some techniques exist, but I do not know of them).
An example of functional test:
class UserControllerTest < ActionController::TestCase
  def test_show_renders_admin
    get :show, :id => 1
    assert_response :success
    assert_select "div.user" do
      assert_select "span.name", "Joe Admin"
    end
  end

  def test_show_handles_unknown_id
    get :show, :id => 9999
    assert_response 404
    assert_select "p.warning", "No such user"
  end
end
Integration tests test a sequence of requests, something like a scenario where a user logs in, gets the 'create user' page, creates a user, and so on. These tests check whether the single requests (tested in functional tests) are able to work together.
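A sketch of such a scenario (the routes and credentials are made up; the exact base class depends on your Rails version):
class UserFlowsTest < ActionDispatch::IntegrationTest
  def test_admin_logs_in_and_creates_user
    post "/session", :login => "admin", :password => "secret" # log in
    follow_redirect!
    assert_response :success

    get "/users/new" # the 'create user' page
    assert_response :success

    post "/users", :user => { :login => "newbie" } # create the user
    follow_redirect!
    assert_response :success
  end
end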
I see that Simone already pointed out the importance of automation in tests, so the link to the Guide is the only value in my answer ;-)
You may find it very helpful to apply some rules of Test Driven Development, especially when your project matures a little.
I know that it's not easy to start a project by writing tests, because often you do not yet know how everything will work. But later, when you find a bug, I strongly suggest starting every bug fix by writing a failing test case. It really, really helps, both in the bug-fixing phase and later, by ensuring that the bug does not reappear.
Well, I noticed that I did not directly answer your question ;-)
When you start the test procedure, Rails:
deletes the test database (so make sure you do not have any valuable data here),
recreates it using the structure of the development database (so, make sure you have run all your migrations),
loads all the fixtures (from test/fixtures/*),
loads all the test classes from test/unit/* and other directories,
calls every method whose name starts with 'test_' or was created by the macro test "should something.." (alphabetically, but you may consider the order as being random)
before every call it executes a special setup procedure, and after every call it executes a teardown procedure,
before every call it may (depending on the configuration) recreate your database data, loading the fixtures again.
You will find more information in the Guide.
What happens during testing is that you really run a set of specialized programs or routines (test code) that calls routines in your application (code under test) and verifies that they produce the expected results. The testing framework usually has some mechanism to make sure that each test routine is independent of the other tests. In other words the result from one test does not affect the result of the others.
In Rails specifically you run the tests using the rake test command line tool. This will load and execute each test routine in a random order, and tell you if each test was successful or not.
This answer doesn't necessarily apply to Rails itself. When you talk about testing in Rails, you usually mean automated testing.
The word automated is the essence here. This is in fact the biggest difference between unit testing and "browser" testing.
With unit testing you essentially write code, a routine, that stresses a specific portion of your code to make sure it works as expected. The main advantages of unit testing compared to "browser" testing are:
It's automatic and can be run programmatically.
Your test suite grows during the development lifecycle.
You reduce the risk of regression bugs, because when you modify a piece of code and you run the test suite, you are actually running all the tests, not just a random check.
Here's a basic, very simple example. Take a model, let's say the User model. You have the following attributes: first_name, last_name. You want a method called name to return the first and last name, if they exist.
Here's the method:
class User
  def name
    [first_name, last_name].reject(&:blank?).join(" ")
  end
end
And here's the corresponding unit test:
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  def test_name
    assert_equal "John Doe", User.new(:first_name => "John", :last_name => "Doe").name
    assert_equal "John", User.new(:first_name => "John").name
    assert_equal "Doe", User.new(:last_name => "Doe").name
    assert_equal "", User.new.name
  end
end

How can I see what actually happens when a Test::Unit test runs?

In a Rails application I have a Test::Unit functional test that's failing, but the output on the console isn't telling me much.
How can I view the request, the response, the flash, the session, the variables set, and so on?
Is there something like...
rake test specific_test_file --verbose
You can add puts statements to your test case as suggested, or add calls to Rails.logger.debug() to your application code and watch your log/development.log to trace through what's happening.
In your test you have access to a bunch of resources you can use to debug your test:
p @request
p @response
p @controller
p flash
p cookies
p session
Also, remember that your action should be as simple as possible, and all the specifics of the action's execution should be tested by individual unit tests.
Functional tests should be reserved for the overall action execution.
What does that mean in practice? If something doesn't work in your action, and your action calls 3 model methods, you should be able to easily isolate the problem just by looking at the unit tests. If one (or more) unit test fails, then you know which method is the guilty one.
If all the unit tests pass, then the problem is the action itself, but it should be quite easy to debug, since you already tested the methods separately.
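In other words, keep the action thin, something like this (a made-up example; each model method gets its own unit tests):
def publish
  @post = Post.find(params[:id])
  @post.publish!        # covered by a unit test on Post
  @post.notify_author!  # covered by a unit test on Post
  redirect_to @post
end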
In the failing test, use p @request etc. It's ugly, but it can work.
An answer to a separate question suggested
rake test TESTOPTS=-v
The slick way is to use the pry and pry-nav gems. Be sure to include them in your test gem group; I use them in the development group as well. The great thing about pry and pry-nav is that you can step through your code with a console, so you can not only see the code as it's executed, but also enter console commands during the test.
You just add binding.pry at the places in the code where you want to trigger the console. Then, using the step command, you can move line by line through the code as it's executed.
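For example (the Gemfile groups follow the answer's advice; the test body is illustrative):
# Gemfile
group :development, :test do
  gem 'pry'
  gem 'pry-nav'
end

# In the test you want to inspect:
def test_show_renders_admin
  get :show, :id => 1
  binding.pry # execution stops here with a console; use `step`, `next`, `continue`
  assert_response :success
end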
