How to test i18n in Rails with RSpec

Question context:
Let's say there is some really important row in config/locales/en.yml that absolutely has to exist:
en:
  foo:
    bar: "bubla!"
I don't want to test every single line, but I also don't want the test to be too brittle (so no I18n.t('foo.bar').should =~ /bubla/).
The way I'm testing it currently is like this:
# spec/locales_spec.rb
require 'spec_helper'

describe I18n do
  it do
    I18n.t('date.datepicker').should be_kind_of(String)
  end
end
This way I'm just ensuring that the translation exists and that the key doesn't continue any deeper (e.g. 'foo.bar.car.lol'), but I'm still not satisfied.
Question: What's the best practice for testing I18n translations with RSpec, and where in the spec folder should I place those tests?

Check this StackOverflow question for some ideas. My preferred way is this answer on the same question.
Update: These days I tend to use the i18n-tasks gem to handle testing related to i18n, and not what I wrote above or have answered on StackOverflow previously.
I wanted to use i18n in my RSpec tests primarily to make sure that I had translations for everything, i.e. that there were no missing translations. i18n-tasks can do that and more through static analysis of my code, so I don't need to run tests across all of I18n.available_locales anymore (apart from when testing very locale-specific functionality, like, for example, switching from any locale to any other locale in the system).
Doing this has meant I can confirm that all i18n keys in the system actually have values (and that none are unused or obsolete), while keeping the number of repetitive tests, and consequently the suite's running time, down.
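For reference, i18n-tasks ships a spec template along these lines (check the gem's README for the current version; the file name is just convention):

# spec/i18n_spec.rb -- a sketch based on the spec template the i18n-tasks README suggests
require 'i18n/tasks'

RSpec.describe 'I18n' do
  let(:i18n) { I18n::Tasks::BaseTask.new }
  let(:missing_keys) { i18n.missing_keys }
  let(:unused_keys) { i18n.unused_keys }

  it 'does not have missing keys' do
    expect(missing_keys).to be_empty,
      "Missing #{missing_keys.leaves.count} i18n keys, run `i18n-tasks missing` to show them"
  end

  it 'does not have unused keys' do
    expect(unused_keys).to be_empty,
      "#{unused_keys.leaves.count} unused i18n keys, run `i18n-tasks unused` to show them"
  end
end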

I think I would write an acceptance test for such a "crucial" thing.
In most cases you need the translation in some specific context, e.g. displaying something in a datepicker. I would test that context using Capybara or whatever works with a JavaScript driver.
Just testing that the translation exists is useless if you don't have the context it's used within. A sketch of what that could look like follows.
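A hedged sketch of such an acceptance test; the route, the js: true driver setup, and the page contents are assumptions, not from the question:

require 'rails_helper'

RSpec.feature 'Datepicker localization', js: true do
  scenario 'shows the translated datepicker label' do
    visit '/events/new' # hypothetical page that renders the datepicker
    # The crucial key is asserted in the context where it is actually rendered.
    expect(page).to have_content(I18n.t('date.datepicker'))
  end
end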

Related

How to define a fine-grained structure when using Capybara with RSpec where all tests are run after one `visit()` call

I can't work out how to make Capybara with RSpec handle anything more than two levels of nesting before my expects.
In RSpec, I can use describe, followed by context, followed by it, and I can also nest these to provide really well-structured output.
In Capybara I get feature, then scenario (which is synonymous with it?), and that's that: straight into expect. The result is that I get started, but then there's a huge blob of expects checking everything on the page. I know I could break these down into individual scenarios, but I don't want to make an expensive visit call for each check. There's no reason why there shouldn't be 50 expects checking a page, so bringing some extra structure would be great.
Which keywords would one use at the different levels of the following structure, either in Capybara or in Capybara with RSpec?
<level1> "full page of app"
  // visit the page once here
  <level2> "check headings"
    <level3> "h1 has text ..."
      expect here ...
      expect here ...
    </level>
    <level3> "there are three h2s"
      expect here ...
      expect here ...
    </level>
  </level>
</level>
The crucial bit is visit(): this should only happen once, as it would be hopelessly inefficient to visit once per expect when all the expects are on the same page. Trying before :all and background means that it only works for the first test; the returned HTML is empty for the rest of the tests.
The question has changed now to be more specific about the visit, so I'm adding a separate answer.
Each of your <level 3> scenarios is an isolated test section by design, and as such each one will need to perform its own visit(). That visit can be in a before(:each) higher up the tree if you want, but each example will need to visit the page. This is by design, to isolate the tests from each other. You can perform multiple expects in each <level 3> if that makes sense for whatever is being tested, or you could factor multiple expects out into methods like verify_widget_is_displayed_correctly.
One other thing to consider: depending on what all those expects are testing, you may want to be verifying some of them in view tests (in which Capybara's matchers are available by default as of Capybara 2.5) rather than in integration tests. Integration (feature) tests really are about verifying the behavior of the app as the user clicks around, rather than the minute details of the view's layout.
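A minimal sketch of such a view spec; the template name and markup are assumptions:

# spec/views/dashboard/index.html.erb_spec.rb -- names are hypothetical
RSpec.describe 'dashboard/index', type: :view do
  it 'renders the page heading' do
    render
    expect(rendered).to have_css('h1', text: 'Dashboard') # Capybara matcher applied to a string
  end
end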
When using Capybara with RSpec you're not using Capybara instead of RSpec; you are using RSpec with some extra stuff thrown in by Capybara. As such, you can still use context, describe, and it, and nest them just like you can when using RSpec without Capybara. Capybara adds sugar on top, but 'feature' is just the same as 'describe' or 'context' with type: 'feature' set, 'scenario' is just an alias for 'it', 'fscenario' is just 'it' with focus: true metadata set, and 'xscenario' is 'it' with skip metadata set.
You can see it here - https://github.com/jnicklas/capybara/blob/master/lib/capybara/rspec/features.rb
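Putting that together, a hedged sketch of the structure from the question ('/dashboard' and the expected markup are assumptions); note that the visit still runs once per example, as the other answer explains:

# spec/features/full_page_spec.rb
require 'rails_helper'

RSpec.feature 'full page of app' do
  before(:each) { visit '/dashboard' } # once per example, by design

  describe 'check headings' do
    context 'h1' do
      scenario 'has the page title' do
        expect(page).to have_css('h1', text: 'Dashboard')
        expect(page).to have_css('h1', count: 1)
      end
    end

    scenario 'there are three h2s' do
      expect(page).to have_css('h2', count: 3)
    end
  end
end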

SimpleCov with multiple apps - or in short, how does SimpleCov work?

I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) from a local gem, but the specs for the code that each app uses are inside each app's ./spec and not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models that belong exclusively to app_2, inside a namespace, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper.
# These lines are needed to generate a report for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear

SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end

if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, until some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2 (and I'm running the specs inside app_1)?
The first time that was a problem was when I tried to improve that model: I clicked on the report for it, saw which lines were uncovered, and tried to cover them by writing tests in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. I tried a more aggressive test and erased all the content of the spec file, but somehow I still got 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter in the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_namespace/"
I then moved to the app_2 folder and started setting up SimpleCov to see what results I would get there. And they turned out weirder.
For the same model I got 100% coverage; I did the same test of emptying the my_model_spec.rb file and still got 100%. So this is really f**ed up, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say? Well, when I run the example from the official documentation locally I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
I hope the reports don't show positive results for lines that get evaluated from somewhere else in the app code, no matter where.
I think the expected behavior is that the results for, say, a model are related to its spec, and the same for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be messing with SimpleCov and Coverage?
Thank you for taking the time to read this; if you need more detailed info, just ask.
Regarding your confusion with the model being 100% covered (as I'm not sure that I understand correctly): there's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or from "somewhere else". Say I have a method foo and a method bar that calls foo. If I invoke bar in my specs, of course foo will also be shown as covered.
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code lives somewhere other than your project root should not lead to the loss of coverage reporting.
Two things about your base config: removing the base adapter (the SimpleCov.adapters.delete(:root_filter) line) is unnecessary, as adapters basically are glorified config chunks, and at that point it will already have been executed (it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is never used. Please refer to the README for how to properly set up adapters, but I think you'd be fine with simply putting your custom config in the SimpleCov config block when you start the coverage run, for now:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want, though, is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage directory (coverage_dir), which will store your coverage reports across your app suites. Say you have ~/projects/my_project/app[1-3]; then putting it into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec, for example. Merging has a default timeout of ~10 minutes, so you might need to set it to a higher value using merge_timeout 3600 in your config (the unit is seconds). For the specifics of these configuration options, please check out the README and the SimpleCov::Configuration documentation again; these things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'

SimpleCov.start 'rails' do
  filters.clear # reset the default filters, as above
  command_name 'App1 Spec'
  # Assuming this sits in my_project/app1/spec/spec_helper.rb:
  coverage_dir File.expand_path('../../coverage', File.dirname(__FILE__))
  merge_timeout 3600
end
Next, you might want to add filters to reject all non-project gems by path, and you should be up and running.
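A minimal sketch of such a path filter, assuming the my_project/app1/spec/spec_helper.rb layout from above; the block form filters out any source file for which it returns true:

SimpleCov.start 'rails' do
  project_root = File.expand_path('../..', File.dirname(__FILE__)) # my_project/
  add_filter do |src|
    !src.filename.start_with?(project_root) # reject anything outside the project tree, e.g. gem code
  end
end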

Running rspec from rails application code

I've got a situation where I need to validate some regular expressions.
So, during my application run, I may want to test that a particular regex:
Contains no spaces
Contains only a certain number of capture groups
Does not use certain characters
Only contains a certain number of wildcards
rspec seems like the perfect tool for doing this. I realize, however, that it's typically used to test application interfaces, assumptions and logic before an application is run. But the natural syntax, combined with the automatic reporting output, would be nice to have.
Questions:
Is this an appropriate use of rspec?
How can one call a description from within a running application?
Or, should I abandon this approach and simply write methods within my class to perform the validations?
Using rspec in this way is highly discouraged and unusual. You should leave testing code in a :test group in your Gemfile and not reference it in your app.
Instead, use Rails validations to check that your field matches a regex format, and then write tests in RSpec to verify your validations; a sketch follows.
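A minimal sketch of that approach; the Pattern model, its source attribute, and the single rule shown are hypothetical stand-ins for the checks listed in the question:

# app/models/pattern.rb -- hypothetical model holding a user-supplied regex as a string
class Pattern < ActiveRecord::Base
  validates :source, presence: true
  validate :source_has_no_spaces

  private

  def source_has_no_spaces
    errors.add(:source, 'must not contain spaces') if source.to_s =~ /\s/
  end
end

# spec/models/pattern_spec.rb
require 'spec_helper'

RSpec.describe Pattern do
  it 'rejects patterns containing spaces' do
    expect(Pattern.new(source: 'foo bar')).not_to be_valid
  end
end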
This is definitely something new: using rspec inside Rails for validation. But for specific problems one tends to propose a DSL, and rspec is a DSL which might be perfectly suited to your job.
So if that is the case: why not? Yes, go ahead. Be creative and find new ways to use the tools you have.
Just a small warning: from the few points you listed, the complexity does not seem to be too big, so make sure you are not using a bazooka to kill a fly. Rspec is a big and very powerful tool, and tying rspec in to run during the Rails process might not be entirely straightforward.
If you want to generate a report, you could use the global after(:all) { puts "report goes here" } or after(:each). If you expect some of your data to blow up your tests, you can test for .should raise_exception. I imagine you'd be writing lots of exception handling to keep the expected failures out of the output. Logging the results to a database or a file might also be annoying. If you can, describe the test that you are doing on the data and then just parse the output of rspec at the end.
class Car
  attr_accessor :doors
end

describe "Car" do
  it "should have doors" do
    Car.new.should respond_to(:doors)
    fail("failing intentionally")
  end

  it "should pass this easily" do
    Car.new.should_not be_nil
  end

  after(:all) { puts "report here" }
end
You can see below that I have a description of the test that failed.
$ rspec rspec_fail.rb
F.report here

Failures:

  1) Car should have doors
     Failure/Error: fail("failing intentionally")
     RuntimeError:
       failing intentionally
     # ./rspec_fail.rb:9:in `block (2 levels) in <top (required)>'

Finished in 0.00052 seconds
2 examples, 1 failure
It would be easy enough to just build a report of the failures if this were testing text and regexes: Failure/Error: fail("Data has spaces"), etc.
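If you do go down this road, RSpec can at least be invoked programmatically rather than by shelling out. A sketch (the spec file path is an assumption):

require 'rspec/core'

# Runs the given spec files and returns an exit-code-style integer (0 = all passed).
status = RSpec::Core::Runner.run(['spec/regex_validation_spec.rb'], $stderr, $stdout)
puts(status.zero? ? 'all validations passed' : 'some validations failed')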

How to skip certain tests with Test::Unit

In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement which checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And:
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.
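To the original point about skipping every test in a file without adding a line to each one: the same omission support also provides omit_unless, and calling it from setup covers the whole file. A sketch using the BACKEND_TEST variable from the question (the server interaction is an assumption):

require 'test/unit'

class BackendTest < Test::Unit::TestCase
  def setup
    # Omits every test in this file unless BACKEND_TEST is set to "true".
    omit_unless(ENV['BACKEND_TEST'] == 'true', 'set BACKEND_TEST=true to run backend tests')
  end

  def test_backend_roundtrip
    # slow interaction with the backend test server would go here
  end
end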

Should I write rails tests with the def or test keyword?

This seems like a simple question, but I can't find the answer anywhere. I've noticed that, in general, tests in a Ruby on Rails app can be written in one of two ways:
test "the truth" do
assert true
end
or
def test_the_truth
  assert true
end
It seems newer material writes tests the first way, but I can't seem to find a reason for this. Is one favored over the other? Is one more correct? Thanks.
There has been a shift in recent years from short, abbreviated test names to longer, sentence-like test names. This is partly due to the popularity of RSpec and the concept that tests are specs and should be descriptive.
If you prefer descriptive test names, I highly recommend going with the test method. I find it to be more readable.
test "should not be able to login with invalid password" do
#...
end
def_should_not_be_able_to_login_with_invalid_password
#...
end
Also, because the description is a string it can contain any characters. With def you are limited in which characters you can use.
I believe the first method was implemented starting with Rails 2.2.
As far as I am aware, it simply improves readability of your code (as def can be any function while test is used only in test cases).
Good luck!
As Mike Trpcic suggests, you should check out RSpec and Cucumber. I'd like to add that you should also take a look at:
Shoulda (http://github.com/thoughtbot/shoulda/tree/master)
Factory Girl (http://github.com/thoughtbot/factory_girl/tree/master)
Shoulda is a macro framework for writing concise unit tests for your models/controllers, while Factory Girl is a replacement for fixtures.
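For flavor, a hedged sketch of Shoulda's macro style (matchers as provided by the thoughtbot gems; the User model and attributes are hypothetical):

class UserTest < ActiveSupport::TestCase
  should validate_presence_of(:email)
  should_not allow_value('has spaces').for(:login)
end

And Factory Girl replaces fixtures with factories (shown here with the later FactoryGirl.define API):

FactoryGirl.define do
  factory :user do
    email 'user@example.com'
  end
end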
I would suggest doing your testing with either RSpec or Cucumber. I use both to test all my applications. RSpec is used to test the models and controllers, and Cucumber tests the Views (via the included Webrat functionality).
