How should I test my Rails app? - ruby-on-rails

In the past couple of days I've been making slow progress adding tests to an existing rails app I've been working on for a little bit.
I'm just trying to figure out how much and what kind of tests (unit, functional, integration) will be enough to save me time debugging and fixing deploys that break existing functionality.
I'm the only one working on this application. It's basically an inventory management database for a small company (~20 employees). I didn't add testing from the beginning because I didn't really see the point, but I've had a couple of deploys break existing functionality recently, so I thought it might be a good thing to add.
Do I need to test my models and controllers individually AND perform integration testing? There seem to be developers who believe that you should just integration test and backtrack to figure out what's wrong if you get an error from there.
So far I'm using RSpec + Factory Girl + Shoulda. That made it pretty easy to set up tests for the models.
I'm starting on the controllers now and am more than a little bit lost. I know how to test an individual controller but I don't know if I should just be testing application flow using integration tests as that would test the controllers at the same time.

I use integration tests to test the successful paths and general failure paths, but for the edge cases or more in-depth scenarios then I write model/view/controller tests.
I usually write the tests before writing the functionality in the application, but if you want to add tests afterwards, I recommend using something like SimpleCov to see which areas of your application need more testing, and slowly building up the coverage from there.
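Hooking SimpleCov in is mostly a one-time setup. A minimal sketch, assuming you add the simplecov gem to your test group yourself:

    # Gemfile (test group)
    # gem 'simplecov', require: false

    # spec/spec_helper.rb (must run before any application code is loaded)
    require 'simplecov'
    SimpleCov.start 'rails'

Run the suite as usual and open coverage/index.html to see which files are least covered.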

Writing tests after the app is already written is going to be tedious. At a minimum you should have integration tests (which will exercise most controller actions and views), plus separate model tests so that your model code works the way you think it should.
For integration testing I find Capybara easy to use.
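For instance, a happy-path feature spec with Capybara might look roughly like this; the path, labels and flash message are made up for illustration:

    # spec/features/create_item_spec.rb
    require 'rails_helper'

    RSpec.feature 'Creating an inventory item' do
      scenario 'with valid input' do
        visit new_item_path
        fill_in 'Name', with: 'Widget'
        click_button 'Create Item'
        expect(page).to have_content('Item was successfully created')
      end
    end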

As a rule of thumb, most of your tests need to be unit, some functional, few integration.
Rails adheres to this wholeheartedly and you should do the same.
At the minimum you have to have:
A unit test for each of your models: add a test case for each of your validations, and a final one to make sure the model saves (a rough sketch follows after this list). If you have options like dependent: :destroy or touch: true on related models (has_many, belongs_to), add test cases for them as well.
A functional test for each of your controllers, with test cases for each action. Some actions can have multiple responses (like in the case of 422 errors), so add another case to test the error response. If you have an authorization filter, test that as well.
Then you can have an integration test for a typical new user flow. Like
Sign Up -> Create Post -> View it -> logout. Another one is to make sure that unauthorized users cannot access your resources.
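Something along these lines, as a rough sketch rather than a drop-in file; the Item model, its attributes and the Rails 5+ params: keyword syntax are all assumptions:

    # spec/models/item_spec.rb
    require 'rails_helper'

    RSpec.describe Item do
      it 'is invalid without a name' do
        expect(Item.new(name: nil)).not_to be_valid
      end

      it 'saves with valid attributes' do
        expect(Item.new(name: 'Widget').save).to be true
      end
    end

    # spec/controllers/items_controller_spec.rb
    require 'rails_helper'

    RSpec.describe ItemsController do
      describe 'POST #create' do
        it 'redirects on success' do
          post :create, params: { item: { name: 'Widget' } }
          expect(response).to have_http_status(:redirect)
        end

        it 'returns an error response on invalid input' do
          # assumes the action responds with 422 on invalid input, as mentioned above
          post :create, params: { item: { name: '' } }
          expect(response).to have_http_status(:unprocessable_entity)
        end
      end
    end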
And from now on, do not commit ANY code without the above-mentioned tests.
The best time to plant a tree was 10 years ago; the second best time is now. So start testing!

Related

Replacing rails controller tests for integration test should always persist to db?

I'm finding Rails integration tests relevant for testing flows, and I have some questions about the industry standard for replacing controller tests (deprecated in Rails 5) with integration tests.
Usually we have tiny controllers where we get the parameters, call the right collaborator and prepare the response, and it is easy to test that by mocking the collaborator directly on the controller object.
I am concerned about the overhead of migrating every controller test to an integration test that persists to the db. What are the standards for this case?
What's the standard when testing just one route/action and not a complete flow?
How can we replace this?:
#controller.stubs(:authenticate).returns(true)
Integration tests are intended to mimic a real user. They're meant to test the application in its entirety.
Opinion varies on what this means. To me, it means you should avoid stubbing/mocking completely. Not a single thing stubbed or mocked, everything executed in full. This means that every integration test I write goes through the actual authentication process of typing in a username and password. Some of the steps are redundant, yes.
Integration tests are slower all around than unit/controller tests. Cutting out the authentication steps likely won't save you enough time to make a difference in the long run (no pun intended).
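In practice that means replacing the stub with a real sign-in step. A sketch, assuming a request spec with made-up routes, params and models:

    require 'rails_helper'

    RSpec.describe 'Widgets', type: :request do
      it 'lets a signed-in user create a widget' do
        user = User.create!(email: 'test@example.com', password: 'password')

        # instead of controller.stubs(:authenticate), authenticate for real
        post login_path, params: { email: user.email, password: 'password' }
        follow_redirect!

        post widgets_path, params: { widget: { name: 'Gear' } }
        expect(response).to have_http_status(:redirect)
      end
    end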

Structure of BDD tests

I'm digging into Capybara and RSpec, to move from TDD to BDD.
My generators make a whole lot of directories and spec tests,
with directory structure similar to this:
spec
  controllers
  models
  requests
  routing
  views
I think that most of this is TDD rather than BDD. If I read here:
"A great testing strategy is to extensively cover the data layer with
unit tests then skip all the way up to acceptance tests. This approach
gives great code coverage and builds a test suite that can flex with a
changing codebase."
Then I figure that things should be quite different, something along the lines of:
spec
  models
  acceptance
Basically I take out controllers, requests, views, and routing, and just implement tests as use-case scenarios in the acceptance directory with Capybara and RSpec.
This makes sense to me, though I'm not sure if this is the standard/common approach to this.
What is your approach?
Thanks,
Giulio
tl;dr
This is not a standard approach.
If you only write model tests and feature specs... then you miss out on the bits in the middle.
You can tell: "method X broke on the Widget model" or you can tell "there's something wrong while creating widgets" but you have no knowledge of anything else.
If something broke, was it the controller? the routing? some hand-over between the two?
It's good to have:
extremely thorough testing at the model-level (eg check every validation, every method, every option based on incoming arguments)
rough testing in the middle to make sure sub-systems work the way you expect (eg controllers set up the right variables and call the right templates/redirections given a certain set of circumstances)
overall feature testing as smoke-tests (eg that a user can go through the happy path and everything works the way they expect... that if they input bad stuff, that the app is throwing up the right error messages and redisplaying the forms for them to fix the problem)
Don't forget that models aren't the only classes in your app... and all classes need some kind of testing. Controllers are classes too. As are form and service objects, mailers, etc.
That said - it's common to consider view tests as going overboard. I'm also not keen on request tests or routing tests myself (unless I have something complex which I want to work right, eg lots of optional params in a route that map to interesting search patterns).
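When a route is that complex, a routing spec along these lines can be worth its keep; the route and params here are invented for illustration:

    require 'rails_helper'

    RSpec.describe 'search routing', type: :routing do
      it 'maps optional search params onto the search action' do
        expect(get: '/search/widgets/recent').to route_to(
          controller: 'search', action: 'index',
          category: 'widgets', filter: 'recent'
        )
      end
    end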

In rails, should I include user input form error flows on capybara (or integration) tests?

Rails supports several types of tests:
Model tests
Controller tests
Functional tests
Integration tests
And, with capybara, it can also support:
Acceptance/integration/feature (depends on the author) tests
On some sites I see that these acceptance/integration/feature tests should only test particular flows, leaving edge cases for other kinds of tests. For example:
Integration tests are used to test the interaction among any number of controllers. They are generally used to test important work flows within your application.
http://guides.rubyonrails.org/testing.html#integration-testing
While these are great for testing high level functionality, keep in mind that feature specs are slow to run. Instead of testing every possible path through your application with Capybara, leave testing edge cases up to your model, view, and controller specs.
http://robots.thoughtbot.com/how-we-test-rails-applications
But I also see things like:
Your goal should be to write an integration test for every flow through your app: make sure every page gets visited, that every form gets submitted correctly one time and incorrectly one time, and test each flow for different types of users who may have different permissions (to make sure they can visit the pages they're allowed to, and not visit the pages they're not allowed to). You should have a lot of integration tests for your apps!
https://www.learnhowtoprogram.com/lessons/integration-testing-with-capybara
So, that's my question: In rails, should I include user input form error flows on capybara (or integration) tests?
Or do you think it should be enough to write view tests to test for the existence of flash messages, test failure flows via controller tests with the assigns helper, and only test successful flows through acceptance/integration/feature tests?
Edit: The answer was accepted because of the comments.
User input form errors are handled by Active Record validations in Rails. So you could cover these with unit tests. But those kinds of tests only verify that you have validations present on your model. I'm not sure how much utility these types of tests offer other than allowing you to recreate your models in another framework or language and still have your tests pass.
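The kind of validation-presence unit test being described is usually a one-liner. A sketch using the shoulda-matchers gem and a made-up Widget model:

    require 'rails_helper'

    RSpec.describe Widget do
      it { is_expected.to validate_presence_of(:name) }
    end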

In RSpec, what are Request Specs supposed to test?

Rspec noob here, just trying to improve my test coverage.
One very basic yet important question I have is just: What kinds of tests go where?
Model tests are straightforward: I just need to test the functionality of the models' methods and validations. View tests seem simple too; that would just be testing that each view renders the desired data.
What confuses me is what exactly goes in my request specs. Most of my Rails experience is from following Michael Hartl's Rails Tutorial. His request specs seem to be based around a series of actions that the user could take in the application. But he also includes tests which seem like they should be view specs, and which I am considering moving elsewhere.
If someone could help me understand what kinds of tests go in Request, that would be helpful.
From the RSpec docs:
Request specs provide a thin wrapper around Rails' integration tests, and are
designed to drive behavior through the full stack, including routing
(provided by Rails) and without stubbing (that's up to you).
With request specs, you can:
specify a single request
specify multiple requests across multiple controllers
specify multiple requests across multiple sessions
Check the rails documentation on integration tests for more information.
From Rails' docs on integration tests:
Integration tests are used to test the interaction among any number of controllers. They are generally used to test important work flows within your application.
If your test has to do with how a single view is rendered (which should be completely decoupled from any actual HTTP request), then it's probably better as a view test. If it has to do with multiple views or multiple requests, then an integration test is probably more appropriate.
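A request spec exercising more than one request through the full stack might look roughly like this (the routes and model are made up):

    require 'rails_helper'

    RSpec.describe 'Posts', type: :request do
      it 'creates a post and then lists it' do
        post posts_path, params: { post: { title: 'Hello' } }
        follow_redirect!
        expect(response.body).to include('Hello')

        get posts_path
        expect(response).to have_http_status(:ok)
      end
    end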

Ruby_on_Rails Testing The Database Using Rspec

Actually I am a newbie to Rails testing.
I want to know what steps are covered in Rails testing.
Could anyone help me understand the steps involved in professional testing?
Check out the Rails guide Gavin linked you. You can also check out this book: http://pragprog.com/book/achbd/the-rspec-book. It covers how to use RSpec, and covers testing various parts of the application.
I also rather enjoyed Rails 3 in action, which teaches the basics and various topics of rails, while doing inside-out testing throughout the entire book. Rails 4 in action is a work in progress edition as well.
As for my answer:
Generally, testing first, before you write code, makes the process better and easier. Writing tests after the fact isn't as nice, even though it does give you the coverage. When it's part of your regular workflow for adding features, it's quite nice. If you have a new feature that you want to add, such as user sign-up that sends a confirmation email, you can write the integration test first. It describes the user going through the sign-up form, filling it in properly, and then checks whether an email was sent.
From there you can test the internals of the user model, making sure the methods you build for this work as expected. Then, when you have your tests green, add another scenario to the integration tests where the login was invalid.
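That first integration test might be sketched like this; the path, field labels, flash text and mailer setup are all assumptions:

    require 'rails_helper'

    RSpec.feature 'User sign up' do
      scenario 'with valid details sends a confirmation email' do
        visit sign_up_path
        fill_in 'Email', with: 'new.user@example.com'
        fill_in 'Password', with: 'password'
        click_button 'Sign up'

        expect(page).to have_content('Please confirm your email')
        # relies on the default :test delivery method in the test environment
        expect(ActionMailer::Base.deliveries.last.to).to include('new.user@example.com')
      end
    end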
