Behavior Driven Development using existing software components - ruby-on-rails

I'm currently reading through the beta version of The Rspec Book: http://www.pragprog.com/titles/achbd/the-rspec-book
It describes the Behaviour-Driven Development cycle (red, green, refactor) as taking small steps during the development process, which means adding features one at a time.
My question is:
If I were to describe a single feature of my software (e.g. a successful user login scenario in a Cucumber test) AND I were using a modular component (such as Devise) which provides many features (scenarios), how could I possibly follow Behaviour Driven techniques? Once my first step passes, I must reverse-engineer my other tests to reflect the functionality of the software component I am using, thus violating the principle of BDD!
Edit (for clarity):
My first scenario is passing after implementing Devise. But now I must factor all my subsequent end-to-end tests (that I have not yet written) around the behaviour of Devise rather than around my stakeholder's requirements. So the BDD cycle can no longer be applied: I must reverse-engineer Devise in my tests to make them pass, or not write the tests at all.

The BDD cycle involves creating scenarios, then having conversations around those scenarios to discover any more which are missing, any misunderstandings, etc.
If you use a BDD tool like Cucumber, then you can capture the scenarios you've discussed.
Ideally, the scenarios will be in terms of high-level steps, focused on the capabilities of the system and the value they provide to the users. They won't be about logging in or failing to authenticate.
BDD isn't really about testing. It's about learning, and keeping things easy to change. If you're never going to change the mechanisms for logging in, then it's enough to manually verify authentication. Focus instead on the things which make your application different from all the other applications out there. How does your software actually provide value? How does it communicate with other systems, applications and users? How is money made, or lives saved, or fun had?
If you can answer those questions and keep the steps focused on them, you'll still have failing scenarios, even with Devise. Your scenarios will look more like:
Given I have registered an account
When I <do this differentiating thing>
Then I <achieve this differentiating outcome>
Logging in will be implicit, called as a lower-level step as part of the first Given.
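For example, the step definition behind that first Given might look like this (a sketch only; the User model, its attributes and the lower-level sign-in step are illustrative assumptions, not part of your app):

# features/step_definitions/account_steps.rb
# Sketch: the high-level Given hides registration and login behind
# lower-level steps, so the scenario text stays about business value.
Given /^I have registered an account$/ do
  @user = User.create!(email: 'user@example.com', password: 'secret123')
  step %{I sign in as "user@example.com" with password "secret123"}
end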

So that I can understand better: you have an existing system that includes tests but does not include login functionality. You decide to add login:
Given a visitor is not logged in
When a visitor goes to the admin page
Then the visitor should see the login page
Ok, so then you decide how you want to implement the login because this scenario is failing. Right? So you choose Devise and the scenario passes, but all the tests or specs that relied on a security-free system now fail. Am I still on track? If that's the case, you've run into a feature that is so pervasive in your application that you need to touch a number of tests/specs to get them all running again. You are still testing first, because either the code is broken or the test doesn't recognize the new feature. In any case, you know where the work needs to be done.
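For instance, the kind of touch-up those newly failing controller specs typically need might look like this (a sketch only, assuming Devise's RSpec helpers; the module is Devise::TestHelpers in older Devise versions, and AdminController is just an illustrative name):

# spec/spec_helper.rb
RSpec.configure do |config|
  config.include Devise::Test::ControllerHelpers, type: :controller
end

# spec/controllers/admin_controller_spec.rb
describe AdminController do
  # Sign in a user so the existing examples pass again under Devise
  before { sign_in User.create!(email: 'admin@example.com', password: 'secret123') }

  it 'shows the admin page to a signed-in user' do
    get :index
    expect(response).to be_success
  end
end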
Helpful?

Related

Setting bounds between rspec and cucumber in Rails

I like the way Cucumber connects user specifications to integration testing. The Cucumber part is more or less clear to me.
I struggle when it comes to what should be tested with RSpec (non-integration testing) and what shouldn't.
Is it right to unit-test with RSpec something that has already been tested with Cucumber (i.e. the unit test will 100% fail if the Cucumber test fails, and 100% succeed if the Cucumber test succeeds)?
To be specific, I have three examples I'd like to resolve.
Here is a case from the RSpec book.
They have the following Cucumber scenario:
Given I am not yet playing
When I start a new game
Then I should see "Welcome to Codebreaker!"
And I should see "Enter guess:"
They build two RSpec examples right after it:
describe "#start" do
it "sends a welcome message" do
end
it "prompts for the first guess" do
end
end
Another example is testing routing or an action's redirect when there is already the following scenario:
Given I am at the login page
When I fill in the right username and password
Then I should be at the index page
Sometimes we test helpers that are already tested with Cucumber:
Given Mike has spent 283 minutes online
When I go to the Mike's profile page
Then I should see "4:43" for "Time online:"
I should probably unit-test the helper that breaks 283 minutes into "4:43", but it turns out it is already tested with Cucumber.
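The RSpec duplicate would look roughly like this (a sketch, assuming a hypothetical ProfileHelper#format_time_online helper; the names are mine, not from the app):

# spec/helpers/profile_helper_spec.rb
require 'spec_helper'

describe ProfileHelper do
  describe '#format_time_online' do
    it 'formats 283 minutes as "4:43"' do
      expect(helper.format_time_online(283)).to eq('4:43')
    end
  end
end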
These may not be the best examples, but they illustrate what I am talking about.
To me those tests are duplicates.
Could you please comment on the examples above?
Are there any principles or guidelines on what should be tested with RSpec when there are already Cucumber tests?
All this is only my personal opinion on this broad and open topic. Some people might disagree, and funnily enough, they're entitled to.
As a general guideline, you should use Cucumber to test the entire application stack, something that could be called the user experience. This application stack might be composed of many smaller independent objects, but, just as a user of your application isn't interested in those details, your Cucumber tests shouldn't care about them either; they should focus instead on the outer layer of your application.
RSpec (in your setup!), on the other hand, should focus mainly on those small objects, the building blocks of your application.
There is a big problem with the small application examples from books: they are too small!
The border between the outer layer of the application and its interior is too blurred; the entire application is built from two objects! It is hard to distinguish what should test what. As your application grows in size, it becomes more obvious what is a user-experience test (Cucumber) and what is an object state test or message expectation test (RSpec).
Using your second example:
With this Cucumber story:
Given I am at the login page
When I fill in the right username and password
Then I should be at the index page
RSpec:
You will probably have some sort of User model:
test username syntax (it must start with a capital letter, for example)
test that the password must be at least 7 characters, contain numbers, etc.
You might have some sort of authentication object:
test for valid and invalid logins, etc.
test for exceptions being raised, etc.
What if your authentication database is on a different server?
mock the connection to the authentication database and test that the database receives a request from the authentication object (a sketch follows below).
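A rough sketch of what those RSpec examples might look like (User, Authenticator and its login method are illustrative names, not a prescribed design):

describe User do
  it 'rejects a username that does not start with a capital letter' do
    expect(User.new(username: 'mike', password: 'secret123')).not_to be_valid
  end

  it 'rejects a password shorter than 7 characters' do
    expect(User.new(username: 'Mike', password: 'abc12')).not_to be_valid
  end
end

describe Authenticator do
  it 'returns false for invalid credentials' do
    # State test of the incoming message: stub the remote connection's answer
    connection = double('auth db connection', authenticate: false)
    expect(Authenticator.new(connection).login('Mike', 'wrong')).to be false
  end

  it 'sends an authentication request to the remote database' do
    # Message expectation on the outgoing call to the remote connection
    connection = double('auth db connection')
    expect(connection).to receive(:authenticate).with('Mike', 'secret123')
    Authenticator.new(connection).login('Mike', 'secret123')
  end
end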
And yada yada yada... Your Cucumber test will forever guard the general purpose of your application: user login. Even when you add or change behaviour under the hood, as long as this test passes you can be confident that users can log in.
you should test things once (in one place)
an object is responsible for testing its incoming messages (its public interface) for state (return values)
when your object depends on another object (sends it a message), don't test that for state (return value); the other object is responsible for that
when your object depends on another object (sends it a message), mock that second object and test that it receives your message
Generally your question is too broad and you won't find a single answer; different people will have different ideas. What you should focus on is testing as well as you can, and with time you will definitely find the right approach for you. Even a less-than-great test suite is much better than none.
Because good design helps you write good tests, I would recommend Practical Object-Oriented Design in Ruby: An Agile Primer by Sandi Metz (one of the best books I've read).

Ruby on Rails: Testing the Database Using RSpec

Actually I am a newbie to Rails testing.
I want to know what all the steps covered in Rails testing are.
Could anyone help me understand the steps involved in professional testing?
Check out the Rails guide Gavin linked for you. You can also check out this book: http://pragprog.com/book/achbd/the-rspec-book. It covers how to use RSpec and how to test the various parts of an application.
I also rather enjoyed Rails 3 in Action, which teaches the basics and various topics of Rails while doing inside-out testing throughout the entire book. Rails 4 in Action is a work-in-progress edition as well.
As for my answer:
Generally, testing first before you write code makes the process better and easier. Writing tests after the fact isn't as nice, even though it does give you the coverage. When it's part of your regular workflow for adding features, it's quite nice. If you have a new feature that you want to add, such as user sign-up that sends a confirmation email, you can write the integration test first. It describes the user going through the sign-up form, filling it in properly, and then checks whether an email was sent.
From there you can test the internals of the user model, making sure the methods you build to support this work as expected. Then, when your tests are green, add another scenario to the integration tests where the login was invalid.
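A sketch of what that first integration test could look like, assuming Capybara and a conventional sign-up form (the path, labels and messages are illustrative):

# spec/features/sign_up_spec.rb
require 'spec_helper'

feature 'User sign up' do
  scenario 'sends a confirmation email' do
    visit '/sign_up'
    fill_in 'Email',    with: 'new.user@example.com'
    fill_in 'Password', with: 'secret123'
    click_button 'Sign up'

    expect(page).to have_content('confirmation email')
    expect(ActionMailer::Base.deliveries.last.to).to include('new.user@example.com')
  end
end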

BDD - should each scenario be self contained?

I have a BDD feature containing multiple scenarios. Should each scenario be completely self-contained and runnable individually?
It should be, yes. It is generally good practice in all forms of TDD (BDD included) to make sure that each "test" can run independently and isn't coupled to, or dependent on, another test having been run first. This will help you avoid creating a brittle test suite (i.e. one that is prone to breaking).
That's not to say that you cannot chain scenarios together for readability. For a very quick example:
Feature: Users can register and log in
Scenario: Should be able to register
Given I am not registered
When I complete the registration form
Then I will be registered
Scenario: Should be able to log in
Given I am registered
When I correctly sign-in with my credentials
Then I will be logged in
Scenario: Should be able to log out
Given I am logged in
When I sign-out
Then I will be logged out
Each scenario indicates a test that can be automated, and each should be designed behind the scenes to be able to run independently. But to a reader of the feature (say, a business stakeholder) the process is complete and they can understand the entire picture more easily.
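Behind the scenes, that independence usually comes from each Given setting up its own state rather than relying on a previous scenario having run. A sketch (User and the step wording are illustrative):

# features/step_definitions/session_steps.rb
Given /^I am registered$/ do
  # Create the user directly rather than running the registration scenario first
  @user = User.create!(email: 'user@example.com', password: 'secret123')
end

Given /^I am logged in$/ do
  step 'I am registered'
  step 'I correctly sign-in with my credentials'
end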

How should I test my Rails app?

In the past couple of days I've been making slow progress adding tests to an existing rails app I've been working on for a little bit.
I'm just trying to figure out how much and what kind of tests (unit, functional, integration) will be enough to save me time debugging and fixing deploys that break existing functionality.
I'm the only one working on this application. It's basically an inventory management database for a small company (~20 employees). I didn't add testing from the beginning because I didn't really see the point, but I've had a couple of deploys break existing functionality recently, so I thought it might be a good thing to add.
Do I need to test my models and controllers individually AND perform integration testing? There seem to be developers who believe that you should just integration test and backtrack to figure out what's wrong if you get an error from there.
So far I'm using RSpec + Factory Girl + Shoulda. That made it pretty easy to set up tests for the models.
I'm starting on the controllers now and am more than a little bit lost. I know how to test an individual controller but I don't know if I should just be testing application flow using integration tests as that would test the controllers at the same time.
I use integration tests to cover the successful paths and general failure paths, but for edge cases or more in-depth scenarios I write model/view/controller tests.
I usually write the tests before writing the functionality in the application, but if you want to add tests afterwards then I recommend using something like SimpleCov to see which areas of your application need more testing, and slowly build up the test coverage for your application.
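Getting SimpleCov going is a couple of lines at the very top of your spec helper, before the application code is loaded:

# spec/spec_helper.rb (must come before requiring the Rails environment)
require 'simplecov'
SimpleCov.start 'rails'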
Writing tests after the app is already written is going to be tedious. At a minimum you should be writing integration tests (which will exercise most controller actions and views), plus model tests separately so that you know your model methods work as you think they should.
For integration testing I find Capybara easy to use.
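A minimal Capybara feature spec might look like this (a sketch only; the Item model, path, labels and flash message are made up for illustration):

# spec/features/create_item_spec.rb
require 'spec_helper'

feature 'Creating an inventory item' do
  scenario 'with valid details' do
    visit new_item_path
    fill_in 'Name',     with: 'Widget'
    fill_in 'Quantity', with: '10'
    click_button 'Create Item'

    expect(page).to have_content('Item was successfully created')
  end
end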
As a rule of thumb, most of your tests should be unit tests, some functional, and a few integration.
Rails adheres to this wholeheartedly and you should do the same.
At a minimum you should have:
A unit test for each of your models: add a test case for each of your validations, and finally one to make sure the model saves. If you have things like dependent: :destroy or touch: true on related models (has_many, belongs_to), add test cases for them as well (see the sketch after this list).
A functional test for each of your controllers, with test cases for each action. Some actions can have multiple responses (as in the case of 422 errors), so add another case to test the error response. If you have an authorization filter, test it as well.
Then you can have an integration test for a typical new-user flow, like
Sign Up -> Create Post -> View It -> Log Out. Another one is to make sure that unauthorized users cannot access your resources.
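As a sketch of that model-level minimum (Post, User and their association are hypothetical; adapt to your own models, which would declare has_many :posts, dependent: :destroy):

describe Post do
  it 'is invalid without a title' do
    expect(Post.new(title: nil)).not_to be_valid
  end

  it 'saves with valid attributes' do
    user = User.create!(email: 'a@example.com', password: 'secret123')
    expect(Post.new(title: 'Hello', user: user).save).to be true
  end
end

describe User do
  it 'destroys its posts along with itself (dependent: :destroy)' do
    user = User.create!(email: 'a@example.com', password: 'secret123')
    user.posts.create!(title: 'Hello')
    expect { user.destroy }.to change(Post, :count).by(-1)
  end
end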
And from now on, do not commit ANY code without the above-mentioned tests.
The best time to plant a tree was 10 years ago; the second best time is now. So start testing!

Functional Testing of Authorization In Rails

I know how to run functional/integration tests in Rails; this question is about best practices. Let's say authorization is performed using four distinct user roles:
basic
editor
admin
super
This means that for each action there are up to five different behaviors possible (4 roles + unauthenticated/anonymous). One approach I've taken is to test every role on every action, for example:
test_edit_by_anonymous_user
test_edit_by_basic_user
test_edit_by_editor_user
test_edit_by_admin_user
test_edit_by_super_user
But this obviously leads to a lot of tests (every controller action on the site really needs to be tested five times). The opposite approach would be to test the authorization mechanism in isolation and then authenticate as super before testing every action (on setup), and only test one version of each page.
I've tried several approaches with varying degrees of specificity but haven't been completely satisfied with anything. I feel more comfortable when I'm testing more cases, but the amount of test code and difficulty of abstraction has been a turn-off. Does anyone have an approach to this problem that they're satisfied with?
It really depends on how you have set up your code for checking authorization and how you test for it in actions. I can tell you what we do as an example.
First, we test authorization and login separately.
Also, we created filters for actions that require the user to be logged in, and others that require a certain role, for example check_admin, check_account_owner, etc. We can then test that those filters work on their own.
We then add checks in the controller tests that the correct filters are being called. We use Shoulda and wrote some simple extensions so we can add checks like:
should_filter_before_with :check_admin, :new
That way we are testing what needs to be tested and no more.
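If you're not using custom Shoulda macros, an RSpec shared example can express the same kind of check (a sketch; the role name, factory, sign_in helper and AdminController are assumptions):

shared_examples 'an admin-only action' do |http_method, action|
  it 'redirects users who are not admins' do
    sign_in create(:user, role: 'basic')   # assumes a sign-in test helper and factories
    send(http_method, action)
    expect(response).to redirect_to(root_path)
  end
end

describe AdminController do
  it_behaves_like 'an admin-only action', :get, :new
end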
Now, for more complex actions that perform different logic depending on the role, we do test those actions for each role that involves special logic. We don't write tests for roles on that action that will be filtered out, or that are supersets of other roles. For example, if the action adds more fields to a form when you are an admin, we test non-admin and admin. We don't test both admin and super-admin, since our role-checking code understands that super-admins are admins.
Also, for templates that contain logic to display certain items only for certain roles, we try to move that code into helpers or, if it's common like an admin toolbar, into partials. Then we can test those on their own and not on every action that includes them.
To sum up, test only what you need to for a given action. Just as you wouldn't test Rails internals in your unit tests, if you write common code for your role checks and test that, you don't need to test it again on every action.
In some situations you may be required to test ALL possible roles and authorization levels against different actions (when working for a bank, for instance :). In this case it makes sense to take a more dynamic approach to testing. Instead of defining each test case by hand, you would generate all the combinations.
A few years ago Ryan Davis did a presentation about the "functional test matrix" which is part of ZenTest. Dr. Nic did a writeup, and at the end of the post you'll find updated links in the comments. This solution was designed for exactly the problem you describe. You could also roll your own solution by running tests inside nested loops, for example; the idea is basically the same.
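A home-rolled version of the matrix idea is just a loop over the roles (a sketch; the roles, expected outcomes, ArticlesController, login_path and the sign_in_as helper are all illustrative assumptions):

# Expected outcome of GET #edit for each role (illustrative)
EDIT_OUTCOMES = {
  nil     => :redirected,   # anonymous visitor
  :basic  => :redirected,
  :editor => :allowed,
  :admin  => :allowed,
  :super  => :allowed
}

describe ArticlesController do
  EDIT_OUTCOMES.each do |role, outcome|
    it "GET #edit is #{outcome} for #{role || 'anonymous'} users" do
      sign_in_as(role) if role          # hypothetical helper that logs in a user with the given role
      get :edit, id: 1
      if outcome == :allowed
        expect(response).to be_success
      else
        expect(response).to redirect_to(login_path)
      end
    end
  end
end

Each role/outcome pair becomes its own example, so a failure report tells you exactly which combination broke.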
Consider an application which has two roles: admin and read-only.
Perform the tests below:
1. Log in with the read-only role, perform some actions and log out. Now, from the same system and the same browser, log in with the admin role and observe the behaviour of the system. And vice versa.
2. Log in with the admin role, copy the cookie value and log out. Now log in with the normal role and edit the cookie value using a tool such as Cookie Manager+ or EditThisCookie. If the application accepts the tampered cookie and behaves as if you were an admin, that is an issue.
3. Repeat test case 2 from the same machine and the same browser, from the same machine with a different browser, and from a different machine.
4. If it is a thick-client application, reverse-engineer and analyse the code. Try to change the logic of the authorization management (this requires coding experience), recompile the code and repeat tests 2-3.
5. Using a proxy interception tool like Burp Suite, analyse the GET/POST requests for both roles.
Now, based on the types of roles available in your application, decide on your test cases.
