I have assigned one test case to two different user stories. I know it's not the cleanest approach, but it helps in the case I created it for.
In the test plan I added requirements and hence their respective test cases. Now this single test case is present in two different test suites since it tests two different user stories.
When I run this test case I expect it to either pass or fail in both suites, but it seems there are two totally different instances of that test case in the plan, and I can have one passing and the other failing.
Is this behavior intentional, or is it unexpected and therefore a bug in MTM?
Nowadays when you create test plans in MTM, you can specify the configurations for the plan and which of them will be the default. So when you add new requirements they automatically take the default configuration. However, you can always change that by assigning another available configuration to any requirement you want. My point is that a test case that belongs to two different user stories carries an extra piece of information when it is added to the test plan: the configuration under which it will be tested.
So if your test case A is assigned to user stories A and B, and these requirements have been added to the same test plan but with different configurations, it is entirely possible for one test case instance to fail and the other to pass.
I'm wondering if there is a way to use multiple kinds of tables with SLIM (as opposed to FIT) in one test while keeping the context of the same instance of the test class (the harness around the system under test).
With FIT you can enter flow mode by referencing a DoFixture by itself at the start of a test page. This allows you to leverage a variety of different table/fixture types.
I would like to do something similar with SLIM (maybe using a Script Fixture).
Is this possible?
You can have multiple script tables all using the same instance (or 'actor') by not specifying a class as the second cell value in the 2nd and following tables; see http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScriptTable. You can also use this same instance/actor in decision tables (ones that do not link to separate code, but just invoke scenarios against the activated script fixture; see http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScenarioTable).
I'm not aware of other Slim tables that can also share a fixture instance.
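For example, a follow-up script table whose first cell contains only the word script reuses the actor created by the previous script table (the fixture and method names below are made up for illustration):

|script|login dialog driver|Bob|xyzzy|
|login with username|Bob|and password|xyzzy|

|script|
|check|login message|Bob logged in.|

Since the second table names no class, both tables drive the same login dialog driver instance.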
Rails supports several types of tests:
Model tests
Controller tests
Functional tests
Integration tests
And, with capybara, it can also support:
Acceptance/integration/feature (depends on the author) tests
On some sites I see that these acceptance/integration/feature tests should only test particular flows, leaving edge cases for other kinds of tests. For example:
Integration tests are used to test the interaction among any number of controllers. They are generally used to test important work flows within your application.
http://guides.rubyonrails.org/testing.html#integration-testing
While these are great for testing high level functionality, keep in mind that feature specs are slow to run. Instead of testing every possible path through your application with Capybara, leave testing edge cases up to your model, view, and controller specs.
http://robots.thoughtbot.com/how-we-test-rails-applications
But I also see things like:
Your goal should be to write an integration test for every flow through your app: make sure every page gets visited, that every form gets submitted correctly one time and incorrectly one time, and test each flow for different types of users who may have different permissions (to make sure they can visit the pages they're allowed to, and not visit the pages they're not allowed to). You should have a lot of integration tests for your apps!
https://www.learnhowtoprogram.com/lessons/integration-testing-with-capybara
So, that's my question: in Rails, should I include user input form error flows in Capybara (or integration) tests?
Or do you think it should be enough to write view tests to test for the existence of flash messages, test failure flows via controller tests with the assigns helper, and only test successful flows through acceptance/integration/feature tests?
Edit: the accepted answer was chosen because of its comments.
User input form errors are handled by Active Record validations in Rails. So you could cover these with unit tests. But those kinds of tests only verify that you have validations present on your model. I'm not sure how much utility these types of tests offer other than allowing you to recreate your models in another framework or language and still have your tests pass.
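For what it's worth, such a validation unit test is cheap to write. A minimal sketch, assuming a User model with a presence validation on email (the model, attribute, and message are illustrative, not from the question):

require "test_helper"

class UserTest < ActiveSupport::TestCase
  # Verifies only that the validation is declared on the model.
  test "is invalid without an email" do
    user = User.new(email: nil)
    assert_not user.valid?
    assert_includes user.errors[:email], "can't be blank"
  end
end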
I am trying to create test suites in FitNesse. One test case can be in multiple suites, and for every test suite I have different parameters to pass. Is there any way I can define common parameters in one place and use them in many test cases and suites instead of duplicating the FitNesse pages? In our development environment we have to run the application with different configurations and parameters for testing, and this will be very difficult to maintain as the application grows.
Yes, it can be done. Put all the test cases that share common variables in the same suite and define the variables at suite level or on the SetUp page (which is automatically included in all the test pages).
If for some reason you cannot do the above, put all the !define statements on one page and include that page, via the !include directive, on every page that needs those parameters.
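A minimal sketch of that approach (page and variable names are illustrative):

# Contents of a shared page, e.g. .FrontPage.CommonConfig:
!define BASE_URL {http://test-server:8080}
!define TIMEOUT {30}

# At the top of any test page or suite page that needs those values:
!include .FrontPage.CommonConfig

The including pages can then reference ${BASE_URL} and ${TIMEOUT} as if the variables were defined locally.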
You can create multiple suites with different configurations by defining the common variables in each one, then set up symbolic links to the test suites you have already created via the "Properties" page.
For details, refer to this page:
http://www.fitnesse.org/FitNesse.UserGuide.FitNesseWiki.SymbolicLinks
The idea is similar to managing testing in different environments.
I have a BDD feature containing multiple scenarios. Should each scenario be completely self contained and runnable individually?
Should be, yes. It is generally good practice in all forms of TDD (BDD included) to make sure that each "test" can run independently and isn't coupled to, or dependent on, another test having been run first. This helps avoid creating a brittle test suite (i.e. one that is prone to breaking).
That's not to say that you cannot chain scenarios together for readability. For a very cheap/quick example:
Feature: Users can register and log in
Scenario: Should be able to register
Given I am not registered
When I complete the registration form
Then I will be registered
Scenario: Should be able to log in
Given I am registered
When I correctly sign-in with my credentials
Then I will be logged in
Scenario: Should be able to log out
Given I am logged in
When I sign-out
Then I will be logged out
Each scenario indicates a test that can be automated - and each should be designed behind the scenes to be able to run independently. But as a reader of the feature (say a business stakeholder) - the process is complete and they can understand the entire picture more easily.
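Behind the scenes, each Given can build its own state rather than relying on a previous scenario having run. An illustrative sketch of the step definitions (the User model, path, field labels, and credentials below are assumptions):

# features/step_definitions/auth_steps.rb
Given(/^I am registered$/) do
  @user = User.create!(email: "user@example.com", password: "secret123")
end

Given(/^I am logged in$/) do
  step "I am registered"           # build the precondition within this scenario...
  visit "/login"                   # ...instead of depending on an earlier scenario
  fill_in "Email", with: @user.email
  fill_in "Password", with: "secret123"
  click_button "Sign in"
end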
I know how to run functional/integration tests in Rails, this question is about best practices. Let's say authorization is performed using four distinct user roles:
basic
editor
admin
super
This means that for each action there are up to five different behaviors possible (4 roles + unauthenticated/anonymous). One approach I've taken is to test every role on every action, for example:
test_edit_by_anonymous_user
test_edit_by_basic_user
test_edit_by_editor_user
test_edit_by_admin_user
test_edit_by_super_user
But this obviously leads to a lot of tests (every controller action on the site really needs to be tested five times). The opposite approach would be to test the authorization mechanism in isolation, then authenticate as super in setup before testing every action, and only test one version of each page.
I've tried several approaches with varying degrees of specificity but haven't been completely satisfied with anything. I feel more comfortable when I'm testing more cases, but the amount of test code and difficulty of abstraction has been a turn-off. Does anyone have an approach to this problem that they're satisfied with?
It really depends on how you have set up your code for checking authorization and how you test for it in actions. I can tell you what we do as an example. We have roles like you do, plus some pages that require login, some that require a role, and some whose output differs by role. We test each type a little differently.
First, we test authorization and login separately.
Also, we created filters for actions that require the user to be logged in, and others that require a certain role, for example check_admin, check_account_owner, etc. We can then test that those filters work on their own.
We then add checks in the controller tests that the correct filters are being called. We use shoulda and wrote some easy extensions so we can add checks like:
should_filter_before_with :check_admin, :new
That way we are testing what needs to be tested and no more.
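A minimal sketch of how such an extension might look, assuming shoulda-context and mocha (this is a simplified illustration, not the exact code):

# test/shoulda_macros/filter_macros.rb
class ActiveSupport::TestCase
  # Asserts that the given before filter method is invoked for an action.
  def self.should_filter_before_with(filter, action)
    should "run #{filter} before ##{action}" do
      @controller.expects(filter).returns(true)  # mocha expectation: fails if never called
      get action
    end
  end
end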
Now, for more complex actions that do different logic depending on role, we do test those actions for each role that contains special logic. We don't write tests for roles on that action that will be filtered, or are supersets of other roles. For example if the action adds more fields to a form if you are an admin, we test non-admins and admin. We don't test admin and super admin since our code for role checking understands that super-admins are admins.
Also, for templates that contain logic to display certain items only for certain roles, we try to move that code into helpers or, if it's common (like an admin toolbar), into partials. Then we can test those on their own rather than in every action that includes them.
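A sketch of what such a helper test can look like (the helper name and fixtures are illustrative):

# test/helpers/toolbar_helper_test.rb
class ToolbarHelperTest < ActionView::TestCase
  # The role logic is tested once here, not in every view that uses it.
  test "admin toolbar is shown only to admins" do
    assert show_admin_toolbar?(users(:admin))
    assert_not show_admin_toolbar?(users(:basic))
  end
end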
To sum up, test only what you need to for a given action. Just like you wouldn't test Rails internals in your Unit tests, if you write common code for your role checks and test that, you don't need to test it again on every action.
In some situations you may be required to test ALL possible roles and authorization levels against different actions - like, when working for a bank, for instance :) In this case it makes sense to take a more dynamic approach to testing. Instead of defining each test case, you would generate all the combinations.
A few years ago Ryan Davis did a presentation about the "functional test matrix" which is part of ZenTest. Dr. Nic did a writeup, and at the end of the post you'll find updated links in the comments. This solution was designed for exactly the problem you describe. You could also roll your own solution, by running tests inside nested loops, for example - the idea is basically the same.
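A roll-your-own sketch of that idea, generating one test per role/action combination (the role names, expected responses, fixtures, and sign_in_as helper are assumptions):

class PostsControllerAuthTest < ActionController::TestCase
  ROLES = {
    anonymous: :redirect,   # not signed in, so bounced to the login page
    basic:     :redirect,
    editor:    :success,
    admin:     :success,
    super:     :success,
  }

  ROLES.each do |role, expected|
    define_method("test_edit_as_#{role}") do
      sign_in_as(role) unless role == :anonymous
      get :edit, id: posts(:one).id
      assert_response expected
    end
  end
end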
Consider an application that has two roles: admin and read-only.
Perform the tests below:
1. Log in with the read-only role, perform some actions, and log out. Now, from the same system and the same browser, log in with the admin role and observe the system's behaviour. And vice versa.
2. Log in with the admin role, copy the cookie value, and log out. Now log in with the normal role and edit the cookie value using a tool such as Cookies Manager+ or EditThisCookie. If the application then behaves as if you had the admin role, that is an issue.
3. Repeat test case 2 from the same machine with the same browser, from the same machine with a different browser, and from a different machine.
4. If it is a thick-client application, reverse-engineer and analyse the code. Try to change the authorization-management logic (this requires coding experience), recompile the code, and repeat tests 2-3.
5. Using a proxy interception tool such as Burp Suite, analyse the GET/POST requests for both roles.
Now, based on the types of roles available in your application, decide on your test cases.