I am trying to create test suites in Fitnesse. One test case can be in multiple suites. For every test suite, I have different parameters to pass. Is there any way I can define common paramaters at some place and use them in many test cases and suites instead of duplicating the Fitnesse pages? In our development environment, we have to run the application with different configurations and parameters for testing. It will be very difficult to maintain when the application grows large.
Yes, it can be done. Put all the test cases that share common variables in the same suite and define the variables at the suite level or on the suite's SetUp page (which is automatically included in all the test pages).
If for some reason you cannot do that, put all the !define statements on a single page and pull that page into every page that needs those parameters with the !include directive.
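As a small sketch (the page and variable names here are only examples), the shared page and a test page that uses it could look like this:

    # Content of a shared page, e.g. .FrontPage.CommonParameters (name is only an example)
    !define DB_HOST {test-db.local}
    !define TIMEOUT {30}

    # At the top of any test page (or a suite's SetUp page) that needs the values
    !include .FrontPage.CommonParameters

    # Afterwards the values are available as ${DB_HOST} and ${TIMEOUT}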
You can also create multiple suites, each defining its own configuration variables, and then add Symbolic Links to the test suites you have already created via the "Properties" page.
You can refer to this page:
http://www.fitnesse.org/FitNesse.UserGuide.FitNesseWiki.SymbolicLinks
The idea is similar to managing testing in different environments.
While RSpec automatically creates specs for any helpers created by the Rails generators, I was wondering whether other Rails developers find it important/useful to spec helpers in real-world projects, or whether they often don't bother, since helpers tend to be tested by proxy through the components that use them?
Personally, I do test helper methods, because I like to test them in isolation. If a feature spec then fails, I know the mistake is probably in my test setup, because I have already ensured that the helper method itself works.
It is also easier to cover all possible scenarios this way. If you want to exercise every possibility through the full stack, you need more test setup and you sacrifice performance.
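As a sketch of what that isolation looks like (the helper and its behaviour are invented for illustration), a helper spec might be:

    # spec/helpers/formatting_helper_spec.rb
    # Hypothetical helper spec -- the helper name and behaviour are made up.
    require "rails_helper"

    RSpec.describe FormattingHelper, type: :helper do
      describe "#display_name" do
        it "joins first and last name" do
          expect(helper.display_name("Ada", "Lovelace")).to eq("Ada Lovelace")
        end

        it "falls back to a placeholder when both parts are blank" do
          expect(helper.display_name(nil, nil)).to eq("Anonymous")
        end
      end
    end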
Ideally, you want to write tests for everything, but in the real world, with time constraints, it is not uncommon to skip simple helper method tests because you implicitly test them while testing the feature that uses them. In the same way, some developers skip tests for private methods.
I'm wondering whether there is a way to use multiple kinds of tables with SLIM (as opposed to FIT) in one test while keeping the context of the same instance of the test class (the harness around the system under test).
With FIT you can enter flow mode by referencing a DoFixture by itself at the start of a test page, which lets you use a variety of different table/fixture types.
I would like to do something similar with SLIM (maybe using a Script Fixture).
Is this possible?
You can have multiple script tables all using the same instance (or 'actor') by not specifying a class in the second cell of the first row of the 2nd and following tables; see http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScriptTable. You can also use this same instance/actor in decision tables that do not link to separate code but just invoke scenarios for the active script fixture; see http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScenarioTable.
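For example (fixture and method names are placeholders), the first script table names the fixture class and the later table reuses the same actor by giving no class:

    |script|login fixture|
    |login with user|bob|and password|secret|

    |script|
    |check|logged in user|bob|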
I'm not aware of other Slim tables that can also share a fixture instance.
Rails supports several types of tests:
Model tests
Controller tests
Functional tests
Integration tests
And, with capybara, it can also support:
Acceptance/integration/feature (depends on the author) tests
On some sites I see that these acceptance/integration/feature tests should only test particular flows, leaving edge cases for other kinds of tests. For example:
Integration tests are used to test the interaction among any number of controllers. They are generally used to test important work flows within your application.
http://guides.rubyonrails.org/testing.html#integration-testing
While these are great for testing high level functionality, keep in mind that feature specs are slow to run. Instead of testing every possible path through your application with Capybara, leave testing edge cases up to your model, view, and controller specs.
http://robots.thoughtbot.com/how-we-test-rails-applications
But I also see things like:
Your goal should be to write an integration test for every flow through your app: make sure every page gets visited, that every form gets submitted correctly one time and incorrectly one time, and test each flow for different types of users who may have different permissions (to make sure they can visit the pages they're allowed to, and not visit the pages they're not allowed to). You should have a lot of integration tests for your apps!
https://www.learnhowtoprogram.com/lessons/integration-testing-with-capybara
So, that's my question: in Rails, should I include user input form error flows in Capybara (or integration) tests?
Or do you think it should be enough to write view tests to check for the existence of flash messages, test failure flows via controller tests with the assigns helper, and only test successful flows through acceptance/integration/feature tests?
Edit: the accepted answer was chosen because of the comments.
User input form errors are handled by Active Record validations in Rails. So you could cover these with unit tests. But those kinds of tests only verify that you have validations present on your model. I'm not sure how much utility these types of tests offer other than allowing you to recreate your models in another framework or language and still have your tests pass.
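As a rough sketch (the model, attribute, and error message are assumptions, not taken from the question), such a validation test can be as small as:

    # spec/models/user_spec.rb
    # Hypothetical model spec -- assumes `validates :email, presence: true` on User.
    require "rails_helper"

    RSpec.describe User, type: :model do
      it "is invalid without an email" do
        user = User.new(email: nil)

        expect(user).not_to be_valid
        expect(user.errors[:email]).to include("can't be blank")
      end
    end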
I have assigned one test case to two different user stories. I know it's not the cleanest method, but it helps in the case for which I created it.
In the test plan I added requirements and hence their respective test cases. Now this single test case is present in two different test suites since it tests two different user stories.
When I run this test case I expect it to either fail or succeed in both suites, but it seems that there are two totally different instances of that test case in the plan and I can have one passing and the other one failing.
Is this behaviour intentional, or is it unexpected and therefore a bug in MTM?
When you create new test plans in MTM you can specify the configurations for the plan and which of them will be the default. When you add new requirements, they automatically take the default configuration, but you can always change this by assigning another available configuration to any requirement you want. My point is that a test case that belongs to two different user stories carries an extra piece of information when it is added to the test plan: the configuration that will be used to run it.
So, if your test case A is assigned to user stories A and B, and these requirements have been added to the same test plan but with different configurations, it is quite possible for one instance of the test case to fail and the other to pass.
In view pages, people use form tag helpers and link helpers etc.
If I then rename a controller, or an action, my view pages may break.
How can I unit test view related tags for this kind of a breakage?
The term "unit test" is usually reserved for tests that exercise only one piece of an application at a time: you test one view, and you test it independently of the associated controller and model.
But as you have found, if you isolate the tests, you can break the interaction between the two and still have all your unit tests passing.
That's why it's important to have tests that exercise your whole application working together. These are sometimes called functional, integration, or acceptance tests (I don't find it very useful to distinguish between these terms but YMMV).
This is usually done using a browser simulator like capybara or webrat so that you are using the application exactly how a user would in the browser. They demand different techniques than unit tests do, so that you don't end up with very brittle tests or tests that take a long time to run without providing additional value for the time spent.
You can use various test frameworks to drive capybara, including RSpec. Many people use RSpec for unit tests and use Cucumber for integration tests. I highly recommend The RSpec Book, which also covers Cucumber and the different methods of testing and when you should use them.
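A minimal Capybara feature spec in that style (the route, labels, and expected text are placeholders) might look like:

    # spec/features/sign_up_spec.rb
    # Hypothetical feature spec -- paths, labels and page copy are placeholders.
    require "rails_helper"

    RSpec.feature "Signing up" do
      scenario "visitor creates an account with valid details" do
        visit "/sign_up"

        fill_in "Email", with: "user@example.com"
        fill_in "Password", with: "s3cret-password"
        click_button "Create account"

        expect(page).to have_content("Welcome")
      end
    end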