We have a complex Fitnesse suite with tens of subsuites, includes and symbolic links. Sometimes we don't want to run the whole thing and would like to run
selected test cases and not run the others. We see two ways to do it:
By managing page properties (Suite - Test - Normal), we can turn test cases on and off.
But this is inconvenient. First of all, it's tedious. And second, we can't see the suite's current state (which test cases are turned on and are going to be run).
In FitNesse, there are tags, and we can specify desired tags in suiteFilter or excludeSuiteFilter.
This is inconvenient too. You have to remember tag names and not forget or misspell them in the filters. Of course, we could store predefined links with carefully selected lists of tags, but in our case that's not an option, since the lists are subject to frequent changes.
Also, we don't want to divide our suite into several parts, since we benefit from having the shared Scenario Library and variable lists.
The ideal solution for us would be a FitNesse suite configurator that could display and change the settings of FitNesse pages. It might be a FitNesse plugin that reads the FitNesse folder structure and displays the current settings in an HTML page or a Windows Forms UI, letting us change these settings and save the changes. Or it could be an external tool with similar functionality.
Have you heard of such tools? Did you bump into the same troubles? What would be your suggestions?
I would agree that the first option you listed, manipulating the page properties, is a bad idea. It will cause pain in the long run.
I would note that tags are a very reasonable approach. The thing to keep in mind about tag filters is that you can build links that will run all of the tests tagged with a specific value, and make them part of the FrontPage.
For example, you can put a link in your FrontPage that will run all tests marked "smoke".
[[Run Smoke Tests][.FrontPage.MonsterSuite?suite&suiteFilter=smoke]]
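You can exclude tags the same way. For example (assuming a tag named "slow"), a link that runs everything except the slow tests:

[[Run Fast Tests][.FrontPage.MonsterSuite?suite&excludeSuiteFilter=slow]]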
There is one other variation on selective execution you could try, though I've not been as successful with it. Take a look at SuiteQuery: http://fitnesse.org/FitNesse.UserGuide.TestSuites.SuiteQuery.
SuiteQuery is a technique that lets you specify a suite by building a table that lists the pages or page name filters to run.
!|Suite|
|Page|FitNesse.SuiteAcceptanceTests|
|Content|[Bb]ug|
!|Suite|
|Page|FitNesse.SuiteAcceptanceTests|
|Title|Import|
There is another way.
Create a new suite and add the following code to it:
!see .FrontPage.TestPage
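Such a hand-picked suite page can pull in any number of pages this way; the page names below are just placeholders:

!see .FrontPage.LoginTests.HappyPath
!see .FrontPage.SearchTests.SmokeTest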
I'm writing a very long integration test for a wizard that has around 15 steps. Each of these steps has around 20 inputs/select boxes.
I started out using static data in my tests, but now I've begun to write stuff like selecting a random value from a select box, and clicking a random radio button for an option. This does seem more capable of catching bugs; for example, one of the buttons on the page might not be rendered correctly, and therefore the value never gets saved to the database. This would never have been found using static data that selects the same option every time. Alternatively, I could manually write out every possible option that could be chosen, but that would take an eternity.
I hear that one of the main reasons not to use random data is that you cannot explicitly see the data used in your tests, which can make failing tests hard to diagnose.
Is this path that I'm going down one to be avoided? Or is testing in this manner something that's generally done?
This is inherently a QA question rather than an automation one. You'll need to ask yourself and your team whether or not testing every single permutation is even worth the time and effort. Usually it is not. In my experience it's best to get information on the most common user journeys in your wizard and branch out from there. I would tackle those first from an automation standpoint and then move onto lower risk paths.
I like to use random data in certain low-risk areas that the devs confirm are relatively inconsequential (for example, a true/false radio box) and you can always make sure you are logging output properly to catch bugs.
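One way to keep random data while still being able to reproduce a failure is to derive every random choice from a single seed that you log with the test output. A minimal Ruby sketch (the option lists are hypothetical stand-ins for your select boxes and radio buttons):

```ruby
# Derive all random choices from one logged seed so a failing run can be
# replayed by exporting TEST_SEED with the value printed below.
seed = (ENV["TEST_SEED"] || Random.new_seed).to_i
rng  = Random.new(seed)
puts "TEST_SEED=#{seed}"

select_options = %w[red green blue]  # hypothetical select-box values
radio_options  = ["yes", "no"]       # hypothetical radio buttons

chosen_option = select_options.sample(random: rng)
chosen_radio  = radio_options.sample(random: rng)
puts "picked #{chosen_option} / #{chosen_radio}"

# Replaying with the same seed yields exactly the same first choice:
replay = Random.new(seed)
raise "not reproducible" unless select_options.sample(random: replay) == chosen_option
```

When a test fails, re-running with TEST_SEED set to the printed value walks through exactly the same inputs, which answers the "you can't see the data" objection.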
I'm developing an application in rails with cucumber.
The application includes a workflow that has multiple steps.
Some steps are:
One user imports files (3 different files),
another user makes some checks on the data that was imported,
another user inputs some parameters,
another user applies the parameters to the imported data,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call all the previous scenarios. But the only other idea that I have is to create one very long scenario performing all these steps, and that doesn't make sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing Cucumber in this kind of situation?
Many thanks!
It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two files? If the case is that there will always be three files imported, then you might abstract that step as
given the files are imported
Things that always have to be done may be combined into some generic setup. As the setup never changes, the details may not need to be mentioned explicitly.
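As a sketch, that fixed setup can live in a Background while each workflow stage gets its own scenario (the wording below is only illustrative):

Feature: Parameter workflow

Background:
Given the files are imported

Scenario: Define the parameters
When a user inputs the parameters
Then the parameters are saved

Scenario: Apply the parameters
Given the parameters are defined
When a user applies the parameters to the imported data
Then the imported data reflects the parameters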
My experience, though, is that at the beginning it is hard to separate scenarios, and it is tempting to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at your scenario and see whether it is possible to separate it into two independent scenarios. The next step would be to see if each of these two new scenarios can be divided into two smaller, independent scenarios. It often turns out to be possible.
It is obviously always possible that Cucumber is not the tool you need. It is possible that you would be better off with a unit test framework.
In Cucumber, is it possible to run a Background step for the whole feature? So it doesn't get repeated every scenario?
I am running some tests on a search engine, and I need to pre-seed it with test data. Since this data can take quite a long time to generate and process (I'm using Elasticsearch and I need to build the indices), I'd rather run this background only once for all tests under the same feature.
Is it possible with Cucumber?
Note that I am using MongoDB, so I don't use transactions but truncation, and I believe I have DatabaseCleaner running automatically after each test, that I suppose I'll have to disable (maybe with an #mention?)
EDIT :
Yes I'm using Cucumber with Ruby steps for Rails
EDIT2 : concrete examples
I need to test that my search engine always returns relevant results (e.g. when searching for "buyers" it should return results with "buyer", "buying", "purchase", etc.; this has to do with the ES configuration), and that other contextual information gets updated correctly (e.g. in the sidebar I have categories/filters with the number of hits in parentheses, and I must make sure those numbers get refreshed as the user plays with the filters).
For this I pre-seed the search engine with a dozen results, and I run all those tests against the same inputs. I often have "Examples" clauses that just do something slightly different, but based on the same seeding.
Supposing the search data is a meaningful part of the scenario, something that someone reading the feature should know about, I'd put it in a step rather than hide it in a hook. There is no built-in way of doing what you want to do, so you need to make the step idempotent yourself. The simplest way is to use a global.
In features/step_definitions/search_steps.rb:
$search_data_initialized = false

Given /^there is a foo, a bar and a baz$/ do
  unless $search_data_initialized
    # initialize the search data here (e.g. seed Elasticsearch)
    $search_data_initialized = true
  end
end
In features/search.feature:
Feature: Search
Background:
Given there is a foo, a bar and a baz
Scenario: User searches for "foo"
...
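To make the idempotency explicit, here is the same global-guard pattern in isolation, outside Cucumber; `seed_search_data` is a hypothetical stand-in for the expensive Elasticsearch seeding:

```ruby
# Global-guard pattern: however many scenarios run the Background step,
# the expensive seeding happens exactly once per process.
$search_data_initialized = false
$seed_invocations = 0

def seed_search_data
  $seed_invocations += 1  # stands in for building the Elasticsearch indices
end

def ensure_search_data
  return if $search_data_initialized
  seed_search_data
  $search_data_initialized = true
end

# Simulate the Background step running before three scenarios:
3.times { ensure_search_data }
puts $seed_invocations  # prints 1
```

The trade-off is that the seeded data now lives for the whole test process, which is why you'd also need to keep DatabaseCleaner (or whatever truncation strategy you use) away from those collections.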
There are a number of approaches for doing this sort of thing:
Make the background task really fast.
Perhaps in your case you could put the search data outside of your application and then symlink it into the app in your background step? This is a preferred approach.
Use a unit test tool.
Consider whether you really get any benefit out of having scenarios to 'test' search. If you don't, use a unit test tool, which allows you greater control because your tests are written directly in a programming language.
Hack Cucumber to work in a different way.
I'm not going to go into this, because my answer is all about looking at the alternatives.
For your particular example of testing search there is one more possibility
Don't test at all
Generally search engines are other people's code that we use. They have thousands of unit tests and tens of thousands of happy customers, so what value do your additional tests bring?
I am using Ruby on Rails 3.2.2 and Cucumber with the cucumber-rails gem. I would like to know which Cucumber tags are commonly used throughout an application, or at least what criteria I should consider so as to make tags "efficient"/"useful". More generally, I would like to know how and in which ways I could or should use Cucumber tags.
Tags are most commonly used to select or exclude certain tests from running. Your particular situation will dictate what 'groups' of tests are useful to run or not run for a particular test run, but some common examples could be:
@slow - indicates a test that takes a long time to run. You might want to exclude this from most test runs and only run it in an overnight build, so that developers don't have to wait for it every time.
@wip - indicates that this test exercises unfinished functionality, so it would be expected to fail while the feature is in development (and when it's done, the @wip tag would be removed). This has special significance in Cucumber, which will return a non-zero exit code if any @wip tests actually pass.
@release_x, @sprint_y, @version_z etc. - many teams tag each test with the release/sprint/version that introduced it, so that they can run a minimal suite of tests during development. Generally the same idea as the @wip tag, except that these stay attached to the test, so the team always knows when a particular feature was introduced.
@payments, @search, @seo etc. - basically any logical grouping of tests that isn't already expressed by the organisation of your feature files. Commonly used when a test relates to a cross-cutting concern, or when your project is divided into components along different lines to your feature files.
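Tags stack: a feature-level tag applies to every scenario in the file, and individual scenarios can add their own (the feature below is hypothetical):

@payments
Feature: Checkout

@slow
Scenario: Full card settlement
...

Scenario: Display order summary
...

Running cucumber --tags @payments selects both scenarios; additionally excluding @slow (the exact exclusion syntax varies between Cucumber versions) trims the run to the fast one.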
Tags are also used to fire hooks - bits of code which can run before, after, or 'around' tests with particular tags. Some examples of this are:
@javascript - indicates that a test needs JavaScript support, so the hook can switch from an HTTP-only driver to one with JS support. Capybara goes one step further by automatically switching to a driver named after the tag, if one is found (so you could use e.g. @desktop, @mobile, @tablet drivers).
@logged_in - indicates that the test needs to run in the context of a logged-in user. This sometimes makes sense to express with a tag, although a Background section would be more commonly used.
Additionally, tags can be used purely for informational purposes. I've seen teams tag tests with the related issue number, author, or developer, amongst other things, many of which can be useful (but many of which duplicate information that is easily found in source control, so I'd caution against that).
I'm using SpecFlow to do some BDD-style testing. Some of my features are UI tests, so they use WatiN. Some aren't UI tests, so they don't.
At the moment, I have a single StepDefinitions.cs file, covering all of my features. I have a BeforeScenario step that initializes WatiN. This means that all of my tests start up Internet Explorer, whether they need it or not.
Is there any way in SpecFlow to have a particular feature file associated with a particular set of step definitions? Or am I approaching this from the wrong angle?
There is a simple solution to your problem if you use tags.
First, tag your feature file to indicate that a particular feature needs WatiN, like this:
Feature: Save Proportion Of Sample Pool Required
As an <User>
I want to <Configure size of the Sample required>
so that <I can advise the deployment team of resourcing requirements>.
@WatiN
Scenario: Save valid sample size mid range
Given the user enters 10 as sample size
When the user selects save
Then the value is stored
And then decorate the BeforeScenario binding with an attribute that indicates the tag:
[BeforeScenario("WatiN")]
public void BeforeScenario()
{
...
}
This BeforeScenario method will then only be called for the features that use WatiN.
Currently (in SpecFlow 1.3) step-definitions are global and cannot be scoped to particular features.
This is by design to have the same behavior as Cucumber.
I asked the same question on the cucumber group:
http://groups.google.com/group/cukes/browse_thread/thread/20cd7e1db0a4bdaf/fd668f7346984df9#fd668f7346984df9
The bottom line is that the language defined by all the feature files should also be global (one global behavior of the whole application). Therefore, scoping definitions to features should be avoided. Personally, I am not yet fully convinced about this...
However your problem with starting WatiN only for scenarios that need UI-Integration can be solved in two different ways:
Tags and tagged hooks: you can tag your scenarios (e.g. with @web) and define in a BeforeScenario hook that it should only run for scenarios with a certain tag (e.g. [BeforeScenario("web")]). See the Selenium integration in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/Support/SeleniumSupport.cs
We often completely separate scenarios that are bound to the UI from scenarios that are bound to a programmatic API (e.g. controller, view model, ...) into different projects. We tried to illustrate this in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/tree/master/ASP.NET-MVC/BookShop/ .
Check this out (new feature in SpecFlow 1.4): https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings
I originally assumed that a step file was associated with a particular feature file. Once I realized this was not true, it helped me to improve all my SpecFlow code and feature files. The language of my feature files is now less context-dependent, which has resulted in more reusable step definitions and less code duplication. Now I organize my step files according to general similarities and not according to which feature they are for. As far as I know there is no way to associate a step with a particular feature, but I am not a SpecFlow expert, so don't take my word for it.
If you would still like to associate your step files with particular feature files, just give them similar names. There is no need to force a step to work only for one feature, even if the step code only makes sense for that feature. This is because even if you happen to create a duplicate step for a different feature, it will be detected as an ambiguous match. The behavior for ambiguous matches can be specified in an App.config file. See
http://cloud.github.com/downloads/techtalk/SpecFlow/SpecFlow%20Guide.pdf
for more details about the App.config file. By default, ambiguous matches are detected and reported as an error.
[edit]:
Actually, there is a problem with working this way (having step files associated with feature files in your mind only). The problem comes when you add or modify a .feature file and use the same wording you have used before, and you forget to add a step for it, but you don't notice this because you already created a step for that wording once before, and it was written in a context-sensitive manner. Also, I am no longer convinced of the usefulness of not associating step files with feature files. I don't think most clients would be very good at writing the specification in a context-independent manner. That is not how we normally write, talk, or think.
The solution for this is to implement tags and scoped bindings for each test scenario, depending on whether it relates to the web UI or to controller/core logic in the code.
You can then narrow the scope for each scenario to any of the following Before/After hooks:
BeforeTestRun
BeforeFeature
BeforeScenario
BeforeScenarioBlock
BeforeStep
AfterStep
AfterScenarioBlock
AfterScenario
AfterFeature
AfterTestRun
Also consider using an implementation-agnostic DSL along with implementation-specific step definitions. For example, use
When I search for 'Barbados'
instead of
When I type 'Barbados' in the search field and push the Search button
By implementing multiple step definition assemblies, the same scenario can execute through different interfaces. We use this approach to test UIs, APIs, etc. using the same scenario.