Multi-step workflow in Cucumber - ruby-on-rails

I'm developing an application in Rails with Cucumber.
The application includes a workflow that has multiple steps.
Some of the steps are:
a user imports files (3 different files),
another user makes some checks on the data that was imported,
another user inputs some parameters,
another user applies the parameters to the imported data,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call all the previous scenarios. But the only other idea that I have is to create a very long scenario performing all these steps, and that doesn't make sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing Cucumber in this kind of situation?
Many thanks

It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two files? If there will always be three files imported, then you might abstract that step as
Given the files are imported
Things that always have to be done may be combined into some generic setup. As the setup never changes, it may not be necessary to mention the details explicitly.
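A minimal sketch of such a step definition in Ruby, assuming a hypothetical Importer class and hypothetical fixture file names:

# features/step_definitions/import_steps.rb
# One high-level Given performs the whole import setup.
# Importer and the file names below are hypothetical.
Given("the files are imported") do
  %w[customers.csv orders.csv products.csv].each do |file|
    Importer.import(Rails.root.join("features/fixtures", file))
  end
end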
My experience, though, is that at the beginning it is hard to separate scenarios, and it is tempting to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at your scenario and see whether it can be separated into two independent scenarios. The next step would be to see whether these two new scenarios can each be divided into two smaller, independent scenarios. It often turns out that they can.
It is, of course, always possible that Cucumber is not the tool you need, and that you would be better off with a unit test framework.

Related

Dependencies between Features

We're having our first attempt at writing some Gherkin specs for a greenfield application, and I'm not sure how to tackle what appear to be inter-dependent features.
Essentially, we have a feature CreateADoor that is actually used as part of two other features BuildAHouse and BuildAShed.
The CreateADoor feature is relatively complex in terms of validation etc., which is why we have lifted it out as a separate feature (to avoid duplication). The issue is that the results of the scenarios for this feature are dependent on the context they were called from (should my newly built door be on a House or a Shed?).
The only way I can really see to solve this is to get rid of CreateADoor and have its scenarios duplicated inside both BuildAHouse and BuildAShed. In this specific situation that would be (just about) bearable, but what about the situation where CreateADoor requires 10 scenarios to spec it out and is used by 10 different features? Having 10 scenarios explode out into 100 doesn't seem good, but I can't see another option at the minute.
Can anyone suggest a different approach that allows us to avoid this explosion of scenarios?
Ideally you should not create these dependencies. If creating a door is part of building a house, then the BuildAHouse feature should create a door as part of its setup, rather than reusing the CreateADoor feature to test creating a door.
This might look like this:
Given I have created a house door
When I create a house
Then I should be able to live in it
and the logic for creating the door should be in the code behind the Given step. This logic might be very different from what actually happens in the scenarios that test creating a door.
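For example, a minimal sketch in Ruby that sets the door up directly through the model layer, assuming a hypothetical Door ActiveRecord model:

# Sketch: build the door directly, bypassing the door-creation workflow.
# Door and its attributes are hypothetical.
Given("I have created a house door") do
  @door = Door.create!(style: "house", material: "oak")
end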
If you can't separate things like this, then one thing I have done in the past is to make the code behind the Given step call the other steps from the CreateADoor feature, so that the code is not duplicated but the existing steps are reused. This is not ideal, but pragmatically it is sometimes necessary.
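In Cucumber-Ruby that reuse can be done with the steps helper, which invokes other steps by their Gherkin text (note that calling steps from steps is discouraged and has been deprecated in recent Cucumber versions); the step wording below is hypothetical:

# Sketch: reuse existing CreateADoor steps from inside the setup step.
Given("I have created a house door") do
  steps %(
    Given I am designing a door
    When I choose the "house" door type
    And I save the door
  )
end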

BDD - how to deal with specification of cross-cutting features?

In BDD, how would you deal with the specification of cross-cutting features?
Consider for example an application that allows working on a document. There are features like editing text or adding images to the document. Now there's an additional feature, "Changelog", that should provide the ability to investigate any change that has previously been made to a document.
Now here's my dilemma. Either the Changelog gets its own spec, but then it's a kind of never-ending feature: whenever a new feature for editing the document is added, I also need to add something to the Changelog feature. Or the Changelog is specified in all the other features' specs, by always sketching out which kind of entry should appear in the changelog after a certain operation. In this case I need to foresee the changelog feature when defining other features, and features that have already been defined, and possibly implemented, need refinement for the changelog feature.
Any practical advice on how to solve this dilemma?
I handle things like this by adding an extra assertion step to any scenario that is relevant.
Given some set up
When I use a feature
Then something happens
And it is reflected in the changelog as a 'something happened' entry
The reason I would do it this way, rather than having a separate spec, is that it sounds like the two actions are part of the same feature; it wouldn't make sense to me to split them out into separate scenarios. Any existing scenarios that would be broken by the change use the same step definition, so they will be updated when you make this test pass.
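A sketch of that shared assertion step in Ruby, assuming a hypothetical ChangelogEntry model and the RSpec matchers that cucumber-rails sets up:

# Sketch: one definition backs the changelog assertion in every scenario.
# ChangelogEntry and its description attribute are hypothetical.
Then("it is reflected in the changelog as a {string} entry") do |description|
  latest = ChangelogEntry.order(:created_at).last
  expect(latest.description).to eq(description)
end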
The downside of doing it this way is that the tests you would want to run when making changes to the changelog functionality are distributed throughout your test suite. I would remedy this by tagging the relevant subset of the tests with @changelog and creating a test run that runs only those tests.

Managing the test cases , scenarios and feature files with specflow

I have a number of different manual test cases which need to be automated with SpecFlow.
There are multiple test cases and multiple scenarios, so will there be multiple feature files?
We are following a sprint system; each sprint has 100+ test cases which are going to be automated.
What is the best practice for managing the test cases and scenarios in feature files? There's no point in creating the same set of functions every time for different test cases.
You would manage this the same way you would manage any other code files. Make changes; if the changes conflict with others' changes, then merge the changes before you check in.
The best way to avoid merge issues is to try to work in different areas. Create many feature files, as then multiple people can work on different features at one time and you won't have conflicts.
Communication between the testers is important in avoiding conflicts as well, and in the case of scenarios in SpecFlow it is important for ensuring that you use consistent step names. Also, checking in often, even after each scenario has been created, will minimise the number of merge issues.
EDIT
Based on your edited question: in SpecFlow all steps are global, so if Feature1 has a scenario with a step Given a user 'Bob' is logged in and Feature32 also has a scenario with the step Given a user 'Tom' is logged in, then they will both share the same step source code, and so the same function will be reused.
As long as you write your steps in a consistent manner (i.e. use the same text) then you should get excellent reuse of functions across all of your hundreds of features and scenarios.
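For example, these two lines from different feature files both match a single parameterised step definition:

Given a user 'Bob' is logged in
Given a user 'Tom' is logged in

whereas a differently worded line such as Given 'Tom' has signed in would force a second, duplicate definition.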

Feature-scoped step definitions with SpecFlow?

I'm using SpecFlow to do some BDD-style testing. Some of my features are UI tests, so they use WatiN. Some aren't UI tests, so they don't.
At the moment, I have a single StepDefinitions.cs file, covering all of my features. I have a BeforeScenario step that initializes WatiN. This means that all of my tests start up Internet Explorer, whether they need it or not.
Is there any way in SpecFlow to have a particular feature file associated with a particular set of step definitions? Or am I approaching this from the wrong angle?
There is a simple solution to your problem if you use tags.
First, tag your feature file to indicate that a particular feature needs WatiN, like this:
Feature: Save Proportion Of Sample Pool Required
As a <User>
I want to <Configure size of the Sample required>
so that <I can advise the deployment team of resourcing requirements>.
@WatiN
Scenario: Save valid sample size mid range
Given the user enters 10 as sample size
When the user selects save
Then the value is stored
And then decorate the BeforeScenario binding with an attribute that indicates the tag:
[BeforeScenario("WatiN")]
public void BeforeScenario()
{
...
}
This BeforeScenario method will then only be called for the features that use WatiN.
Currently (in SpecFlow 1.3) step-definitions are global and cannot be scoped to particular features.
This is by design to have the same behavior as Cucumber.
I asked the same question on the cucumber group:
http://groups.google.com/group/cukes/browse_thread/thread/20cd7e1db0a4bdaf/fd668f7346984df9#fd668f7346984df9
The bottom line is that the language defined by all the feature files should also be global (one global behavior for the whole application); therefore scoping definitions to features should be avoided. Personally I am not yet fully convinced about this ...
However your problem with starting WatiN only for scenarios that need UI-Integration can be solved in two different ways:
Tags and tagged hooks: You can tag your scenarios (i.e. with @web) and specify that a BeforeScenario hook should only run for scenarios with a certain tag (i.e. [BeforeScenario("web")]). See the Selenium integration in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/Support/SeleniumSupport.cs
We often completely separate scenarios that are bound to the UI from scenarios that are bound to a programmatic API (i.e. controller, view-model, ...) into different projects. We tried to illustrate this in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/tree/master/ASP.NET-MVC/BookShop/ .
Check this out (new feature in SpecFlow 1.4): https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings
I originally assumed that a step file was associated with a particular feature file. Once I realized this was not true, it helped me to improve all my SpecFlow code and feature files. The language of my feature files is now less context dependent, which has resulted in more reusable step definitions and less code duplication. Now I organize my step files according to general similarities and not according to which feature they are for. As far as I know there is no way to associate a step with a particular feature, but I am not a SpecFlow expert, so don't take my word for it.
If you would still like to associate your step files with a particular feature file, just give them similar names. There is no need to force a step to work only for that feature, even if the step code only makes sense there: if you happen to create a duplicate step for a different feature, it will be detected as an ambiguous match. The behavior for ambiguous matches can be specified in an App.config file; see
http://cloud.github.com/downloads/techtalk/SpecFlow/SpecFlow%20Guide.pdf
for more details about the App.config file. By default, ambiguous matches are detected and reported as an error.
[edit]:
Actually, there is a problem with working this way (having step files associated with feature files in your mind only). The problem comes when you add or modify a .feature file, reuse wording you have used before, and forget to add a step for it, but you don't notice because you already created a step for that wording once before, and it was written in a context-sensitive manner. Also, I am no longer convinced of the usefulness of not associating step files with feature files. I don't think most clients would be very good at writing specifications in a context-independent manner; that is not how we normally write, talk, or think.
A solution for this is to implement tags and scoped bindings, tagging each test scenario as related to the web UI or to controller/core logic in code.
You can then scope each scenario to any of the Before/After hooks listed below:
BeforeTestRun
BeforeFeature
BeforeScenario
BeforeScenarioBlock
BeforeStep
AfterStep
AfterScenarioBlock
AfterScenario
AfterFeature
AfterTestRun
Also consider using an implementation-agnostic DSL along with implementation-specific step definitions. For example, use
When I search for 'Barbados'
instead of
When I type 'Barbados' in the search field and push the Search button
By implementing multiple step definition assemblies, the same Scenario can execute through different interfaces. We use this approach to test UIs, APIs, etc. using the same Scenario.
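The same idea in Cucumber/Ruby terms (SpecFlow users would use separate step-definition projects): two interchangeable implementations of one step, where SearchPage and ApiClient are hypothetical helpers and only one of the two files is loaded per run.

# features/step_definitions/ui/search_steps.rb -- loaded for the UI run
When("I search for {string}") do |term|
  SearchPage.new.search_for(term)        # drives the browser
end

# features/step_definitions/api/search_steps.rb -- loaded for the API run
When("I search for {string}") do |term|
  @response = ApiClient.new.search(term) # calls the HTTP API
end

Selecting an implementation is then a matter of which require path you pass, e.g. cucumber -r features/step_definitions/ui.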

How do you plan your Rails app?

I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question, say you also start with Cucumber, at which point would you write Unit tests? Before satisfying the specifications?
I've got a 6-step process.
1. I prefer to work out the model relationships and uses before doing anything. Generally I try to define models as units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations). A sketch of what this step produces follows these steps.
I usually iterate over my design a few times, because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing (subclassing) models to clarify the design.
2. I write the migrations and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented; it's not always obvious how to implement things until you've given them some moderate thought.
3. Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the backend is sane.
4. This is when I piece together the control flow. What happens on a successful request? An unsuccessful request? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for those I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
5. I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out the common parts, then write the views.
6. Polish the UI. I create the CSS, and start to replace links with remote calls, or even plain JavaScript where appropriate.
I may interleave steps 2 and 3; I find it's very easy to write a test just after I write the code to be tested, especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision and propagate those changes through my steps.
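As an illustration of step 1's output, a minimal sketch with hypothetical User and Post resources:

# Hypothetical resources with attributes, associations, methods,
# and consistency rules worked out up front.
class User < ActiveRecord::Base
  has_many :posts                                      # potential association
  validates :email, presence: true, uniqueness: true   # consistency rules
end

class Post < ActiveRecord::Base
  belongs_to :user
  validates :title, :body, presence: true

  def publish!                                         # a likely operation
    update!(published_at: Time.current)
  end
end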
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to see it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following:
Write a mind map, but keep it simple, with just the major ideas of the project.
Start writing tests and coding at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.
