BDD - how to deal with specification of cross-cutting features?

In BDD, how would you deal with the specification of cross cutting features?
Consider for example an application that allows working on a document. There are features like editing text or adding images to the document. Now there's an additional feature "Changelog" that should provide the ability to investigate any change that has been done to a document before.
Now here's my dilemma: either the "Changelog" gets its own spec, but then it's a kind of never-ending feature: whenever a new feature for editing the document is added, I also need to add something to the "Changelog" feature. Or the "Changelog" is specified in all the other features' specs, by always sketching out which kind of entry should appear in the changelog after a certain operation. In that case I need to foresee the changelog feature when defining other features, and features that have already been defined and possibly implemented need refinement for the changelog feature.
Any practical advice how to solve this dilemma?

I handle things like this by adding an extra assert step on any scenario that is relevant.
Given some set up
When I use a feature
Then something happens
And it is reflected in the changelog as a 'something happened' entry
The reason I would do it this way, rather than having a separate spec, is that it sounds like the two actions are part of the same feature. It wouldn't make sense to me to split them out into separate scenarios. Any existing scenarios that would be broken by a change will be using the same step definition, so they will be updated when you make this test pass.
The downside of doing it this way is that the relevant tests that you would want to run when making changes to the changelog functionality are distributed through your test suite. I would remedy this by tagging a relevant subset of the tests with @changelog and creating a test run that runs only these tests.
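For example, a tagged scenario might look like this (the tag name and wording are illustrative; with Cucumber you could then run just this subset via cucumber --tags @changelog):

@changelog
Scenario: Adding an image is recorded in the changelog
  Given an existing document
  When I add an image to the document
  Then the image appears in the document
  And the changelog contains an 'image added' entry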


Multiple steps workflow in Cucumber

I'm developing an application in Rails with Cucumber.
The application includes a workflow that has multiple steps.
Some steps are:
A user imports files (3 different files),
Another user makes some checks on the data that was imported,
Another user inputs some parameters,
Another user applies the parameters to the data that was imported,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call all the previous scenarios. But the only other idea I have is to create one very long scenario performing all these steps, and that makes no sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing Cucumber in this kind of situation?
Many thanks
It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two files? If there will always be three files imported, then you might abstract that step as
Given the files are imported
Things that always have to be done may be combined into some generic setup. As the setup never changes, it may not be necessary to mention the details explicitly.
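In Gherkin, this kind of generic setup maps naturally onto a Background section, which runs before every scenario in the feature. A minimal sketch (the steps are illustrative):

Background:
  Given the three files are imported

Scenario: Apply the parameters to the imported data
  Given the parameters are defined
  When the user applies the parameters
  Then the imported data reflects the parameters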
My experience, though, is that at the beginning it is hard to separate scenarios, and people try to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at your scenario and see whether it can be separated into two independent scenarios. The next step would be to see if these two new scenarios can each be divided into two smaller, independent scenarios. It often turns out that they can.
It is obviously always possible that Cucumber is not the tool you need. It is possible that you would be better off with a unit test framework.

Dependencies between Features

We're making our first attempt at writing some Gherkin specs for a greenfield application, and I'm not sure how to tackle what appear to be interdependent features.
Essentially, we have a feature CreateADoor that is actually used as part of two other features BuildAHouse and BuildAShed.
The CreateADoor feature is relatively complex in terms of validation etc., which is why we have lifted it out as a separate feature (to avoid duplication). The issue is that the results of scenarios for this feature are dependent on the context they were called from (should my newly built door be on a House or a Shed?).
The only way I can really see to solve this is to get rid of CreateADoor and have its scenarios duplicated inside both BuildAHouse and BuildAShed. In this specific situation that would be (just about) bearable, but what about the situation where CreateADoor requires 10 scenarios to spec it out and is used by 10 different features? Having 10 scenarios explode out into 100 doesn't seem good, but I can't see another option at the minute.
Can anyone suggest a different approach that allows us to avoid this explosion of scenarios?
Ideally you should not create these dependencies. If creating a door is part of building a house, then the BuildAHouse feature should create a door as part of its setup instead of reusing the CreateADoor feature to test creating a door.
This might look like this:
Given I have created a house door
When I create a house
Then I should be able to live in it
and the logic for creating the door should be in the code behind the Given step. This logic might be very different from what actually happens when you create a door in the tests.
If you can't separate things like this then one thing I have done in the past is to make the code behind the Given step call the other steps from the CreateADoor feature so that the code is not duplicated, but the existing steps are reused. This is not ideal, but pragmatically this is sometimes necessary.
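As a minimal sketch of the first option, assuming SpecFlow-style bindings and a hypothetical House/Door domain model:

using TechTalk.SpecFlow;

[Binding]
public class BuildAHouseSteps
{
    // House, Door, AddDoor and CreateDefault are hypothetical domain types.
    private House _house;

    [Given(@"I have created a house door")]
    public void GivenIHaveCreatedAHouseDoor()
    {
        // Build the door directly through the domain layer; there is no
        // need to drive the door-creation flow that CreateADoor covers.
        _house = new House();
        _house.AddDoor(Door.CreateDefault());
    }
}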

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially tests targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but tests haven't really been necessary so far, with very simple CRUD operations; however, I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already-written code? I don't need kindergarten-level step-by-step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can have a significant influence on the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects, and then modify the code to fix them. I say attempt, because not everything is reproducible in a cost-effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but is not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to test manually (like an API).
For any new features you add, first write the test (as if the class under test were an API), verify that the test fails, and implement the feature to satisfy the test. Repeat. When you are done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify the test passes, and repeat. If needed, I'll refactor the code to simplify testing. Pex can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
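As a minimal sketch of that refactoring (the repository interface, Course model and controller here are hypothetical, not taken from the original application):

using System.Web.Mvc;

public class Course
{
    public int Id { get; set; }
}

// Hypothetical abstraction over data access so the controller can be
// tested without a configured database.
public interface ICourseRepository
{
    Course FindById(int id);
}

public class CoursesController : Controller
{
    private readonly ICourseRepository _repository;

    public CoursesController(ICourseRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        var course = _repository.FindById(id);
        if (course == null)
            return HttpNotFound();
        return View(course);
    }
}

In a test, a hand-rolled stub or a mocking framework can stand in for ICourseRepository to simulate business conditions, such as a missing course, without any database.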
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than to have someone test the feature manually.

How to ensure I am testing everything, that I have all the features, and only those features, for old code?

We are running a pretty big website; we have some critical legacy code there that we want to have well covered.
At the same time we would like to have a report of the features we are currently supporting and covered. And also we want to be sure we really cover every possible corner case. Some code paths are critical and will need many more tests even after achieving 100% coverage.
As we are already using rspec and rspec has "feature" and "scenario" keywords, we tried to make a list using rspec rather than going for cucumber but I think this question can be applied to any testing tool.
We want something like this:
feature "each advertisement will be shown a specified % of impressions"
scenario "As ..."
This feature is minimal from the point of view of managers but huge in the code. It involves a backend tool, a periodic task, logic in the models, and views in the back end and the front end.
We tried to divide it like this:
feature "each creative will be shown a specified % of impressions"
context "configuration"
context "display"
scenario "..."
context "models"
it "should ..."
context "frontend"
context "display"
scenario "..."
context "models"
it "should ..."
Configuration takes place in another tool; display would contain integration tests, and models would contain unit tests.
I repeat myself, but the idea is to ensure that the feature is really finished (including building the configuration tool) and 100% tested.
But looking at this file, it contains neither pure integration tests nor pure unit tests, and it does not even belong to any particular project.
There should definitely be a better way of managing this.
Any experiences, resources, or ideas you can share to guide us?
The scenario you're describing is a huge reason why BDD is so popular. It forces you to write code in a way that's easy to test. Having said that, you're obviously not going to go back and rewrite the entire legacy application. There are a few things you should consider though:
As you go through each section of the application, you should ask yourself 'Will it be harder to refactor than to write tests for this?'. Sometimes refactoring before writing tests just cannot be avoided.
Testing isn't about 100% coverage, it's about 100% confidence. As you mentioned, you plan on writing more tests even when you have 100% coverage. This is because you're going for confidence. Once you're confident in a piece of code, move on. You can always come back to it at a later time.
From my experience, Cucumber is easier for tests that cover a large portion of the application. I think the reason for this is that writing out the tests in plain English makes you think of things you wouldn't have otherwise. It also allows you to focus on the behavior instead of the code and can make refactoring a less daunting task (see the Gherkin sketch below).
You don't really get much out of adding tests to existing code if you never touch that code again. Start by testing the code you want to make changes to (i.e. refactor) first.
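For example, the impressions feature above might read like this in Gherkin (the numbers and wording are purely illustrative):

Feature: Each advertisement is shown a specified % of impressions

Scenario: Impressions are split according to the configured percentages
  Given a campaign with two creatives configured at 70% and 30%
  When 1000 impressions are served
  Then roughly 700 impressions show the first creative
  And roughly 300 impressions show the second creative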
I also recommend the book Rails Test Prescriptions, specifically one of the last chapters called "Testing a Legacy Application".

Feature-scoped step definitions with SpecFlow?

I'm using SpecFlow to do some BDD-style testing. Some of my features are UI tests, so they use WatiN. Some aren't UI tests, so they don't.
At the moment, I have a single StepDefinitions.cs file, covering all of my features. I have a BeforeScenario step that initializes WatiN. This means that all of my tests start up Internet Explorer, whether they need it or not.
Is there any way in SpecFlow to have a particular feature file associated with a particular set of step definitions? Or am I approaching this from the wrong angle?
There is a simple solution to your problem if you use tags.
First, tag your feature file to indicate that a particular feature needs WatiN, like this:
Feature: Save Proportion Of Sample Pool Required
As a <User>
I want to <Configure size of the Sample required>
so that <I can advise the deployment team of resourcing requirements>.
@WatiN
Scenario: Save valid sample size mid range
Given the user enters 10 as sample size
When the user selects save
Then the value is stored
And then decorate the BeforeScenario binding with an attribute that indicates the tag:
[BeforeScenario("WatiN")]
public void BeforeScenario()
{
...
}
This BeforeScenario method will then only be called for the features that use WatiN.
Currently (in SpecFlow 1.3) step-definitions are global and cannot be scoped to particular features.
This is by design to have the same behavior as Cucumber.
I asked the same question on the cucumber group:
http://groups.google.com/group/cukes/browse_thread/thread/20cd7e1db0a4bdaf/fd668f7346984df9#fd668f7346984df9
The bottom line is that the language defined by all the feature files should also be global (one global behavior of the whole application). Therefore scoping definitions to features should be avoided. Personally, I am not yet fully convinced about this ...
However, your problem of starting WatiN only for scenarios that need UI integration can be solved in two different ways:
Tags and tagged hooks: you can tag your scenarios (e.g. with @web) and define a BeforeScenario hook that should only run for scenarios with a certain tag (e.g. [BeforeScenario("web")]). See the Selenium integration in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/Support/SeleniumSupport.cs
We often completely separate scenarios that are bound to the UI from scenarios that are bound to a programmatic API (e.g. controller, view-model, ...) into different projects. We tried to illustrate this in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/tree/master/ASP.NET-MVC/BookShop/ .
Check this out (new feature in SpecFlow 1.4): https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings
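In later SpecFlow versions the scoping attribute is spelled [Scope] (the exact name in 1.4 may differ); a minimal sketch, with an illustrative step:

using TechTalk.SpecFlow;

[Binding]
[Scope(Tag = "web")] // every binding in this class applies only to @web scenarios
public class WebOnlySteps
{
    [When(@"the user selects save")]
    public void WhenTheUserSelectsSave()
    {
        // Drive the browser here; this binding never matches non-@web scenarios.
    }
}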
I originally assumed that a step file was associated with a particular feature file. Once I realized this was not true, it helped me to improve all my SpecFlow code and feature files. The language of my feature files is now less context dependent, which has resulted in more reusable step definitions and less code duplication. Now I organize my step files according to general similarities and not according to which feature they are for. As far as I know there is no way to associate a step with a particular feature, but I am not a SpecFlow expert, so don't take my word for it.
If you would still like to associate your step files with particular feature files, just give them similar names. There is no need to force a step to work only for one feature, even if the step code only makes sense there. This is because even if you happen to create a duplicate step for a different feature, it will be detected as an ambiguous match. The behavior for ambiguous matches can be specified in an App.config file. See
http://cloud.github.com/downloads/techtalk/SpecFlow/SpecFlow%20Guide.pdf
for more details about the App.config file. By default, ambiguous matches are detected and reported as an error.
[edit]:
Actually, there is a problem with working this way (having step files associated with feature files in your mind only). The problem comes when you add or modify a .feature file using the same wording you have used before and forget to add a step for it, but you don't notice this because you already created a step for that wording once before, written in a context-sensitive manner. Also, I am no longer convinced of the usefulness of not associating step files with feature files. I don't think most clients would be very good at writing the specification in a context-independent manner. That is not how we normally write, talk, or think.
The solution for this is to implement tags and scoped bindings, tagging each test scenario according to whether it relates to the web UI or to controller/core logic in code.
You can then drill down the scope for each scenario to any of the Before/After execution points listed below (see the sketch after the list):
BeforeTestRun
BeforeFeature
BeforeScenario
BeforeScenarioBlock
BeforeStep
AfterStep
AfterScenarioBlock
AfterScenario
AfterFeature
AfterTestRun
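A minimal sketch of tagged hooks at a few of these execution points (the setup actions are illustrative):

using TechTalk.SpecFlow;

[Binding]
public class Hooks
{
    // Test-run hooks must be static.
    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // One-time setup for the whole run, e.g. starting a test server.
    }

    [BeforeScenario("web")]
    public void BeforeWebScenario()
    {
        // Runs only for scenarios tagged @web: start the browser here.
    }

    [AfterScenario("web")]
    public void AfterWebScenario()
    {
        // Tear down the browser only when it was started.
    }
}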
Also consider using an implementation-agnostic DSL along with implementation-specific step definitions. For example, use
When I search for 'Barbados'
instead of
When I type 'Barbados' in the search field and push the Search button
By implementing multiple step-definition assemblies, the same scenario can execute through different interfaces. We use this approach to test UIs, APIs, etc. using the same scenario.
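A minimal sketch of the idea, with hypothetical class names; each class would be compiled into a different step-definition assembly, and a given test project references only one of them, so the identically worded steps never become ambiguous:

using TechTalk.SpecFlow;

// Lives in the UI test assembly.
[Binding]
public class SearchStepsUi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // Drive the browser: type into the search field and click Search.
    }
}

// Lives in the API test assembly.
[Binding]
public class SearchStepsApi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // Call the search endpoint directly with the term.
    }
}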
