Dependencies between Features - BDD

We're making our first attempt at writing Gherkin specs for a greenfield application, and I'm not sure how to tackle what appear to be interdependent features.
Essentially, we have a feature, CreateADoor, that is actually used as part of two other features, BuildAHouse and BuildAShed.
The CreateADoor feature is relatively complex in terms of validation etc., which is why we have lifted it out as a separate feature (to avoid duplication). The issue is that the results of this feature's scenarios depend on the context they were called from (should my newly built door be on a House or a Shed?).
The only way I can really see to solve this is to get rid of CreateADoor and duplicate its scenarios inside both BuildAHouse and BuildAShed. In this specific situation that would be (just about) bearable, but what about the situation where CreateADoor needs 10 scenarios to spec it out and is used by 10 different features? Having 10 scenarios explode out into 100 doesn't seem good, but I can't see another option at the minute.
Can anyone suggest a different approach that allows us to avoid this explosion of scenarios?

Ideally you should not create these dependencies. If creating a door is part of building a house, then the BuildAHouse feature should create a door as part of its setup, rather than reusing the CreateADoor feature to test creating a door.
This might look like this:
Given I have created a house door
When I create a house
Then I should be able to live in it
and the logic for creating the door should be in the code behind the Given step. This logic might be very different from what actually happens when you create a door in the tests.
If you can't separate things like this, then one thing I have done in the past is to make the code behind the Given step call the other steps from the CreateADoor feature, so that the code is not duplicated but the existing steps are reused. This is not ideal, but pragmatically it is sometimes necessary.
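In cucumber-ruby that might look roughly like the sketch below. The Door model, its attributes, and the reused step text are assumptions for illustration, not your real code.

# features/step_definitions/house_steps.rb (hypothetical names throughout)
Given(/^I have created a house door$/) do
  # Build the door directly through the domain layer as setup,
  # rather than re-running the CreateADoor scenarios.
  @door = Door.create!(style: "house", width_mm: 900)
end

# Pragmatic fallback: reuse an existing CreateADoor step instead of duplicating it.
Given(/^I have created a shed door$/) do
  step 'I create a door for a "shed"'
end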


Multiple steps workflow in cucumber

I'm developing an application in rails with cucumber.
The application includes a workflow that has multiple steps.
Some of the steps are:
One user imports files (3 different files),
Another user makes some checks on the data that was imported,
Another user inputs some parameters,
Another user applies the parameters to the imported data,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call the previous scenarios. But the only other idea I have is to create a very long scenario performing all these steps, and that doesn't make sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing Cucumber in this kind of situation?
Many thanks.
It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two? If there will always be three files imported, then you might abstract that step as:
Given the files are imported
Things that always have to be done can be combined into some generic setup. As the setup never changes, the details may not need to be mentioned explicitly.
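A rough cucumber-ruby sketch of such a setup step; the Importer class and file names are made-up assumptions:

# features/step_definitions/import_steps.rb (names are assumptions)
Given(/^the files are imported$/) do
  # Perform all three imports as one generic piece of setup,
  # so individual scenarios don't have to spell them out.
  %w[customers.csv orders.csv rates.csv].each do |name|
    Importer.import(File.join("features", "fixtures", name))
  end
end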
My experience, though, is that at the beginning it is hard to separate scenarios, and it is tempting to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at each scenario and see whether it can be split into two independent scenarios. It often can, and the same question can then be asked of each of the new, smaller scenarios.
It is obviously always possible that Cucumber is not the tool you need. It is possible that you would be better off with a unit test framework.

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially tests targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but with very simple CRUD operations it hasn't really been necessary so far. I would, however, like to move toward a TDD approach going forward.
I have basically completed phase 1, in which I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement, where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already-written code? I don't need kindergarten-level step-by-step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can have a significant influence on the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects, and then modify the code to fix them. I say attempt, because not everything is reproducible in a cost-effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to test manually (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify the test passes, and repeat. If needed, I'll refactor the code to simplify testing. Pex can also help here.
Pay close attention to pain points, as they highlight areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc. just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than it would to have someone test those paths manually.

Acceptance criteria (and other things) for a BDD story

We have a workflow engine that presents a list of available workflows (I mean workflow definitions, not instances), and the user can click on the "Execute" link next to any workflow to, well, execute a new instance of that workflow. I want to do this "execute a workflow" story (feature?) in the BDD way.
Story: execute a workflow
Scenario: execute a workflow by clicking on execute link in workflow list and nothing goes wrong
Given I am a user with sufficient rights
And I have added a workflow called "wf"
When I click on the execute link next to "wf" in the workflows list
When I view the list of workflow executions
Then the output is:
"""
1 | wf1 | not started
"""
(1st column: item#, 2nd: workflow name, 3rd: state)
I kinda feel this is more like a mess than a nicely cut BDD scenario; I'm especially concerned with the acceptance criteria. My mind is not clear about how exactly I should approach something coarse-grained and user-coupled like "executing a workflow". I mean, when it is an API you are doing, everything is clear, but what if you are describing some behavior which is initiated through (human) user interaction, and whose result is only evident from initiating another use case with complex output (like a list of items)? The criterion for knowing that the workflow is indeed executed is seeing a new item in the list of workflow executions, which is another story in itself. I kinda feel confused here.
Should I talk to the database layer and check for the row that stores the newly created workflow instance, or should I check for the presence of the item that points to the new instance in the list of workflow executions? If the second, then how exactly? Should I check for all columns with correct values in one scenario, or each column in its own scenario?
May I refer you to a post I did quite recently on Acceptance Criteria vs. Scenarios? I think the example might be more illuminating if you use something resembling a specific use of the workflow engine, rather than a generic one. For instance, here's a fake pet shop which I'm using to try out an automation tool. I've then written scenarios around the pet shop, rather than trying to specify generic automation concerns.
If your customers are sometimes in healthcare, for instance, knock up a fake diagnosis tool which uses your engine and write a scenario around that. It might seem like a bit of work to start with, but I've found it pays back for itself very quickly.
Story: A doctor diagnoses black death
Scenario: The doctor starts the diagnosis
Given I am a doctor with rights to use the system
And I've added a workflow called "diagnosis"
When I choose the "diagnosis" workflow
Then it should tell me that it's not started.
This is the benefit you're looking for - that a user gets some information, not that something is stored in a database. As far as possible, a scenario should push for the end value! So maybe it should even say something like:
Story: A doctor diagnoses Black Death
Scenario: The doctor starts the diagnosis
Given I am a doctor with rights to use the system
And I want to diagnose a patient
When I choose "Diagnosis"
Then the system should prompt me to start diagnosing.
Given that all the symptoms match Black Death
When I perform the diagnosis
Then I should be able to diagnose the patient with Black Death.
Any smaller steps which are needed to make this easy and aesthetic are really usability issues. Don't use BDD frameworks to describe usability concerns (though they can step through them, thus giving you your regression tests). Instead, try it manually. BDD isn't a substitute for manual testing; it just helps out a bit.
If you can create a vaguely realistic use of the workflow engine, it'll help you to think of scenarios which you might miss. For instance, I have no idea right now how this workflow can be associated with a particular patient. I find specific, imaginative examples tend to help people visualise other scenarios more than something vague, generic and all-encompassing.
Also, try phrasing it in the same language the business might use, thinking about the business outcomes that you really want. Try not to think about how to implement the scenario - instead, just write it. This will be much, much easier if you actually go and talk to your business people or customers about the scenarios they can think of!
Any complexity needed to make the scenario run then goes into the code, where it's easier to maintain and refactor.
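For example, a business-facing step such as choosing the "diagnosis" workflow can keep all the UI mechanics inside its step definition. A rough cucumber-ruby/Capybara sketch; the route helper, table layout and link text are assumptions about the app, not facts:

# features/step_definitions/workflow_steps.rb (assumed names)
When(/^I choose the "([^"]*)" workflow$/) do |name|
  visit workflows_path              # assumed Rails route helper
  within("tr", text: name) do       # assumed table layout
    click_link "Execute"
  end
end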
As an additional benefit, by identifying particular customers with particular needs, you can help your customer avoid the trap of putting every possible feature into the workflow engine "in case someone needs it". By talking through real scenarios with real people, they'll be able to help identify who needs which features the most, reducing scope and helping you to deliver as much value as possible.

Things you look for when trying to understand new software

I wonder what sort of things you look for when you start working on an existing, but new to you, system. Let's say that the system is quite big (whatever that means to you).
Some of the things that were identified are:
Where is a particular subroutine or procedure invoked?
What are the arguments, results and predicates of a particular function?
How does the flow of control reach a particular location?
Where is a particular variable set, used or queried?
Where is a particular variable declared?
Where is a particular data object accessed, i.e. created, read, updated or deleted?
What are the inputs and outputs of a particular module?
But if you look for something more specific, or any of the above questions is particularly important to you, please share it with us :)
I'm particularly interested in things that could be extracted via dynamic analysis/execution.
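For instance, in Ruby even something as small as a TracePoint that logs the call stack whenever a method of interest is entered would already answer several of the questions above. A minimal sketch; the Order#total target and the driver call are placeholders:

# Minimal dynamic-analysis sketch (placeholder names)
trace = TracePoint.new(:call) do |tp|
  next unless tp.defined_class == Order && tp.method_id == :total
  puts "Order#total entered; call stack at that moment:"
  puts caller.first(5)
end
trace.enable
exercise_the_interesting_use_case   # placeholder for driving the system
trace.disable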
I like to use a "use case" approach:
First, I ask myself "what's this software's purpose?": I try to identify how users are going to interact with the application;
Once I have some use cases, I try to understand which objects are most involved and how they interact with other objects.
Once I've done this, I draw a UML-type diagram that describes what I've just learned, for further reference. What happens after that depends on the task I've been assigned, i.e. modify the code, document the code, etc.
There is the question of what motivation I have for learning the new system:
Bug fix/minor enhancement - In this case, I may focus solely on the portion of the system that performs the specific function that needs to be altered. This is a way to break down a huge system, but also a way to identify whether the issue is something I can fix or something I have to hand to the off-the-shelf company whose software we are using; e.g. a CRM, CMS, or ERP system can be a customized off-the-shelf system, so there are many pieces to it.
Project work - This would be the other case, and is where I'd probably try to build myself a view from 30,000 feet or so, to know what the high-level components are and which areas of the system the project impacts. An example of this is where I'd join a company and work off an existing code base, but without the luxury of the narrow focus of the previous case. Part of that view is to look for any patterns in the code in terms of naming conventions, project structure, etc., as this may be useful once I start changing code in the system. I'd probably do some tracing through the system and try to see where the uglier parts of the code are. By uglier I mean those parts that are kludge-like and may have some spaghetti code, because they were rushed when first written and are now being reworked heavily.
To my mind, another way to view this is the question of whether I'm going to spend days or weeks wrapping my head around a system, as in the second case, or whether it should, optimistically, take only a few hours to get my footing and make the necessary changes.

How do you plan your Rails app?

I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question, say you also start with Cucumber: at which point would you write unit tests? Before satisfying the specifications?
I've got a 6-step process.
I prefer to work out the model relationships and uses before doing anything. Generally I try to define models as units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations).
I usually iterate over my design a few times because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing(subclassing) models to clarify the design.
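As an illustration only, the output of this design pass for one made-up resource might look like the sketch below; the attributes, associations and validations are assumptions, not a recommendation.

# app/models/post.rb -- hypothetical resource from the design pass
class Post < ActiveRecord::Base
  belongs_to :user                                   # associations it may need
  has_many   :comments, dependent: :destroy

  validates :title, presence: true                   # consistency rules
  validates :body,  length: { minimum: 10 }

  # an operation the information will likely need
  def publish!
    self.published_at = Time.current
    save!
  end
end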
I write the migration and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented. It's not always obvious how to implement things until giving it some moderate thought.
Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the backend is sane.
This is when I piece together the control flow. What happens on a successful request? An unsuccessful request? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for that I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
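A rough sketch of those success/failure branches in one such controller; the model, route and current_user helper are assumptions:

# app/controllers/posts_controller.rb -- hypothetical controller
class PostsController < ApplicationController
  def create
    @post = current_user.posts.build(params[:post])  # assumed authentication helper
    if @post.save
      redirect_to @post, notice: "Post created."     # successful request
    else
      render :new                                    # unsuccessful: re-show the form
    end
  end
end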
Lastly I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out common parts, then write the views.
Polish the UI. I create the CSS and start to replace links with remote calls, or even just JavaScript where appropriate.
I may interleave steps 2 and 3. I find it's very easy to write a test just after I write the code to be tested. Especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision, and propagate those changes through my steps.
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to see it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following :
Write a mind map, but keep it simple, with just the major ideas of the project.
Start writing tests and code at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.
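For illustration, one such write-a-test/make-it-pass cycle could be as small as the following; the Project model and RSpec setup are made up for the example:

# spec/models/project_spec.rb -- write the expectation first and watch it fail
require "rails_helper"

RSpec.describe Project do
  it "is not valid without a name" do
    expect(Project.new(name: nil)).not_to be_valid
  end
end

# app/models/project.rb -- the minimal change that makes the spec pass
class Project < ActiveRecord::Base
  validates :name, presence: true
end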
