How specific should my acceptance test example be? - bdd

I'm new to Gherkin / ATDD / BDD. I'm drafting the following acceptance test:
Given a user is waiting for an operation to complete
And the operation is <percent>% complete
When <threshold> seconds elapse
Then a progress indicator should be displayed
And the progress indicator should display "<percent>%"
Is this specific enough or should I modify the Given clause to represent a more concrete example (thinking in SBE terms), for instance by citing a specific persona instead of just "user" or citing the exact "operation" that is in progress (e.g.: fetching customer list)?
Thanks,
Tony

Progress bars are aesthetics.
The only real way to test an aesthetic is to show it to people and see what they think. A/B testing is really good for this. BDD isn't particularly well-suited to aesthetics, because aesthetics are not really about the desired behaviour of the system, but about the desired behaviour of the users.
We're still learning how to program people effectively. Until then, test aesthetics with humans, not scripts.
If there's some algorithm that lends itself to an aspect of behaviour of the progress bar then sure, that would be worth testing... but as others have said, that's something best left for class-level BDD, where the examples are tied more directly to the code.
At that level, you can just put the "Given, When, Then" statements in comments and it's good enough. Class-level steps aren't reused in the same way as system-level steps are, so making them into reusable abstractions isn't as important as making them easy to change. Stick with J/N/WhateverUnit and mock the rest out.
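For instance, a class-level test for the progress indicator might look something like the sketch below. Python's unittest and unittest.mock are used purely for illustration, and the ProgressTracker class is an assumption invented for this sketch, not anything from the original question:
# Hypothetical class under test, invented purely for this sketch.
import unittest
from unittest.mock import Mock

class ProgressTracker:
    """Decides when, and with what text, a progress indicator is shown."""
    def __init__(self, display, threshold_seconds):
        self.display = display
        self.threshold_seconds = threshold_seconds

    def tick(self, elapsed_seconds, percent_complete):
        if elapsed_seconds >= self.threshold_seconds:
            self.display.show(f"{percent_complete}%")

class ProgressTrackerTest(unittest.TestCase):
    def test_indicator_shown_once_threshold_elapses(self):
        # Given an operation that is 40% complete
        display = Mock()
        tracker = ProgressTracker(display, threshold_seconds=3)
        # When the threshold number of seconds elapses
        tracker.tick(elapsed_seconds=3, percent_complete=40)
        # Then the progress indicator displays "40%"
        display.show.assert_called_once_with("40%")

if __name__ == "__main__":
    unittest.main()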

BDD
Behaviour Driven Development is all about conversations between the development team and the business. The feature files, and the scenarios within them, should always relate to a specific business need, a feature or an ability, described so that both the business and the development team are completely clear on exactly what is being outlined.
As an example:
Feature: Rewards for frequent flyers
As a frequent flyer
I should receive points to my account
So that I am more likely to book with BDD Airlines again in the future
Scenario: I get flyer miles
Given I book a flight
And this flight earns 100 miles
When I land
Then my account should have 100 miles added to it
The question is, does this outline the entire problem or is there more information needed?
Would the development team be able to build something using this conversation (since you are asking about SBE)?
Would this be better?:
Feature: Rewards for frequent flyers
As a frequent flyer
I should receive points to my account
So that I am more likely to book with BDD Airlines again in the future
Scenario: Passenger gets flyer miles
Given the account number 12341234 has a ticket for the "LGW-MAN" flight
And this route earns 100 miles
And the ticket is scanned at "LGW"
When the flight lands at "MAN"
Then the account 12341234 is rewarded 100 miles
Scenario: Passenger missed their flight
Given the account number 12341234 has a ticket for the "LGW-MAN" flight
And this route earns 100 miles
And the ticket is not scanned at "LGW"
When the flight lands at "MAN"
Then the account 12341234 is not rewarded any miles
Scenario: Passenger gets kicked off the plane
Given the account number 12341234 has a ticket for the "LGW-MAN" flight
And this route earns 100 miles
And the ticket is scanned at "LGW"
But the ticket is removed from the flight
When the flight lands at "MAN"
Then the account 12341234 is not rewarded any miles
It's all about clarity, and about how the behaviours of the system are described in relation to the business.
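If it helps to see how declarative steps like these bind to code, here is a minimal step-definition sketch using Python's behave library. The RewardsLedger stand-in and the way eligibility is decided are assumptions made for illustration only, not a prescribed design:
# features/steps/rewards_steps.py (hypothetical)
from behave import given, when, then

class RewardsLedger:
    """Tiny in-memory stand-in for the real rewards system."""
    def __init__(self):
        self.miles = {}
        self.scanned = set()
        self.removed = set()

@given('the account number {account} has a ticket for the "{route}" flight')
def step_has_ticket(context, account, route):
    context.ledger = RewardsLedger()
    context.account = account

@given('this route earns {miles:d} miles')
def step_route_earns(context, miles):
    context.route_miles = miles

@given('the ticket is scanned at "{airport}"')
def step_scanned(context, airport):
    context.ledger.scanned.add(context.account)

@given('the ticket is not scanned at "{airport}"')
def step_not_scanned(context, airport):
    pass

@given('the ticket is removed from the flight')
def step_removed(context):
    context.ledger.removed.add(context.account)

@when('the flight lands at "{airport}"')
def step_lands(context, airport):
    eligible = (context.account in context.ledger.scanned
                and context.account not in context.ledger.removed)
    if eligible:
        context.ledger.miles[context.account] = context.route_miles

@then('the account {account} is rewarded {miles:d} miles')
def step_rewarded(context, account, miles):
    assert context.ledger.miles.get(account, 0) == miles

@then('the account {account} is not rewarded any miles')
def step_not_rewarded(context, account):
    assert context.ledger.miles.get(account, 0) == 0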
Your Example
I personally wouldn't write a Scenario for the purpose of testing a progress bar, as the business shouldn't be interested in the implementation of any components used (they don't care about the loading bar, they just care that the information actually loads).
This would be better as a unit test in my opinion.

Yes, you should be more specific. If you have only one type of user, or if this test case applies to every user group, "user" is probably good enough for your test. However, I think you should specify the operation that has to be waited on: TDD is all about feeling safe about your code, and how can you be sure it works everywhere if you haven't tested it for all the operations that could cause a delay?

Related

How should BDD statements be properly constructed? Is there a convention used in teams?

Is there a preferred way of creating BDD scenarios in small agile teams and amongst the community? I'm using Courgette and it gives an example at https://courgette-testing.com/bdd
Scenario: Refunded items should be returned to stock
Given a customer previously bought a black sweater from me
And I have three black sweaters in stock.
When they return the black sweater for a refund
Then I should have four black sweaters in stock.
Does this sound like a good idea? Would this work well for communication in teams?
I've used their web steps bit, and am now doing the refactor bit to make it clear to the business.
Any links would help. Thanks
The conversations in BDD are more important than the tools. Rather than starting with the finely-grained specification in Courgette's example, try talking to the business first. Ask them for an example of the kind of behaviour they want.
When you write it down, start by just writing it the way they describe it. It's amazing how few people listen properly! After you've got the example from them, take a look at it. Can you see which bits are the contexts (Givens) and which are the outcomes (Thens)? Which is the step that's associated with triggering the behaviour you're interested in (Whens)?
Once you've worked that out, there are a couple more questions I like to ask:
Is there any other context which, for this same event, gives a different outcome?
Is there any other outcome that's important?
For instance, if I was implementing this behaviour for a big supermarket, I might come across an example like:
"Oh! No, don't add food back to stock. We don't know how it's been stored. We refund it if there's something wrong with it, but we bin it."
You can probably see how that might change your code!
Testers are really great at asking these questions and spotting missing scenarios! This leads us to the "Three Amigos" pattern. I like to include:
A business person, Product Owner, subject matter expert or person with the problem
A tester
A dev (or a pair of devs).
You can also include UI designers, technical writers, etc. - Matt Wynne says it's "Three Amigos where three is a number between 3 and 7".
I really like it when the developer writes the scenarios down, in any form that allows them to get to the "Given, When, Then". Sometimes I'll do it in the meeting; sometimes I do it later and show it or send it to my business person.
Courgette's example is something that typically happens when people don't have these conversations. If you start with the conversations, you're much more likely to get something that matches the above. Not only are those declarative steps easier for business to read and for the whole team to talk about, but they're also easier to maintain, as the detail of how they're achieved is hidden (usually in Step Definitions, and further in Page Objects).
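As a rough illustration of that layering, here is a sketch with Python's behave. The step text is taken from the scenario above; the ReturnsPage class, its methods, the "/returns" route and context.browser are all assumptions made for the sake of the example:
from behave import when

class ReturnsPage:
    """Page object: knows which screen elements drive the returns flow."""
    def __init__(self, browser):
        self.browser = browser

    def refund(self, item_name):
        # The "how" (URLs, buttons, fields) lives here, not in the step text.
        self.browser.visit("/returns")
        self.browser.fill("item", item_name)
        self.browser.click("Refund")

@when("they return the black sweater for a refund")
def step_return_for_refund(context):
    # The step stays business-readable; UI detail is hidden in the page object.
    ReturnsPage(context.browser).refund("black sweater")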
There are all kinds of useful posts for BDD newcomers on my blog if you want to know more!

How to document non-functional requirements (NFRs) in a story/feature?

The Specification by Example book states that non-functional requirements (commonly referred to as NFRs) can be specified using examples.
I've also been told by a colleague that non-functional requirements may be specified using SBE stories using the format:
Scenario: ...
Given ...
When ...
Then ...
Here is an example functional and non-functional requirement taken from wikipedia:
A system may be required to present the user with a display of the
number of records in a database. This is a functional requirement. How
up-to-date this number needs to be is a non-functional requirement. If
the number needs to be updated in real time, the system architects
must ensure that the system is capable of updating the displayed
record count within an acceptably short interval of the number of
records changing.
Question 1: Can the non-functional requirement be specified as a story?
Question 2: Should the non-functional requirement be specified as a story?
Question 3: What would the story look like?
I'll give an answer by working through an example.
Let us say that your team has already implemented the following story:
Scenario: User can log in to the website
Given I have entered my login credentials
When I submit these credentials
Then I get navigated to my home screen
To answer Question 1) - Can the non-functional requirement be specified as a story?
The project stakeholders have given you a NFR which reads:
For all website actions, a user should wait no longer than five seconds for a response.
You could create a story for this as follows:
Scenario: User can log in to the website in a timely fashion
Given I have entered my login credentials
When I submit these credentials
Then I get navigated to my home screen
And I should have to wait no longer than the maximum acceptable wait time
Note that instead of imperatively specifying '5' seconds, I have kept the scenario declarative and instead specified "wait no longer than the maximum acceptable wait time".
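One way to keep that wording declarative is to let the step definition resolve the concrete number. A minimal sketch with Python's behave, where the 5-second limit and context.response_time_seconds are assumptions for illustration:
from behave import then

MAX_ACCEPTABLE_WAIT_SECONDS = 5  # would normally come from configuration, not the scenario

@then("I should have to wait no longer than the maximum acceptable wait time")
def step_wait_within_limit(context):
    # context.response_time_seconds is assumed to be recorded by the When step.
    assert context.response_time_seconds <= MAX_ACCEPTABLE_WAIT_SECONDS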
To answer question 2) - Should the non-functional requirement be specified as a story?
The NFRs should definitely be specified as a story.
Creating a story will allow this task's complexity to be estimated (so that the team can determine how difficult it is relative to past stories), plus the team can break the story down into tasks (which can be estimated in hours, so that you can work out if the team can implement this story in the current sprint).
Hence, in my contrived example, the team would have already implemented the code to log in, but they'd then determine how to implement the requirement that it must take no longer than 5 seconds to log in. You will also be able to explore the inverse of this problem, i.e. what happens if it takes longer than five seconds to log in? e.g.
Scenario: User encounters a delay when logging in to the website
Given I have entered my login credentials
When I submit these credentials
And I wait for over the maximum acceptable wait time
Then the Production team is informed
And the problem is logged
And I get navigated to my home screen
And finally, regarding question 3) - What would the story look like?
I've detailed what the stories would look like in answers 1) and 2).
Q1: Yes, definitely they can.
Take a look at the article describing Handling Non-Functional Requirements in User Stories.
Q2. From my perspective, if you are able to create them, it's well worth keeping and tracking them in this way. But to cite the article:
There is no magical agile practice that helps you uncover NFR. The
first step is to take responsibility. NFR can be represented as User
Stories if the team finds that this helps to keep these visible.
However, be aware that surfacing such stories may create issues around
the priority of work done on them against more obvious features.
Q3. Take a look at the article mentioned in Q1.
I think the boundaries of NFRs are still not fully agreed upon by everyone. Consider a story that says "As a manager, my employee must get all responses within 5 seconds to avoid hiring a second data entry person and adding $50,000 in payroll expenses." I consider that a fully functional business requirement, along with any performance requirements that focus on the end user experience.
I categorize "traditional" NFRs as stories where the impacted person is not in the end user's or stakeholder's organization. "As a support person I need logs of the web site traffic to help me troubleshoot problems," or "As a software maintainer, I need a block architecture diagram to help me make changes." Including the role as you would with any user story helps with prioritization. It also helps identify the stakeholder for that NFR, should you have any questions about it.
NFRs may include some aspects of performance, at least those that don't impact the end user. "As a system administrator, I want to allocate no more than 10GB of disk space to the database in order to use SQL Express and avoid expensive SQL Server licenses."
Consider a typical NFR that might only state "Databases are limited to 10GB." It's an arbitrary number with no meaning or rationale, and there's no way to question it. Having the story-like role and explanation helps everyone understand that there is a valid reason for the NFR, so when you're prioritizing them you can ask smart questions. They lead to conversations like "I need to expand my table space to 20GB, but the sysadmin has this NFR about database size. How much do SQL Server licenses really cost him? OMG, that much? OK, I'll denormalize a few tables and save a few GB to fit it in there."
As both @bensmith and @siemic show, yes, you can capture NFRs as stories.
Should you capture them in this way?
I don't think you want to capture NFRs as part of regular feature stories.
Most NFRs apply to more than one story. "The system must be responsive" means every story needs to define maximum wait times. "The system must not consume more than 10GB of disk space" means every story needs to consider disk space. The list of "and"s in the story becomes unmanageable in even trivial cases.
You may want to capture NFRs as independent stories, if both the product owner and team are comfortable with this.
For instance:
Given I have a PC with at least a dual core processor
and 8GB of RAM
and a gigabit connection to the system
when I interact with the system
then I never have to wait more than 5 seconds for a response
and 90% of attempts respond within 1 second
This provides a clear requirement, with measurable targets. You just have to make sure that each story takes all of the NFRs into account.
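For what it's worth, here is a sketch of how those measurable targets might be checked in code (Python, with fetch standing in for a hypothetical call against the system under test; the numbers simply mirror the scenario above):
import time

def measure(fetch, attempts=100):
    """Time repeated calls against the system under test."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        fetch()
        timings.append(time.perf_counter() - start)
    return timings

def check_response_time_targets(fetch):
    timings = measure(fetch)
    assert max(timings) <= 5.0                        # never wait more than 5 seconds
    within_one = sum(1 for t in timings if t <= 1.0)
    assert within_one / len(timings) >= 0.9           # 90% of attempts respond within 1 second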
I think you need to look at a few things,
NFRs should follow the lifespan of the application, software, product, etc. Backup and recovery scenarios should be covered regularly, and security scans and performance should be measured in production as well as in development.
Many NFRs need validation from teams outside of the development group, so they would not be expected to have a script or code written to verify them. That said, security, performance, scalability, resilience, etc. can and should be tested within the development phase, or before code gets promoted into live.
Most NFRs can be written up as stories, but as I said, I don't think all of them need development effort to cover them.
regards
Martin

edx SaaS course - Which of the following are true about user stories? Select all that apply

Hi all, I'm doing the edX course and for this question I get an error.
If any of you knows the correct answer, please tell me why I get an error if I select all options as true.
Which of the following are true about user stories? Select all that apply.
They should describe how the application is expected to be used
They should have business value
They do not need to be testable
They should be implemented across multiple iterations of the Agile lifecycle
From the SaaS textbook:
The BDD version of requirements is user stories, which describe how the application is expected to be used. They are lightweight versions of requirements that are better suited to Agile. User stories help stakeholders plan and prioritize development. Thus, like BDUF, you start with requirements, but in BDD user stories take the place of design documents in BDUF.
...
User stories came from the Human Computer Interface (HCI) community. They developed them
using 3-inch by 5-inch (76 mm by 127 mm) index cards, known as “3-by-5 cards.” (We’ll see other examples of paper and pencil technology from the HCI community shortly.) These cards contain one to three sentences written in everyday nontechnical language written jointly by the customers and developers. The rationale is that paper cards are nonthreatening and easy to rearrange, thereby enhancing brainstorming and prioritizing. The general guidelines for the user stories themselves is that they must be testable, be small enough to implement in one iteration, and have business value.
Therefore, the answer to:
Which of the following are true about user stories? Select all that apply.
They should describe how the application is expected to be used
They should have business value
They do not need to be testable
They should be implemented across multiple iterations of the Agile lifecycle
is (1) and (2).

BDD, what's a feature?

I'm just starting with BDD and I'm trying to build a small app, so I can see it working in a real environment, but I'm having trouble deciding what should be a feature. I'm building a tiny shop.
I decided that "Compare products" will be a feature and "User can checkout as guest" will be one, but to get to that, I first need to list products.
My question is, should "There should be a list of products" be a feature?
Thanks!
It should probably be a feature, but try wording it from the user's point of view. What does this list of products offer them?
User should be able to get an overview of offered products
User should be able to order and reorder products by name, price, availability.
It's pretty hard to begin doing BDD. The only thing that helps you feel confident in your abilities and in the whole approach is to write test scenarios and the code that executes them. I would suggest that you not make an already complex and confusing situation harder. Pick whatever task you need to implement, open a blank text file, and try to explain the behaviour using simple sentences. Every sentence should start with one of three keywords: given, when and then. Using your favourite BDD framework, write the code that will parse these sentences and stimulate the application to get into the start state (given), execute some commands (when) and assert the transitioned state (then). Application code may start from mere mocks. Gradually replace those mocks with real code and grow your application with higher confidence and quality.
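A minimal sketch of that workflow using Python's behave, where the shop, the step wording and the mocked catalogue are all illustrative assumptions rather than a recommended design:
from unittest.mock import Mock
from behave import given, when, then

@given("the shop sells a red mug and a blue mug")
def step_catalogue(context):
    # The application code starts life as a mock, to be replaced later.
    context.catalogue = Mock()
    context.catalogue.compare.return_value = {"red mug": 5, "blue mug": 6}

@when("I compare the red mug with the blue mug")
def step_compare(context):
    context.result = context.catalogue.compare("red mug", "blue mug")

@then("I see both prices side by side")
def step_see_prices(context):
    assert set(context.result) == {"red mug", "blue mug"}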
A user story is a feature: something that can be expressed in the format:
As role
I would like to do something
So that goal
E.g.
As a user
I would like to be able to compare products
So that I can select the product that best satisfies my needs
As a guest
I would like to check out my shopping cart
So that I can complete the purchase
Each feature has to be confirmed by a series of Given-When-Then scenarios, of course.
You're basically asking what a feature is. Think about it: you have a story, and a story describes a feature that you (or other people involved) want for your app. Usually it has the form: As a user I want to view a list of products. You may add notes to this story in order to make it clearer. But then comes the specific behaviour (that eventually you will test against) - there is an infinite number of behaviours that conform to this story (think about a view of products and the many ways to present them). Your focus, in BDD, is on finding the behaviour that suits your app's needs (I say app and not user, because sometimes you should decide for the user) - by talking to as many people as you can, by trying stuff out and iterating on it.
It's like going from top to bottom - always trying to focus on behaviour - being more specific as you go. If you think about it, given a behaviour (meaning a set of tests) there is an infinite number of implementations. That's why the focus of BDD is to truly understand the behaviour by experimenting and talking - there is always a degree of freedom.
More important would be to figure out what the user wants to do with the list of products.
The feature would be providing something valuable to the user.
So in your case it would be:
Choose a product to view from a list of products
Choose x products to compare from a list of products
Others
I would classify a feature as a minimum useful set of stories that deliver some coherent (business) value.
For a BDD framework, see http://kernowcode.wordpress.com
To determine if a requirement is an explicit feature / user story, you could use the guidelines of task-based design / documentation (e.g. http://www.sprez.com/articles/task-documentation-design.html). Such concepts acknowledge that a user of a system wants to achieve a specific result. Usually, knowing something (such as which products are available) is only a step in the process of purchasing / selling / building / etc.
A good starting point in BDD is to write the topics you would use as chapters in your user manual. These topics are usually the features you are going to provide in your software solution.
A nice framework that supports such an approach of specification-by-example is Concordion (http://concordion.org). Please have a look at the description of “acceptance tests in plain English” (http://gojko.net/2009/09/01/acceptance-testing-in-plain-english-with-concordion-net/).

Best practices for customer involvement in Agile development? [closed]

We need to involve our customer development partners in our development process. We're more or less following Agile methodologies. Some customer partners are remote, others closer. We need to minimize travel costs.
Our customers are in health care and tend to be busy, expensive, and hard to schedule.
What practices and technologies have worked to support customer involvement? We're using phone calls, phone conferences and email. We're curious about leveraging wiki techniques and would love to hear what's worked for others.
It doesn't matter whether the customer is in the same cubicle or halfway around the planet, except for communication delays - the critical factor is availability.
A customer that is too busy to answer your emails for several days is going to cause your iteration to be late, or fail.
The customer has two critical commitments for agile:
available to answer questions in a timely manner
not to change their mind/priorities during an iteration
The customer must commit to a reasonable service-level agreement (SLA) on availability, e.g. 1-hour response time, or 24-hour response time, etc., and you will need to adjust all estimates and schedules by the lag factor. If the customer will not commit or does not follow through, cancel the iteration and re-plan, bringing the customer's commitment to the forefront again. Do not just "guess" at what you think the customer might want.
Bottom line: without a customer commitment, agile will not work.
My experience with Agile methods is mostly for desktop applications. When our customers are remote, we've spent time to get an engineer to the customer site to configure/install a demo rig. The engineer works with the customer on a test and demo setup/plan that will provide an environment that the customer believes replicates the important aspects of the deployment environment but isolates the demo system from existing infrastructure (so that we can push updates whenever we need to). The engineer also sets up deployment systems to move our applications into production, so that we can "deploy" without being on site. Our applications can self-update (either for each release or each build) and we carefully instrument the releases to log all errors and submit all crashes as bugs to our bug tracker. This way we at least know what went wrong, even if we don't know what's going right.
For each release/build that shows up on the customer's test rig, we provide a (short) screencast, narrated by the project lead or primary developer, demo-ing any new features. The release notes contain any long-term issues or questions we want the customer to think about (i.e. issues that can't be resolved immediately by a phone call or email), and the application displays these notes for the user.
Finally, and possibly most importantly, we get the customer and/or the customer's liaison an account on our calendar server and configure their calendar app to make use of that account. This then goes both ways--we can schedule time (on site, phone, email, etc.) with the customer and they can do the same with our developers.
One option: Install a customer proxy at the "customer partner" site who can extract the information that you need when those customers are available. Have these proxies build the solid relationships that allow them to represent the customer view. Their time is all yours. And when questions arise that they cannot answer, they have ready access to your customer partners - even if in the coffee line.
The whole point of the customer in agile is to have open and free discourse with the developers (IE immediate feedback). If your actual customers cannot provide this, then you need an intermediary/proxy that can fill this role. You don't need actual customers, you just need someone that can represent the customers' interests well enough to meet your customers' needs.
Just a few ideas:
If you do choose to use a wiki, make sure it supports a whole-wiki-wide "recent changes" list, and preferably one that is specific to each user. The more distant from development people are, the more likely they are to use email as the metaphor for their computer use. If they can't immediately tell when there's something new for them to see, they will never explore it. You also need ways to signal to them that you need their attention on particular matters, or they will treat changes like CCs.
I'm a big believer in creating video screen captures of interactions (narrated) and distributing them to users. Unlike a real demo, customers don't feel like they need to interrupt, and they can rewind and re-watch the same interaction over and over, paying attention to little details.
Finally, if you do distribute prototypes, make sure to send someone (or at least a screen sharing session) to see how the prototypes are used. Contextual design is effective. You can count on people using your prototype differently from the way you expect, and you have to understand how they use it to really understand where the issues are, even if they don't report them.
Have you considered something like LogMeIn?
This would allow customers to either log-in to a PC on your network already running your application, or alternatively allow you to install/update the application on one of their computers.
This would solve the remote customer issue and would also support the ongoing continual customer feedback requirement in the agile process.
I used it at a previous company for technical support, but there is no reason (except maybe cost) that it would not work for your situation.
It is also a great way to actually see how users are using your application and therefore find out what works and what doesn't.
First of all, make sure that you have a product manager or a product owner close to the developers. This person will be managing the relationship with the customer.
Then the product manager can demonstrate the product to the customer at the end of each iteration, and also ask customers questions when the developers need feedback to implement a user story.
It is amazing the positive feedback you can get from customers when you involve them.
We did not use a wiki, and most of the communication was done via e-mail, phone, and a screen-sharing application (we are using GoToMeeting, but there are tons of alternatives out there).
You should probably do a kick-off once with everyone at one place. Face-to-face time is invaluable. That includes all developers. Prepare some metaplan questions, but also have enough time to just mingle.
I think by most definitions of Agile processes that have high dependence on customer involvement you've already missed "best practice", which would be for an on-site, and preferably "in-team" customer present at all times. So I suppose we're looking for a "next-best practice". :)
There's the possibility of introducing a "proxy customer" on-site. I have to admit to being very sceptical about the value of a proxy customer. I'm concerned about the risk of introducing some sort of second-rate and otherwise unnecessary business analyst function to the mix, with the reduced signal-to-noise ratio and potential for garbled messages. It also carries the risk of allowing busy real customers to reduce their involvement in the process, which is likely to lead to dissatisfaction. I wonder if there might be someone with good domain knowledge who has recently retired and might be available to act in this capacity as a consultant?
Communication bandwidth with remote customers is astonishingly lower than face-to-face, something I had not fully realised until I started dealing with users in another country. Even with video the loss is significant.
How long are your iterations? How hard is planning iterations? Might it be easier to go for longer iterations and get more planning done less frequently, or to reduce iteration length and go for smaller but more frequent planning sessions? Is more than one customer involved?
Do you have a useable and available build at the end of each iteration? Is there time for involved users to have hands-on time before the next planning session? Keeping users engaged by shipping frequently would seem on the surface to be a Good Idea, which perhaps legislates for small frequent iterations (a week? two weeks?)
The wiki idea might work: have you looked at the FIT framework? It's a sort of integrated acceptance test/wiki, which might help in getting acceptance tests from remote customers. I think I'd also look to provide some sort of (separate or integrated) "project dashboard", possibly pushed regularly to key customers as well as available on demand. Use it as a substitute for things like post-its on whiteboards, Big Visible Charts and the like. There are a number of open-source or low-cost options that may serve - writing your own simple alternative need not be too time-consuming or costly, either.
Above all, remember that "Agile" is a kind-of catch-all label for developments that are carried out with an emphasis on the values and principles espoused in the Agile manifesto. What is considered "best" in one situation may not be so in another. If you understand the principles and regularly review your methods with a critical eye then you're probably going to be close enough to the best practice application to your situation.
I haven't looked at it for some time but with Beck and Fowler on the author list, there should be something useful in Planning Extreme Programming.
In my previous position at drchrono.com I aggregated data/feedback/iteration requests from 20,000 clinicians across the country. The best way to do this is to evangelize a site like uservoice.com. I held "daily live web demonstrations" with sometimes 50 to 100 doctors (doctors signed up right from our website). In these demos I would demonstrate our current product and evangelize UserVoice to drive their feedback into a useful tool for our development team. All of this was done remotely and led to a 1,400% overall increase in recurring revenue growth.
