Input and test data for a SpecFlow scenario

I have recently started using SpecFlow and I have 2 basic questions I need to clarify, also to confirm I am on the right track:
As I understand it, all the input data (test parameters for the scenarios) must be provided by the tester, and the same goes for the test data (input data for the tables involved in the test scenarios).
Are there any existing tools for quickly generating test data (and inserting it into the DB)? I am using Entity Framework as part of the data access layer. I was wondering about a tool that would read the data from a file, or perhaps a desktop application for providing values for the tables' fields (which could then generate a file from which some other tool could read all the data and generate all the required objects, etc.).
I also had a look at Preparing data for a SpecFlow scenario, and I was wondering whether there is already a framework that handles insert/delete of test data to use alongside SpecFlow.

I don't think you are on the right track. SpecFlow is a BDD tool, but in some ways it only covers part of the process. Have a read of http://lizkeogh.com/2013/07/01/behavior-driven-development-shallow-and-deep/ and see if any of the scenarios sound familiar.
To move forwards I would recommend you start with http://dannorth.net/introducing-bdd/ to get a good idea of how it all began. Now let's consider your points:
The tester provides all the test data. Well, yes and no. The idea is that between yourself and the feature expert, you are able to have a conversation that provides all the examples you need to develop your feature. If you don't involve yourself in that conversation, then yes, all the data will come from the other side, but the chances are it won't be of such high quality as when you are able to ask the right questions and guide the conversation so that the data follows a structure you can write tests against.
As an example here, when I first started with BDD I thought I could get the business experts to write the plain-text scenario files with less input from the developers, but in practice the documents tended to be less useful than when we were involved. Not because they couldn't write decent specifications, but because they couldn't refactor them to reuse bindings etc. Our skills were still needed in the process.
Why does data go into a database? A good test is isolated to the scope that it is testing. For a UI layer test this means that we don't have a database. For a business tier test we shouldn't be reliant on the database to get data either.
In practice a database is one of the most difficult things to include in your testing because once any part of the data changes you cause cascading test failures.
Instead I would recommend making your features smaller and providing the data for your test in the scenario or binding. This also makes having your conversation easier, because the fiftieth row of a test pack is not something either party is going to remember. ;-) I recommend instead trying to give your data identities, so "bob" might be an individual in a test you can discuss, with both sides understanding what makes him an interesting example.
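To make this concrete, here is a minimal sketch of what keeping the data in the scenario and binding can look like in SpecFlow. The Gherkin text, the step wording and the Customer class are all invented for illustration:

    using TechTalk.SpecFlow;
    using TechTalk.SpecFlow.Assist;

    // Hypothetical feature text that the binding below would serve:
    //
    //   Scenario: Bob qualifies for a loyalty discount
    //     Given the following customer exists
    //       | Name | YearsAsMember |
    //       | Bob  | 5             |
    //     When Bob places an order
    //     Then he receives a loyalty discount

    public class Customer
    {
        public string Name { get; set; }
        public int YearsAsMember { get; set; }
    }

    [Binding]
    public class CustomerSteps
    {
        private Customer _customer;

        [Given(@"the following customer exists")]
        public void GivenTheFollowingCustomerExists(Table table)
        {
            // The data lives in the scenario itself, not in a shared database,
            // so "Bob" stays a named example both sides can talk about.
            _customer = table.CreateInstance<Customer>();
        }
    }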
good luck :-)
Update: With regard to using a database during testing, my experience is that there are a lot of complexities that make it a difficult choice to work with. Consider these points:
How will you reset the state of your data between tests?
How will you reset the state if one / some tests fail?
If you are using branches or even just if two developers are making changes at the same time, how will you support multiple test datasets?
How will you handle two instances of the tests running at the same time (don't forget the build server)?
Have a look at this question, SpecFlow Integration Testing with Database Patterns, which includes some patterns that you can use. One common mitigation for the reset problem is also sketched below.
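If you do end up touching a real database, a frequently used mitigation for the first two points is to wrap each scenario in a transaction that is never committed. A minimal sketch using SpecFlow hooks and TransactionScope follows; the hook class is illustrative, it assumes the code under test opens its connections inside the scope, and it does nothing for parallel runs against a shared database:

    using System.Transactions;
    using TechTalk.SpecFlow;

    [Binding]
    public class DatabaseHooks
    {
        private static TransactionScope _scope;

        [BeforeScenario]
        public static void BeginTransaction()
        {
            // Everything the scenario writes happens inside this transaction.
            _scope = new TransactionScope();
        }

        [AfterScenario]
        public static void RollbackTransaction()
        {
            // Disposing without calling Complete() rolls everything back,
            // even when the scenario failed, so each test starts clean.
            _scope.Dispose();
        }
    }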

Related

TDD VS BDD: REST Service

I am all confused with TDD vs BDD :)
How do TDD and BDD differ on each of the points below?
1. Development: test case first, development follows next
2. REST service (HTTP): don't make real REST calls? If so,
a) do we return only hardcoded JSON using a mock object?
b) how do we handle REST call failures? Should we have a test case for that too?
Especially for item 2, I have googled so many articles, but couldn't find a sample (code) approach for handling REST calls.
BDD and TDD are not comparable to each other, although they are both used in test first development.
BDD is more than just writing tests with an English-like syntax, e.g. Kiwi. BDD (also known as ATDD, Acceptance Test Driven Development) starts with developers, QA, and designers (e.g. business and interaction designers) working together to develop a shared understanding of the proposed solution. It is common to use examples to illustrate the behavior, also known as Specification by Example.
I have found that a useful way to think of abstraction is distinguishing between what you do (abstract, high-level policy), and how you do it (concrete, low-level details). Every concrete detail exists to fulfill a higher-level policy. When you see something concrete, it is beneficial to identify the policy it is serving.
The specification by example can be used to create high-level acceptance tests, which test what the application does, i.e. its behavior.
Unit tests are used to test how the app implements a solution, i.e. test that the appropriate messages are sent to its collaborators/dependencies at the appropriate time.
The phases of the standard TDD cycle are Red, Green, Refactor. During the green phase, your goal is to get the test passing as quickly as possible, by hook or by crook—it is acceptable to write ugly, unorganized code. Once the test passes, you refactor the code to make it more readable/changeable.
Similarly, with a BDD/ATDD cycle, you have Red, Green, Refactor. During the green phase of BDD, just get the acceptance test to pass. All of the code you write can exist within the test itself. During the refactor phase of BDD, you extract test code into production code. You can use TDD to guide the extraction.
So, for a given BDD acceptance test, you might have multiple TDD tests.
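As a rough, self-contained illustration of that ratio, borrowing the lunch-invitation use case discussed below (the LunchApp class and the test are invented, and a real suite would add many finer-grained TDD tests around it):

    using System.Collections.Generic;
    using NUnit.Framework;

    // A tiny in-memory stand-in for the application. In a real codebase this
    // logic would start life inside the acceptance test and be extracted into
    // production classes during the refactor phase, with TDD tests covering
    // unknown users, duplicate invitations and so on.
    public class LunchApp
    {
        private readonly Dictionary<string, List<string>> _invitations = new();

        public void InviteToLunch(string from, string to)
        {
            if (!_invitations.ContainsKey(to))
                _invitations[to] = new List<string>();
            _invitations[to].Add(from);
        }

        public IReadOnlyList<string> InvitationsFor(string user) =>
            _invitations.TryGetValue(user, out var list) ? list : new List<string>();
    }

    [TestFixture]
    public class InviteFriendAcceptanceTests
    {
        // One acceptance test for the behaviour as a whole.
        [Test]
        public void Inviting_a_friend_to_lunch_records_an_invitation()
        {
            var app = new LunchApp();
            app.InviteToLunch("alice", "bob");
            Assert.That(app.InvitationsFor("bob"), Has.Count.EqualTo(1));
        }
    }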
Regarding how to test REST calls, let's go back to the premise of abstraction—distinguishing what we do from how we do it.
Calling a REST service is a concrete action. The policy it satisfies may be to provide a list of model objects.
Let's say the use case you are implementing is to invite a friend to lunch. Part of the use case responsibility is to obtain the list of friends from a server; it doesn't care how the server finds the friends.
Your BDD tests would handle getting the list of friends, picking a friend, and completing the invitation. Your BDD tests would not worry about actually making REST calls.
When you use TDD to implement the class that handles communication with the server, you could have tests that retrieve JSON from a remote data source (i.e. the server) and ensure the JSON is properly parsed into User model objects. You could also have tests covering the data source responding with an error, etc.
At the point you actually make a REST call, in the implementation of a remote data source that uses REST to communicate with the backend server, I would classify that as an integration test, as you are testing the integration with a component you don't control, i.e. the actual backend server. The integration tests only need to confirm that the server returns JSON data in the format your app expects, or that errors are returned when appropriate.
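Here is a hedged sketch, translated into C#/NUnit terms, of what such a TDD-level test might look like. The User and FriendsParser types and the JSON shape are invented for illustration; canned strings stand in for the server, so no REST call is made:

    using System.Linq;
    using System.Text.Json;
    using NUnit.Framework;

    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The class under test: it turns raw JSON from the remote data source
    // into model objects, and neither knows nor cares about HTTP.
    public static class FriendsParser
    {
        public static User[] Parse(string json) =>
            JsonSerializer.Deserialize<User[]>(json,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }

    [TestFixture]
    public class FriendsParserTests
    {
        [Test]
        public void Parses_json_into_user_models()
        {
            var json = "[{\"id\":1,\"name\":\"Bob\"},{\"id\":2,\"name\":\"Ann\"}]";
            var users = FriendsParser.Parse(json);
            Assert.That(users.Select(u => u.Name), Is.EqualTo(new[] { "Bob", "Ann" }));
        }

        [Test]
        public void Malformed_json_raises_an_error_the_caller_can_handle()
        {
            Assert.Throws<JsonException>(() => FriendsParser.Parse("not json"));
        }
    }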
BDD is actually derived from TDD, so it's not surprising there's a little confusion! BDD is exactly like TDD (or ATDD if you're doing it for a whole system), but without the word "Test". It turns out that can be pretty powerful.
Particularly, it lets developers have conversations with non-technical business people about what the system should do. You can also use it to have conversations about what a class should do, or a module of code should do, even with a technical expert.
So in the example of your REST service, you can imagine that I'm a dev and you're an expert who knows what the REST service should do.
Me: What should it do?
You: It should let me read a record.
Me: Great! Can you give me an example of a record?
You: I have one here...
Me: Is there any context in which someone shouldn't be able to read the record?
You: Sure, if they don't have permissions.
...
Me: Okay, so I've done Read, let's do Update. Can you give me an example of a typical update?
You: Here you go.
Me: Fantastic, and you want it to respond just with success or fail. Is there any scenario in which it should fail?
You: Sure. The record shows when it was last updated. If someone else has already updated it in the meantime, yours should fail when you submit it.
So you see you can use BDD to explore all kinds of scenarios, including those around a REST service. The trick is to ask, "Can you give me an example?" Then you get a concrete example, which you can then automate if you want to. The conversations help us look for other examples and scenarios which we might have missed.
Don't use BDD tools to automate for a technical audience! BDD tools like Cucumber, JBehave etc. work with real English, which is a lot harder to refactor than code. Use JUnit, NUnit etc. if you're just doing something like a REST service. You can put "Given, When, Then" in comments, or make a little DSL, as in the sketch below.
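For example, the record-reading conversation above might be automated for a technical audience as plain NUnit tests with "Given, When, Then" in comments. The toy RecordStore is defined inline so the sketch is self-contained; everything here is invented:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    // A toy record store with a hardcoded record and permission list.
    public class RecordStore
    {
        private readonly Dictionary<int, string> _records = new() { [42] = "hello" };
        private readonly HashSet<string> _readers = new() { "alice" };

        public string Read(int id, string user) =>
            _readers.Contains(user)
                ? _records[id]
                : throw new UnauthorizedAccessException();
    }

    [TestFixture]
    public class RecordReadingTests
    {
        [Test]
        public void Users_with_permission_can_read_a_record()
        {
            // Given a record and a permitted user
            var store = new RecordStore();
            // When they read it, Then they get its contents
            Assert.That(store.Read(42, "alice"), Is.EqualTo("hello"));
        }

        [Test]
        public void Users_without_permission_cannot_read_a_record()
        {
            // Given a record and a user with no read permission
            var store = new RecordStore();
            // When they try to read it, Then access is refused
            Assert.Throws<UnauthorizedAccessException>(() => store.Read(42, "mallory"));
        }
    }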
So now you can see that with your REST call failure, if I were coding it, I'd have an example like:
Me: So, this call failure... can you give me an example?
You: Sure, if you access a record that's been deleted it's going to fail.
Me: Give me a typical example of a record that might get deleted?
You: The one we're using before is good.
Me: Okay, is there a situation in which we shouldn't delete a record?
You: Yes, if it's already been published...
Etc.
You can see that throughout, I'm not really using the word "test". Tests are a nice by-product in BDD. It's used more for exploration and specification of requirements. The conversations in BDD are the most important part of it.
The reason it's tricky to find examples of using BDD for REST is first because REST is deliberately simple and doesn't often have a lot of behaviour, and second because BDD's scenarios aren't generally phrased in terms of their implementation, focusing instead on the value of what the service or system provides ("read a record").
TDD and ATDD are exactly the same, if they're done well. It's just easier to have conversations about examples and scenarios than it is to have them about tests.

Understand BDD with a practical example

I'm trying to adopt the behaviour-driven development approach, but to use it I need to understand how to think in that way.
I'd like to test it on a new personal project I'm starting right now (I'll use RoR)
The project will provide APIs to collect data from external applications; it will provide an authentication system (Devise), several models to collect data as needed, and a payment system for purchasing subscriptions, which will unlock some premium-only features.
What kind of tests should I perform in order to cover all these functionalities while staying DRY?
I thought I should use both RSpec and Cucumber. For Devise I'll follow the documentation on their website, but it's not clear to me what kind of tests I should perform to check that the data has been collected correctly and is displayed correctly to the user, and which tools to use for that task. It would also help if you could provide a simple example of how you would organize tests and development for this kind of project (I'm not asking for real testing code, because I see it really depends upon the implementation, but about the development process and the KIND of tests you would perform). If you need some more details to make a choice, please let me know, and feel free to invent them since it's for educational purposes.
I don’t think there can be any mention of BDD without someone chiming in that it’s all about the communication. So I’ll be that guy: it’s all about the communication! The real value is the exploration of functionality in accessible terms with different stakeholders to establish what the system needs to do transparently. With everybody talking the same language, it’s much easier to communicate goals. When it comes to implementation, developers know what they’re doing, stakeholders know what they’re getting and there shouldn’t be too many surprises (except for the things you missed / captured incorrectly / haven’t realised yet, perhaps).
So, get out there, speak to your stakeholders and work out what the person commissioning the system wants it to do. If this is a solo effort, you're going to need to wear a number of different hats.
Your BDD tests will cover the behaviour of the system - units of desired functionality. You'll still need to do unit tests etc. - no change there.
As product owner, think about what you want the system to do – not how. You likely don’t care how things work, as long as they do work. If you’re a developer, this will likely be the difficult shift in thinking. When I first started looking into BDD, I was convinced that it made sense to go through UI journeys and technical details etc. in scenarios. It doesn’t. That stuff belongs in the step definitions. As developer, you can define all of the how stuff there.
As for keeping it DRY… Write your scenarios exactly how you need to in order to capture the required behaviour. Then you can worry about refactoring and identifying opportunities for step re-use. The use of regular expressions will help here too, to some extent. It's tempting to go super-imperative and have a whole suite of re-usable steps, but you'll likely realise it's all very brittle when a change to a single step ripples through all of your scenarios, not just the one it was supposed to affect. If you're interested in any form of web automation, check out the web automation pyramid.
Useful resources (and lots of examples):
http://books.openlibra.com/pdf/cuke4ninja-2011-03-16.pdf < awesome free eBook – fun to read, too.
http://www.slideshare.net/lunivore/behavior-driven-development-11754474 < Liz really knows her stuff!
http://dannorth.net/2011/01/31/whose-domain-is-it-anyway/
https://www.google.co.uk/search?q=declarative+vs+imperative+BDD < go Team Declarative!
I'm not sure this meets your exact requirements, but this is how I do BDD (example is a webapp):
Pretend you're sitting in front of the computer as a user of your application. Write down the steps you need to perform to achieve one of the use cases, for instance:
Navigate to the system's url
Login
Select the function you require
Enter the relevant data to execute the function
Click the button to initiate the function
Wait for the system to respond
Ensure the data on the screen matches the data you expect.
If any of these steps fail to execute, or the data is incorrect, the test has failed.
Once you have this in a test file, you then use Gherkin/Cucumber/WebDriver to implement the code required to execute each of the steps. Each of these step methods is re-usable, so once you've implemented login in one place, it should work everywhere you call it. A sketch of one such step is below.
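Translated into SpecFlow/C# terms with Selenium WebDriver (in a Rails project the equivalent would be Cucumber step definitions in Ruby), the reusable login step might look something like this sketch; the URL, element IDs and step wording are placeholders, not taken from any real application:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using TechTalk.SpecFlow;

    [Binding]
    public class LoginSteps
    {
        private readonly IWebDriver _driver = new ChromeDriver();

        [Given(@"I am logged in as ""(.*)""")]
        public void GivenIAmLoggedInAs(string username)
        {
            // Implemented once, reused by every scenario that starts logged in.
            _driver.Navigate().GoToUrl("https://example.test/login");
            _driver.FindElement(By.Id("username")).SendKeys(username);
            _driver.FindElement(By.Id("password")).SendKeys("secret");
            _driver.FindElement(By.Id("login-button")).Click();
        }

        [Then(@"I should see ""(.*)"" on the screen")]
        public void ThenIShouldSee(string expected)
        {
            Assert.That(_driver.PageSource, Does.Contain(expected));
        }

        [AfterScenario]
        public void Quit() => _driver.Quit();
    }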
For testing Devise with Cucumber or RSpec, see the cucumber/rspec examples, or look on GitHub.

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially ones targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but testing hasn't really been necessary so far, with very simple CRUD operations; however, I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already-written code? I don't need kindergarten-level, step-by-step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can have a significant influence on the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt to write tests first to reproduce defects, and then modify the code to fix them. I say attempt, because not everything is reproducible in a cost-effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but it is not a good use of your time. I usually give myself a deadline for writing such tests. The only exception may be features that are highly valued or difficult to test manually (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify that the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify the test passes, and repeat. If needed, I'll refactor the code to simplify testing. Pex can also help.
Pay close attention to pain points, as they highlight areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you need a database to be configured etc. just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller, as sketched below. The same goes for registry access and so forth.
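A minimal sketch of that refactoring; the IOrderRepository interface, the controller and the Moq usage are illustrative, not taken from the question's codebase:

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    // Before: the controller created an ObjectContext directly, so tests
    // needed a configured database. After: it depends on a small interface.
    public interface IOrderRepository
    {
        IEnumerable<string> GetOrdersFor(string customer);
    }

    public class OrdersController
    {
        private readonly IOrderRepository _orders;

        public OrdersController(IOrderRepository orders) => _orders = orders;

        public IEnumerable<string> Index(string customer) =>
            _orders.GetOrdersFor(customer);
    }

    [TestFixture]
    public class OrdersControllerTests
    {
        [Test]
        public void Index_returns_the_customers_orders()
        {
            // Simulate the business condition without any database at all.
            var repo = new Mock<IOrderRepository>();
            repo.Setup(r => r.GetOrdersFor("bob")).Returns(new[] { "order-1" });

            var controller = new OrdersController(repo.Object);

            Assert.That(controller.Index("bob"), Is.EqualTo(new[] { "order-1" }));
        }
    }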
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than to hand the feature to someone to test manually.

How do you plan your Rails app?

I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question, say you also start with Cucumber, at which point would you write Unit tests? Before satisfying the specifications?
I've got a 6-step process.
1. I prefer to work out the model relationships and uses before doing anything. Generally I try to define models as units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations). I usually iterate over my design a few times, because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing (subclassing) models to clarify the design.
2. I write the migrations and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented. It's not always obvious how to implement things until you've given them some moderate thought.
3. Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the backend is sane.
4. This is when I piece together the control flow. What happens on a successful request? An unsuccessful one? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for those I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
5. Then I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out common parts, then write the views.
6. Polish the UI. I create the CSS, and start to replace links with remote calls, or even just JavaScript where appropriate.
I may interleave steps 2 and 3. I find it's very easy to write a test just after I write the code to be tested, especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision and propagate those changes through my steps.
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to see it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following:
Write a mind map, but keep it simple, with the major ideas of the project.
Start writing tests and coding at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.

Why Fit/FitNesse?

What's the point of using Fit/FitNesse instead of xUnit-style integration tests? In my opinion it has a really strange and very unclear syntax.
Is it really only to make product owners write tests? They won't! It's too complicated for them. So why should anyone use Fit/FitNesse?
Update: So is it really suitable for business-rules tests only?
The whole point is to work with non-programmers, often even completely non-technical people like prospective users of a business application, on what the application should do, and then put it into tests. While making the tests work is certainly too complicated for them, they should be able to discuss tables of sample data filled out in e.g. Word. And the great thing is, unlike a traditional specification, those documents live with your application, because the automated tests force you to update them.
See Introduction To Fit and Fit Workflow by James Shore and follow links to the rest of documentation if you want.
Update: It depends on what you mean by business rules. ;-) Some people understand the term very narrowly (as in business rules engines etc.), others very broadly.
As I see it, Fit is a tool that allows you to write down business (as in domain) use cases with rich, realistic examples in a document which the end users or domain experts (in some domains) can understand, verify and discuss. At the same time, these examples are in machine-readable form, so they can be used to drive automated testing. You neither write the document entirely by yourself, nor require them to do it. Instead it's a product of collaboration and discussion that reflects a growing understanding of what the application is going to do, on both sides. Examples get richer as you progress and more corner cases are resolved.
What the application will do, not how, is what's important. It's a form of functional spec. As such it's rather broad, and not really organized by modules but rather by usage scenarios.
The tests that come out of the examples will test the external behavior of the application in aspects that matter from a business point of view. Yes, you might call those business rules. But let's look at Diego Jancic's example of credit scoring, just with a little twist. What if part of the Fit document is (1) a listing of attributes and their scores, and then (2) client data with expected results? Then which are the actual business rules: the scoring table (attributes and their scores), or the application logic computing the score for each client (based on the scoring table)? And which are being tested?
Fit/FitNesse tests seem more suitable for acceptance testing. Other tests (where you don't care about cooperation with clients, users, domain experts, etc., and you just want to automate testing) will probably be easier to write and maintain in more traditional ways. xUnit is nice for unit testing and API tests. Every web framework should have some tool for web app/service testing integrated into its modify-build-test-deploy cycle, e.g. Django has its little test client. You've got lots to choose from.
And you always can write your own tool (or preferably tweak some existing) to better fit (pun intended) some testing in your particular domain of interest.
One more general thought. It's often (not always!!!) better to encode your tests, "business rules" and just about anything else in some form of well-defined data that is interpreted by some simple, generic piece of code. Then it's easy to use the data in some other way: generate documentation, migrate to a new testing framework, port the application to a new environment/programming language, use it to check conformance with some external rules or another system (just use your imagination). It's much harder to pull such information out of code, e.g. simple hardcoded unit tests or business rules.
Fit stores test cases as data, in a very specific format because of how it's intended to be used, but still. Your domain-specific tests may use different formats like simple CSV, JSON or YAML.
The idea is that you (the programmer) define an easy-to-understand format, such as an Excel sheet. Then the product owner enters information that is hard to understand for people who are not in the business... and you just validate that your code works as the PO expects by running Fit.
The xUnit way can work as input for programmers, when the information is simple and easy to understand.
If you need to enter a lot of weird examples with multiple fields in your xUnit test, it will become hard to read.
Imagine a case where you have to decide whether to give a loan to a customer, based on age, marital status, number of children, wage, activity, ...
As a programmer, you cannot write that information; and a risk manager cannot write an xUnit test.
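As a hedged sketch of a middle ground, the same decision table can at least be expressed as data rows driving one generic check; the LoanPolicy rules below are entirely invented, and in Fit/FitNesse the equivalent rows would live in a wiki page or document the risk manager can edit:

    using NUnit.Framework;

    // Invented policy, purely to show the shape of a decision-table test.
    public static class LoanPolicy
    {
        public static bool Approve(int age, bool married, int children, int wage) =>
            age >= 21 && wage >= 20000 && (married || children == 0);
    }

    [TestFixture]
    public class LoanPolicyTests
    {
        // Each row is one example a risk manager could have supplied.
        [TestCase(25, true,  2, 30000, true)]
        [TestCase(25, false, 2, 30000, false)]
        [TestCase(18, true,  0, 30000, false)]
        [TestCase(40, false, 0, 15000, false)]
        public void Scores_the_applicant(int age, bool married, int children,
                                         int wage, bool expected)
        {
            Assert.That(LoanPolicy.Approve(age, married, children, wage),
                        Is.EqualTo(expected));
        }
    }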
It helps reduce redundancy in regression and bug testing, and lets you build a manageable repository of test cases. It's like build once, use forever.
It is very useful for cooperation between the QA and dev teams: QA can show a developer the result of a failed test, and the developer can easily help solve an environment issue and understand the steps for reproducing a bug.
It is suitable for UI and even for API testing.
