I'm new to BDD and after reading through a few sources have got the following understanding:
BDD has two parts to it, Integration testing and Unit testing.
Integration testing, which is done with a specification tool like Cucumber.
Unit testing, which is done with traditional JUnit plus a mocking library such as jMock or Mockito.
Is this understanding correct?
Rgds.
I think it's much more a way of thinking about development, rather than the structure of unit vs. integration testing. To quote from here:
BDD focuses on obtaining a clear understanding of desired software behaviour through discussion with stakeholders. It extends TDD by writing test cases in a natural language that non-programmers can read. Behavior-driven developers use their native language in combination with the ubiquitous language of domain-driven design to describe the purpose and benefit of their code. This allows the developers to focus on why the code should be created, rather than the technical details, and minimizes translation between the technical language in which the code is written and the domain language spoken by the business, users, stakeholders, project management, etc.
From the little I've done with it, our BDD focus was on developing a ubiquitous language shared by the business and developers, and writing the tests in a business-comprehensible fashion.
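To make "business-comprehensible" concrete, here is a minimal sketch in Python (the pattern is language-agnostic; `Account` and its methods are made-up names for illustration). The test is named after the business behaviour, and its body follows a Given/When/Then shape in comments:

```python
class Account:
    """Toy domain object for the example."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


def test_customer_sees_updated_balance_after_deposit():
    # Given a customer with an empty account
    account = Account(balance=0)
    # When they deposit 50
    account.deposit(50)
    # Then their balance should be 50
    assert account.balance == 50
```

The point is not the framework but the vocabulary: a stakeholder can read the test name and the comments and confirm that this is the behaviour they asked for.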
There's still not much information out there on real-world experiences with Xcode 7 and Swift 2.0 from a unit testing and code coverage perspective.
While there are plenty of tutorials and basic how-to guides available, I wonder what the experience and typical coverage stats are on different iOS teams that have actually tried to achieve reasonable coverage for their released iOS/Swift apps. I specifically wonder about this:
1) While code coverage percentage doesn't represent the overall quality of the code base, is it used as an essential metric on your team? If not, what other measurable way do you use to assess the quality of your code base?
2) For a reasonably robust app, what is your current code coverage percentage? (Just FYI, we have a hard time getting over 50% for our current code base.)
3) How do you test things like:
App life-cycle, AppDelegate methods
Any code related to push/local notifications, deep linking
Defensive programming practices, various peace-of-mind (hardly reproducible) safeguards, exception handling, etc.
Animations, transitions, rendering of custom controls (CG) etc.
Popups or Alerts that may include any additional logic
I understand some of the above is more of a subject for actual UI tests, but it makes me wonder:
Is there a reasonable way to get the above tested from the UTs perspective? Should we be even trying to satisfy an arbitrary minimal code coverage percentage with UTs for the whole code base or should we define that percentage off a reasonably achievable coverage given the app's code base?
Is it reasonable to make the code base more inflexible in order to achieve higher coverage? (I'm not talking about a medical app where lives would be at stake.)
Are there any good practices for testing all the things mentioned above, other than with UI tests?
Looking forward to a fruitful discussion.
You do ask a very big and good question. Although your question includes:
I wonder what is the experience and typical coverage stats on different iOS teams ...
I think the issue is language/OS agnostic. Sure, some languages and platforms are more unit-testable than others, so some are more expensive to unit test (as opposed to other forms of automated/coded testing). I think you are searching for a cost/benefit equation to maximize productivity. Ah, the fun of software development processes.
To jump to the end and give you the quick sound-bite answer:
You should unit test all code that you want to work and is appropriate to unit testing.
So now why the all and why the emphasis on unit testing ...
What is a unit test?
The language in the development community is corrupted, so please bear with me. Unit testing is just one type of automated testing. Others are automated acceptance tests, application tests, integration tests, and component tests. These all test different things and have different purposes.
However, when I hear unit testing two things pop into mind:
What is a unit test?
As part of TDD (Test Driven Development)?
TDD is about writing tests before writing code. It is a very low-level coding practice/process (XP, eXtreme Programming), as you write a test to drive a statement, and then another test. It is very much a coding practice, but not an application/requirements practice: it is about writing code that does what you intended, not what the product requirements are (oh gosh, I feel the points being lost).
Writing code and then unit testing it is ... in my experience ... fun and good for short-term team building, but not productive. Sure, some defects are found, but not many. TDD leads to healthier code.
My point here is that unit testing is:
A subset of automated/coded testing.
Is part of a coding process.
Is about code health (maintainability).
Does not prove that your application works (sound of falling points).
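The TDD micro-cycle described above ("write a test to drive a statement, and then another test") can be sketched as follows. This is a hypothetical illustration in Python; `slugify` is a made-up function, and the ordering on the page mirrors the process: the test exists before the code it exercises.

```python
# Step 1 (red): write a failing test for the behaviour you intend to add.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("hello world") == "hello-world"

# Step 2 (green): write just enough code to make that one test pass.
def slugify(text):
    return text.replace(" ", "-")

# Step 3 (refactor): clean up while keeping the test green, then repeat
# with the next small test (e.g. lower-casing, trimming) -- each test
# drives roughly one statement of production code.
```

Note how nothing here checks a product requirement; it only pins down what the programmer intended the code to do, which is exactly the distinction made above.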
Why all?
If your team delivers zero-defect software all the time without unit testing (ZDFD is real and achievable, but that's a flat-earth discussion), then this whole discussion is moot and you would not be asking any questions here.
The only valid reason for a team to include unit testing as part of its coding process is to improve productivity. If all team members commit to team productivity then the only issue is identifying which code profits from unit testing. This is the context of the all.
The easiest way I think to illustrate this is to list types I do not unit test:
Factories - They only instantiate types.
Builders / wiring (IoC) - Same as factories - no domain logic.
Third party libraries - We call 3rd party libraries as documented. If you want to test these then use integration/component tests.
Cyclomatic complexity of one - Every method of the type has a CC of 1, that is, no conditions. Unit tests will tell you nothing useful; peer review is more useful.
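The cyclomatic-complexity point can be sketched with hypothetical code (Python here; the names are invented for illustration). A method with no branches has exactly one path, so a unit test merely restates the code; branching code has multiple paths worth pinning down:

```python
class OrderRepository:
    pass

class OrderRepositoryFactory:
    def create(self):
        # CC = 1: no conditions, a single path. A unit test here only
        # proves that the constructor runs -- nothing useful.
        return OrderRepository()

def shipping_cost(weight_kg):
    # CC > 1: three paths, each encoding a real business rule, and each
    # worth a unit test.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 1:
        return 5                  # flat rate for small parcels
    return 5 + 2 * weight_kg      # per-kilogram surcharge from 1 kg up
```

Peer review catches a wrong `return OrderRepository()` just as fast as a test would; it will not reliably catch an off-by-one in the weight boundaries.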
The practical answer
My teams have expected 100% unit test coverage on all new code that should be unit tested. This is achieved by attributing code that does not meet the unit testing criteria. All code must go through code review, and the attributes must be specific to the "why" options listed above. -- Simple.
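As a sketch of what "attributing" excluded code can look like in practice: in Python, coverage.py honours a `# pragma: no cover` comment (when placed on a `class` or `def` line it excludes the whole block); in .NET the analogue would be an attribute such as `[ExcludeFromCodeCoverage]`. The `Widget` names below are hypothetical:

```python
class Widget:
    pass

class WidgetFactory:  # pragma: no cover - factory only instantiates a type
    """Excluded from the coverage metric per the 'factories' rule above;
    the exclusion marker itself is what gets scrutinised in code review."""
    def create(self):
        return Widget()
```

Because the exclusion is written in the code, a reviewer can challenge it line by line, which is what makes a 100%-of-what-should-be-tested policy enforceable.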
A long answer, and perhaps not easy to digest, nor what people want to hear. But, from long experience, I know it is the best answer that can lead to best profitability.
My answer is aimed at the unit testing aspects of the question. As for defensive programming and other practices, TDD is a process that mitigates the risk by making it harder to do the wrong thing. But build-system static code analysis tools may help you catch these issues before they get to peer review (they can fail a build on new issues). Look at tools like SonarQube, ReSharper, CppDepend, and NDepend (yes, language-dependent).
I am trying to get started with BDD and found a few blog posts about MSpec and SpecFlow. I'm currently not quite sure when I would use which, and what the advantages/disadvantages of either framework are.
Looking at the documentation it seems that MSpec uses the context specification style whereas SpecFlow uses Given/When/Then style. I don't really mind either but I would like to know if there are any pitfalls to watch out for further down the track when the project/test suite grows.
Basically some real world advice/feedback of someone who uses it in their every day work would be great.
So I've used both.
I like the MSpec workflow in a way because it's an easier sell for me to speak to users and say:
"When logging in"
"I should return to the page I requested"
When I've worked for organisations that have bought more into active collaboration (read: agile), I've used the Given/When/Then pattern. That organisation was used to user stories, so they were used to a more rigid style of specification. Also, we were using more than one tool to feed the specs into, so the 'text only' feature files could be reused between tools.
In my own projects I use SpecFlow for the 'outside' and MSpec for the 'inside' of tests.
If I was to give someone advice, it would be to use SpecFlow if non-technical people are writing the outside specs, and MSpec if a developer is writing them.
Bad points:
MSpec causes class explosion.
SpecFlow is a slower workflow.
Good points:
MSpec reads more like natural language.
SpecFlow is better for step reusability.
The bottom line is they work well together.
One disadvantage of MSpec is that you cannot run tests in parallel, whereas with the SpecFlow runner you can. That is a big performance issue.
I am learning Behavior Driven Development with ASP.NET MVC and, based on a post from Steve Sanderson, understand that BDD can mean, at least, the following test types: individual units of code & UI interactions. Something similar is mentioned in this post. Do I need two different test frameworks if I want both unit and integration testing?
Unit testing repositories, controllers, & services using a context/specification framework, like MSpec. The results of testing with this will be useful to the development team.
Testing complete behaviors (integration) using a given/when/then framework, like SpecFlow with Watin. The results of this testing will be useful for my client.
The videos I have seen so far on using BDD have only been limited to testing the behaviour of entities without testing the behaviour of repositories, controllers, etc... Is there a sample project where I can see both automated Unit and Integration testing using a BDD approach?
Personally, I use SpecFlow for building feature-specific tests (i.e. "User creates new company record"), where I'll sometimes (but not always) use WatiN. For testing my repositories or service classes, I'll use unit/integration tests with NUnit. Integration tests are for when I need to talk to the database during the test; unit tests are for when I simply run code in the target object under test without external interactions.
I would say that you don't need to use a BDD framework for your non-UI tests. You can if you want, but there is no hard and fast rule on this. If you are going to do this, then I highly recommend creating more than one project for your tests. Keeping them split is a good idea, rather than mixing all the tests into one project. You could name them:
MyProject.Tests.Features <-- for BDD SpecFlow tests.
MyProject.Tests.Integration <-- for tests that access an external resource, i.e. a database.
MyProject.Tests.Unit
If you're not wanting to use two BDD frameworks, you can still use MSTest/NUnit in a BDD way. For example, this blog article describes a nice naming convention which is close to BDD, but aimed at MSTest/NUnit unit tests. You could use this for your non-SpecFlow tests when you're testing things like repositories.
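To sketch that kind of BDD-flavoured naming convention with a plain xUnit-style framework (Python's `unittest` here, standing in for MSTest/NUnit; the repository class is hypothetical): the test class names the context, and the test methods name the expected observations.

```python
import unittest

class InMemoryUserRepository:
    """Toy repository for the example."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)


class WhenAUserHasBeenSaved(unittest.TestCase):
    def setUp(self):
        # The context ("Given"): a repository with one saved user.
        self.repo = InMemoryUserRepository()
        self.repo.add(1, "Ada")

    def test_then_it_can_be_found_by_id(self):
        self.assertEqual(self.repo.find(1), "Ada")

    def test_then_an_unknown_id_returns_nothing(self):
        self.assertIsNone(self.repo.find(99))
```

A failing test then reads as a sentence: "When a user has been saved, then it can be found by id" — no extra framework required.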
In summary - you don't have to use SpecFlow and MSpec in your testing, but if you do, then I recommend separate test projects.
I generally agree with what Jason posted.
You might want to divide your specs into two categories: system/integration and unit-level tests. You can describe both categories with any framework, but keep in mind that code-only approaches (NUnit, MSpec, etc.) require a business analyst to be capable of writing C#. SpecFlow/Gherkin can be a better approach if you want to involve analysts and users in writing specifications, since the syntax and rules (Given, When, Then) are easy to understand, and specifications written from a user's perspective are easy to jot down after a little training. It's all about bridging the communication gap and having users help your team form the ubiquitous language of your domain.
I recommend having specifications support both working "outside in" and "inside out". You may start with an "outside in" SpecFlow specification written by the user/analyst/product owner and work your way from "unimplemented" towards "green" writing the actual code. The code supporting the feature is developed using TDD with a more technically oriented framework like MSpec (the "inside out" part).
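A rough sketch of that "outside in, inside out" flow (Python for illustration; the feature, names, and e-mail addresses are all invented): the outer specification is written first in business language, and the technical pieces that make it pass are then driven inside-out by unit tests.

```python
# Outside-in: the scenario a user/analyst/product owner would write first,
# shown here as Gherkin-style text.
OUTER_SPEC = """
Feature: Password reset
  Scenario: Registered user requests a reset link
    Given a registered user "ada@example.com"
    When she requests a password reset
    Then a reset link is emailed to her
"""

# Inside-out: the code behind the scenario, developed with TDD.
class ResetService:
    def __init__(self, known_users):
        self.known_users = set(known_users)
        self.sent = []  # stand-in for an outgoing mail queue

    def request_reset(self, email):
        if email in self.known_users:
            self.sent.append(email)
            return True
        return False

def test_known_user_gets_reset_link():
    service = ResetService(["ada@example.com"])
    assert service.request_reset("ada@example.com")
    assert service.sent == ["ada@example.com"]
```

The outer scenario stays red until enough of the inner, unit-tested code exists to wire its steps up, which is exactly the "unimplemented towards green" progression described above.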
Here's a repository that use MSpec for both unit and integration tests: https://github.com/agross/duplicatefinder.
I am looking at the SpecFlow examples, and its MVC sample contains several alternatives for testing:
Acceptance tests based on validating results generated by controllers;
Integration tests using MvcIntegrationTestFramework;
Automated acceptance tests using Selenium;
Manual acceptance tests when tester is prompted to manually validate results.
I must say I am quite impressed with how well the SpecFlow examples are written (and I managed to run them within minutes of download; I just had to configure a database and install the Selenium Remote Control server). Looking at the test alternatives, I can see that most of them complement each other rather than being alternatives. I can think of the following combinations of these tests:
Controllers are tested in TDD style rather than using SpecFlow (I believe Given/When/Then tests should be applied at a higher, end-to-end level); they should provide good code coverage for the respective components;
MvcIntegrationTestFramework is useful when running integration tests during development sessions, these tests are also part of daily builds;
Although Selenium-based tests are automated, they are slow and are mainly to be run during QA sessions, to quickly validate that there is no broken logic in pages and the site workflow;
Manual acceptance tests when tester is prompted to confirm result validity are mainly to verify page look and feel.
If you use SpecFlow, Cucumber, or another BDD acceptance test framework in your web development, can you please share your practices regarding choosing between different test types?
Thanks in advance.
It's all behaviour.
Given a particular context, when an event occurs (within a particular scope), then some outcome should happen.
The scope can be a whole application, a part of a system, or a single class. Even a function behaves this way, with inputs as context and the output as outcome (you can use BDD for functional languages as well!)
I tend to use Unit frameworks (NUnit, JUnit, RSpec, etc.) at a class or integration level, because the audience is technical. Sometimes I document the Given / When / Then in comments.
At a scenario level, I try to find out who actually wants to help read or write the scenarios. Even business stakeholders can read text containing a few dots and brackets, so the main reason for having a natural language framework like MSpec or JBehave is if they want to write scenarios themselves, or show them to people who will really be put off by the dots and brackets.
After that, I look at how the framework will play with the build system, and how we'll give the ability to read or write as appropriate to the interested stakeholders.
Here's an example I wrote to show the kind of thing you can do with scenarios using simple DSLs. This is just written in NUnit.
Here's an example in the same codebase showing Given, When, Then in class-level example comments.
I abstract the steps behind, then I put screens or pages behind those, then in the screens and pages I call whatever automation framework I'm using - which could be Selenium, Watir, WebRat, Microsoft UI Automation, etc.
The example I provided is itself an automation tool, so the scenarios are demonstrating the behaviour of the automation tool through demonstrating the behaviour of a fake gui, just in case that gets confusing. Hope it helps anyway!
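The kind of "simple DSL" scenario described above, written as plain test code rather than with a BDD framework, can be sketched like this (Python here; the original example used NUnit, and the checkout domain is invented). The "steps" are just methods, so the scenario reads like the behaviour:

```python
class CheckoutSteps:
    def __init__(self):
        self.cart = []
        self.total = None

    # Given: set up the context.
    def given_a_cart_with(self, *prices):
        self.cart = list(prices)
        return self

    # When: the event under test.
    def when_the_customer_checks_out(self):
        self.total = sum(self.cart)
        return self

    # Then: the expected outcome; assertion lives in the step.
    def then_the_total_should_be(self, expected):
        assert self.total == expected
        return self


# The scenario itself reads almost like prose:
(CheckoutSteps()
    .given_a_cart_with(10, 20, 5)
    .when_the_customer_checks_out()
    .then_the_total_should_be(35))
```

Behind each step you would call your screens/pages, which in turn call Selenium, Watir, or whatever automation framework you use, exactly as described above.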
Since acceptance tests are a kind of functional test, the general goal is to test your application with them end-to-end. On the other hand, you might need to consider the efficiency (how much effort it takes to implement the test automation), maintainability, performance, and reliability of the test automation. It is also important that the test automation fits easily into the development process, so that it supports a kind of "test first" approach (to support outside-in development).
So this is a trade-off that can be different for each situation (that's why we provided the alternatives).
I'm pretty sure that today the most widely fitting option is to test at the controller layer. (Maybe later, as UI and UI automation frameworks evolve, this will change.)
What would you recommend to start learning and applying BDD at a casual game development studio?
While I can't speak to using BDD specifically with games, I can't pass up the opportunity to introduce you to this excellent article:
http://www.code-magazine.com/article.aspx?quickid=0805061&page=1
One of my favorite overviews of BDD as a development methodology. Covers the process very well, and explains creating specifications via concern, context, and observations very nicely.
I also highly recommend using xUnit.net and Moq as your testing platform (if you are lucky enough to be using .NET, that is). The following article provides an excellent specification-centric testing platform built on xUnit.net, and follows the tenet of a single assertion per test case very nicely:
http://iridescence.no/post/Extending-xUnit-with-a-Custom-ObservationAttribute-for-BDD-Style-Testing.aspx
Depending on your language and learning preference:
The RSpec Book talks about BDD using Ruby, RSpec and Cucumber. It is an EXCELLENT source for learning about the concentric circles of BDD.
jrista's link to Bellware's article in Code Magazine is another EXCELLENT resource.
Just remember that BDD is about describing requirements/specifications so succinctly that they are executable. Then write the code that satisfies that spec. Rinse and repeat.
Hope this helps.
Lee
I think there are two aspects of BDD to consider if you want to use it. One part is "BDD is TDD done right", i.e. the way to learn TDD is not to think of it as writing tests first, but as writing behaviours/specifications first.
The second part is that BDD, as implemented in JBehave, is a side that was long forgotten in the .NET community, I think. Only recently has NBehave implemented something similar to JBehave, i.e. a way to have non-programmers write the specifications (behaviours) for you. This only applies to pretty high-level behaviours such as user stories and scenarios, so you can't do only this. You need the "first part" of BDD and/or TDD too. The second type of BDD I describe is a complement to "regular TDD".