Does TDD mean I have to write a test for every method, no matter how trivial? - ruby-on-rails

Suppose I have a user class that has a method is_old_enough?
The method just checks that the age is over 18. Seems pretty simple.
Does TDD mean I have to write a test for this, even though it is trivial?
class User
  def is_old_enough?
    self.age >= 18
  end
end
And if so, why? What is the benefit of writing a test for this? You'd just be testing that x >= y works the way you expect the >= operator to work.
Because the most likely scenario I see happening is the following:
It turns out the age should actually be 21. That's a bug the test didn't catch, because they had the wrong assumption when they wrote the code. So they go in and change the method to >= 21. But now the test fails! So they have to go and change the test too. So the test didn't help, and it actually raised a false alarm when they made the change.
Looks like tests for simple methods like this are not actually testing anything useful, and are actually hurting you.

I think you're confusing test coverage and Test-Driven Development. TDD is just a practice of developing an automated test that is going to verify the use cases of some new feature. Usually it starts off failing because you've stubbed the functionality out or simply haven't implemented it. Next, you develop the feature until the test passes.
The idea is that you are in the mindset of developing tests that verify your important use cases/features. This doesn't necessarily mean you need to test simple functions if you think they are covered by your regular feature tests.
In terms of coverage, that's really up to you as the developer (or team) to decide. Having roughly 1-to-1 coverage of tests to public API is obviously desirable, but it's your call whether something like is_old_enough? will always stay as simple to implement as it is now. It may seem like an easy implementation today, but perhaps that will change in the future. You just need to be mindful of that kind of decision when you choose whether or not to write a test. More than likely, though, your use case won't change and the test is easy to write. It doesn't hurt to feel confident in all areas of your code.
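For example, a first spec for is_old_enough? might look something like this (a minimal sketch: the 18-year rule comes from the question, the rest of the setup is assumed). You write it before or alongside the method, watch it fail, then implement until it passes.

# spec/models/user_spec.rb
describe User do
  describe '#is_old_enough?' do
    it 'is true at exactly 18' do
      user = User.new
      user.age = 18                      # assumes a plain age attribute, as in the question
      user.is_old_enough?.should be_true
    end

    it 'is false below 18' do
      user = User.new
      user.age = 17
      user.is_old_enough?.should be_false
    end
  end
end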

I think the question has more to do with unit testing than TDD in particular.
Short answer: focus on behaviors
Long answer: well, there is a nice phrase out there: "BDD is TDD done right", and I completely agree. While BDD and TDD are in large part the "same" thing (not equal, mind you!), BDD for me gives the context for doing TDD. You can read a lot on the Internet about this, so I will not write essays here, but let me say this:
In your example, yes, the test is necessary, because the rule that a user is old enough is a behavior of the User entity. The test serves as a safety net for many other things yet to come which will rely on this piece of information, and for me the test documents this behavior very well (I actually tend to read tests to find out what the developer had in mind when writing a class - you learn what to expect, how the class behaves, what the edge cases are, etc.)
I don't really see how the test would fail to catch such a change, since I would write the test with the numbers 18, 19, 25 and 55 in mind (just a bunch of asserts, typed very quickly and easily).
A very important piece of the puzzle is that unit tests are just one technique that you need. If your design is lacking, you will find yourself writing too many meaningless tests, or you will have a miserable time testing classes that do multiple things, etc. You need very good SOLID skills to shape classes in such a way that testing only their public interfaces (this includes protected methods as well) actually tests the entire class. As said before, focusing on behaviors is the key here.
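Those boundary-value asserts from the point above could be as simple as this (a sketch: the ages 18, 19, 25 and 55 are the ones mentioned, the under-age case is an extra assumption):

describe User, '#is_old_enough?' do
  [18, 19, 25, 55].each do |age|
    it "is true at age #{age}" do
      user = User.new
      user.age = age
      user.is_old_enough?.should be_true
    end
  end

  it 'is false just under the boundary' do
    user = User.new
    user.age = 17                        # extra case, not from the answer above
    user.is_old_enough?.should be_false
  end
end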

Related

Testing: how to focus on behavior instead of implementation without losing speed?

It seems that there are two totally different approaches to testing, and I would like to cite both of them.
The thing is that those opinions were stated 5 years ago (2007), and I am interested in what has changed since then and which way I should go.
Brandon Keepers:
The theory is that tests are supposed to be agnostic of the implementation. This leads to less brittle tests and actually tests the outcome (or behavior).

With RSpec, I feel like the common approach of completely mocking your models to test your controllers ends up forcing you to look too much into the implementation of your controller.

This by itself is not too bad, but the problem is that it peers too much into the controller to dictate how the model is used. Why does it matter if my controller calls Thing.new? What if my controller decides to take the Thing.create! and rescue route? What if my model has a special initializer method, like Thing.build_with_foo? My spec for behavior should not fail if I change the implementation.

This problem gets even worse when you have nested resources and are creating multiple models per controller. Some of my setup methods end up being 15 or more lines long and VERY fragile.

RSpec’s intention is to completely isolate your controller logic from your models, which sounds good in theory, but almost runs against the grain for an integrated stack like Rails. Especially if you practice the skinny controller/fat model discipline, the amount of logic in the controller becomes very small, and the setup becomes huge.

So what’s a BDD-wannabe to do? Taking a step back, the behavior that I really want to test is not that my controller calls Thing.new, but that given parameters X, it creates a new thing and redirects to it.
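A rough sketch of that distinction, in controller-spec terms (ThingsController and Thing are placeholder names, and the create action is assumed to be the conventional scaffold one):

describe ThingsController, '#create' do
  # Implementation-focused: breaks if the controller later switches to
  # Thing.create! or Thing.build_with_foo, even though behaviour is unchanged.
  it 'builds a new thing' do
    Thing.should_receive(:new).and_return(mock_model(Thing, :save => true))
    post :create, :thing => { :name => 'Foo' }
  end

  # Behaviour-focused: given parameters X, a thing ends up created and we redirect.
  it 'creates a thing and redirects to it' do
    expect { post :create, :thing => { :name => 'Foo' } }.to change(Thing, :count).by(1)
    response.should be_redirect
  end
end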
David Chelimsky:
It’s all about trade-offs.

The fact that AR chooses inheritance rather than delegation puts us in a testing bind – we have to be coupled to the database OR we have to be more intimate with the implementation. We accept this design choice because we reap benefits in expressiveness and DRY-ness.

In grappling with the dilemma, I chose faster tests at the cost of slightly more brittle. You’re choosing less brittle tests at the cost of them running slightly slower. It’s a trade-off either way.

In practice, I run the tests hundreds, if not thousands, of times a day (I use autotest and take very granular steps) and I change whether I use “new” or “create” almost never. Also due to granular steps, new models that appear are quite volatile at first. The valid_thing_attrs approach minimizes the pain from this a bit, but it still means that every new required field means that I have to change valid_thing_attrs.

But if your approach is working for you in practice, then it's good! In fact, I’d strongly recommend that you publish a plugin with generators that produce the examples the way you like them. I’m sure that a lot of people would benefit from that.
Ryan Bates:
Out of curiosity, how often do you use mocks in your tests/specs? Perhaps I'm doing something wrong, but I'm finding it severely limiting. Since switching to rSpec over a month ago, I've been doing what they recommend in the docs where the controller and view layers do not hit the database at all and the models are completely mocked out. This gives you a nice speed boost and makes some things easier, but I'm finding the cons of doing this far outweigh the pros.

Since using mocks, my specs have turned into a maintenance nightmare. Specs are meant to test the behavior, not the implementation. I don't care if a method was called, I just want to make sure the resulting output is correct. Because mocking makes specs picky about the implementation, it makes simple refactorings (that don't change the behavior) impossible to do without having to constantly go back and "fix" the specs.

I'm very opinionated about what a spec/test should cover. A test should only break when the app breaks. This is one reason why I hardly test the view layer: I find it too rigid. It often leads to tests breaking without the app breaking when changing little things in the view. I'm finding the same problem with mocks.

On top of all this, I just realized today that mocking/stubbing a class method (sometimes) sticks around between specs. Specs should be self contained and not influenced by other specs. This breaks that rule and leads to tricky bugs. What have I learned from all this? Be careful where you use mocking. Stubbing is not as bad, but still has some of the same issues.

I took the past few hours and removed nearly all mocks from my specs. I also merged the controller and view specs into one using "integrate_views" in the controller spec. I am also loading all fixtures for each controller spec so there's some test data to fill the views. The end result? My specs are shorter, simpler, more consistent, less rigid, and they test the entire stack together (model, view, controller) so no bugs can slip through the cracks.

I'm not saying this is the "right" way for everyone. If your project requires a very strict spec case then it may not be for you, but in my case this is worlds better than what I had before using mocks. I still think stubbing is a good solution in a few spots so I'm still doing that.
I think all three opinions are still completely valid. Ryan and I were struggling with the maintainability of mocking, while David felt the maintenance tradeoff was worth it for the increase in speed.
But these tradeoffs are symptoms of a deeper problem, which David alluded to in 2007: ActiveRecord. The design of ActiveRecord encourages you to create god objects that do too much, know too much about the rest of the system, and have too much surface area. This leads to tests that have too much to test, know too much about the rest of the system, and are either too slow or brittle.
So what's the solution? Separate as much of your application from the framework as possible. Write lots of small classes that model your domain and don't inherit from anything. Each object should have limited surface area (no more than a few methods) and explicit dependencies passed in through the constructor.
With this approach, I've only been writing two types of tests: isolated unit tests, and full-stack system tests. In the isolation tests, I mock or stub everything that is not the object under test. These tests are insanely fast and often don't even require loading the whole Rails environment. The full stack tests exercise the whole system. They are painfully slow and give useless feedback when they fail. I write as few as necessary, but enough to give me confidence that all my well-tested objects integrate well.
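For example, an isolated spec for one of those small domain objects might look like this (InvoiceTotaler, its collaborator and the paths are invented names; the point is that nothing except the class under test is real):

# spec/unit/invoice_totaler_spec.rb - no spec_helper, no Rails environment
require_relative '../../app/domain/invoice_totaler'      # path is illustrative

describe InvoiceTotaler do
  it 'sums the line item amounts from its repository' do
    repository = double('line_item_repository', :amounts_for => [10, 25])
    totaler = InvoiceTotaler.new(repository)              # dependency passed in via the constructor
    totaler.total_for(:invoice_42).should == 35
  end
end

Because the repository is a test double, the spec runs in milliseconds and fails loudly if the class quietly grows new dependencies.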
Unfortunately, I can't point you to an example project that does this well (yet). I talk a little about it in my presentation Why Our Code Smells; I'd also suggest watching Corey Haines' presentation on Fast Rails Tests, and I highly recommend reading Growing Object-Oriented Software, Guided by Tests.
Thanks for compiling the quotes from 2007. It is fun to look back.
My current testing approach, which I have been quite happy with, is covered in this RailsCasts episode. In summary, I have two levels of tests.
High level: I use request specs in RSpec, Capybara, and VCR. Tests can be flagged to execute JavaScript as necessary. Mocking is avoided here because the goal is to test the entire stack. Each controller action is tested at least once, maybe a few times.
Low level: This is where all complex logic is tested - primarily models and helpers. I avoid mocking here as well. The tests hit the database or surrounding objects when necessary.
Notice there are no controller or view specs. I feel these are adequately covered in request specs.
Since there is little mocking, how do I keep the tests fast? Here are some tips.
Avoid excessive branching logic in the high level tests. Any complex logic should be moved to the lower level.
When generating records (such as with Factory Girl), use build first and only switch to create when necessary (see the sketch after these tips).
Use Guard with Spork to skip the Rails startup time. The relevant tests are often done within a few seconds after saving the file. Use a :focus tag in RSpec to limit which tests run when working on a specific area. If it's a large test suite, set all_after_pass: false, all_on_start: false in the Guardfile to only run them all when needed.
I use multiple assertions per test. Executing the same setup code for each assertion will greatly increase the test time. RSpec will print out the line that failed so it is easy to locate it.
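For the build-versus-create tip above, the difference is simply whether the record hits the database (a sketch, assuming a :user factory is defined):

# build instantiates the object in memory only - fast, no database hit
user = FactoryGirl.build(:user)
user.should be_valid

# create persists the record - only pay for this when the test really needs
# the row in the database (queries, uniqueness validations, etc.)
user = FactoryGirl.create(:user)
User.exists?(user.id).should be_true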
I find mocking adds brittleness to the tests which is why I avoid it. True, it can be great as an aid for OO design, but in the structure of a Rails app this doesn't feel as effective. Instead I rely heavily on refactoring and let the code itself tell me how the design should go.
This approach works best on small-medium size Rails applications without extensive, complex domain logic.
Great question and great discussion. @ryanb and @bkeepers mention that they only write two types of tests. I take a similar approach, but have a third type of test:
Unit tests: isolated tests, typically, but not always, against plain ruby objects. My unit tests don't involve the DB, 3rd party API calls, or any other external stuff.
Integration tests: these are still focused on testing one class; the difference is that they integrate that class with the external stuff I avoid in my unit tests. My models will often have both unit tests and integration tests, where the unit tests focus on the pure logic that can be tested without involving the DB, and the integration tests involve the DB. In addition, I tend to test 3rd party API wrappers with integration tests, using VCR to keep the tests fast and deterministic, but letting my CI builds make the HTTP requests for real (to catch any API changes).
Acceptance tests: end-to-end tests, for an entire feature. This isn't just about UI testing via Capybara; I do the same in my gems, which may not have an HTML UI at all. In those cases, this exercises whatever the gem does end-to-end. I also tend to use VCR in these tests (if they make external HTTP requests), and as in my integration tests, my CI build is set up to make the HTTP requests for real.
As far as mocking goes, I don't have a "one size fits all" approach. I've definitely overmocked in the past, but I still find it to be a very useful technique, especially when using something like rspec-fire. In general, I mock collaborators playing roles freely (particularly if I own them, and they are service objects) and try to avoid it in most other cases.
Probably the biggest change to my testing over the last year or so has been inspired by DAS: whereas I used to have a spec_helper.rb that loads the entire environment, I now explicitly load just the class under test (and any dependencies). Besides the improved test speed (which does make a huge difference!), it helps me identify when my class under test is pulling in too many dependencies.

Testing "output" with rspec and rails

I'm having a hard time learning how to implement appropriate testing with rspec and rails.
Speaking generally, how does one test the "hard cases" such as did a file get generated or was an image properly resized? Currently, to me it seems that unit testing is designed for testing the transformation of values rather than making sure data is processed and output appropriately.
TL;DR: Explore different tools and don't shy away from the "hard cases" that may not seem like traditional "unit tests". The hard cases often are testable.
Testing is definitely a rabbit hole. From the surface it seems so nice and clean, but as you get into it, the rabbit hole goes quite far down, and this is one example: how do you test these things? You want to be confident that your code is doing the right thing, but you also don't want to create overly complicated, unmanageable tests, and you don't want to test at too fine a grain either.
However, to your specific questions, I do have some ideas you may wish to look in to.
For testing that a file got generated, you can check that the file does not exist initially (Ruby has File.exists?) and then, after calling the method, check that it does. Of course you then have questions like "Does it have the right content?" and "Did it finish writing completely?", and you can test that too by opening the file and checking its contents.
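For example (ReportExporter, its method and the path are made-up names, just to show the shape of such a spec):

it 'generates the export file' do
  path = 'tmp/report.csv'
  File.delete(path) if File.exists?(path)              # start from a known state

  ReportExporter.new.export_to(path)                   # hypothetical method under test

  File.exists?(path).should be_true
  File.read(path).should include('header1,header2')    # check the content, not just existence
end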
For the images, you can probably find facilities that allow you to check the properties of an image (perhaps Paperclip? Never used it, but it's a well known gem). So you can do something like this (in sort-of pseudo-code, because I don't know of a tool that does exactly this):
it "resizes the image" do
img = Image.open_image("pic.png")
img[:size].should eq [100, 100]
img.close
resize_image
image = Image.open_image("pic.png")
imge[:size].should eq [25, 25]
img.close
end
Testing often relies on finding more and more helpful gems/tools to massage situations. "Unit" tests will, yes, only check unit-level code, and they may be very simple, but not always. But then you start looking into library specs, request specs, routing specs, acceptance tests, controller specs, etc. There are a lot of tools out there, and research is key.
But again, the examples you listed may not be unit tests in the way you think of. If your resizing or file generation is being done off of a Model, then yes it is a unit test, but you're no longer writing that simple code (like accessors and mutators). A lot of the time people new to thorough testing won't think that everything is testable, and it may not be, but if you play around and explore, you can often find a way to do so.

How to shift development of an existing MVC3 app to a TDD approach?

I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, in particular tests aimed at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but testing hasn't really been necessary so far with very simple CRUD operations; however, I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already written code? I don't need kindergarten level step by step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can significantly influence the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects and then modify the code to fix it. I say attempt, because not everything is reproducible in a cost effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to manually test (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify the test passes, and repeat. If needed, I'll refactor the code to simplify testing. Pex can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than to have the feature tested manually.

What not to test in Rails?

I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in rails, this code should be fine. It seems like testing this action is not all too different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, cucumber would catch this so long as your features hit any page that called the full_name method. Obviously, we shouldn't be relying on cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple the test will also be simple. So you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me tests facilitate three things:
They guarantee unbroken old functionality. If I can check, by running tests, that nothing new I put in has broken my old things, that's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial kind. If, however, I want to refactor non-trivial code, having tests to ensure that my refactorings have not broken any behavior is a must.
They are the documentation of my work. Non-trivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, having clear and concise unit tests that let you understand what the correct behavior of something is, is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines for achieving that will be controversial, but here is some advice to help you avoid tests that are generally wasteful, or that do more harm than good.
Unit
Don't test private methods directly; only assess their effects indirectly through the public methods that call them.
Don't test internal state.
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. the full copy on some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests: they're called unit tests because you're only testing one class, not its collaborators. With stubs/spies, you test the messages you send them without testing their internal logic.
Treat private nested classes like private methods.
Integration
Don't try to test all the combinations in integration tests. That's what unit tests are for. Just test happy paths or the most common cases.
Don't use Cucumber unless you really practice BDD.
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit, you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but should not substitute for those dependencies' own test suites.
Controller
In controller tests, only test controller logic: redirections, authentication, permissions, HTTP status. Stub the business logic (see the sketch at the end of this answer). Treat filters and similar code like private methods in unit tests, tested only through public controller actions.
Others
Don't write route tests, except when you're writing an API, and then only for endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of those rules for business-critical stuff, or when something actually breaks (no one wants to explain to their boss/users why the same bug happened twice, which is why you should probably at least write regression tests when fixing a bug).
See more details on that post.
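For the controller guideline above, a minimal sketch might look like this (OrdersController, Order and current_user are placeholder names; only controller concerns are asserted, and the business logic is stubbed):

describe OrdersController, '#create' do
  it 'redirects to the order on success' do
    order = mock_model(Order)
    Order.stub(:create_from_cart).and_return(order)   # business logic stubbed out
    post :create, :cart_id => '42'
    response.should redirect_to(order_path(order))
  end

  it 'redirects anonymous users' do
    controller.stub(:current_user).and_return(nil)    # assumes some authentication filter redirects
    post :create, :cart_id => '42'
    response.status.should == 302
  end
end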
More coverage is better for code quality, but it costs more. There's a sliding scale here: if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the example, full_name, why have you placed a space between the names, and ordered by first_name then last_name? Does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words; will that additional space affect things? Maybe you also have an XML feed someone else is parsing? If you're not sure what to test for a simple undocumented function, think about the functionality implied by the method name.
I would think your company's culture is important to consider too. If you're doing more than others, then you're really wasting time. It doesn't help to have a well tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard; unless you are the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assume the main features will get worked out through usage. Considering getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe testing the database column type ranges. That at least tells me the network is OK, a database connection can be made, I have access to write to a table that exists, etc. Pages come and go, so don't consider a page load a substitute for an actual unit test. (A testing-efficiency side note: with automated testing based on file update timestamps (autotest), that test wouldn't run, and you want to know as soon as possible.)
I'd prefer to have better quality tests rather than full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failures, add tests, even if it's simple code.
If you are automating your testing, it doesn't matter how long it takes to run. You benefit every time that test code is run: at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal; good testing should be. It would be misleading to think a single test of a regular expression was accomplishing much. I'd rather have no tests than one, because then my automated coverage report reminds me the regex is not reliably tested.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e. if last_name is nil, and first_name is "John", you get full_name => "John ")
Again, the cost-vs-benefit is ultimately up to you.
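A sketch of what those edge-case examples might look like (constructor-style attributes are assumed; the trailing-space expectation is the current behaviour described above, which the test at least documents):

describe User, '#full_name' do
  it 'joins first and last name with a space' do
    User.new(:first_name => 'John', :last_name => 'Smith').full_name.should == 'John Smith'
  end

  it 'shows what happens when last_name is nil' do
    User.new(:first_name => 'John', :last_name => nil).full_name.should == 'John '
  end
end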
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like that's the equivalent of a simple accessor (it's been a while since I've done Ruby on Rails) - testing that method would be testing language features. If you were changing values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods: if they are wrong, you should be able to detect the problem by visual inspection, or the problem is a language defect.
As far as the other methods go, if you write them, you should probably have a test for them. In Test-Driven Development this is essential, as the tests for a particular method exist before the method does, and you write methods to make the tests pass. If you aren't writing your tests first, then you still get some benefit from having at least a simple test in place should this method ever change.

Are BDD tests acceptance tests?

Do you need something like Fitnesse, if you have BDD tests?
BDD "tests" exist at multiple different levels of granularity, all the way up to the initial project vision. Most people know about the scenarios. A few people remember that BDD started off with the word "should" as a replacement for JUnit's "test" - as a replacement for TDD. The reason I put "tests" in quotes is because BDD isn't really about testing; it's focused on finding the places where there's a lack or mismatch of understanding.
Because of that focus, the conversations are much more important than the BDD tools.
I'm going to say that again. The conversations are much more important than the BDD tools.
Acceptance testing doesn't actually mandate the conversations, and usually works from the assumption that the tests you're writing are the right tests. In BDD we assume that we don't know what we're doing (and probably don't know that we don't know). This is why we use things like "Given, When, Then" - so that we can have conversations around the scenarios and / or unit-level examples. (Those are the two levels most people are familiar with - the equivalent of acceptance tests and unit tests - but it goes up the scale).
We don't call them "acceptance tests" because you can't ask a business person "Please help me with my acceptance test". They'll look at you with a really weird, squinty gaze and then dismiss you as that geek girl. 93% of you don't want that.
Try "I'd like to talk to you about the scenario where..." instead. Or, "Can you give me an example?" Either of these are good. Calling them "Acceptance tests" starts making people think that you're actually doing testing, which would imply that you know what you're doing and just want to make sure you've done it. At that point, conversations tend to focus on how quickly you can get the wrong thing out, instead of about the fact you're getting out the wrong thing.
And you're getting the wrong thing out. Really, honestly, you are. Even if you think you're not, it's only because you don't understand second-order ignorance. You don't know that you don't know, and that's OK, as long as you've found the places where you could know you don't know. (You won't find all of them. Don't let the categorisation paradox keep you up at night.)
The only way to really get it right is to get all the requirements up front, and you know what happens when you try that. That's right. It's Waterfall. Remember the overtime? The weekend work? The seven years in which not one thing you created made it to production? If you want to avoid that, you only have one chance: assume you're wrong, have some conversations about it to be less wrong, then accept that you're still wrong and go for it anyway. Writing tests too early means you have even more chance to be wrong, and now it's harder to change and everyone thinks you're right and the PM is measuring your velocity and now you're committed to being wrong for another 2 weeks. And - worse - you're about to test that you're wrong, too.
Once again. The conversations are much more important than the BDD tools.
Please, please, don't fixate on the tools. The tools are just a mechanism for capturing the conversations and making sure that they get played into the code. Scenarios are not a replacement for conversations, any more than a 3 x 5 index card is a replacement for requirements.
Having said that, if you must start with a tool, put Slim behind Fitnesse so that it can run lovely Given / When / Thens without having to mess with Fit's tables and fixtures. GivWenZen is based on Slim and either of them rocks. FitSharp is the equivalent for those of you in the .NET space. Or just use Cucumber, or SpecFlow, or knock up a little custom DSL* that will do the job fine for years.
Transparency: *I wrote that one. And bits of JBehave. I wish we had called it "Dont-concentrate-on-BDD-tools-Behave". I might be heavily involved in other bits of BDD. Plus Dan North will buy me a pint if I can get this message out, so it's not exactly impartial advice.
Regardless - have the conversations already. It's just people. Go talk.
I don't know if there's such a thing, strictly speaking, as a "BDD test". BDD is a philosophy that suggests how you can best interact and collaborate with stakeholders to complete a complex project. It doesn't directly make any prescriptions for the best way to write tests. In other words, you'll probably still have all the usual kinds of tests (including acceptance tests) under a BDD-philosophy project.
When you hear of "BDD frameworks", the speaker usually means a framework for writing all your usual kinds of tests but with a BDD twist. For example, in RSpec, you still write unit tests; you just add the BDD flavor to them.
While BDD is larger in scope than just tests, there are indeed BDD tests. These tests are unit tests that follow the BDD language (see the sketch after the framework list below).
Given some initial context (the givens),
When an event occurs,
then ensure some outcomes.
There are a few good BDD frameworks available depending on your language of preference.
JBehave for Java
RSpec for Ruby
NBehave for .NET
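In RSpec, for instance, that Given/When/Then shape typically maps onto nested context, before and it blocks - a minimal sketch (the Account class and its methods are invented for illustration):

describe Account do
  context 'given an account with a balance of 100' do    # Given
    before { @account = Account.new(:balance => 100) }

    context 'when 30 is withdrawn' do                     # When
      before { @account.withdraw(30) }

      it 'then has a balance of 70' do                    # Then
        @account.balance.should == 70
      end
    end
  end
end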
I like to draw a distinction between "specs" and "tests."
If I am covering a method called GetAverage(IEnumerable<int> numbers), I am going to write a more or less standard unit test.
If I am covering a method called CalculateSalesTax(decimal amount, State state, Municipality municipality), I am still going to write the unit test, but I'll call it a specification because I'm going to change it up (1) to verify the behaviour of the routine, and (2) because the test itself will document both the routine and its acceptance criteria.
Consider:
[TestFixture]
public class When_told_to_calculate_sales_tax_for_a_given_state_and_municipality  // the name defines the context
{
    private IStateSalesTaxWebService stateService;
    private IMunicipalSurchargesWebService municipalService;
    private decimal result;

    [SetUp]
    public void Context()
    {
        // set up mocks and expected behaviour
        // (the_dependency is assumed to come from a spec base class / auto-mocking helper)
        stateService = the_dependency<IStateSalesTaxWebService>();
        municipalService = the_dependency<IMunicipalSurchargesWebService>();
        stateService.Stub(x => x.GetTaxRate(State.Florida)).Return(0.06m);
        municipalService.Stub(x => x.GetSurchargeRate(Municipality.OrangeCounty)).Return(0.005m);

        // run what's being tested (dependencies injected so the stubs above are used)
        result = new SalesTaxCalculator(stateService, municipalService)
            .CalculateSalesTax(10m, State.Florida, Municipality.OrangeCounty);
    }

    // verify the expected behaviour (these are the specifications)
    [Test]
    public void should_check_the_state_sales_tax_rate()
    {
        stateService.was_told_to(x => x.GetTaxRate(State.Florida));  // extension methods wrap assertions
    }

    [Test]
    public void should_check_the_municipal_surcharge_rate()
    {
        municipalService.was_told_to(x => x.GetSurchargeRate(Municipality.OrangeCounty));
    }

    [Test]
    public void should_return_the_correct_sales_tax_amount()
    {
        result.should_be_equal_to(10.65m);  // 10 + (10 * 0.06) + (10 * 0.005)
    }
}
JBehave (and recently NBehave, which added the same support) works with plain text story files, so while many other frameworks add a "BDD taste" to unit tests, the text-based behaviour specifications/examples created with JBehave are suitable for acceptance tests. And no, you don't need Fitnesse for that.
To get an idea of how it works, I suggest JBehave's two-minute tutorial.
For BDD testing in Flex, you can try GivWenZen-flex: http://bitbucket.org/loomis/givwenzen-flex.
xBehave-style BDD tests, implemented well, are automated user acceptance criteria.
xSpec-style BDD tests are normally unit tests and are unlikely to serve as user acceptance criteria.
