I'm trying to adopt the behaviour-driven development approach, but to use it I need to understand how to think that way.
I'd like to test it on a new personal project I'm starting right now (I'll use RoR)
The project will provide APIs to collect data from external applications. It will include an authentication system (Devise), several models to collect data as needed, and a payment system to purchase subscriptions that unlock some premium-only features.
What kinds of tests should I perform to cover all of this functionality while staying DRY?
I'm thinking of using both RSpec and Cucumber. For Devise I'll follow the documentation on their website, but it's not clear to me what kinds of tests I should write to check that data has been collected correctly and is displayed correctly to the user, or which tools to use for that task. A simple example of how you would organize tests and development for this kind of project would also help. I'm not asking for real test code, because I can see that depends heavily on the implementation, but about the development process and the KINDS of tests you would perform. If you need more details to make a choice, please let me know, and feel free to invent them since this is for educational purposes.
I don’t think there can be any mention of BDD without someone chiming in that it’s all about the communication. So I’ll be that guy: it’s all about the communication! The real value is the exploration of functionality in accessible terms with different stakeholders to establish what the system needs to do transparently. With everybody talking the same language, it’s much easier to communicate goals. When it comes to implementation, developers know what they’re doing, stakeholders know what they’re getting and there shouldn’t be too many surprises (except for the things you missed / captured incorrectly / haven’t realised yet, perhaps).
So, get out there, speak to your stakeholders and work out what the person commissioning the system wants it to do. If this is a solo effort, you're going to need to wear a number of different hats.
Your BDD tests will cover the behaviour of the system - units of desired functionality. You'll still need to do unit tests etc. - no change there.
As product owner, think about what you want the system to do – not how. You likely don’t care how things work, as long as they do work. If you’re a developer, this will likely be the difficult shift in thinking. When I first started looking into BDD, I was convinced that it made sense to go through UI journeys and technical details etc. in scenarios. It doesn’t. That stuff belongs in the step definitions. As developer, you can define all of the how stuff there.
As for keeping it DRY… Write your scenarios exactly how you need to in order to capture the required behaviour. Then you can worry about refactoring and identifying opportunities for step re-use. The use of regular expressions will help here too, to some extent. It's tempting to go super-imperative and have a whole suite of re-usable steps, but you'll likely realise it's all very brittle when a change to a single step ripples through all of your scenarios, not just the one it was supposed to affect. If you're interested in any form of web automation, check out the web automation pyramid.
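To make the declarative style concrete, here is a minimal sketch (the step text, the FactoryBot factory, the sign-in helper and the paths are all invented for illustration): the scenario stays in business language, and all of the "how" lives in the Ruby step definitions.

# features/step_definitions/subscription_steps.rb
# The scenario reads: "Given a user with a premium subscription,
# when they view their dashboard, then they see the premium-only reports."

Given("a user with a premium subscription") do
  # FactoryBot and the :premium trait are assumptions for this sketch.
  @user = FactoryBot.create(:user, :premium)
end

When("they view their dashboard") do
  sign_in(@user)        # hypothetical sign-in helper
  visit dashboard_path  # assumes a dashboard route exists
end

Then("they see the premium-only reports") do
  expect(page).to have_content("Premium reports")
end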
Useful resources (and lots of examples):
http://books.openlibra.com/pdf/cuke4ninja-2011-03-16.pdf < awesome free eBook – fun to read, too.
http://www.slideshare.net/lunivore/behavior-driven-development-11754474 < Liz really knows her stuff!
http://dannorth.net/2011/01/31/whose-domain-is-it-anyway/
https://www.google.co.uk/search?q=declarative+vs+imperative+BDD < go Team Declarative!
I'm not sure this meets your exact requirements, but this is how I do BDD (example is a webapp):
Pretend you're sitting in front of the computer as a user of your application. Write down the steps you need to perform to achieve one of the use cases, for instance:
Navigate to the system's url
Login
Select the function you require
Enter the relevant data to execute the function
Click the button to initiate the function
Wait for the system to respond
Ensure the data on the screen matches the data you expect.
If any of these steps fail to execute, or the data is incorrect, the test has failed.
Once you have this in a test file, you then use Gherkin/Cucumber/WebDriver to implement the code required to execute each of the steps. Each of these step implementations is re-usable, so once you've implemented login in one place, it should work everywhere you call it.
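A hedged sketch of that reuse in Ruby with Cucumber and Capybara (the field labels, fixture password and flash message are assumptions about your app): the login step is implemented once and called from every scenario that needs it.

# features/step_definitions/login_steps.rb
Given("I am logged in as {string}") do |email|
  visit new_user_session_path          # Devise's default sign-in route
  fill_in "Email", with: email
  fill_in "Password", with: "password" # fixture password, an assumption
  click_button "Log in"
  expect(page).to have_content("Signed in successfully")
end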
For testing Devise with Cucumber or RSpec, the how-to guides in the Devise wiki on GitHub are a good starting point.
I am all confused with TDD vs BDD :)
How do TDD and BDD differ on each of the points below?
1. Development: test case first, development follows.
2. REST service (HTTP): don't make real REST calls? If so,
a) do we return only hardcoded JSON using a mock object?
b) how do we handle REST call failures? Should we have test cases for those too?
Especially for item 2, I have googled many articles but couldn't find a sample (code-level) approach to handling REST calls.
BDD and TDD are not comparable to each other, although they are both used in test first development.
BDD is more than just writing tests with an English-like syntax, e.g. Kiwi. BDD (also known as ATDD—Acceptance Test Driven Development) starts with developers, QA, and designers (e.g. business, and interaction designers), working together to develop a shared understanding of the proposed solution. It is common to use examples to illustrate the behavior, also known as Specification by Example.
I have found that a useful way to think of abstraction is distinguishing between what you do (abstract, high-level policy), and how you do it (concrete, low-level details). Every concrete detail exists to fulfill a higher-level policy. When you see something concrete, it is beneficial to identify the policy it is serving.
The specification by example can be used to create high-level acceptance tests, which test what the application does, i.e. its behavior.
Unit tests are used to test how the app implements a solution, i.e. test that the appropriate messages are sent to its collaborators/dependencies at the appropriate time.
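As a minimal illustration (the class and method names are invented), such a unit test asserts that the right message reaches a collaborator, not what the collaborator does with it:

RSpec.describe InviteFriend do
  it "sends the invitation through its notifier" do
    notifier = double("notifier")
    expect(notifier).to receive(:deliver_invitation).with("alice@example.com")

    # InviteFriend is hypothetical; the point is the message expectation.
    InviteFriend.new(notifier: notifier).call("alice@example.com")
  end
end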
The phases of the standard TDD cycle are Red, Green, Refactor. During the green phase, your goal is to get the test passing as quickly as possible, by hook or by crook—it is acceptable to write ugly, unorganized code. Once the test passes, you refactor the code to make it more readable/changeable.
Similarly, with a BDD/ATDD cycle, you have Red, Green, Refactor. During the green phase of BDD, just get the acceptance test to pass. All of the code you write can exist within the test itself. During the refactor phase of BDD, you extract test code into production code. You can use TDD to guide the extraction.
So, for a given BDD acceptance test, you might have multiple TDD tests.
Regarding how to test REST calls, let's go back to the premise of abstraction—distinguishing what we do from how we do it.
Calling a REST service is a concrete action. The policy it satisfies may be to provide a list of model objects.
Let's say the use case you are implementing is to invite a friend to lunch. Part of the use case responsibility is to obtain the list of friends from a server; it doesn't care how the server finds the friends.
Your BDD tests would handle getting the list of friends, picking a friend, and completing the invitation. Your BDD tests would not worry about actually making REST calls.
When you use TDD to implement the class that handles communication with the server, you could have tests that retrieve JSON from a remote data source (i.e. the server), and ensure the JSON is properly parsed into User model objects. You could also have tests to cover the data source responding with an error, etc.
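For example, a sketch of those TDD tests in RSpec (FriendsRepository, DataSourceError and the JSON shape are all invented): one test stubs the data source and checks the parsing, the other stubs a failure.

RSpec.describe FriendsRepository do
  it "parses the data source's JSON into User models" do
    data_source = double("data_source", get: '[{"id": 1, "name": "Alice"}]')

    users = FriendsRepository.new(data_source).friends

    expect(users.map(&:name)).to eq(["Alice"])
  end

  it "propagates an error when the data source fails" do
    data_source = double("data_source")
    allow(data_source).to receive(:get).and_raise(DataSourceError)

    expect { FriendsRepository.new(data_source).friends }
      .to raise_error(DataSourceError)
  end
end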
At the point you actually make a REST call, in the implementation of a remote data source that uses REST to communicate with the backend server, I would classify that as an integration test, as you are testing the integration with a component you don't control, i.e. the actual backend server. The integration tests only need to confirm that the server returns JSON data in the format your app expects, or that errors are returned when appropriate.
BDD is actually derived from TDD, so it's not surprising there's a little confusion! BDD is exactly like TDD (or ATDD if you're doing it for a whole system), but without the word "Test". It turns out that can be pretty powerful.
Particularly, it lets developers have conversations with non-technical business people about what the system should do. You can also use it to have conversations about what a class should do, or a module of code should do, even with a technical expert.
So in the example of your REST service, you can imagine that I'm a dev and you're an expert who knows what the REST service should do.
Me: What should it do?
You: It should let me read a record.
Me: Great! Can you give me an example of a record?
You: I have one here...
Me: Is there any context in which someone shouldn't be able to read the record?
You: Sure, if they don't have permissions.
...
Me: Okay, so I've done Read, let's do Update. Can you give me an example of a typical update?
You: Here you go.
Me: Fantastic, and you want it to respond just with success or fail. Is there any scenario in which it should fail?
You: Sure. The record shows when it was last updated. If someone else has already updated it in the meantime, yours should fail when you submit it.
So you see you can use BDD to explore all kinds of scenarios, including those around a REST service. The trick is to ask, "Can you give me an example?" Then you get a concrete example, which you can then automate if you want to. The conversations help us look for other examples and scenarios which we might have missed.
Don't use BDD tools to automate for a technical audience! BDD tools like Cucumber, JBehave etc. work with real English that's a lot harder to refactor than code. Use JUnit, NUnit etc. if you're just doing something like a REST service. You can put "Given, When, Then" in comments, or make a little DSL.
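In Ruby, for instance, that might look like the following plain RSpec test (every name here is invented), with the Given/When/Then structure kept in comments:

RSpec.describe RecordsApi do
  it "refuses to read a record without permission" do
    # Given a record and a user with no read permission
    record = Record.create!(title: "Quarterly report")
    user   = User.create!(permissions: [])

    # When the user tries to read the record
    response = RecordsApi.new.read(record.id, as: user)

    # Then the call fails with a permissions error
    expect(response.status).to eq(403)
  end
end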
So now you can see that with your REST call failure, if I were coding it, I'd have an example like:
Me: So, this call failure... can you give me an example?
You: Sure, if you access a record that's been deleted it's going to fail.
Me: Give me a typical example of a record that might get deleted?
You: The one we're using before is good.
Me: Okay, is there a situation in which we shouldn't delete a record?
You: Yes, if it's already been published...
Etc.
You can see that throughout, I'm not really using the word "test". Tests are a nice by-product in BDD, which is used more for exploration and specification of requirements. The conversations in BDD are the most important part of it.
The reason it's tricky to find examples of using BDD for REST is first because REST is deliberately simple and doesn't often have a lot of behaviour, and second because BDD's scenarios aren't generally phrased in terms of their implementation, focusing instead on the value of what the service or system provides ("read a record").
TDD and ATDD are exactly the same, if they're done well. It's just easier to have conversations about examples and scenarios than it is to have them about tests.
I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, especially tests targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but they haven't really been necessary so far, with very simple CRUD operations, but I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already written code? I don't need kindergarten level step by step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can have a significant influence on the unit-testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects and then modify the code to fix it. I say attempt, because not everything is reproducible in a cost effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to manually test (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify test passes and repeat. If needed, I'll refactor the code to simplify testing. PEX can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
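Here is that refactoring sketched in Ruby/RSpec to match the rest of this page (in .NET the same shape is an interface plus a mocking framework such as Moq); all of the names are invented:

# The controller depends on an injected repository instead of talking to
# the database directly, so a test can substitute a double.
class OrdersController
  def initialize(orders_repository)
    @orders = orders_repository
  end

  def overdue
    @orders.overdue_since(30)  # the business condition under test
  end
end

RSpec.describe OrdersController do
  it "asks the repository for orders overdue by 30 days" do
    repo = double("orders_repository")
    expect(repo).to receive(:overdue_since).with(30).and_return([])

    OrdersController.new(repo).overdue
  end
end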
But be sensible about what you write tests for. The value of TDD diminishes at some point, and some tests may cost more to write than simply having someone test the feature manually.
I know there is a lot of talk about BPM these days and I am conscious that some may see it to be a craze rather than a fundamentally important piece of software.
As someone from what most would call 'The Business', I have been doing my best to learn about BPM to ensure we continue to make decisions that not only make sense to the business, but IT as well.
I have noticed while reading that mention is made to application workflow when sometimes discussing BPM. I hadn't given this much thought until recently.
Therefore, what is the difference? When would you use one and not the other?
BPM is about the process and improving it, taking into account users and potentially more than one application; e.g. an ERP system may comprise more than one application, though there may be other uses of the term. Note that the process can be viewed independently of which applications or technologies are used.
Application workflow is how an application is used to get from A to B. Here it is a specific set of code that is used, and what happens over the course of the application getting from A to B. In this case, the application is front and center rather than the process.
Does that provide an answer? Another way to think of it is that multiple application workflows can make up a system which is used in a process that can have BPM applied to it.
Late to the game, but workflow is to database as BPMS is to DBMS. (Convenient how the letters line up, huh?)
IOW, BPM(S) is traditionally meant to refer to a particular framework/application that allows you to manage business processes: defining them, storing them, versioning them, measuring them, etc. This is similar to how a DBMS manages databases.
Now, a workflow is a definition, much like a database is a definition. In the former case, it is a definition of operations/work (Fulfill Order), the steps thereof (Send Invoice) and rules/constraints on the work (If no stock, send notice). In the latter, similar case, it is a definition of data structure (CREATE TABLE) and constraints (InvoiceTotal must be > $0.00).
I think this is a potentially confusing subject, particularly as some development environments use a type of process-flow model to generate user-facing applications (I'm thinking of OutSystems here, for example).
But, for me, the distinction is crystal clear. Application workflow, as people talk about it, refers to a user's path through an application, i.e. the pages they complete/visit, the data they enter, etc. on their way to completing a transaction of some sort. Application workflow is a poor term for this though; I think application flow would be more meaningful.
BPM, on the other hand, is about modelling and executing a workflow process. By workflow, in this context, I mean a series of discrete steps (or tasks) that have to be completed (either programmatically or via human interaction) in a certain order to complete a process. These tasks can be implemented as individual application modules (each with their own "application workflow", see above). The job of the workflow engine is to make sure that these separate steps are assigned to the right people (or groups of people) in the right sequence, and that overall the process completes in an orderly way.
I don't think there's a clear answer to this at all. These are words, as opposed to theoretical concepts. If you add the word "checklist" into the mix - that just turns out to be a linear version of a process (but you can have conditionals in checklists - making them a workflow).
I am not sure how to help in reframing this question, but it's almost as if no answer can ever be possible. My own thoughts are at https://tallyfy.com/improving-efficiency-workflow-vs-business-process-management/
What's the point of using Fit/FitNesse instead of xUnit-style integration tests? It has really strange and very unclear syntax in my opinion.
Is it really only to make product owners write tests? They won't! It's too complicated for them. So why should anyone use Fit/FitNesse?
Update: So is it suitable only for business-rules tests?
The whole point is to work with non-programmers, often even completely non-technical people like prospective users of a business application, on what the application should do, and then put that into tests. While making the tests work is certainly too complicated for them, they should be able to discuss tables of sample data filled out in e.g. Word. And the great thing is, unlike a traditional specification, those documents live with your application, because the automated tests force you to keep them up to date.
See Introduction To Fit and Fit Workflow by James Shore and follow links to the rest of documentation if you want.
Update: Depends on what you mean by business rules? ;-) Some people would understand it very narrowly (as in business rules engines etc.), others very broadly.
As I see it, Fit is a tool that allows you to write down business (as in domain) use cases with rich, realistic examples in a document which the end users or domain experts (in some domains) can understand, verify and discuss. At the same time, these examples are in machine-readable form, so they can be used to drive automated testing. You neither write the document entirely by yourself, nor require them to do it. Instead it's a product of collaboration and discussion that reflects a growing understanding of what the application is going to do, on both sides. Examples get richer as you progress and more corner cases are resolved.
What the application will do, not how, is important. It's a form of functional spec. As such it's rather broad, and not really organized by modules but rather by usage scenarios.
The tests that come out of the examples will test the external behavior of the application in aspects important from a business point of view. Yes, you might call those business rules. But let's look at Diego Jancic's example of credit scoring, with a little twist: what if part of the Fit document is 1) listing attributes and their scores, and then 2) providing client data and checking the results? Then which are the actual business rules: the scoring table (attributes and their scores), or the application logic computing the score for each client (based on the scoring table)? And which are tested?
Fit/FitNesse tests seem more suitable for acceptance testing. Other tests (when you don't care about cooperation with clients, users, domain experts, etc., and just want to automate testing) will probably be easier to write and maintain in more traditional ways. xUnit is nice for unit testing and API tests. Each web framework should have some tool for web app/service testing integrated into its modify-build-test-deploy cycle; e.g. Django has its little test client. You have lots to choose from.
And you can always write your own tool (or, preferably, tweak an existing one) to better fit (pun intended) some testing in your particular domain of interest.
One more general thought. It's often (not always!!!) better to encode your tests, "business rules" and just about anything, in some form of well defined data that is interpreted by some simple, generic piece of code. Then it's easy to use the data in some other way: generate documentation, migrate to new testing framework, port application to new environment/programming language, use to check conformance with some external rules or other system (just use your imagination). It's much harder to pull such information out from code, eg. simple hardcoded unit tests or business rules.
Fit stores test cases as data. In very specific format because of how it's intended to be used, but still. Your domain specific tests may use different formats like simple CSV, JSON or YAML.
The idea is that you (the programmer) define an easy-to-understand format, such as an Excel sheet. Then the product owner enters information that is hard to understand for people who are not in the business... and you just validate that your code works as the PO expects by running Fit.
The xUnit style works as input for programmers when the information is simple and easy to understand.
If you need to enter a lot of odd examples with multiple fields into your xUnit tests, they will become hard to read.
Imagine a case where you have to decide whether to give a loan to a customer, based on age, married/single, number of children, wage, activity, ...
As a programmer, you cannot write that information; and a risk manager cannot write an xUnit test.
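The risk manager's table can still drive automated tests as plain data, in the spirit of the earlier answer about encoding tests as data. A sketch in Ruby (the LoanPolicy class, file name and YAML layout are all invented):

require "yaml"

# loan_examples.yml, maintained by the risk manager, might look like:
#   - { age: 25, married: true, children: 2, wage: 30000, approved: false }
#   - { age: 40, married: true, children: 1, wage: 80000, approved: true }
RSpec.describe "loan approval examples" do
  YAML.load_file("loan_examples.yml").each do |row|
    it "decides correctly for #{row.inspect}" do
      # LoanPolicy is hypothetical; each YAML row is one worked example.
      expect(LoanPolicy.approve?(row)).to eq(row["approved"])
    end
  end
end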
It helps reduce redundancy in regression and bug testing, and builds a manageable repository of test cases. It's like build once and use forever.
It is also very useful for cooperation between the QA and dev teams: QA can show a developer the result of a failed test, and the developer can easily help resolve an environment issue and understand the steps to reproduce a bug.
It is suitable for UI and even for API testing.
When you start working on an existing Rails project what are the steps that you take to understand the code? Where do you start? What do you use to get a high level view before drilling down into the controllers, models, helpers and views? Do you have any specific techniques, tricks or tools that help speed up the process?
Please don't respond with "Learn Rails & Ruby" (like one of the responses the last guy who asked this got - he also didn't get much response to his question so I thought I would ask again and prompt a bit more). I'm pretty comfortable with my own code. It's sorting other people's that does my head in and takes me a long time to grok.
Look at the models. If the app was written well, this should give you a picture of its domain model, which is where the interesting logic should live. I also look at the tests for the models.
The way that the controllers/views were implemented should be apparent just by using the Rails app and observing the URLs.
Unfortunately, there are many occasions where too much logic lives in controllers and even views. That means you'll have to take a look into those directories too. Doubly unfortunately, tests for these layers tend to be much less clear.
First I use the app, noting the interesting controller and action names.
Then I start reading the code for these controllers, and for the relevant models when necessary. Views are usually less important.
Unlike a lot of the people so far, I actually don't think tests are the place to start. I think they're too narrow, too focused. It'd be like trying to understand basic physics/mechanics by first zooming into intra-molecular forces and quantum mechanics. I also think you're relying too much on well-written tests, and in my experience, a lot of people don't write sufficient tests or write poor tests (which don't give an accurate sense of what the code should actually do).
1) I think the first thing to do is to understand what the hell the app actually does. Use it, at least long enough to build an idea of what its main purpose is and what the different types of data might be and which actions you can perform, and most importantly, why.
2) You need to step back and see the big picture. I think the best way to do that is by starting with schema.rb. This tells you a few really important things:
What is the vocabulary/concepts of this project. What does "User" actually mean in this app? Why does the app have both "User" and "Account" models and how are they different/related?
You could learn what models there are by looking in app/models but this will actually tell you what data each model holds.
Thanks to *_id fields, you'll learn the associations between the models, which helps you understand how it all fits together.
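For example, a few invented lines of schema.rb already hint at the domain and its associations:

# db/schema.rb (hypothetical excerpt)
create_table "accounts" do |t|
  t.string "plan_name"
end

create_table "users" do |t|
  t.integer "account_id"  # => User belongs_to :account
  t.string  "email"
end

create_table "courses" do |t|
  t.integer "user_id"     # => Course belongs_to :user (the author?)
  t.string  "title"
end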
I'd follow this up by looking at each model's *.rb file for (hopefully) comments, validations, associations, and any additional logic relevant to each. Keep an eye out for regular ol' Ruby classes that might live in lib/.
3) I, personally, would then take a brief glance at routes.rb as it will tell you two key things: a brief survey of all of the actions in the app, and, if the routes and controllers/actions are well named and thought out, a quick sense of where different functionality might live.
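A glance at a routes file can be that survey in miniature; a hypothetical excerpt:

# config/routes.rb (invented example)
Rails.application.routes.draw do
  devise_for :users                 # authentication handled by Devise
  resources :courses do
    resources :lessons, only: [:index, :show]
  end
  namespace :api do
    resources :measurements, only: [:create]  # external apps post data here
  end
end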
At this point you're probably ready to dig into a specific thing you need to learn. Find the controller for the feature you're most interested in and crack it open. Start reading through the relevant actions, see which models are involved, and now maybe start cracking open tests if you want to.
Don't forget to use the rest of your tools: Ruby/Rails debuggers, browser dev tools, logs, etc.
I would say take a look at the tests (or specs if the project uses RSpec) to get an idea at the high-level of what the application is supposed to do. Once you understand from the top level how the models/views/controllers are expected to behave, you can drill into the implementations.
If the Rails project is in a somewhat stable state, then I have always been a big fan of using the debugger to help navigate the code base. I'll fire up the browser and begin interacting with the app, then target some piece of functionality and set a breakpoint at the beginning of the associated function. With that in place, I just study the parameters going into the function and the value being returned to get a better understanding of what's going on. Once you get comfortable, you can modify the functionality a little bit to ensure you understand what's going on. Just performing static analysis on the code can be cumbersome! Good luck!
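With byebug (or the older debugger gem) that can be as simple as dropping a breakpoint into the action you are exploring; a sketch:

class CoursesController < ApplicationController
  def show
    debugger  # execution pauses here: inspect params, step, continue
    @course = Course.find(params[:id])
  end
end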
I can think of two reasons to be looking at an existing app with which I have no previous involvement: I need to make a change or I want to understand one or more aspects because I'm considering using them as input to changes I'm considering making to another app. I include reading-for-education/enlightenment in that second case.
A real benefit of the MVC pattern in particular, and many web apps in general is that they are fairly easily split into request/response pairs, which can to some extent be comprehended in isolation. So you can start with a single interaction and grow your understanding out from that.
When needing to modify or extend existing code, I should have a good idea of what the first change will be - if not then I probably shouldn't be fooling with the code yet! In a Rails app, the change is most likely to involve view, model or a combination of both and I should be able to identify the relevant items fairly quickly. If there are tests, I check that they run, then attempt to write a test that exposes the missing functionality and away we go. If there are no tests then it's a bit trickier - I'm going to worry that I might inadvertently break something: I'd consider adding tests to give myself more confidence, which will in turn start to build some understanding of the area under study. I should fairly quickly be able to get into a red-green-refactor loop, picking up speed as I learn my way around.
Run the tests. :-)
If you're lucky it'll have been built on RSpec, and that'll describe the behavior regardless of the implementation.
I run rake test in a terminal
If the environment does not load, I take a look at the stack trace to figure out what's going on, and then I fix it so that the environment loads and run the tests again
I boot the server and open the app in a browser. Clicking around.
Start working with the tasks at hand.
If the code rocks, I'm happy. If the code sucks, I hurt it for fun and profit.
Aside from the already posted tips of running specs, and decomposing the MVC, I also like:
rake routes
as another way to get a high-level view of all the routes into the app
./script/console
The Rails IRB console is still my favorite way to inspect models and model methods. Grab a few records and work with them in IRB. I know it helps me during development and testing.
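A typical exploratory session (model names invented) might look like this:

# ./script/console (or `rails console` on newer Rails)
>> user = User.first
>> user.attributes.keys          # what data does a User carry?
>> user.courses.count            # exercise an association
>> User.instance_methods(false)  # methods the model itself defines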
Look at the documentation; some projects have pretty good documentation.
It's a little bit hard to understand others' code, but try it... Read the code ;-)