I'm trying to sift through the myriad of test solutions out there, and I'm not even sure I'm headed in the right direction. The story is: we're running a RESTful Web service, implemented as a Rails app, which backs our mobile clients. We're unit testing the Web service (of course), but that involves mocking out many parts of the application, for instance the search stack (Apache SOLR).
Moreover, our tests don't (i.e. cannot!) cover critical routes such as the mobile sign-in/sign-on process, since that involves communication between the API application and the mobile website, where the user can enter credentials, e.g. for SSO (Janrain Engage). Hence, a standard Rails integration test won't do.
I realize that in theory, if the test suite is very well designed, with mocking happening strictly at those join points where the tests for the next layer start, then unit or functional testing the service API and the mobile website separately could give the same coverage. In practice, though, I find this is an illusion once several developers work on the test suite independently; I'll just admit that our unit tests are simply not that well designed. Particularly when exercising TDD, I found that while the tests can drive the application code, the test code design is tailored only to the unit under test, resulting in a rather wildly grown test suite.
Another thing I found is that sometimes we didn't detect regressions purely using unit tests, where e.g. bad queries were sent to the SOLR server due to knock-on effects. That's why I thought the only true way to ensure that the entire stack works along the critical routes is to automatically end-to-end test it on a staging server before every deployment, i.e. having actual HTTP requests sent to the app.
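To make "actual HTTP requests" concrete, here is a minimal sketch of the kind of plain-Ruby smoke test I have in mind, run against a staging host before deployment (the staging URL, endpoint and JSON shape are placeholders, not our real API):

# smoke_test.rb - a minimal sketch; the staging URL, endpoint and JSON
# shape are placeholders, not the real API.
require "net/http"
require "json"
require "uri"

staging = URI("https://staging.example.com")

response = Net::HTTP.get_response(staging + "/api/v1/search?q=coffee")

abort "Search endpoint broken: HTTP #{response.code}" unless response.code == "200"
abort "Search returned no results" if JSON.parse(response.body).fetch("results", []).empty?

puts "Search smoke test passed"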
My questions would be:
do you think this is a reasonable thing to do at all? I found very little information about end-to-end testing a live API on the Web, leaving me wondering whether I'm making any sense at all
which tools/setup would you suggest? We use Watir to run acceptance tests for our website, but it seems to be overkill for a Web service (no browser environment needed, no JS or anything UI-ish). Something as simple as a Ruby script even?
any general best practices or advice you can give me w.r.t. designing such tests?
This might be of interest to you: http://groups.google.com/group/ruby-capybara/browse_thread/thread/5c27bf866eb7fad3
What you might want to try is combining Cucumber (or similar) with one of the tools mentioned in the link above, so you can do something like
Given I have 2 posts
When I send "DELETE" to post with id 1
Then I should have 1 post
This way you can test the API's full stack and check the result.
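Behind such a scenario, the step definitions can drive the full stack with plain Rack::Test calls. A minimal sketch follows; the Post model, routes and step wording are illustrative, not taken from the thread above:

# features/step_definitions/api_steps.rb - a minimal sketch; model,
# routes and step wording are illustrative only.
require "rack/test"

World(Rack::Test::Methods)

def app
  Rails.application
end

Given(/^I have (\d+) posts$/) do |count|
  count.to_i.times { |i| Post.create!(title: "Post #{i + 1}") }
end

When(/^I send "DELETE" to post with id (\d+)$/) do |id|
  delete "/posts/#{id}"
end

Then(/^I should have (\d+) posts?$/) do |count|
  expect(Post.count).to eq(count.to_i)
end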
We've faced similar problems lately: testing only at the unit level was fast, but not enough to spot integration-related anomalies.
I really like Cucumber's Gherkin syntax; the value in it for me is that we can run the exercise once and then make several expectations afterwards under separate titles, as if they were separate RSpec it blocks.
There is a gem called Turnip, which makes it possible to write RSpec tests with the Gherkin syntax. We have been using it for a while and I'm quite happy with it; we even started having fewer integration-level defects.
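For reference, a Turnip step file looks roughly like this. This is only a sketch under the assumption of a simple Post resource, request-spec style HTTP helpers, and a feature file tagged @posts; none of it is code from our suite:

# spec/steps/post_steps.rb - a sketch of Turnip step definitions; the
# Post model, route and tag name are assumptions.
steps_for :posts do
  step "I have :count posts" do |count|
    count.to_i.times { |i| Post.create!(title: "Post #{i + 1}") }
  end

  step "I send DELETE to the post with id :id" do |id|
    delete "/posts/#{id}"  # assumes Rails request-spec helpers are included
  end

  step "I should have :count post" do |count|
    expect(Post.count).to eq(count.to_i)
  end
end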
I collected my experience in a blog post: https://blog.kalina.tech/2019/10/end-to-end-testing-for-api-only-rails-apps-with-cucumber-syntax.html
Related
Suppose you are building an API in Rails. Is it enough if we write request specs alone, without the model specs, controller specs and view specs? Why do we need unit testing if we do acceptance and functional testing, or feature testing for front-end web projects? I insisted on doing unit testing as it allows you to write decoupled code, but my colleague is against it. What are the best practices on this in the Ruby on Rails community?
If you only have time to do one type of testing, and you are writing an API, then it might make sense to only do feature testing by simply calling your API endpoints. After all, it's pretty important that those endpoints return the expected results.
However, when your feature tests start breaking, you will potentially have a terrible time figuring out the source of the problems without unit tests. Is there a core piece of your software that most of your endpoints are using? Good luck refactoring that without a robust set of unit tests.
But that really speaks to what you have to figure out -- is your core set pretty stable? Are you really just adding features or new endpoints? If so, you can probably get away with a heavy feature test approach.
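For an API, a "feature test" can be as light as an RSpec request spec that exercises the endpoint over HTTP. A minimal sketch, where the /posts endpoint and JSON shape are assumptions:

# spec/requests/posts_spec.rb - a minimal sketch; the endpoint and
# response shape are illustrative assumptions.
require "rails_helper"

RSpec.describe "Posts API", type: :request do
  it "returns existing posts as JSON" do
    Post.create!(title: "Hello")

    get "/posts", headers: { "Accept" => "application/json" }

    expect(response).to have_http_status(:ok)
    titles = JSON.parse(response.body).map { |post| post["title"] }
    expect(titles).to include("Hello")
  end
end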
Rails and testing go together like peas and carrots. Here is a great resource that highlights (better than I ever could) the importance of using tests (of all degrees) in your Rails projects. Hope this helps!
Not sure about "the community", but in my opinion it depends on the complexity of the project. If it's a very straightforward API project, doing only feature tests may be fine.
But if the project becomes larger, unit tests allow you to better pinpoint errors in case anything breaks. I.e. you don't see "there's a bug somewhere in feature X" but "this or that class does not work when invoking a particular method with specific arguments".
I'm really getting frustrated with learning how to properly develop software using TDD. It seems that everyone does it differently and in a different order. At this point, I'd just like to know what all the considerations are. This much is what I've come up with: I should use RSpec and Capybara. With that said, what are all the different types of tests I need to write to have a well-built and well-tested application? I'm looking for a list that comprises the area of my application being tested, the framework needed to test it, and any dependencies.
For example, it seems that people advise starting by unit testing your models, but when I watch tutorials on TDD it seems like they only write integration tests. Am I missing something?
Well, the question "how do you TDD?" is as much out in the open as the question "how do you test properly?". In Ruby, and more specifically in Rails, RSpec is the tool to start with, but not to end with. RSpec allows you to write unit tests for your components, to test them separately. In the Rails context, that means:
test your models
test your controllers
test your views
test your helpers
test your routes
It is a very good tool that is not strictly Rails-bound; it is also used to test other frameworks.
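A model spec, for example, might look like this; a minimal sketch, assuming a User model with an email presence validation:

# spec/models/user_spec.rb - a minimal sketch; the User model and its
# email validation are assumptions.
require "rails_helper"

RSpec.describe User, type: :model do
  it "is invalid without an email" do
    user = User.new(email: nil)

    expect(user).not_to be_valid
    expect(user.errors[:email]).not_to be_empty
  end
end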
After you're done with RSpec, you should jump to Cucumber. Cucumber (http://cukes.info/) is the most used tool (again, in the Rails environment) for writing integration tests. You can then integrate Capybara with Cucumber.
After you're done with Cucumber, you'll have tested your application backend and (part of) its HTML output. That's when you should also test your JavaScript code. How to do that? First, you'll have to unit test it. Jasmine (http://pivotal.github.com/jasmine/) is one of the tools you might use for the job.
Then you'll have to test its integration in your structure. How to do that? You'll come back to cucumber and integrate selenium (http://seleniumhq.org/) with your cucumber framework, and you'll be able to test your integration "live" in the browser, having access to your javascript magic and testing it on the spot.
So, after you're done with these steps, you'll have covered most of the necessary steps to have a well-integrated test environment. Are we done? Not really. You should also set a coverage tool (one available: https://github.com/colszowka/simplecov) to check if your code is being really well tested and no loose ends are left.
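Getting SimpleCov running is only a couple of lines; the sketch below assumes RSpec, and it has to be required before any application code loads:

# spec/spec_helper.rb - must appear at the very top, before the
# application is loaded, so coverage is tracked from the start.
require "simplecov"
SimpleCov.start "rails"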
After you're done with these laborious steps, you should do one last thing, in case you are not developing all alone and the team is big enough that running everything by hand becomes unmanageable: set up a test server that does nothing other than run all the previous steps regularly and deliver notifications about the results.
So, all of this sets a good TDD environment for the interested developer. I only named the most used frameworks in the ruby/rails community for the different types of testing, but that doesn't mean there aren't other frameworks as or more suitable for your job. It still doesn't teach you how to test properly. For that there's more theory involved, and a lot of subdebates.
In case I forgot something, please write it in a comment below.
Besides that, you should think about how you write the tests themselves: namely, are you going for the declarative or the imperative approach?
Start simple and add more tools and techniques as you need them. There are many ways to TDD an app because every app is different. One way is to start with an end-to-end test with RSpec and Capybara (or Cucumber and Capybara) and then add more fine-grained tests as you need them.
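Such an outer-loop test can be a Capybara feature spec; a minimal sketch, where the path, form labels and page content are assumptions:

# spec/features/create_post_spec.rb - a minimal sketch; paths, form
# labels and page content are assumptions.
require "rails_helper"

RSpec.describe "Creating a post", type: :feature do
  it "shows the new post on the index page" do
    visit new_post_path

    fill_in "Title", with: "My first post"
    click_button "Create Post"

    expect(page).to have_content("My first post")
  end
end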
You know you need more fine-grained tests when it takes more than a few minutes to make a Capybara test pass.
Also, if the domain of your application is non-trivial it might be more fruitful for you to start testing the domain first.
It depends! Try different approaches and see what works for you.
End-to-end development of real-world applications with TDD is an underdocumented activity indeed. It's true that you'll mostly find schoolbook examples, katas and theoretical articles out there. However, a few books take a more comprehensive and practical approach to TDD - GOOS for instance (highly recommended), and, to a lesser extent, Beck's Test Driven Development by Example, although they don't address RoR specifically.
The approach described in GOOS starts with writing end-to-end acceptance tests (integration tests, which may amount to RSpec tests in your case), but within that loop you code as many TDD unit tests as you need to design your lower-level objects. When writing those, you can basically start where you want: from the outer layers, the inner layers, or just the parts of your application that are most convenient to you. As long as you mock out any dependency, they'll remain unit tests anyway.
I also had the same question when I started learning Rails. There are so many tools and methods to make testing better, but after spending too much time on them, I finally realized that you can simply forget the rule that you must do this or that: test whatever you think is most likely to have problems first, then move on to the rest. Well, it takes time.
That's just my point of view.
I am looking at the SpecFlow examples, and its MVC sample contains several alternatives for testing:
Acceptance tests based on validating results generated by controllers;
Integration tests using MvcIntegrationTestFramework;
Automated acceptance tests using Selenium;
Manual acceptance tests when tester is prompted to manually validate results.
I must say I am quite impressed with how well the SpecFlow examples are written (and I managed to run them within minutes of downloading; I just had to configure a database and install the Selenium Remote Control server). Looking at the test alternatives, I can see that most of them complement each other rather than being alternatives. I can think of the following combinations of these tests:
Controllers are tested in TDD style rather than using SpecFlow (I believe Given/When/Then-style tests should be applied at a higher, end-to-end level); they should provide good code coverage for the respective components;
MvcIntegrationTestFramework is useful when running integration tests during development sessions, these tests are also part of daily builds;
Although Selenium-based tests are automated, they are slow and are mainly to be run during QA sessions, to quickly validate that there is no broken logic in pages and the site workflow;
Manual acceptance tests when tester is prompted to confirm result validity are mainly to verify page look and feel.
If you use SpecFlow, Cucumber or another BDD acceptance test framework in your Web development, can you please share your practices regarding choosing between the different test types.
Thanks in advance.
It's all behaviour.
Given a particular context, when an event occurs (within a particular scope), then some outcome should happen.
The scope can be a whole application, a part of a system or a single class. Even a function behaves this way, with inputs as context and the output as outcome (you can use BDD for functional languages as well!)
I tend to use Unit frameworks (NUnit, JUnit, RSpec, etc.) at a class or integration level, because the audience is technical. Sometimes I document the Given / When / Then in comments.
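In RSpec, for instance, that can be as simple as comments inside the example. A sketch, where the Account class and its behaviour are invented purely for illustration:

# A sketch only - the Account class and its API are invented to
# illustrate Given / When / Then comments in a unit-level test.
RSpec.describe Account do
  it "refuses withdrawals larger than the balance" do
    # Given an account with a balance of 50
    account = Account.new(balance: 50)

    # When a withdrawal of 100 is attempted
    result = account.withdraw(100)

    # Then the withdrawal is refused and the balance is unchanged
    expect(result).to be(false)
    expect(account.balance).to eq(50)
  end
end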
At a scenario level, I try to find out who actually wants to help read or write the scenarios. Even business stakeholders can read text containing a few dots and brackets, so the main reason for having a natural language framework like MSpec or JBehave is if they want to write scenarios themselves, or show them to people who will really be put off by the dots and brackets.
After that, I look at how the framework will play with the build system, and how we'll give the ability to read or write as appropriate to the interested stakeholders.
Here's an example I wrote to show the kind of thing you can do with scenarios using simple DSLs. This is just written in NUnit.
Here's an example in the same codebase showing Given, When, Then in class-level example comments.
I abstract the steps behind, then I put screens or pages behind those, then in the screens and pages I call whatever automation framework I'm using - which could be Selenium, Watir, WebRat, Microsoft UI Automation, etc.
The example I provided is itself an automation tool, so the scenarios are demonstrating the behaviour of the automation tool through demonstrating the behaviour of a fake gui, just in case that gets confusing. Hope it helps anyway!
Since acceptance tests are a kind of functional test, the general goal is to test your application with them end-to-end. On the other hand, you might need to consider the efficiency (how much effort it takes to implement the test automation), maintainability, performance and reliability of the test automation. It is also important that the test automation fits easily into the development process, so that it supports a kind of "test first" approach (to support outside-in development).
So this is a trade-off that can be different in each situation (that's why we provided the alternatives).
I'm pretty sure that today the most widely fitting option is to test at the controller layer. (Maybe later, as UI and UI automation frameworks evolve, this will change.)
I am working on a Rails app that allows you to create a configuration and then launch a server at EC2 with this configuration.
So far I've been using cucumber for BDD and was very happy with that. However, now I want to pick a configuration and actually launch the server. Due to cost and performance issues I don't want to actually launch a server every time I am running the cucumber features.
Are there any best practices for cases like this? I would like to keep the BDD up, but I also don't want to spend too much time working on an elaborate solution just to have feature descriptions for this. On the other hand, I will have the same problem when I have to write unit tests for this.
When working on a rails app that required twitter integration, I found fakeweb to be extremely helpful. I've used it in conjunction with cucumber successfully.
I found that to support the BDD outside-in development style, I set fakeweb to disallow all web traffic and then added my faked-out web calls one at a time, as my tests failed. It seemed to fit right in with my BDD workflow.
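Concretely, the setup can be as small as this; a sketch where the Twitter URL and response body are placeholders:

# features/support/fakeweb.rb - a sketch; the URL and response body are
# placeholders, not a real call made by the app.
require "fakeweb"

# Fail loudly on any HTTP request that has not been explicitly faked
FakeWeb.allow_net_connect = false

FakeWeb.register_uri(
  :get,
  "https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=example",
  body: '[{"text": "Hello from a faked timeline"}]',
  content_type: "application/json"
)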
I am starting a new project for a client today.
I have done some rails projects before but never bothered writing tests for them.
I'd like to change that starting with this new project.
I am aware there are several testing tools, but am a bit confused as to which I should be using.
I heard of RSpec, Mocha, Webrat, and Cucumber.
Please keep in mind I never really wrote any regular tests, so my knowledge of testing in general is quite limited.
How would you suggest I get started?
Thanks!
Thank you for all the responses! I posted a related question that may interest those who view this question in the future. You can find it here.
I prefer using Rspec and Cucumber in conjunction. I also prefer Capybara over Webrat.
RSpec is wonderful for its Rails integration and structure. It's technically behaviour testing, but in practice it ends up feeling like unit testing. It also allows you to group and define "specs", or RSpec tests, in any structure you like with describe blocks.
Cucumber is higher level. It is what you will translate your user stories into. For instance, you may decide that your users should be able to change their password, so you'll write something like this:
Scenario: Change password
Given I am logged in
And I am on the change password page
When I fill in "p4ssw0rd" for "old_password"
And I fill in "newp4ssw0rd$$" for "new_password"
Then my password should be "newp4ssw0rd$$"
If you are working with a QA dept, you can show your tests to them, and they'll instantly know the status of any given feature, and how the feature is supposed to work. Additionally, if you have enterprising testers, they can write tests themselves, even if they are not programmers in general.
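Behind the scenes, each of those plain-text steps maps onto a step definition. A minimal sketch with Capybara, where the field names, paths and the User model (with has_secure_password-style authenticate) are assumptions:

# features/step_definitions/password_steps.rb - a minimal sketch; field
# names, paths and the User model are assumptions.
Given(/^I am logged in$/) do
  @user = User.create!(email: "me@example.com", password: "p4ssw0rd")
  visit login_path
  fill_in "Email", with: @user.email
  fill_in "Password", with: "p4ssw0rd"
  click_button "Log in"
end

Given(/^I am on the change password page$/) do
  visit edit_password_path
end

When(/^I fill in "([^"]*)" for "([^"]*)"$/) do |value, field|
  fill_in field, with: value
end

Then(/^my password should be "([^"]*)"$/) do |password|
  click_button "Save"  # the scenario has no explicit submit step, so submit here
  expect(@user.reload.authenticate(password)).to be_truthy
end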
They all serve different functions, and if you really care about the quality of your software, you would use them all.
RSpec is more for unit testing. Unit testing is testing the smallest parts, usually your models' methods.
WebRat is more for functional testing, as an end user would test it from a browser. You are actually testing the interaction and checking whether it returns the correct view/data.
Cucumber is more for behaviour-driven testing or end-to-end testing. It tests the business logic across many different scenarios or sets of data.
Of course, you can use them all in conjunction. For example, I can use Cucumber to do end-to-end testing, and it would involve functional testing using WebRat and RSpec as well. Hopefully this gives you an idea of how to start off.
You can have a look at the most popular testing frameworks here http://ruby-toolbox.com/
Cheers