Should I be using random data in Capybara integration tests?

I'm writing a very long integration test for a wizard that has around 15 steps. Each of these steps has around 20 inputs/select boxes.
I started out using static data in my tests, but now I've begun to write things like selecting a random value from a select box or clicking a random radio button for an option. This does seem more capable of catching bugs. For example, one of the radio buttons on the page might not be rendered correctly, so its value never gets saved to the database; that would never have been found using static data that selects the same option every time. Alternatively, I could manually write out every possible option that could be chosen, but that would take an eternity.
I hear that one of the main reasons not to use random data is that you cannot see exactly which data a test used, which can make failing tests hard to reproduce and resolve.
Is this path that I'm going down one to be avoided? Or is testing in this manner something that's generally done?

This is inherently a QA question rather than an automation one. You'll need to ask yourself and your team whether or not testing every single permutation is even worth the time and effort. Usually it is not. In my experience it's best to get information on the most common user journeys in your wizard and branch out from there. I would tackle those first from an automation standpoint and then move onto lower risk paths.
I like to use random data in certain low-risk areas that the devs confirm are relatively inconsequential (for example, a true/false radio button), and you should always make sure you are logging output properly so a failure can be traced back to the data that triggered it.
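For those low-risk areas, one way to keep random choices debuggable is to seed Ruby's RNG from RSpec's --seed (so a run can be replayed) and to log every pick (so a failure shows the data that caused it). A rough sketch; the field name and selector are invented:

# in spec_helper.rb: tie Ruby's RNG to RSpec's seed so a failing run can be replayed
RSpec.configure do |config|
  config.before(:suite) { srand config.seed }
end

# in the wizard step: pick at random, but always log what was picked
options = page.all('#payment_method option').map(&:text).reject(&:empty?)
choice  = options.sample
puts "wizard step 3: chose #{choice.inspect} for payment_method"
select choice, from: 'payment_method'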

Related

Rails form testing: post directly or through form?

Most examples of Rails tests I have seen post directly to a url. I found out recently that Rails supports the manipulation of form elements using 'fill_in' and 'click_on'.
Should I be posting directly or submitting through the forms manually? Filling in the forms manually seems much more thorough, and the reason I ask is, well, all the examples I've seen are posting directly. Obviously there may be a little less work with posting directly, but I'm curious what cases I might be missing. Is there a best practice?
You have different levels of testing, each with its own trade-offs. What you are describing could be called an integration test through the browser. These tests have two main characteristics:
They are very black-box: you perform a series of steps (log in, fill in some elements, click submit) and assert the effects (a message on the screen saying the record has been created, plus some visible proof of that).
They are normally slower and less reliable (false alarms), but they give you a very high degree of certainty that something actually works across multiple systems (the Rails app server, JavaScript in the browser, the database, caching, other services).
That is contrasted with unit testing, which asserts that one little piece of your system works, such as a single method on a class behaving properly. In your example, that could be asserting that your model accepts a given set of attributes and saves. Unit tests let you isolate your dependencies through mocking/stubbing; they are normally faster and more reliable, but they give you less certainty that everything works together as expected.
There are a few variables, but unless it is a very high-value form I will normally just test the model validations and any form objects, calling them directly with the specific arguments that could cause issues.
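To make the two levels concrete, here is a rough sketch of each (the Project model, route, and messages are invented for illustration):

# Browser-level integration test: drives the real form with Capybara.
RSpec.feature 'Creating a project' do
  scenario 'through the form' do
    visit new_project_path
    fill_in 'Name', with: 'Apollo'
    click_on 'Create Project'
    expect(page).to have_content 'Project was successfully created'
  end
end

# Unit test: asserts one small piece in isolation, no browser involved.
RSpec.describe Project do
  it 'is invalid without a name' do
    expect(Project.new(name: '')).not_to be_valid
  end
end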

Can I reuse my integration test suite to profile a Rails app?

Most posts on Rails profiling recommend Ruby-Prof. To use Ruby-Prof I need to write at least one new test for each controller action, then manually compare the results to see what's taking the longest and might be a candidate for optimization.
This is good if I already know exactly what request I'm focusing on. It seems less good if I'm trying to identify the hot spots in the first place. Given that I already have a huge integration test suite covering all the app's functionality that I care about, it seems like what I really want to do is:
1. Run the entire test suite and capture the time spent in each controller action (or model method, or whatever level of granularity I want).
2. Print two lists: the worst-case and average-case times for each controller action.
3. Sort each list and start investigating the longest-running controller actions, now using Ruby-Prof or other profiling tools to drill down into the call stack. The worst-case times will identify request params that might be problematic (i.e., trigger slow code on the backend), without my having to think of them all when I write the performance test.
Is there some reason people don't use the integration test suite in this way, rather than basically duplicating it with a second performance test suite? I have not seen it suggested. Before I write code to do something like this (presumably with a before_action in ApplicationController), is there already a tool for it?
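For concreteness, here is a sketch of what I mean; Rails already publishes per-request timings on the process_action.action_controller notification, so a test-only subscriber could collect them without touching any controller:

# e.g. in an initializer that is only loaded in the test environment
ACTION_TIMES = Hash.new { |h, k| h[k] = [] }

ActiveSupport::Notifications.subscribe('process_action.action_controller') do |_name, start, finish, _id, payload|
  ACTION_TIMES["#{payload[:controller]}##{payload[:action]}"] << (finish - start) * 1000.0
end

at_exit do
  ACTION_TIMES.sort_by { |_, ms| -ms.max }.each do |action, ms|
    printf "%-45s worst %8.1f ms  avg %8.1f ms  (%d requests)\n",
           action, ms.max, ms.sum / ms.size, ms.size
  end
end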
I think that automated tests will not tell you much about performance; you need realistic data. For example, your test data is probably too small for missing indexes to matter, but create 10,000 records without an index and you may well find a performance issue.
"I need to write at least one new test for each controller action"
Why would you performance test each controller action?
In my limited experience, performance testing was done after deploying the app and tested very specific things: a chunk of code that was slow, or code that I thought might be slow.
Also, if you use a hosted performance-monitoring tool it is not necessary to change your code; the tool runs against an instance of the app that has already been deployed.

Dependencies between Features

We're having our first attempt at writing some Gherkin specs for a greenfield application, and I'm not sure how to tackle what appear to be interdependent features.
Essentially, we have a feature CreateADoor that is actually used as part of two other features BuildAHouse and BuildAShed.
The CreateADoor feature is relatively complex in terms of validation etc., which is why we have lifted it out as a separate feature (to avoid duplication). The issue is that the results of the scenarios for this feature depend on the context they were called from (should my newly built door be on a House or a Shed?).
The only way I can really see to solve this is to get rid of CreateADoor and have its scenarios duplicated inside both BuildAHouse and BuildAShed. In this specific situation this would be (just about) bearable, but what about the situation where CreateADoor requires 10 scenarios to spec it out, and is used by 10 different features. Having 10 scenarios explode out into 100 doesn't seem good, but I can't see another option at the minute.
Can anyone suggest a different approach that allows us to avoid this explosion of scenarios?
Ideally you should not create these dependencies. If creating a door is part of building a house, then the BuildAHouse feature should create a door as part of its setup, instead of reusing the CreateADoor feature to test creating a door.
This might look like this:
Given I have created a house door
When I create a house
Then I should be able to live in it
and the logic for creating the door should live in the code behind the Given step. That logic might be very different from what actually happens when you exercise door creation in the CreateADoor tests.
If you can't separate things like this, then one thing I have done in the past is to make the code behind the Given step call the other steps from the CreateADoor feature, so that the code is not duplicated and the existing steps are reused. This is not ideal, but pragmatically it is sometimes necessary.
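In Cucumber terms (the Ruby step definitions and the door factory below are assumptions, not code from the question), the two options look roughly like this:

# Preferred: the Given builds its door directly, without going through the UI.
Given('I have created a house door') do
  @door = FactoryBot.create(:door, building_type: :house)
end

# Pragmatic fallback: reuse the existing CreateADoor steps from inside another step.
Given('I have created a shed door') do
  step 'I am on the new door page'
  step 'I fill in valid door details'
  step 'I submit the door form'
end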

Why should we use coded ui when we have Specflow?

We have been using SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never used Coded UI, but from what I've seen so far it looks cumbersome: I want to specify my acceptance tests up front, before I have a UI, not as the result of some record/playback exercise. Can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI?
I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something I am already doing fine in SpecFlow alone.
I wrote a blog post on how to do this that you might find useful:
http://rburnham.wordpress.com/2011/03/15/bdd-ui-automation-with-specflow-and-coded-ui-tests/
The pros and cons of Coded UI Tests, as I see them: you are testing the application exactly the way the user will use it, which is good for acceptance tests and really good for end-to-end testing, but it also has limitations. UI tests have historically been known to be fragile; for example, when Microsoft rebuilt the VS2010 UI, almost all of their existing UI tests broke, mainly because of the change in technology. Coded UI Tests help limit this through the way they match a control: they use a probability-based match, finding the best candidate from the information they have, such as the control name. For us, Coded UI Tests were the choice because of technology limitations. Our legacy app is VB, and although CUIT does not work great against it (I'm in the process of writing an extension to get better control information), it was still our only option. Also keep in mind that CUIT is new and has its own limitations. You should be prepared to be very structured in how you lay out your project, as maintaining your UIMaps can involve a fair amount of manual work given the current end-to-end behaviour in VS2010; for example, creating a CUIT from an existing action recording always places the test in a UIMap called UIMap.uitest, and there is no way to change that or transfer it to another UIMap, so if you use multiple UIMaps you will need to record your steps first and then use them in your test. However, being .NET, it is still very flexible.
By far the best thing about SpecFlow is its Gherkin syntax, which gives you readability and living documentation. Normally you are testing features or behaviours of your app, which is where the value comes from, and the tests are aimed just below the UI, so there is a little less chance of a test breaking when the UI changes here and there. SpecFlow, to me, is great when your application is under constant change and you want to ensure existing features keep working; it fits well in a Scrum environment, where you can write your scenarios as a description of how a feature should behave. One limitation of SpecFlow is that it is open to interpretation: it can be easy to write a test that is not very reusable and is hard to maintain. I like to use more generic terms to describe my steps, like "Log in as User1" instead of "Go to the login page, enter username and password, click login"; describing steps at that granular level makes them harder to reuse and tightly couples them to the UI. How the login actually works should be up to the code behind, not the SpecFlow feature.
Combining the two, however, seems more beneficial to us than using Coded UI Tests alone. If we decide to completely change the UI, we would at least have the expected behaviours captured in our SpecFlow features in a way anyone can understand. In the end you need to consider how the application will evolve and what type of application it is.

How to get a positive mental attitude towards testing?

I want to write tests for my app, though each time I look at rspec.info, I really don't see a definite path to take towards "doing things right" and testing first. I watched the peepcode videos on rspec more than once, yet it doesn't take. I want to take more pride in my work, and I think that testing will help. How can I break through this mental block?
Find tools that will reward you for testing. For example, make it very easy to run all the tests and get a message like
73 tests passed.
Try random testing because you can test against a lot of values quickly and easily.
See if your language provides a test-coverage analysis tool that gives you the percentage of statement or block coverage. It is very rewarding to drive code coverage from 60% up to 90%, and if you are lucky, you will find bugs along the way.
My key advice is to quantify your progress in testing so that you can see the numbers going up. That will make it a lot more motivating. (Gee, I wonder what other numbers that go up can be found on this site...)
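In Ruby, for example, SimpleCov is one low-friction way to get that percentage; two lines at the very top of spec_helper.rb are usually enough:

require 'simplecov'
SimpleCov.start 'rails'   # or plain SimpleCov.start for a non-Rails project

Each run then prints the overall coverage percentage and writes an HTML report you can drill into, which gives you exactly the kind of number you can watch go up.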
I was hating it until I started creating a few testing macros, like logging in or getting to the home page. I found it fun to start poking at what my testing framework could really do.
It also helped to have someone else get me started by writing a few. Right away I found obvious improvements which made me want to get in there and start improving things.
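A login macro, for example, can be as small as this (the field names and route are hypothetical):

module LoginMacros
  # drive the real login form so feature specs can start from a signed-in state
  def sign_in_as(user, password: 'secret')
    visit login_path
    fill_in 'Email', with: user.email
    fill_in 'Password', with: password
    click_on 'Log in'
  end
end

RSpec.configure do |config|
  config.include LoginMacros, type: :feature
end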
"Test things you don't want to break."
It might be helpful to prioritise at first. I know that typing out the full three layers of model, view, and controller specs on top of the cucumber acceptance tests can be a chore. So one idea is to just test the most critical things in your app, and add tests as you run into bugs you don't want to see again.
"Always start with a failing test."
Cucumber's plain text "stories" are pretty awesome for getting some really concrete tests up and running, so maybe that would be one place to get started. Cucumber doesn't really work with an AJAX-based app on its own, though; for that you'd have to use Selenium or Watir instead. You can start with a failing story before writing a single line of code and quickly proceed from there to make that story pass.
"Don't test, specify."
Instead of thinking of tests, try to make a mental switch: you're not testing but SPECIFYING how your application will behave. This is design work, not nearly as boring as testing. :)
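In that spirit, a spec written before any code exists reads like a specification of the behaviour you want (Cart is an invented example):

RSpec.describe Cart do
  it 'totals the prices of its items' do
    cart = Cart.new
    cart.add(price: 300)
    cart.add(price: 200)
    expect(cart.total).to eq 500
  end
end

It starts out failing, because Cart does not exist yet, and you write just enough code to make it pass.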
Think of it like this: if you don't test, your code is broken.
You need to see the value that testing will bring in refactoring and extending your code. Once you have a set of tests that define the behavior of your classes, you can then feel free to start making changes to improve the code. Your tests will provide the confidence that what you're doing isn't breaking the system. When you go to add new functionality to your code, running your existing tests will give you confidence that the new code you've added doesn't break anything else.
The key is to take that long term view. Testing is an investment. It takes a little bit away from the code you could be writing but eventually it will start paying off with interest. The capital that you have stored up will make it much easier to move ahead more quickly when adding new features.
Assuming you already have a list of bugs to fix, I always like to go back through and, wherever possible, create an automated test that demonstrates the bug. Then fix the bug and watch the test pass. Since you have to test the bug anyway, and the bug report should already give you enough information to recreate it, you see an immediate return on your tests.
Eventually, you'll start to get a feel for putting the tests together and how to write them, and you won't need the "blueprint" of an existing bug.
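For example, a regression spec can be as small as naming the bug in the example description (everything here is hypothetical):

RSpec.describe Invoice do
  it 'keeps the discount when the total is zero (regression)' do
    invoice = Invoice.new(total: 0, discount: 10)
    expect(invoice.discount).to eq 10
  end
end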
I wrote a motivation post about just this case a couple of days ago. Here is the summary:
Start writing tests whenever you have an opportunity to do so (i.e. whenever you write some code). Choose any tool that makes sense to you and write any test that you feel could cover at least some tiny behaviour of your application (don't worry about coverage or any other scary terms from day one). Don't be afraid of primitive tests and trivial assertions: you'll gain confidence as your test coverage grows, and you'll become happier and happier as you notice that you don't need to hit F5 that often anymore. Think about testing in other positive terms: the better you are at it, the less time you need to spend on activities you don't like (watching the spinning refresh icon in the browser, debugging) and more on the things you love.
And here is the whole thing, if you are interested.
As has been mentioned previously, the easiest way to break into testing is with regression testing.
I'd also avoid doing controller specs - they are a PITA. Do heavy model testing, because that's where the logic should be in the first place.
Try spec'ing/testing a plain Ruby project before you go off into a Rails project.
Well, I'll tell you how! Before you try to automate anything, do the following manually, ten times, on different applications: work through the negative scenarios, the ones where the result should come out negative; it could be wrong data entered that still gives you the right outputs.
For example, on a login screen there are many such scenarios: correct user with the wrong password, wrong user with the correct password, and so on. The most important thing is that you don't give up until you break it; that is your mantra.
Once you are thinking like a tester, turn to your own system: write out the negative tests and their expected results, then the positive tests. Design it, and then develop the framework.
