How to run a scenario in a loop with FitNesse

I am using FitNesse for testing our project's API, and I have created scenario tables for each interface, so we can send a request to any interface by calling its scenario. Now we need to execute the scenarios in a loop, based on a random loop count. Are there any fixtures or tables in FitNesse that could help with this?

You can't loop in Slim. The FitNesse wiki format is not a programming language; that has been a deliberate decision ever since the implementation of the Slim format/protocol, and I don't see it changing.
If you want to loop, you should do that inside your fixture code.
Technically, if you are using Fit-style fixtures, there was a mechanism called a Decorator that let you do things to a decision table, such as running it multiple times. The old Fit Decorators are not compatible with Slim, if that's what you are using. Also, as far as I know, no one is maintaining them: http://fitdecorator.sourceforge.net
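For example, here is a minimal Java sketch of the "loop inside your fixture" idea as a Slim decision-table fixture. Everything in it is hypothetical: RepeatedApiCall and its stubbed send() stand in for your own API code, and the random count mirrors the "random loop count" in the question.

import java.util.Random;

public class RepeatedApiCall {
    private String endpoint;
    private int maxRepeats;
    private int actualRuns;

    public void setEndpoint(String endpoint) { this.endpoint = endpoint; }
    public void setMaxRepeats(int maxRepeats) { this.maxRepeats = maxRepeats; }

    // Slim calls this once per table row; the random-count loop lives here,
    // in fixture code, instead of in the wiki page.
    public boolean allCallsSucceeded() {
        actualRuns = 1 + new Random().nextInt(maxRepeats);
        for (int i = 0; i < actualRuns; i++) {
            if (!send(endpoint)) return false;
        }
        return true;
    }

    public int actualRuns() { return actualRuns; }

    // Stub standing in for your real API request.
    private boolean send(String endpoint) {
        return endpoint != null && !endpoint.isEmpty();
    }
}

The wiki page then stays a plain decision table, with no loop in sight:

|repeated api call|
|endpoint|max repeats|all calls succeeded?|actual runs?|
|/users|10|true||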

Related

How to run a Cucumber Background step once for all Scenarios under the same feature?

In Cucumber, is it possible to run a Background step once for the whole feature, so it doesn't get repeated for every scenario?
I am running some tests on a search engine, and I need to pre-seed the search engine with test data. Since this data can take quite a while to generate and process (I'm using Elasticsearch and I need to build the indices), I'd rather run this background only once, for all tests under the same feature.
Is it possible with Cucumber?
Note that I am using MongoDB, so I don't use transactions but truncation, and I believe I have DatabaseCleaner running automatically after each test, which I suppose I'll have to disable (maybe with a tag?)
EDIT: Yes, I'm using Cucumber with Ruby steps for Rails.
EDIT 2: Concrete examples:
I need to test that my search engine always returns relevant results (e.g. searching for "buyers" should also return results with "buyer", "buying", "purchase", etc., which has to do with ES configuration) and that other contextual information gets updated correctly (e.g. in the sidebar I have categories/filters with the number of hits in parentheses, and I must make sure those numbers get refreshed as the user plays with the filters).
For this I pre-seed the search engine with a dozen results and run all the tests that are based on the same inputs. I often have "Example" clauses that each do something slightly different, but are based on the same seeding.
Supposing the search data is a meaningful part of the scenario, something that someone reading the feature should know about, I'd put it in a step rather than hide it in a hook. There is no built-in way of doing what you want to do, so you need to make the step idempotent yourself. The simplest way is to use a global.
In features/step_definitions/search_steps.rb:
$search_data_initialized = false

Given /^there is a foo, a bar and a baz$/ do
  # only seed once per test run
  next if $search_data_initialized
  # initialize the search data here
  $search_data_initialized = true
end
In features/search.feature:
Feature: Search

  Background:
    Given there is a foo, a bar and a baz

  Scenario: User searches for "foo"
    ...
There are a number of approaches for doing this sort of thing:
Make the background task really fast.
Perhaps in your case you could put the search data outside of your application and then symlink it into the app in your background step? This is a preferred approach.
Use a unit test tool.
Consider whether you really get any benefit out of having scenarios to 'test' search. If you don't, use a tool that allows you greater control, because your tests will be written directly in a programming language.
Hack Cucumber to work in a different way.
I'm not going to go into this, because my answer is all about looking at the alternatives.
For your particular example of testing search there is one more possibility:
Don't test at all.
Generally, search engines are other people's code that we use. They have thousands of unit tests and tens of thousands of happy customers, so what value do your additional tests bring?

Using FitNesse to test external data

We would like to use FitNesse to test an externally produced data set. Specifically, the tests would contain invariants that must hold in the data; every time the tests run, they would fetch the data from, let's say, a database and apply the checks to every row in the result set.
The tests would still be organised as wiki pages, but each one, once running, would be repeated for all applicable data rows. Should a particular row fail an assertion, we still want the tests to continue for the other rows, but then receive a summary listing which rows failed each particular assertion.
I understand this is not exactly what FitNesse is for, but we do have the skills in the team to write fixtures and tests, and we like the idea of having non-technical subject matter experts authoring some of the tests.
Is there a way of achieving the above in FitNesse, or is it completely outside of its intended usage? If it is possible, I would appreciate any guidance on how to achieve it; I couldn't find anything insightful in the documentation (or on other websites).
Sounds like the Slim protocol is what you're looking for to write the fixtures.
The Query Table in particular.
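A rough sketch of how that could look, with everything named here being hypothetical rather than a FitNesse built-in. The wiki page declares a query table with no expected rows, so every violating row the fixture returns is reported as a surplus (failing) row, while checking still continues across the whole result set:

|query:invariant violations|orders|
|row id|violated rule|

The matching Java fixture fetches the data and returns only the rows that break an invariant; Slim's query() contract is a list of rows, each row a list of (column name, value) pairs:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InvariantViolations {
    private final String tableName;

    public InvariantViolations(String tableName) {
        this.tableName = tableName;
    }

    public List<List<List<Object>>> query() {
        List<List<List<Object>>> violations = new ArrayList<>();
        // Placeholder: replace with a real query over tableName, applying
        // your invariant checks to every row in the result set.
        for (int rowId : findRowsWithNegativeAmount()) {
            violations.add(Arrays.asList(
                    Arrays.asList((Object) "row id", (Object) String.valueOf(rowId)),
                    Arrays.asList((Object) "violated rule", (Object) "amount must be positive")));
        }
        return violations;
    }

    // Stub standing in for the actual database check.
    private List<Integer> findRowsWithNegativeAmount() {
        return new ArrayList<>();
    }
}

Since a failed assertion just adds a red row rather than aborting the test, you get the "keep going and report a summary" behaviour the question asks for.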

Refactor Ruby on Rails

What is the best way to refactor in Ruby on Rails? I have a project where I would like to refactor several of my objects: adding extra properties and renaming existing properties. How do I do this the easiest way? I created the different objects with scaffolding ("rails generate scaffold foo name:string ...").
Do I start from scratch, or is there some cool Rails command, like "rails refactor foo name:string"?
As far as I am aware, there is no automated way to do this. The key thing, as far as I'm concerned, is to make sure that your test coverage is good in the relevant area (and preferably across your whole app) before you start.
Once that's done, I've tended to apply fairly brute-force tactics for the actual refactor:
Run all tests.
Search for relevant file names and strings within files, making a value judgement as to whether you're changing the right thing (don't just do blind search/replace).
Refactor your tests alongside the code itself.
Run all tests again.
Repeat until it works.
The reason that refactoring isn't very automated in Ruby (in my view at least) is that the language is so dynamic: it's very hard for any automated tool to be sure it has covered all the bases.
Refactoring is based on human perception. Sure, the omission of some redundant things could be figured out by a computer, but otherwise, when you refactor, you're essentially improving your own hand-written code. Unless a computer has the intelligence and ability to discern code like a human, it's not possible.
Perhaps you're referring to something else?
You should learn some Rails best practices and use them to refactor your code.
First get a good knowledge of your application, then proceed step by step.
Use helpers to keep things tidy. Methods should be small, and repetition of code should be minimized.
Use the application-wide helper files, which helps a lot.
You can also move code into private methods to reduce the length of your methods.
Always be sure where the refactoring is done and what its dependencies are.
Thanks

Why should I test my HTMLHelpers?

Is there any tangible value in unit testing your own HTML helpers? Many of these things just spit out a bunch of HTML markup; there's little if any logic. So, do you just compare one big HTML string to another? I mean, some of these things require you to look at the generated markup in a browser to verify it's the output you want.
Seems a little pointless.
Yes.
While there may be little to no logic now, that doesn't mean that there isn't going to be more logic added down the road. When that logic is added, you want to be sure that it doesn't break the existing functionality.
That's one of the reasons that Unit Tests are written.
If you're following Test-Driven Development, you write the test first and then write the code to satisfy the test.
That's another reason.
You also want to make sure you identify and test any possible edge cases with your Helper (like un-escaped HTML literals, un-encoded special characters, etc).
I guess it depends on how many people will be using/modifying it. I typically create a unit test for an html helper if I know a lot of people could get their hands on it, or if the logic is complex. If I'm going to be the only one using it though, I'm not going to waste my time (or my employer's money).
I can understand you not wanting to write the tests though ... it can be rather annoying to write a few lines of html generation code that requires 5X that amount to test.
An HTML helper takes a simple input and exposes a simple output. That makes it a good one for TDD, compared with the time you were going to spend on: build -> start site -> fix that silly issue -> start again -> oops, missed this other tiny thing -> start again... done, happy :). Then dev 2 comes along and makes a small change to "fix" it for something that wasn't working for them; the same cycle goes on, and dev 2 doesn't notice at the time that it broke your other scenarios.
Instead, you very quickly write the very simple test, checking that the simple input gave you the simple output you were expecting, with all the closing tags and quotes you were expecting.
Having written HTML Helpers for sitemap menus, for example, or buttons for a wizard framework, I can assure you that some Helpers have plenty of logic that needs testing to be reliable, especially if intended to be used by others.
So it depends what you do with them really. And only you know the answer to that.
The general answer is that HTML helpers can be arbitrarily complex (or simple), depending on what you are doing. So the no-brainer, as with anything else, is to test when you need to.
Yes, there's value. How much value is to be determined. ;-)
You might start with basic "returns SOMEthing" tests, and not really care WHAT. Basically just quick sanity tests, in case something fundamental breaks. Then as problems crop up, add more details.
Also consider having your tests parse the HTML into DOMs, which are much easier to test against than strings, particularly if you are looking for just some specific bit.
Or... if you have automated tests against the webapp itself, ensure there are tests that look specifically for the output of your helpers.
Yes, it should be tested. Basic rule of thumb: if it is not worth testing, it is not worth writing.
However, you need to be a bit careful here when you write your tests. There is a danger that they can be very "brittle".
If you write your tests such that you expect back a specific string, and you have some helpers that call other helpers, a change in one of the core helpers could cause very many tests to fail.
So it may be better to test that you get back a non-null value, or that specific text is contained somewhere in the return value, rather than testing for an exact string.
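The last two answers point the same way: parse the helper's output and assert on the specific bits you care about instead of one big string. The question is about ASP.NET-style helpers, but the idea is language-agnostic; here is a sketch of it in Java, assuming jsoup is on the classpath and using a hypothetical buildMenuLink helper as the thing under test (run with java -ea so the assertions fire):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class MenuLinkHelperTest {
    // Hypothetical helper under test; stands in for your real HTML helper.
    static String buildMenuLink(String href, String text) {
        return "<a class=\"menu\" href=\"" + href + "\">" + text + "</a>";
    }

    public static void main(String[] args) {
        // Parse the generated markup into a DOM instead of comparing raw strings.
        Document doc = Jsoup.parse(buildMenuLink("/home", "Home"));
        Element link = doc.selectFirst("a.menu");

        // Assert only on the bits that matter; surrounding markup can change
        // without breaking the test.
        assert link != null : "expected an <a class=\"menu\"> element";
        assert "/home".equals(link.attr("href")) : "wrong href";
        assert "Home".equals(link.text()) : "wrong link text";
        System.out.println("all assertions passed");
    }
}

Because the test never pins down the exact markup, a core helper can add an attribute or reorder tags without breaking it, which addresses the brittleness concern above.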

How do you plan your Rails app?

I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question, say you also start with Cucumber, at which point would you write Unit tests? Before satisfying the specifications?
I've got a 6-step process.
1. I prefer to work out the model relationships and uses before doing anything. Generally I try to organise models into units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations).
I usually iterate over my design a few times, because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing (subclassing) models to clarify the design.
2. I write the migrations and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented. It's not always obvious how to implement things until you've given it some moderate thought.
3. Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the back end is sane.
4. This is when I piece together the control flow. What happens on a successful request? An unsuccessful request? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for that I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
5. Then I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out common parts, then write the views.
6. Polish the UI. I create the CSS and start to replace links with remote calls, or even just JavaScript where appropriate.
I may interleave steps 2 and 3: I find it's very easy to write a test just after I write the code to be tested, especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision and propagate those changes through my steps.
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to watch it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following:
Write a mind map, but keep it simple, with the major ideas of the project.
Start writing tests and coding at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.
