Grails Test Suite - Specify test order

I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification name to indicate execution order but that didn't work. I found a solution where a Test Suite was used, and the tests were added to the suite in order, but I can't find how to make a test suite work in Grails.
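The test-suite approach I found looks roughly like the JUnit 4 suite below (just a sketch with a made-up class name; whether Grails' test-app will actually pick such a suite up is exactly what I can't work out):

import org.junit.runner.RunWith
import org.junit.runners.Suite

// hypothetical suite listing the specs in the order they should run
@RunWith(Suite)
@Suite.SuiteClasses([SpecA, SpecB])
class OrderedFunctionalSuite {
}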
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.

To run your tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"]
if (modules.size() > 1) {
    for (j in 1..modules.size() - 1) {
        // make each task wait for the one declared before it
        tasks[modules[j]].mustRunAfter modules[j - 1]
    }
}
Hope that helps. Cheers!
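For context, here is roughly how the whole thing might be wired up, assuming each module gets its own Test task that filters down to a matching spec class (the task names and the *Spec filter pattern are just placeholders):

def modules = ["X", "Y", "Z", "ZZ"]

modules.each { moduleName ->
    task(moduleName, type: Test) {
        // hypothetical filter: each task runs only the spec matching its module
        filter { includeTestsMatching "*${moduleName}Spec" }
    }
}

// chain each task after the one declared before it
(1..<modules.size()).each { j ->
    tasks[modules[j]].mustRunAfter modules[j - 1]
}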

Not really an answer to your question but a piece of general advice - don't do this. Introducing data-setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows, and the global state grows with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to set up the data needed by a test immediately before that test and tear it down afterwards. The Grails Remote Control plugin and test data fixture builders are your friends here.
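For illustration, a rough sketch of that pattern in a Spock spec (the Book domain class is made up, and the Remote Control import path may vary by plugin version):

import grails.plugin.remotecontrol.RemoteControl   // import path may differ by plugin version
import spock.lang.Specification

class BookSearchSpec extends Specification {

    // Remote Control executes closures inside the running application
    def remote = new RemoteControl()

    def setup() {
        // create exactly the data this spec needs, right before it runs
        remote {
            new Book(title: 'The Shining', author: 'Stephen King').save(flush: true)
            null   // the closure's return value must be serializable
        }
    }

    def cleanup() {
        // and tear it down again afterwards
        remote {
            Book.findByTitle('The Shining')?.delete(flush: true)
            null
        }
    }

    def "search finds the seeded book"() {
        expect:
        // Geb page interactions against the seeded data would go here
        true
    }
}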

You should define your initialization code in a single place. If it's shared between both Specs, it may be a good idea to create a superclass with methods you can call from each Spec's setup method, or a whole class devoted to reusable test helper methods.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.
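A minimal sketch of that superclass idea (class and method names are purely illustrative):

import geb.spock.GebReportingSpec

// base class with reusable data-setup helpers; each spec calls what it needs
abstract class BaseFunctionalSpec extends GebReportingSpec {

    protected void createTestUser() {
        // seed the data the spec needs, e.g. via Remote Control or a REST call
    }

    protected void deleteTestUser() {
        // remove it again so specs stay independent of each other
    }
}

class SpecA extends BaseFunctionalSpec {

    def setup() {
        createTestUser()
    }

    def cleanup() {
        deleteTestUser()
    }

    def "works against the freshly created user"() {
        expect:
        true
    }
}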

Related

Rails, Is there a way to generate Unit Tests from Existing Controllers and methods defined in them?

I was wondering if there is a script that can take an existing codebase and generate unit tests for each method in the controllers. By default they would all pass, since they would be empty, and I could remove the tests for methods I don't consider important.
This would save a huge amount of time and increase test coverage, since I'd only have to define what each method should output rather than write the boilerplate.
You really shouldn't be doing this. Creating pointless tests is technical debt that you don't want. Take some time, go through each controller and write a test (or preferably a few) for each method. You'll thank yourself in the long run.
You can then also use test coverage tools to see which bits still need testing.
You can use shared tests to avoid repetition. For example with RSpec, you could add the following to your spec_helper/rails_helper:
def should_be_ok(action)
it "should respond with ok" do
get action.to_sym
expect(response).to be_success
end
end
Then in your controller_spec
describe UserController do
should_be_ok(:index)
should_be_ok(:new)
end

Sharing code between multiple Kiwi test files

I have some Kiwi test helper code that is useful across most of my specs.
What's a nice way of sharing this code across multiple specs (i.e. multiple files)? A category on KiwiSpec might be one option. But that feels a bit off, since I'd be putting code in that category to make things work, rather than because it actually belongs in KiwiSpec.
The 'shared example' feature of Kiwi (since 4.2.0) seems to be better for DRY in a single spec/file, rather than across multiple specs.
The main reason I can't just call some external code from my test is that this external code isn't inside a test case/Kiwi spec, so its assertions either generate compile errors or warnings.
Update
I've tried injecting the assertion functionality needed by the external test helper code as blocks into the helper code. (This has the advantage that my test helper code is not then hard-coded to use any particular test framework.) This was only partially successful: for the test cases where I'm expecting an exception to be raised:
[[theBlock(...) should] raise];
none is raised. I suspect the problem is that I have another block being called inside the main block which has a raise against it.
Update 2
Another possible technique is suggested at https://github.com/kiwi-bdd/Kiwi/issues/138 by user gantaa, whereby we create a self variable pointing to the test suite object outside the context of the test suite.

Jbehave - Run method after a specific scenario

I'm trying to configure JBehave with Gherkin to run a teardown method after a specific scenario. So far I'm aware of the below:
JBehave supports Gherkin, and its story syntax has a Lifecycle 'before' event; unfortunately Gherkin doesn't support a Lifecycle 'after'.
http://jbehave.org/reference/latest/story-syntax.html
JBehave supports the @AfterScenario annotation, which can only be qualified by the outcome of the scenario. It is run after every scenario in a story rather than after a specific scenario.
http://jbehave.org/reference/latest/annotations.html
At the moment I have included a Gherkin step (Then teardown this scenario) at the end of my scenario within my story. This contradicts the point of BDD, which should only describe what the user is doing, not what the test needs to do.
Unfortunately there isn't a way to access the META tags in the after-scenario methods. As a workaround that doesn't require you to duplicate your entire scenario file, you could leave all of the scenarios from your current class that don't require the teardown where they are, and move the one that does need it into its own class that inherits from the first. Then add the after-scenario method to the second class, roughly as in the sketch below.
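Something like this - written in Groovy here for brevity, but the same applies in Java; the step texts and class names are made up:

import org.jbehave.core.annotations.AfterScenario
import org.jbehave.core.annotations.Given
import org.jbehave.core.annotations.Then

// steps class for the scenarios that need no teardown
class AccountSteps {

    @Given('an account exists')
    void anAccountExists() { /* ... */ }

    @Then('the balance is shown')
    void theBalanceIsShown() { /* ... */ }
}

// steps class for the one scenario that needs cleanup; it inherits the steps above
class AccountWithTeardownSteps extends AccountSteps {

    @AfterScenario
    void tearDown() {
        // cleanup that should run only for scenarios mapped to this class
    }
}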

Breaking down your RSpec tests

Some of my RSpec tests have gotten really, really big (2000-5000 lines). I am just wondering if anyone has ever tried breaking these tests down into multiple files that meet the following conditions:
There is a systematic way of naming and placing your tests (e.g. methods A-L go to user_spec1.rb).
You can run a single file that will actually run the tests inside the other files.
You can still run a specific context within a file.
And, as a nice-to-have, RubyMine can run a specific test (and all tests) just fine.
For now, I have been successful in doing
#user_spec.rb
require 'spec_helper'
require File.expand_path("../user_spec1.rb", __FILE__)
include UserSpec
#user_spec1.rb
module UserSpec
  describe User do
    # ...
  end
end
If your specs are getting too big, it's likely that your model is too big as well -- since you used "UserSpec" here, you could say your user class is a "God class". That is, it does too much.
So, I would break this up into much smaller classes, each of which have one single responsibility. Then, test these classes in isolation.
What you may find is that your User class knows how to execute most of the logic in your system - this is an easy trap to fall into, but it can be avoided if you put your logic in classes that take a user as an argument, and if you steadfastly follow the Law of Demeter (your user class may only touch one level below it, but not two).
Further Reading: http://blog.rubybestpractices.com/posts/gregory/055-issue-23-solid-design.html

How to skip certain tests with Test::Unit

In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with test servers just to check that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line at the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer include the omit() method and omit_if():
def test_omission
omit('Reason')
# Not reached here
end
And
def test_omission
omit_if("".empty?)
# Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the Ruby 1.8 standard library version) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
omit('Reason')
# Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
omit 'Reason' do
# Not reached here
end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.
