I have some Kiwi test helper code that is useful across most of my specs.
What's a nice way of sharing this code across multiple specs (i.e. multiple files)? A category on KiwiSpec might be one option. But that feels a bit off, since I'd be putting code in that category to make things work, rather than because it actually belongs in KiwiSpec.
The 'shared example' feature of Kiwi (since 4.2.0) seems better suited to keeping a single spec/file DRY than to sharing code across multiple specs.
The main reason I can't just call some external code from my test is that this external code isn't inside a test case/Kiwi spec, so its assertions either generate compile errors or warnings.
Update
I've tried injecting the assertion functionality needed by the external test helper code as blocks into the helper code. (This has the advantage that my test helper code is not then hard-coded to use any particular test framework.) This was only partially successful: for the test cases where I'm expecting an exception to be raised:
[[theBlock(...) should] raise];
none is raised. I suspect the problem is that another block is being called inside the main block that the raise expectation is set against.
Update 2
Another possible technique is suggested at https://github.com/kiwi-bdd/Kiwi/issues/138 by user gantaa, whereby we create a self variable pointing to the test suite object outside the context of the test suite.
Related
I'm fairly new to using RSpec, so there's a lot I still don't know. I'm currently working on speccing out a section of functionality that is supposed to run a script when a button is pressed. The script is currently called in a controller, and I don't know if there's a good way to test that.
I'm currently using
expect_any_instance_of(ConfigurationsController)
  .to receive(:system)
  .with('sh bin/resque/kill_resque_workers')
  .and_return(true)
in a feature spec and it works, but RuboCop is complaining about using expect_any_instance_of, and I've been told to only use that method if there is no better way.
Is there any better way to test this? Like is there a way to get the instance of the controller being used, or a better kind of test for this?
A better pattern would be to not inline the system call in your controller in the first place. Instead, create a separate object that knows how to kill your worker processes and call that from your controller. The service object pattern is often used for this. It makes it much easier to stub/spy/mock the dependency and make sure it stops at your application boundary.
It also lets you test the object in isolation. Testing plain old ruby objects is really easy. Testing controllers is not.
module WorkerHandler
  def self.kill_all
    system 'sh bin/resque/kill_resque_workers'
  end
end

# in your test
expect(WorkerHandler).to receive(:kill_all)
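In the controller, the action then just delegates to that object, along these lines (the action name here is illustrative, not from the question):

# in your controller
def kill_workers
  WorkerHandler.kill_all
  head :ok
end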
If your service object method runs on instances of a class, you can use stub_const to stub out the class so that its new method returns mocks/spies.
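For example, a minimal sketch of that idea, assuming a hypothetical instance-based WorkerKiller service (the class and its call method are invented for illustration):

# A hypothetical instance-based service object.
class WorkerKiller
  def call
    system 'sh bin/resque/kill_resque_workers'
  end
end

# In the spec: swap the constant so WorkerKiller.new returns a verifying double.
killer = instance_double(WorkerKiller, call: true)
stub_const('WorkerKiller', class_double(WorkerKiller, new: killer))

expect(killer).to receive(:call)
# ...then exercise the code that calls WorkerKiller.new.call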
Another, more novel solution is dependency injection via Rack middleware. You just write a piece of middleware that injects your object into env. env is the state that's passed all the way down the middleware stack to your application. This is how Warden, for example, works. You can pass env along in your spec when you make the HTTP calls to your controller, or use before { session.env('foo.bar', baz) }.
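A rough sketch of that middleware idea (the WorkerHandlerInjector name and the 'worker_handler' env key are made up for illustration):

# Rack middleware that injects a collaborator into the env hash.
class WorkerHandlerInjector
  def initialize(app, worker_handler)
    @app = app
    @worker_handler = worker_handler
  end

  def call(env)
    env['worker_handler'] ||= @worker_handler
    @app.call(env)
  end
end

# Registered in config/application.rb (or an initializer):
# config.middleware.use WorkerHandlerInjector, WorkerHandler
#
# The controller then reads it back out of the request env:
# request.env['worker_handler'].kill_all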
I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification name to indicate execution order but that didn't work. I found a solution where a Test Suite was used, and the tests were added to the suite in order, but I can't find how to make a test suite work in Grails.
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.
To run tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"]
if (modules.size() > 1) {
    for (j in 1..modules.size() - 1) {
        // each task must run after the one before it in the list
        tasks[modules[j]].mustRunAfter tasks[modules[j - 1]]
    }
}
Hope that helps. Cheers!
Not really an answer to your question but a general piece of advice - don't do this. Introducing data setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows and the global state grows with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to setup the data needed by a test immediately before that test and tear it down afterwards. Grails Remote Control plugin and test data fixture builders are your friends here.
You should define your initialization code in a single place. If it's shared between both Specs, it may be a good idea to create a superclass with methods you can call from each Spec's setup methods, or a whole class devoted to declaring test helper methods to reuse.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.
I'm trying to configure JBehave with Gherkin to run a teardown method after a specific scenario. So far I'm aware of the following:
JBehave supports Gherkin, which has syntax for the Lifecycle before event; unfortunately, Gherkin doesn't support the Lifecycle after event.
http://jbehave.org/reference/latest/story-syntax.html
JBehave supports the @AfterScenario annotation, which can only be qualified by the outcome of the scenario. It runs after every scenario in a story rather than only after a specific scenario.
http://jbehave.org/reference/latest/annotations.html
At the moment I have included a Gherkin step (#Then teardown this scenario) at the end of my scenario within my story. This contradicts the point of BDD, which should only describe what the user is doing, not what the test needs to do.
Unfortunately there isn't a way to access the META tags in the after-scenario methods. As a workaround that doesn't require you to duplicate your entire scenario file, could you leave all of the scenarios from your current class that don't require the teardown where they are, and move the one that does need the teardown into its own class which inherits from the first class? Then add the after-scenario method to the second class.
Some of my RSpec tests have gotten really, really big (2000-5000 lines). I am just wondering if anyone has ever tried breaking these tests down into multiple files that meet the following conditions:
There is a systematic way of naming and placing your tests (e.g. methods A-L go to user_spec1.rb).
You can run a single file that will actually run the other tests inside other files.
You can still run a specific context within a file.
And, as a nice-to-have, RubyMine can run a specific test (and all tests) just fine.
For now, I have been successful in doing this:
# user_spec.rb
require 'spec_helper'
require File.expand_path("../user_spec1.rb", __FILE__)
include UserSpec

# user_spec1.rb
module UserSpec
  describe User do
    # ...
  end
end
If your specs are getting too big, it's likely that your model is too big as well -- since you used "UserSpec" here, you could say your user class is a "God class". That is, it does too much.
So, I would break this up into much smaller classes, each of which has a single responsibility. Then, test these classes in isolation.
What you may find is that your User class knows how to execute most logic in your system -- this is an easy trap to fall into, but it can be avoided if you put your logic in a class that takes a user as an argument. It also helps if you steadfastly follow the Law of Demeter (where your User class can only touch one level below it, but not two).
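As an illustration of that idea (the class and attribute names below are invented, not taken from the question), a small object that takes a user and owns exactly one piece of logic is trivial to spec on its own:

class PasswordExpiryPolicy
  MAX_AGE = 90 * 24 * 60 * 60 # 90 days in seconds

  def initialize(user)
    @user = user
  end

  def expired?
    Time.now - @user.password_changed_at > MAX_AGE
  end
end

# spec/password_expiry_policy_spec.rb
describe PasswordExpiryPolicy do
  it 'is expired when the password is older than 90 days' do
    user = double(password_changed_at: Time.now - (91 * 24 * 60 * 60))
    expect(PasswordExpiryPolicy.new(user)).to be_expired
  end
end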
Further Reading: http://blog.rubybestpractices.com/posts/gregory/055-issue-23-solid-design.html
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable 'BACKEND_TEST' and a conditional statement which checks whether the variable is set for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end

And

def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
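To tie this back to the BACKEND_TEST environment variable from the question, a sketch like the following (assuming omit_if behaves the same way when called from setup) skips every test in a file without touching each individual test:

require 'test/unit'

class BackendIntegrationTest < Test::Unit::TestCase
  def setup
    # Omit every test in this file unless BACKEND_TEST is set.
    omit_if(ENV['BACKEND_TEST'].nil?)
  end

  def test_talks_to_slow_backend
    # ...only runs when BACKEND_TEST is set
  end
end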
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.