I'm trying to configure JBehave with Gherkin to run a teardown method after a specific scenario. So far I'm aware of the below:
JBehave supports Gherkin, which has syntax for the Lifecycle Before event; unfortunately, Gherkin doesn't support Lifecycle After.
http://jbehave.org/reference/latest/story-syntax.html
JBehave supports the @AfterScenario annotation, which can only be qualified by the outcome of the scenario. It runs after every scenario in a story rather than after a specific scenario.
http://jbehave.org/reference/latest/annotations.html
At the moment I have included a Gherkin step (Then teardown this scenario) at the end of my scenario within my story. This contradicts the point of BDD, which should only describe what the user is doing and not what the test needs to do.
Unfortunately there isn't a way to access the Meta tags in the @AfterScenario methods. As a workaround that doesn't require you to duplicate your entire scenario file: leave all of the scenarios from your current class that don't require the teardown where they are, move the one that does need the teardown into its own class that inherits from the first class, and add the @AfterScenario method to the second class.
I'm fairly new to using RSpec, so there's a lot I still don't know. I'm currently working on speccing out a section of functionality which is supposed to run a script when a button is pressed. The script is currently called in a controller, and I don't know if there's a good way to test that.
I'm currently using
expect_any_instance_of(ConfigurationsController)
.to receive(:system)
.with('sh bin/resque/kill_resque_workers')
.and_return(true)
in a feature spec and it works, but RuboCop complains about using expect_any_instance_of, and I've been told to only use that method if there is no better way.
Is there any better way to test this? Like is there a way to get the instance of the controller being used, or a better kind of test for this?
A better pattern would be to not inline the system call in your controller in the first place. Instead, create a separate object that knows how to kill your worker processes and call that from your controller. The service object pattern is often used for this. It makes it much easier to stub/spy/mock the dependency and make sure it stops at your application boundary.
It also lets you test the object in isolation. Testing plain old ruby objects is really easy. Testing controllers is not.
module WorkerHandler
def self.kill_all
system 'sh bin/resque/kill_resque_workers'
end
end
# in your test
expect(WorkerHandler).to receive(:kill_all)
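Testing the object in isolation is then straightforward too - you can stub the system call on the module itself. A minimal sketch (the spec file path is an assumption):

# spec/services/worker_handler_spec.rb
require 'spec_helper'

RSpec.describe WorkerHandler do
  it 'shells out to the kill script' do
    # stub the Kernel#system call on the module and assert on the command
    expect(WorkerHandler).to receive(:system)
      .with('sh bin/resque/kill_resque_workers')
      .and_return(true)

    WorkerHandler.kill_all
  end
end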
If your service object method runs on instances of a class, you can use stub_const to stub out the class so that its new method returns mocks/spies.
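For example (a sketch; WorkerKiller is a hypothetical instance-based service object, and how the action is driven depends on your spec type):

# in your test
killer = instance_spy('WorkerKiller')
stub_const('WorkerKiller', class_double('WorkerKiller', new: killer))

# drive the behaviour (e.g. click the button in a feature spec), then:
expect(killer).to have_received(:call)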
Another, more novel, solution is dependency injection via Rack middleware. You just write a piece of middleware that injects your object into env. env is the state hash that's passed all the way down the middleware stack to your application. This is how Warden, for example, works. You can pass env along in your spec when you make the HTTP calls to your controller, or use before { session.env('foo.bar', baz) }.
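A rough sketch of that idea (the middleware class, env key and controller code are made-up names, not from the original post):

# lib/middleware/worker_handler_injector.rb
class WorkerHandlerInjector
  def initialize(app, worker_handler)
    @app = app
    @worker_handler = worker_handler
  end

  def call(env)
    # expose the dependency to everything downstream, including the controller
    env['worker.handler'] = @worker_handler
    @app.call(env)
  end
end

# config/application.rb
config.middleware.use WorkerHandlerInjector, WorkerHandler

# in the controller action
request.env['worker.handler'].kill_all

In a test you can then insert the middleware with a spy instead of the real object (or manipulate env directly, as above) and assert on what the controller called.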
I have some Kiwi test helper code that is useful across most of my specs.
What's a nice way of sharing this code across multiple specs (i.e. multiple files)? A category on KiwiSpec might be one option. But that feels a bit off, since I'd be putting code in that category to make things work, rather than because it actually belongs in KiwiSpec.
The 'shared example' feature of Kiwi (since 4.2.0) seems to be better for DRY in a single spec/file, rather than across multiple specs.
The main reason I can't just call some external code from my test is that this external code isn't inside a test case/Kiwi spec, so its assertions either generate compile errors or warnings.
Update
I've tried injecting the assertion functionality needed by the external test helper code as blocks into the helper code. (This has the advantage that my test helper code is not then hard-coded to use any particular test framework.) This was only partially successful: for the test cases where I'm expecting an exception to be raised:
[[theBlock(...) should] raise];
none is raised. I suspect the problem is that I have another block being called inside the main block which has a raise against it.
Update 2
Another possible technique is suggested at https://github.com/kiwi-bdd/Kiwi/issues/138 by user gantaa, whereby we create a self variable pointing to the test suite object outside the context of the test suite.
I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification name to indicate execution order but that didn't work. I found a solution where a Test Suite was used, and the tests were added to the suite in order, but I can't find how to make a test suite work in Grails.
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.
To run tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"] // task names, in the order they should run
if (modules.size() > 1) {
    for (j in 1..modules.size() - 1) {
        // each task must run after the previous one in the list
        tasks[modules[j]].mustRunAfter modules[j - 1]
    }
}
Hope that helps. Cheers!
Not really an answer to your question, but general advice - don't do this. Introducing data setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows and the global state with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to set up the data needed by a test immediately before that test and tear it down afterwards. The Grails Remote Control plugin and test data fixture builders are your friends here.
You should define your initialization code in a single place. If it's shared between both Specs, it may be a good idea to create a superclass with methods you can call in each Spec's setup methods, or a whole class devoted to declaring test setup methods to reuse.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.
Folks,
I am having some trouble working with the After hook. I have organized my tests in folders like this:
features/Accounts/accounts_api.feature
features/Accounts/step_definition/account_steps.rb
features/labs/create_lab.feature
features/labs/step_definition/labs_steps.rb
Now I have an After hook present in the step definitions of the Accounts feature. I want that hook to run after every scenario of the "Accounts" feature, but I do not want it to run after every scenario of the "labs" feature. I tried this:
cucumber --tags @newlabs
The above should run all the scenarios present in the labs feature tagged as @newlabs, but what I am seeing is that once a scenario tagged as @newlabs runs, the After hook present in the step definitions of Accounts runs as well. Why is this happening - am I using the hook in the wrong way, or is my overall understanding of hooks wrong?
Thanks a lot for taking the time to respond, this helps a lot.
Hooks don't care what step definition script they're located in and will run for every scenario. Or, more specifically, your after hook will run after every scenario that runs, for every feature, regardless of the tags you pass in to Cucumber.
If you want a little more control over that, check out the Cucumber wiki page on hooks and look in the section called 'Tagged hooks'.
Possibly you defined your After hook in the wrong place. Note that the After hook (as well as other hooks) must be defined in an .rb file, not in the .feature file. The common place for hooks is features/support/hooks.rb. You would define your hook this way:
# features/support/hooks.rb
After('@newlabs') do # will run after each scenario tagged with @newlabs
# your teardown ruby code
end
# features/Accounts/accounts_api.feature
# tag all scenarios of this feature with the @newlabs tag
@newlabs
Feature: your feature
  Scenario: your scenario
    Given ...
    When ...
    Then ...
In the Cucumber output you won't see that the After hook is executed (unless you output something to STDOUT from the hook definition) - hooks run implicitly.
Some of my RSpec tests have gotten really, really big (2000-5000 lines). I am just wondering if anyone has ever tried breaking these tests down into multiple files that meet the following conditions:
There is a systematic way of naming and placing your tests (e.g. methods A-L go to user_spec1.rb).
You can run a single file that will actually run the other tests inside other files.
You can still run a specific context within a file
and, good to have, RubyMine can run a specific test (and all tests) just fine.
For now, I have been successful in doing
#user_spec.rb
require 'spec_helper'
require File.expand_path("../user_spec1.rb", __FILE__)
include UserSpec
#user_spec1.rb
module UserSpec
describe User do
..
end
end
If your specs are getting too big, it's likely that your model is too big as well -- since you used "UserSpec" here, you could say your user class is a "God class". That is, it does too much.
So, I would break this up into much smaller classes, each of which have one single responsibility. Then, test these classes in isolation.
What you may find is that your User class knows how to execute most logic in your system -- this is an easy trap to fall into, but it can be avoided if you put your logic in a class that takes a user as an argument... Also if you steadfastly follow the Law of Demeter (where your user class can only touch one level below it, but not two).
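For example (a sketch with hypothetical names), a responsibility that used to live on User can move into a small object that takes a user as an argument and can be specced in isolation:

# app/models/account_suspender.rb
class AccountSuspender
  def initialize(user)
    @user = user
  end

  def call
    @user.update!(suspended_at: Time.current)
  end
end

# spec/models/account_suspender_spec.rb
RSpec.describe AccountSuspender do
  it 'stamps the suspension time on the user' do
    user = instance_double(User)
    expect(user).to receive(:update!).with(hash_including(:suspended_at))

    AccountSuspender.new(user).call
  end
end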
Further Reading: http://blog.rubybestpractices.com/posts/gregory/055-issue-23-solid-design.html