After scaffolding a new class, Rails creates the corresponding tests for each controller method.
What do you think is best practice in a strict TDD approach? Is it better to leave these default tests untouched and to create new tests for each piece of new logic, even if they overlap and verify almost the same things? Or is it OK to extend the default tests with new assertions?
TIA, rufus!
Remove the default tests if they don't test stuff you need tested. If you leave them, you're padding your numbers, but the tests won't actually help you in the long run.
Just like with the scaffolded views and controllers, you end up replacing most of the default code with your own; it's just a great place to get started.
In general, I would say remove them if you're not using them, or build on them if they can be extended to fit your needs.
I'm fairly new to using RSpec, so there's a lot I still don't know. I'm currently working on speccing out a section of functionality which is supposed to run a script when a button is pressed. The script is currently called in a controller, and I don't know if there's a good way to test that.
I'm currently using
expect_any_instance_of(ConfigurationsController)
  .to receive(:system)
  .with('sh bin/resque/kill_resque_workers')
  .and_return(true)
in a feature spec and it works, but RuboCop is complaining about the use of expect_any_instance_of, and I've been told to only use that method if there is no better way.
Is there any better way to test this? Like is there a way to get the instance of the controller being used, or a better kind of test for this?
A better pattern would be to not inline the system call in your controller in the first place. Instead, create a separate object that knows how to kill your worker processes and call that from your controller. The service object pattern is often used for this. It makes it much easier to stub/spy/mock the dependency and make sure it stops at your application boundary.
It also lets you test the object in isolation. Testing plain old Ruby objects is really easy. Testing controllers is not.
module WorkerHandler
  def self.kill_all
    system 'sh bin/resque/kill_resque_workers'
  end
end

# in your test
expect(WorkerHandler).to receive(:kill_all)
If your service object method runs on instances of a class you can use stub_const to stub out the new method so that it returns mocks/spies.
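For illustration, a sketch of that with a hypothetical instance-based service (the WorkerKiller class and its call method are made up for this example):

class WorkerKiller
  def call
    system 'sh bin/resque/kill_resque_workers'
  end
end

# In the spec: make WorkerKiller.new return a verified spy
killer = instance_spy(WorkerKiller)
stub_const('WorkerKiller', class_double(WorkerKiller, new: killer))

# ...trigger the controller action, then:
expect(killer).to have_received(:call)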
Another, more novel, solution is dependency injection via Rack middleware. You just write a piece of middleware that injects your object into env, the state variable that's passed all the way down the middleware stack to your application. This is how Warden, for example, works. You can pass env along in your spec when you make the HTTP calls to your controller, or use before { session.env('foo.bar', baz) }.
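A rough sketch of that idea (the env key and the middleware class here are illustrative, not a standard API):

class WorkerHandlerInjector
  def initialize(app, handler = WorkerHandler)
    @app = app
    @handler = handler
  end

  def call(env)
    # Expose the collaborator to everything downstream, including the controller
    env['worker.handler'] ||= @handler
    @app.call(env)
  end
end

# In the controller: request.env['worker.handler'].kill_all
# In a spec you can pass a double in through env instead.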
I was wondering if there is a script that can take an existing codebase and generate unit tests for each method in the controllers. By default they would all pass, since they would be empty, and I could remove the tests for methods I don't feel are important.
This would save a huge amount of time and increase test coverage, since I'd only have to define what each method should output, not the boilerplate around it.
You really shouldn't be doing this. Creating pointless tests is technical debt that you don't want. Take some time, go through each controller and write a test (or preferably a few) for each method. You'll thank yourself in the long run.
You can then also use test coverage tools to see which bits still need testing.
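For example, with the SimpleCov gem (a common coverage tool for Ruby), you'd put this at the very top of your test/spec helper:

require 'simplecov'
SimpleCov.start 'rails'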
You can use shared tests to avoid repetition. For example, with RSpec you could add the following to your spec_helper/rails_helper:
def should_be_ok(action)
  it "should respond with ok" do
    get action.to_sym
    expect(response).to be_success
  end
end
Then in your controller_spec:
describe UserController do
  should_be_ok(:index)
  should_be_ok(:new)
end
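Note that RSpec also ships with shared examples, which achieve much the same thing without defining a helper in spec_helper; a sketch:

RSpec.shared_examples 'responds with ok' do |action|
  it "responds with ok to ##{action}" do
    get action
    expect(response).to be_success
  end
end

describe UserController do
  include_examples 'responds with ok', :index
  include_examples 'responds with ok', :new
end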
I am trying to get my geb-spock functional tests to run in a specified order because SpecA will create data required for SpecB during its run.
This question is about running the specifications in order, not the individual test methods within the specification.
I have tried changing the specification names to indicate execution order, but that didn't work. I found a solution where a test suite was used and the tests were added to the suite in order, but I can't figure out how to make a test suite work in Grails.
Explicitly specifying them, as in grails test-app functional: SpecA SpecB, is not a long-term option, as more specs will be added.
For running tasks sequentially, or in whatever order you want, I do the following in my build.gradle file:
def modules = ["X", "Y", "Z", "ZZ"]

if (modules.size() > 1) {
    for (j in 1..modules.size() - 1) {
        // make each module's task run after the previous one
        tasks[modules[j]].mustRunAfter modules[j - 1]
    }
}
Hope that helps. Cheers!
Not really an answer to your question but a piece of general advice: don't do this. Introducing data setup dependencies between test classes will make your suite brittle in the long run. Reasoning about what the state is at a given point will get harder and harder as the number of tests grows, and the global state grows with it. Later on, changing a test or introducing a new one might break many tests downstream. This is just asking for trouble.
Ideally, you want to set up the data needed by a test immediately before that test and tear it down afterwards. The Grails Remote Control plugin and test data fixture builders are your friends here.
You should define your initialization code in a single place, and if it's shared between both specs, it may be a good idea to create a superclass with methods you can call in each spec's setup methods, or a whole class devoted to declaring test helper methods for reuse.
In any case, the purpose of a unit test is only to test a single piece of functionality, and it shouldn't be responsible for setting up other tests as well.
In Cucumber I want to run a step after all the scenarios in a feature have run. Can I have an after hook for the entire feature? I currently have after hooks for each scenario.
I know it's been a long time (and I haven't been a user here for long), but:
There is an exit hook that is used like this:
at_exit do
  # Add code here
end
This should be placed in your env.rb file or the features/support directory
It's a bit of a workaround, but you could just have scenarios at the beginning and the end of the feature for setup/teardown. Scenarios are run in the order that they are specified so as long as you have the setup scenario at the top and the teardown at the bottom then it works fine.
I also name the Scenario 'Scenario: feature setup' and 'Scenario: feature teardown' to make it more obvious when outputting the results to a formatter.
You can use a custom formatter, and use the after_feature method.
(I used to have a link with more information, but #katta just pointed out that it's no longer available.)
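As an illustration only: with the legacy (pre event-based) Cucumber formatter API, a bare-bones formatter could look roughly like this. The constructor and method names follow the old formatter protocol and depend on your Cucumber version, so treat it as a sketch:

class FeatureTeardownFormatter
  # Legacy formatters receive the runtime, an IO object, and the options hash
  def initialize(runtime, io, options)
  end

  # Invoked by the legacy protocol once a feature has finished running
  def after_feature(feature)
    # once-per-feature teardown goes here
  end
end

# Run with: cucumber --format FeatureTeardownFormatter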
Sure, just tag your feature.
After('@mytag') do
  # Do your magic here
end
This documentation might help: http://cukes.info/cucumber/api/ruby/latest/Cucumber/RbSupport/RbDsl.html#AfterStep-instance_method
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with test servers just to check that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line at the beginning of each test.
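For concreteness, the kind of per-test guard I mean looks something like this (the test name is made up):

def test_backend_roundtrip
  return unless ENV['BACKEND_TEST']
  # ...slow interaction with the backend test server...
end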
The tests that have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still reports an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.