I want to test the organizer interactor below, verifying that it calls the two specified interactors (SaveRecord, PushToService) without executing those interactors' code.
class Create
  include Interactor::Organizer

  organize SaveRecord, PushToService
end
I found a few examples where the overall result of all the interactors' logic (the record should be saved and pushed to the other service) is tested. But I don't want to execute the other interactors' logic, as they will be tested in their own separate specs.
1. Is it possible to do so?
2. Which way of testing (testing the overall result vs. testing only this particular organizer interactor's behavior) is the better practice?
I believe we need to test the organizer for its included interactors without executing them. I was able to find a way to stub and test the organizer with the lines below.
To Stub:
allow(SaveRecord).to receive(:call!) { :success }
allow(PushToService).to receive(:call!) { :success }
To Test:
it { expect(interactor).to be_kind_of(Interactor::Organizer) }
it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }
I found the call! method and the organized variable in the Interactor::Organizer source, where they are invoked and used internally. Stubbing the call! method and testing the organized variable fulfilled my requirement.
You can test that they are called, and in which order:
it 'calls the interactors' do
  expect(SaveRecord).to receive(:call!).ordered
  expect(PushToService).to receive(:call!).ordered

  described_class.call
end
See: https://relishapp.com/rspec/rspec-mocks/docs/setting-constraints/message-order
Just iterating on @prem's answer.
To Test:
it { expect(interactor).to be_kind_of(Interactor::Organizer) }
it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }
interactor in this case is an instance of the interactor class under test, or in RSpec syntax:
let(:interactor) { described_class.new }
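Putting the stubs and assertions together, a minimal spec for the organizer could look like the sketch below (it assumes the interactor gem and the classes from the question; the file path and require are illustrative):
# spec/interactors/create_spec.rb -- path is illustrative
require 'spec_helper'

describe Create do
  subject(:interactor) { described_class.new }

  # The organizer's own behavior, without running SaveRecord or PushToService
  it { expect(interactor).to be_kind_of(Interactor::Organizer) }
  it { expect(described_class.organized).to eq([SaveRecord, PushToService]) }

  it 'calls the organized interactors in order without executing their logic' do
    expect(SaveRecord).to receive(:call!).ordered
    expect(PushToService).to receive(:call!).ordered

    described_class.call
  end
end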
Related
In my test, I have some feature methods that only need to run in certain situations. My code looks something like this:
class MyTest extends GebReportingSpec {

    def "Feature method 1"() {
        when:
        blah()
        then:
        doSomeStuff()
    }

    def "Feature method 2"() {
        if (someCondition) {
            when:
            blah()
            then:
            doSomeMoreStuff()
        }
    }

    def "Feature method 3"() {
        when:
        blah()
        then:
        doTheFinalStuff()
    }
}
I should note that I am using a custom Spock extension that allows me to run all feature methods of a spec even if a previous feature method fails.
The thing I just realized, and the reason I am making this post, is that "Feature method 2" does not show up in my test results for some reason, but methods 1 and 3 do. Even if someCondition is set to true, it does not appear in the build results. So I am wondering why this is, and how I can make this feature method conditional.
Spock has special support for conditionally executing features; take a look at @IgnoreIf and @Requires.
@IgnoreIf({ os.windows })
def "I'll run everywhere but on Windows"() { ... }
You can also use static methods in the condition closure, but they need to be referenced by their qualified name:
class MyTest extends GebReportingSpec {

    @Requires({ MyTest.myCondition() })
    def "I'll only run if myCondition() returns true"() { ... }

    static boolean myCondition() { true }
}
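Applied to the spec from the question, and assuming someCondition can be evaluated from a static helper (the helper below is a hypothetical stand-in), the conditional feature method could be written as:
class MyTest extends GebReportingSpec {

    // hypothetical stand-in for whatever drives someCondition
    static boolean someConditionHolds() { true }

    @Requires({ MyTest.someConditionHolds() })
    def "Feature method 2"() {
        when:
        blah()

        then:
        doSomeMoreStuff()
    }
}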
Your test is not appearing in the report because you can't have the given/when/then blocks inside a conditional.
You should always run the test but allow it to fail gracefully:
Use the @FailsWith annotation: http://spockframework.org/spock/javadoc/1.0/spock/lang/FailsWith.html
@FailsWith(value = SpockAssertionError, reason = "Feature is not enabled")
def "Feature method 2"() {
    when:
    blah()
    then:
    doSomeMoreStuff()
}
It is important to note that this test will be reported as passed when it fails with the specified exception. It will also be reported as passed if the feature is enabled and the test actually passes.
To fix this, I simply put a when/then block with a 10 ms sleep before the if statement, and now that feature method is executed.
I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on the console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: Feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above called.
}
You can disable XCTests run by Xcode by right-clicking on the test symbol in the editor tray on the left.
You'll get a context menu from which you can select the "Disable" option.
Right-clicking again will allow you to re-enable it. Also, as stated in @sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing that change to your xcscheme. Tests are meant to fail, not to be silenced because they're inconvenient.
Another possible solution, which I found in some article: prefix your skipped tests with something like "skipped_" (a short sketch follows the list below).
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
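For example (the class, test name, and body are illustrative), renaming the method means XCTest's discovery, which only picks up instance methods whose names start with "test", no longer runs it:
import XCTest

final class SomeFeatureTests: XCTestCase {
    // before: func testFeatureX() { ... }
    // renamed so XCTest no longer discovers it as a test
    func skipped_testFeatureX() {
        // original test body kept here so it can be re-enabled later
        XCTAssertTrue(true)
    }
}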
Beginning with Xcode 11.4 you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read,
XCTest now supports dynamically skipping tests based on runtime
conditions, such as only executing some tests when running on certain
device types or when a remote server is accessible. When a test is
skipped, Xcode displays it differently in the Test Navigator and Test
Report, and highlights the line of code where the skip occurred along
with an optional user description. Information about skipped tests is
also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a
test method or setUp(). For example:
func test_canAuthenticate() throws {
    try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
    // Perform test…
}
The XCTSkipUnless(_:_:file:line:) API is similar to XCTSkipIf(_:_:file:line:) but skips if the provided expression is false instead of true, and the XCTSkip API can be used to skip unconditionally. (13696693)
I've found a way to do this by modifying my UI test's .xcscheme file and adding a section called SkippedTests under TestableReference, then adding individual Test tags with an Identifier attribute containing the name of your class and test method. Something like:
<SkippedTests>
   <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps
From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless().
try XCTSkipIf(skip condition, "message")
try XCTSkipUnless(non-skip condition, "message")
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
This is what test schemes are meant to do.
You can have different schemes targeting different testing situations or needs.
For example, you may want to create a scheme that runs all your tests (full regression scheme), or you may want to select a handful of them to do a quick smoke test on your app when small changes are made.
This way, you can select different schemes according to how much testing you need to do.
Just go to
Product >> Scheme
It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about the appropriate feedback in console though.
For instance the following fragment makes the test run only on iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:
class XXXTests: XCTestCase {

    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        let environment = ProcessInfo().environment

        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"], let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }

        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }

        super.invokeTest()
    }
}
If you use Xcode 11 and a Test Plan, you can tweak your configuration to skip or select specific tests. An Xcode test plan is a JSON file after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file.
"testTargets" : [
{
"skippedTests" : [
"SkippedFileTests", // skip the whole file
"FileTests\/testSkipped()" // skip one test in a file
]
...
Conversely, you can also skip all tests by default and enable only a few.
"testTargets" : [
{
"selectedTests" : [
"AllowedFileTests", // enable the whole file
"FileTests\/testAllowed()" // enable only a test in a file
]
...
I'm not sure if you can combine both configurations, though. It flips the logic based on the "Automatically includes new tests" setting.
Unfortunately, there is no built-in test case skipping. A test case either passes or fails.
That means you will have to add that functionality yourself: you can add a function to XCTestCase (e.g. XCTestCase.skip) via a category that prints the information to the console. However, you will have to put a return after calling it to prevent the remaining asserts from running.
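A minimal sketch of that idea, with a hypothetical skip(_:) helper and an illustrative feature flag (neither is part of XCTest):
import XCTest

extension XCTestCase {
    // Hypothetical helper: only reports the skip; the calling test must still
    // return afterwards so its remaining assertions don't run.
    func skip(_ message: String) {
        print("SKIPPED \(name): \(message)")
    }
}

final class FeatureTests: XCTestCase {
    let featureEnabled = false // illustrative feature flag

    func testSomething() {
        if !featureEnabled {
            skip("Feature is currently disabled.")
            return
        }
        XCTAssertEqual(1 + 1, 2)
    }
}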
While there are answers covering almost the same logic: if you don't want an extra file to manage conditions, you can mark your test function with throws and then use XCTSkip with a descriptive message explaining why it is skipped. Note that a clear message is important, as it lets you read it in the Report navigator and understand why the test was skipped without having to open the related XCTestCase.
Example:
func test_whenInilizedWithAllPropertiesGraphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we can remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictoinary with heterogenious items.")
}
Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
    guard #available(macOS <#VersionNumber#>, *) else {
        throw XCTSkip("Required API is not available for this test.")
    }
    // perform test using <#VersionNumber#> APIs...
}
There is no test case skipping. You can use a nested if-else block and run/print your desired output.
I've got a question about how to share an rspec-mocks double between examples. I'm writing a new Rails app with rspec-mocks 3.1.3. I'm used to the old (< 2.14) syntax and am trying to update my knowledge of current RSpec usage.
I have a model method:
def self.from_strava(activity_id, race_id, user)
  @client ||= Strava::Api::V3::Client.new(access_token: 'abc123')
  activity = @client.retrieve_an_activity(activity_id)
  result_details = {race_id: race_id, user: user}
  result_details[:duration] = activity['moving_time']
  result_details[:date] = Date.parse(activity['start_date'])
  result_details[:comment] = activity['description']
  result_details[:strava_url] = "http://www.strava.com/activities/#{activity_id}"
  Result.create!(result_details)
end
And here is the spec:
describe ".from_strava" do
let(:user) { FactoryGirl.build(:user) }
let(:client) { double(:client) }
let(:json_response) { JSON.parse(File.read('spec/support/strava_response.json')) }
before(:each) do
allow(Strava::Api::V3::Client).to receive(:new) { client }
allow(client).to receive(:retrieve_an_activity) { json_response }
allow(Result).to receive(:create!)
end
it "sets the duration" do
expect(Result).to receive(:create!).with(hash_including(duration: 3635))
Result.from_strava('123', 456, user)
end
it "sets the date" do
expect(Result).to receive(:create!).with(hash_including(date: Date.parse("2014-11-14")))
Result.from_strava('123', 456, user)
end
end
When I run a single test on its own it's fine, but when I run the whole describe ".from_strava" block it fails with the message:
Double :client was originally created in one example but has leaked into another example and can no longer be used. rspec-mocks' doubles are designed to only last for one example, and you need to create a new one in each example you wish to use it for.
I understand what it's saying, but surely this is an appropriate use of a double being used in 2 examples. After all, the client double isn't important to the example; it's just a way for me to load the canned response. I guess I could use WebMock, but that seems very low-level and doesn't translate well to the actual code. We should only be asserting one thing per example, after all.
I had thought about replacing the client double with a call to
allow(Strava::Api::V3::Client).to receive_message_chain(:new, :retrieve_an_activity) { json_response }
but that doesn't seem to be the right approach either, given that the documentation states that receive_message_chain is a code smell.
So if I shouldn't use receive_message_chain or a shared client double, and should also follow the standard DRY principle, how should I fix this?
I would love some feedback on this.
Thanks,
Dave
Caching clients for external components is often genuinely desirable (keeping connections alive, reusing any SSL setup you might need, etc.), and removing that just to fix an issue with tests is not a desirable solution.
In order to fix your test (without refactoring your code), you can do the following to clear the instance variable after each of your tests:
after { Result.instance_variable_set("@client", nil) }
While admittedly this is not the cleanest solution, it seems to be the simplest, and it achieves both goals: it lets you have a clean setup with no state shared between tests, and it keeps your client cached in "normal" operation.
surely this is an appropriate use of a double being used in 2 examples.
No, it's not. :) You're caching the client in a variable on the class; don't do that, because the variable outlives a single example while rspec-mocks doubles do not. The solution is to set the client each time, i.e. in each example.
Bad:
@client ||= Strava::Api::V3::Client.new(access_token: 'abc123')
Good:
@client = Strava::Api::V3::Client.new(access_token: 'abc123')
I had the same use case in an app of mine, and we solved it by extracting the caching into a private method and then stubbing that method to return the double (instead of stubbing new directly).
For example, in the class under test:
def self.from_strava(activity_id, race_id, user)
  activity = strava_client.retrieve_an_activity(activity_id)
  ...
end

private

def self.strava_client
  @client ||= Strava::Api::V3::Client.new(access_token: 'abc123')
end
And in the spec:
let(:client) { double(:client) }
before { allow(described_class).to receive(:strava_client).and_return(client) }
...
TLDR: Add after { order.vendor_service = nil } to balance the before block. Or read on...
I ran into this, and it was not obvious where it was coming from. In order_spec.rb model tests, I had this:
describe 'order history' do
  before do
    service = double('VendorAPI')
    allow(service).to receive(:order_count).and_return(5)
    order.vendor_service = service
  end

  # tests here ..
end
And in my Order model:
def too_many_orders?
  @@vendor_service ||= VendorAPI.new(key: 'abc', account: '123')
  return @@vendor_service.order_count > 10
end
This worked fine when I only ran rspec on order_spec.rb.
I was mocking something completely different in order_controller_spec.rb a little differently, using allow_any_instance_of() instead of double and allow:
allow_any_instance_of(Order).to receive(:too_many_orders?).and_return(true)
This, too, tested out fine.
The confounding trouble is that when I ran the full suite of tests, I got the OP's error on the controller mock -- the one using allow_any_instance_of. This was very hard to track down, as the problem (or at least my solution) lay in the model tests where I use double/allow.
To fix this, I added an after block clearing the class variable @@vendor_service, balancing the before block's action:
describe 'order history' do
  before do
    service = double('VendorAPI')
    allow(service).to receive(:order_count).and_return(5)
    order.vendor_service = service
  end

  after do
    order.vendor_service = nil
  end

  # tests here ..
end
This forced the ||= VendorAPI.new() to use the real new function in later unrelated tests, not the mock object.
Say I have the following test:
describe "bob" do
subject {
response = get "/expensive_lookup"
JSON.parse(response.body)
}
its(["transaction_id"]) { should == 1 }
its(["order_id"]) { should == 33 }
end
Then for each its() {} block the subject will be re-evaluated, which in my case is a very slow lookup.
I could bundle all my tests together in one, like:
describe "bob" do
subject(res) {
response = get "/expensive_lookup"
JSON.parse(response.body)
}
it "returns the right stuff" do
res["transaction_id"]).should == 1
res["order_id"].should == 33
end
end
But this makes it less obvious which line of the test has failed if there is a failure.
Is there a way I can stop the subject from being reevaluated for each it block?
You can put that into a before(:all) block. I don't know if that syntax has changed in a newer RSpec version, but regardless, your test would become this:
before(:all) do
  response = get "/expensive_lookup"
  @res = JSON.parse(response.body)
end

it "returns the right transaction ID" do
  @res["transaction_id"].should == 1
end

# etc
The pro is that the code in the before(:all) block runs just once for your spec. The con is that, as you can see, you can't take advantage of subject; you need to write each expectation more explicitly. Another gotcha is that any data saved to the test database is not part of the transaction and will not be rolled back.
There are two possible sources of issues:
The network request is slow / prone to failure
You should really mock all your network requests, slow or not.
The gem VCR is excellent. It makes it trivial to run your request once and persist the result for subsequent test runs.
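A minimal sketch of how that might look (the cassette name, directory, and external request are illustrative; VCR records the HTTP response on the first run and replays it on later runs, assuming the vcr and webmock gems are installed):
# spec/spec_helper.rb -- location is illustrative
require 'vcr'
require 'net/http'
require 'json'

VCR.configure do |config|
  config.cassette_library_dir = 'spec/cassettes'
  config.hook_into :webmock # requires the webmock gem
end

# In an example: the first run records the external HTTP call,
# subsequent runs replay the recorded response from the cassette.
it "returns the right stuff" do
  VCR.use_cassette('expensive_lookup') do
    body = Net::HTTP.get(URI('https://api.example.com/expensive_lookup'))
    expect(JSON.parse(body)['transaction_id']).to eq(1)
  end
end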
Building the immutable subject is slow
If you have multiple it blocks, the subject will be rebuilt every time. Assuming you don't modify the subject, you can build it once.
You can use before(:all):
before(:all) { @cache = very_long_computation.freeze }
subject { @cache }
Note that I call freeze to avoid modifying it by mistake, but of course that's not a deep freeze, so you still need to mind what you are doing. If you are mutating your subject, your tests are no longer independent and shouldn't share the subject.
I'm entirely new to Grails and its testing facilities, having started my current job approximately 4 months ago. The person who trained me on testing left our group several weeks ago, and now I'm on my own for testing. What I've slowly been discovering is that the way I was trained to do Grails integration testing is almost entirely different from the way(s) I've seen people do it on the forums and support groups. I could really use some guidance on which way is right/best. I'm currently working in Grails 2.4.0, btw.
Here is a sample mockup of an integration test in the style that I was trained on. Is this the right or even the best way that I should be doing it?
@Test
void "test a method in a controller"() {
    def fc = new FooController()                                    // 1. Create controller
    fc.springSecurityService = [principal: [username: 'somebody']]  // 2. Set up inputs
    fc.params.id = '1122'
    fc.create()                                                     // 3. Call the method being tested
    assertEquals "User Not Found", fc.flash.errorMessage            // 4. Make assertions on what was supposed to happen
    assertEquals "/", fc.response.redirectUrl
}
Since Grails 2.4.0 is used, you can leverage the Spock framework, which is available by default.
Here is a sample test case which you can model after to write integration specs.
Note:
Integration specs go under test/integration.
They should extend IntegrationSpec.
Mocking is not needed, and @TestFor is not used, unlike in a unit spec.
DI can be used to its full extent: def myService at class level will inject the service into the spec.
Mocking is not required for domain entities.
The above test, written as a spec, should look like:
import grails.test.spock.IntegrationSpec

class FooControllerSpec extends IntegrationSpec {

    void "test a method in a controller"() {
        given: 'Foo Controller'
        def fc = new FooController()

        and: 'with authorized user'
        fc.springSecurityService = [principal: [username: 'somebody']]

        and: 'with request parameter set'
        fc.params.id = '1122'

        when: 'create is called'
        fc.create()

        then: 'check redirect url and error message'
        fc.flash.errorMessage == "User Not Found"
        fc.response.redirectUrl == "/"
    }
}