I'm trying to mock the code below using MiniTest/Mocks, but I keep getting this error when running my test:
Minitest::Assertion: unexpected invocation: #<Mock:0x7fa76b53d5d0>.size()
unsatisfied expectations:
- expected exactly once, not yet invoked: #<Mock:0x7fa76b53d5d0>.getresources("_F5DC2A7B3840CF8DD20E021B6C4E5FE0.corwin.co", Resolv::DNS::Resource::IN::CNAME)
satisfied expectations:
- expected exactly once, invoked once: Resolv::DNS.open(any_parameters)
Code being tested:
txt = Resolv::DNS.open do |dns|
records = dns.getresources(options[:cname_origin], Resolv::DNS::Resource::IN::CNAME)
end
binding.pry
return (txt.size > 0) ? (options[:cname_destination].downcase == txt.last.name.to_s.downcase) : false
My test:
::Resolv::DNS.expects(:open).returns(dns = mock)
dns.expects(:getresources)
.with(subject.cname_origin(true), Resolv::DNS::Resource::IN::CNAME)
.returns([Resolv::DNS::Resource::IN::CNAME.new(subject.cname_destination)])
.once
Right now you are testing that Resolv::DNS receives open and returns your mock. But since you seem to be trying to test that the dns mock receives messages, you need to stub the method and provide it with the object to be yielded.
Try this instead:
dns = mock
dns.expects(:getresources)
   .with(subject.cname_origin(true), Resolv::DNS::Resource::IN::CNAME)
   .once

::Resolv::DNS.stub :open, [Resolv::DNS::Resource::IN::CNAME.new(subject.cname_destination)], dns do
  # whatever code actually calls the "code being tested"
end

dns.verify
The second argument to stub is the stubbed return value, and the third argument is what will be yielded to the block in place of the originally yielded object.
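As a generic illustration of those arguments (assuming a Minitest version recent enough to support block arguments to stub; the values here are arbitrary):
require "minitest/mock"
require "resolv"

# The second argument is what the stubbed method returns; the argument(s)
# after it are yielded to any block the caller passes.
Resolv::DNS.stub :open, "return value", :yielded_object do
  result = Resolv::DNS.open { |dns| dns } # dns is :yielded_object here
  result # => "return value"
end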
In RSpec the syntax is a bit simpler (and more semantic):
dns = double
allow(::Resolv::DNS).to receive(:open).and_yield(dns)
expect(dns).to receive(:getresources).once
  .with(subject.cname_origin(true), Resolv::DNS::Resource::IN::CNAME)
  .and_return([Resolv::DNS::Resource::IN::CNAME.new(subject.cname_destination)])
# whatever code actually calls the "code being tested"
You can write more readable integration tests with DnsMock instead of stubbing/mocking parts of your code: https://github.com/mocktools/ruby-dns-mock
I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above was called.
}
You can disable XCTests run by Xcode by right clicking on the test symbol in the editor tray on the left.
You'll get a context menu where you can select the "Disable" option for that test.
Right-clicking again will allow you to re-enable it. Also, as stated in sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing the disabling code in your xcscheme. Tests are meant to fail, not be silenced because they're inconvenient.
Another possible solution, which I found in an article: prefix your skipped tests with something like "skipped_".
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
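For example (the class and method names are just an illustration):
import XCTest

final class PurchaseTests: XCTestCase {
    // Xcode's runner only picks up methods whose names start with "test",
    // so this renamed method is simply ignored until you rename it back.
    func skipped_testPurchaseFlow() {
        // original assertions stay here
    }
}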
Beginning with Xcode 11.4 you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read,
XCTest now supports dynamically skipping tests based on runtime
conditions, such as only executing some tests when running on certain
device types or when a remote server is accessible. When a test is
skipped, Xcode displays it differently in the Test Navigator and Test
Report, and highlights the line of code where the skip occurred along
with an optional user description. Information about skipped tests is
also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a
test method or setUp(). For example:
func test_canAuthenticate() throws {
try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
// Perform test…
}
The XCTSkipUnless(_:_:file:line:) API is similar to
XCTSkipIf(_:_:file:line:) but skips if the provided expression is
false instead of true, and the XCTSkip API can be used to skip
unconditionally. (13696693)
I've found a way to do this by modifying my UI test .xcscheme file and adding a section called SkippedTests under TestableReference, then adding individual Test tags with an Identifier attribute containing the name of your class and test method. Something like:
<SkippedTests>
   <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps
From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless().
try XCTSkipIf(<#skip condition#>, "message")
try XCTSkipUnless(<#non-skip condition#>, "message")
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
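For instance, a small sketch with the placeholders filled in (the environment variable name is only an example):
func testStagingOnlyBehavior() throws {
    // Placeholder condition: only run when a STAGING_URL variable is present.
    let hasStagingURL = ProcessInfo.processInfo.environment["STAGING_URL"] != nil
    try XCTSkipUnless(hasStagingURL, "STAGING_URL not set; skipping staging-only test")

    // test body…
}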
This is what test schemes are meant to do.
You can have different schemes targeting different testing situations or needs.
For example, you may want to create a scheme that runs all your tests (full regression scheme), or you may want to select a handful of them to do a quick smoke test on your app when small changes are made.
This way, you can select different schemes according to how much testing you need to do.
Just go to
Product >> Scheme
It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about the appropriate feedback in console though.
For instance the following fragment makes the test run only on iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:
class XXXTests: XCTestCase {
    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        let environment = ProcessInfo().environment

        // Bail out (effectively skipping the test) unless we're on a supported simulator.
        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"], let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }

        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }

        super.invokeTest()
    }
}
If you use Xcode 11 and test plans, you can tweak your configuration to skip or allow specific tests; an Xcode test plan is just a JSON file after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file.
"testTargets" : [
{
"skippedTests" : [
"SkippedFileTests", // skip the whole file
"FileTests\/testSkipped()" // skip one test in a file
]
...
Conversely, you can also skip all tests by default and enable only a few.
"testTargets" : [
{
"selectedTests" : [
"AllowedFileTests", // enable the whole file
"FileTests\/testAllowed()" // enable only a test in a file
]
...
I'm not sure if you can combine both configurations though. The logic flips based on the "Automatically includes new tests" setting.
Unfortunately, there is no built-in test case skipping. The test case either passes or fails.
That means you will have to add that functionality yourself: you can add a function to XCTestCase (e.g. XCTestCase.skip) via a category/extension that prints the information to the console. However, you will have to put a return after it to prevent the remaining asserts from running.
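A minimal sketch of that idea in Swift; the skip(_:) helper and the isFeatureEnabled flag are illustrative, not XCTest API:
import XCTest

// Note: skip(_:) here is a custom helper, not an XCTest API.
extension XCTestCase {
    func skip(_ message: String) {
        print("SKIPPED: \(message)")
    }
}

final class FeatureTests: XCTestCase {
    let isFeatureEnabled = false // placeholder for your real feature check

    func testSomething() {
        if !isFeatureEnabled {
            skip("Feature is currently disabled.")
            return // without this, the assertions below would still run
        }
        // XCTAssert… calls go here
    }
}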
While other answers cover similar logic, if you don't want an extra file to manage conditions, you can mark your test function with throws and use XCTSkip with a clear description explaining why it is skipped. A clear message is important: it lets you read the reason right in the Report Navigator and understand why a test is skipped without having to open the related XCTestCase.
Example:
func test_whenInitializedWithAllPropertiesGraphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we can remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictionary with heterogeneous items.")
}
Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
guard #available(macOS <#VersionNumber#>, *) else {
throw XCTSkip("Required API is not available for this test.")
}
// perform test using <#VersionNumber#> APIs...
}
There is no built-in test case skipping. You can use an if/else block (nested if needed) and run/print your desired output, for example:
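(A minimal sketch; the isFeatureEnabled flag is only a placeholder for your own check.)
import XCTest

final class FeatureFlagTests: XCTestCase {
    let isFeatureEnabled = false // placeholder flag

    func testSomething() {
        if isFeatureEnabled {
            // run the real assertions here
        } else {
            print("Skipping testSomething: feature is disabled")
        }
    }
}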
I have a controller that needs to run several asynchronous methods that interact with the data on the client and make no calls to the server. I have one method working fine in the browser, but I want to drive the methods with tests and I can't get it to work in the test environment (Karma and Mocha). The reason is that the empty array that $resource.query() returns never gets populated in the test environment because the promise doesn't get resolved. Here is my beforeEach in the test suite.
beforeEach(inject(function($rootScope, $controller, scheduleService){
  scope = $rootScope.$new();
  sc = $controller('scheduleCtrl', {
    $scope: scope, service: scheduleService
  });
  scope.$apply();
}));
scheduleCtrl has a property schedule that is assigned the result of Resource.query() in its constructor. I can see the three returned objects loaded into the MockHttpExpectation.
But when I run the test, sc.schedule is still an empty array, so the test fails. How do I get the Resource.query() to resolve in the test?
Resource.query() works with promises, which are asynchronous.
What happens is that your test executes before the asynchronous request has completed and before the array gets populated.
You could use $httpBackend so you can call expect after $httpBackend.flush().
Or you could retrieve the $promise returned from Resource.query().$promise in your test and do the expectation inside its callback.
Ex:
// in the controller
$scope.promise = Resource.query().$promise;

// in the test
scope.promise.then(function(values) {
  expect(values.length).not.toBe(0);
});
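Alternatively, a sketch of the $httpBackend route mentioned above. The URL and payload are assumptions (adjust them to whatever your Resource actually requests), and the whenGET definition has to be registered before the beforeEach that instantiates the controller, so the query() fired there finds a matching handler:
var httpBackend;

beforeEach(inject(function($httpBackend) {
  httpBackend = $httpBackend;
  httpBackend.whenGET('/schedules').respond([{id: 1}, {id: 2}, {id: 3}]);
}));

it('populates sc.schedule once the backend responds', function() {
  httpBackend.flush(); // resolves the query() issued when the controller was built
  expect(sc.schedule.length).toBe(3);
});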
I'm writing a Yeoman generator and want to check some prerequisites, for example that git is installed. I can easily check this using .exec, but how do I gracefully abort the generator and report the error to the user? I searched the docs, but it seems that I'm missing some obvious way to do it. Any hints?
Throwing an exception will of course abort the generator, but is it the best way? Maybe there's something more user-friendly? Not all Yeoman users are able to read JS exceptions.
The current state of error handling in the popular generators is quite diverse:
In most cases they just log the error, return from the action, let the subsequent actions run, and exit with a 0 status code:
generator-karma's setupTravis method:
if (err) {
this.log.error('Could not open package.json for reading.', err);
done();
return;
}
Or they set a custom abort property on error and skip further actions by checking that property, but still exit with a 0 status code:
generator-jhipster's CloudFoundryGenerator:
CloudFoundryGenerator.prototype.checkInstallation = function checkInstallation() {
  if (this.abort) return;
  var done = this.async();
  exec('cf --version', function (err) {
    if (err) {
      this.log.error('cloudfoundry\'s cf command line interface is not available. ' +
        'You can install it via https://github.com/cloudfoundry/cli/releases');
      this.abort = true;
    }
    done();
  }.bind(this));
};
Or they manually end the process with process.exit:
generator-mobile's configuring method:
if (err) {
self.log.error(err);
process.exit(1);
}
However, none of these approaches provides a good way to signal to the environment that something went wrong, except the last one, and directly calling process.exit is a design smell.
Throwing an exception is also an option, but it also presents the stacktrace to the user, which is not always a good idea.
The best option is to use the Environment.error method, which has some nice advantages:
the Environment is exposed through the env property of yeoman.generators.Base
an error event is emitted, which is handled by the yo CLI code
the execution results in a non-zero (error) status code, which is overridable
by default, yo displays only the message and no stacktrace
the stacktrace can optionally be displayed by providing the built-in --debug option when re-running the generator
Using this technique, your action method would look like this:
module.exports = generators.Base.extend({
  method1: function () {
    console.log('method 1 just ran');
    this.env.error("something bad happened");
    console.log("this won't be executed");
  },

  method2: function () {
    console.log("this won't be executed");
  }
});
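Tying it back to the original question (checking that git is installed), a minimal sketch against the same old generators.Base API; the command and message are just placeholders:
var generators = require('yeoman-generator');
var exec = require('child_process').exec;

module.exports = generators.Base.extend({
  checkGit: function () {
    var done = this.async();
    exec('git --version', function (err) {
      if (err) {
        // Emits an error event handled by the yo CLI: the message is shown,
        // the stacktrace is hidden by default, and yo exits with a non-zero code.
        this.env.error('git is required but was not found on your PATH.');
      }
      done();
    }.bind(this));
  }
});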
I am using the sht_rails gem to render handlebars templates in my Rails 3.2/Backbone App.
I'm hoping to use this .handlebars template in both the Backbone and Rails portions of the app, but so far I only have it working in Backbone.
I'm using it like so:
class MyApp.views.MyView extends MyApp.views.BaseView
  template: SHT['templates/feed_item']

  render: ->
    data = {}
    @$el.html @template(data)
    @
This works great in the app, no problems at all; my handlebars template is looking sweet.
However, this is no good for my JS testing (I'm using Jasmine with jasmine-headless-webkit).
This is what happens:
$ jasmine-headless-webkit
ReferenceError: Can't find variable: SHT
This makes total sense, as it seems that the sht_rails gem registers the SHT variable; however, it doesn't seem to do this when I test.
Is there a good way to register the SHT variable when running jhw, or Jasmine by itself? I don't even need the template to render for my test; just knowing that the template is called would be enough for me. But for now, all my Jasmine tests are broken until I figure out how to register this SHT.
Thanks!
We faced the same problem when using jade templates in our Rails 3.2/Backbone/Marionette app via the tilt-jade gem (hence the JST variable in the code samples below). Our solution was to create a template abstraction and then use Jasmine spies to fake a response during spec execution. This approach also allows us to test template usage, construction, etc.
Spies In General
In case you're unfamiliar with Jasmine spies:
Jasmine integrates 'spies' that permit many spying, mocking, and faking behaviors. A 'spy' replaces the function it is spying on.
Abstraction
Create the abstraction:
YourApp.tpl = function(key, data) {
var path = "templates";
path += key.charAt(0) === "/" ? key : "/" + key;
var templateFn = JST[path];
if(!templateFn) {
throw new Error('Template "' + path + '" not found');
}
if(typeof templateFn !== "function") {
throw new Error('Template "' + path + '" was a ' + typeof(templateFn) + '. Type "function" expected');
}
return templateFn(data);
};
and the requisite monkeypatch:
// MONKEYPATCH - Overriding Renderer to use YourApp template function
Marionette.Renderer = {
render: function(template, model){
return YourApp.tpl(template, model);
}
};
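For context, once the Renderer is patched this way a view can hand Marionette a plain template key instead of a compiled function (the view and template names below are illustrative):
// The key is resolved through YourApp.tpl at render time via the patched Renderer.
var FeedItemView = Marionette.ItemView.extend({
  template: "feed_item"
});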
Template Abstraction Spy (spec_helper.js)
Now we can spy as follows:
spyOn(YourApp, 'tpl').andCallFake(function(key, data) {
return function() {
return key;
};
});
Bonus
Since we are spying on the YourApp.tpl function, we can also test against it:
expect(YourApp.tpl).toHaveBeenCalledWith("your_template", { model: model, property: value });
Addendum
If you don't already know about the jasmine-headless-webkit --runner-out flag and are debugging your Jasmine specs in the wilderness, check out this post to see how to generate a runner output report with a full backtrace for any failures.