Spock's @Unroll vs. @Stepwise

Say I have a Specification with a single @Unroll test in it:

class MySpec extends Specification {
    @Unroll
    def "some test, executed n times, with n > 1"() {
        // when, then, where
    }
}

Would it be redundant to annotate MySpec with @Stepwise? Is this treated as one test (executed n times in a row) or as n tests (executed in parallel)?

@Stepwise ensures that all test methods run in the order shown in the source file.
@Unroll is useful for parameterized tests because it forces all test scenarios in a single test method to be reported as individual test runs.
So in your case @Stepwise is redundant, and all unrolled iterations are executed in the order specified in the where clause.
Generally, in Spock 1.x all tests are executed sequentially, including the iterations unrolled from the where clause. Parallel execution is planned for Spock 2.0, as you can see here: https://github.com/spockframework/spock/issues/157

Why is there a new object instance between unit tests in the same XCTestCase?

I have created a test class like this:

import XCTest

class ExampleTests: XCTestCase {
    private let context = NSManagedObjectContext.mr_()

    func testA() {
        print(#function)
        print(context)
        XCTAssertTrue(true)
    }

    func testB() {
        print(#function)
        print(context)
        XCTAssertTrue(true)
    }
}
and the output on console is the following:
Test Suite 'ExampleTests' started at 2021-08-04 16:33:42.426
Test Case '-[PLZ_Tests.ExampleTests testA]' started.
testA()
<NSManagedObjectContext: 0x28210a630> // DIFFERENT INSTANCE
Test Case '-[PLZ_Tests.ExampleTests testA]' passed (0.004 seconds).
Test Case '-[PLZ_Tests.ExampleTests testB]' started.
testB()
<NSManagedObjectContext: 0x282109c70> // DIFFERENT INSTANCE
Test Case '-[PLZ_Tests.ExampleTests testB]' passed (0.000 seconds).
Test Suite 'ExampleTests' passed at 2021-08-04 16:33:42.431.
Does it instantiate a whole class again for every unit test in that class?
Does it instantiate a whole class again for every unit test in that class?
Pedantic, but no. No classes are instantiated. New instances (objects) are instantiated, from the class.
Why is there a new object instance between unit tests in the same XCTestCase?
Because it’s not the same XCTestCase. The XCTest framework will instantiate one ExampleTests object for every test method. Just like any other Swift type, instance properties with default values will have those evaluated and set before the initializer gets called on the new instance.
In all, there will be 2 NSManagedObjectContext objects, each owned by one of 2 XCTestCase objects. testA will be invoked on one of them, and testB on the other.
That’s an intentional design decision, probably inspired by other xUnit testing frameworks.
It has several benefits/conveniences.
The most important one is that it allows test methods to use instance variables without trampling over each other (if they were to run in parallel, that would be a huge deal).
Another benefit is that it discourages “communication” between test cases, to prevent them from being order-dependent (where one test case sets up some state necessary for another test case to pass). Of course, you could still achieve this by storing the state elsewhere (class variables, global variables, the file system, etc.), but doing that would be “wrong” in a more obvious/noticeable way.
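The one-instance-per-test-method lifecycle is easy to demonstrate outside XCTest. The sketch below is a toy, plain-Java imitation of an xUnit-style runner (all names are hypothetical, and this is not how XCTest is actually implemented): it reflectively creates a brand-new instance for every `test*` method, so instance state such as `context` is never shared between tests.

```java
import java.lang.reflect.Method;

// Hypothetical miniature xUnit-style runner: one fresh instance per test method.
public class MiniRunner {
    public static class ExampleTests {
        // Instance state: re-created for every test, so tests cannot share it.
        final Object context = new Object();
        public void testA() { }
        public void testB() { }
    }

    // Runs each method whose name starts with "test" on its own new instance,
    // and reports how many distinct instances were created.
    public static int countDistinctInstances(Class<?> testClass) throws Exception {
        java.util.Set<Object> instances = new java.util.HashSet<>();
        for (Method m : testClass.getDeclaredMethods()) {
            if (m.getName().startsWith("test")) {
                Object fresh = testClass.getDeclaredConstructor().newInstance();
                m.invoke(fresh);
                instances.add(fresh);
            }
        }
        return instances.size();
    }

    public static void main(String[] args) throws Exception {
        // Two test methods -> two distinct instances, each with its own context.
        System.out.println(countDistinctInstances(ExampleTests.class)); // prints 2
    }
}
```

With two test methods you end up with two distinct objects, mirroring the two different `NSManagedObjectContext` addresses in the console output above.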

Unit Test case fails when whole Test Suite is executed

I am writing the unit test cases for my SQLite database class. I have five public APIs in that class.
My test cases go something like this:
+ (void)setUp { // Note: this is a class method, hence called only once at the start of this test suite.
    // Code to delete the existing sqlite DB file.
}

- (void)testDBManagerSingletonInstance {
    DBManager *dbMgr = [DBManager getSharedInstance];
    DBManager *dbMgr1 = [[DBManager alloc] init];
    XCTAssertEqualObjects(dbMgr, dbMgr1);
}

- (void)testSaveAndDeleteNicknameAPI {
    // Multiple assert statements in this test.
}

- (void)testAllAccountStatusAPIs {
    // Multiple assert statements in this test.
}
Each individual unit test passes when executed on its own, but the tests fail when the whole test suite is executed.
I think I know the root cause of the failure: when the entire test suite is executed, all tests run in parallel and simultaneous updates and deletes happen in the database. Hence, when all unit tests run, they fail.
But I don't know how to fix that, because this is not async code, and hence I cannot use the XCTestExpectation class.
I need assistance to resolve and understand the problem.
Tests based on XCTest don't run in parallel - they run sequentially. To quote the docs:
Tests execute synchronously because each test is invoked independently one after another.
Since you've shown very little code, it's hard to say what the real problem is. It is very likely that you were close with your assumption - you should either improve your setUp (maybe switch from the class version to the instance version) and tearDown methods, or introduce mocks and perform your tests on a mocked database if possible.
You shouldn't work with singletons in a codebase you want to test. I think that your code uses the shared singleton instance of DBManager internally. Each unit test will change the internal state of that instance, so all following unit tests are corrupted. You need to reset all changes after each test case in tearDown.
But my suggestion is to avoid singletons in your base code by using dependency injection: when creating an instance of your class, inject the singleton instance of your DBManager. That makes unit testing easier. If you inject a protocol instead of a concrete object, you can even test your code with a fake protocol implementation.
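The inject-a-protocol suggestion can be sketched like this (in Java rather than Objective-C, with hypothetical names - `AccountStore` stands in for the protocol, `FakeAccountStore` for the test fake, and `AccountService` for the class under test):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested dependency injection: depend on an interface, not the
// DBManager singleton, so tests can pass in a fake. All names are hypothetical.
public class DiSketch {
    interface AccountStore {               // the "protocol" to inject
        void saveNickname(String id, String nick);
        String nickname(String id);
    }

    // Production code would implement AccountStore on top of the real DBManager;
    // tests use this in-memory fake instead, so no shared singleton state leaks
    // from one test to the next.
    static class FakeAccountStore implements AccountStore {
        private final Map<String, String> rows = new HashMap<>();
        public void saveNickname(String id, String nick) { rows.put(id, nick); }
        public String nickname(String id) { return rows.get(id); }
    }

    // The class under test receives its store through the constructor.
    static class AccountService {
        private final AccountStore store;
        AccountService(AccountStore store) { this.store = store; }
        String renameTo(String id, String nick) {
            store.saveNickname(id, nick);
            return store.nickname(id);
        }
    }

    public static void main(String[] args) {
        AccountService service = new AccountService(new FakeAccountStore());
        System.out.println(service.renameTo("u1", "alice")); // prints alice
    }
}
```

Each test can construct its own fresh fake, which removes the shared database state that makes the suite-level run fail.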

Run a SpecFlow test an infinite number of times

I have a SpecFlow test that fails if you run it enough times. How can I take the existing SpecFlow test and make it run indefinitely until it fails? (Ideally, I'd like to count how many attempts it takes.)
My initial guess was to just call the binding methods that the test script ultimately calls - but I keep getting null pointer exceptions. Apparently SpecFlow is initialising something that I'm not.
My next guess was to try to launch the auto-generated code for the test feature, but it seems to want all sorts of data from the SpecFlow framework that I don't know how to generate.
All I want to do is run the test multiple times. Surely there must be some way to accomplish this utterly trivial task?
It seems I was trying too hard. Here's what I came up with:
using System;
using NUnit.Framework;

[TestFixture]
public sealed class StressTest
{
    [Test]
    public void Test()
    {
        var thing = new FoobarFeature();
        thing.FeatureSetup();
        thing.TestInitialize();
        var n = 0;
        while (true)
        {
            Console.WriteLine("------------ Attempt {0} ----------------", n);
            thing.Scenario1();
            thing.Scenario2();
            thing.Scenario3();
            n++;
        }
    }
}
I've got a Foobar.feature file, which autogenerates a Foobar.feature.cs file containing the FoobarFeature class. The names of the scenario methods obviously change depending what's in your feature file.
I'm not 100% sure this works for tests having complex setup / teardown, but it works for my specific case...
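To also get the attempt count the question asks for, the loop can catch the failure instead of letting it escape. Here is the same stress-loop idea sketched in Java (language-neutral; `runScenario` is a hypothetical stand-in for whatever your generated scenario method is):

```java
// Stress-loop sketch: run the scenario until it throws, report the attempt count.
public class StressLoop {
    public static int attemptsUntilFailure(Runnable runScenario, int maxAttempts) {
        for (int n = 1; n <= maxAttempts; n++) {
            try {
                runScenario.run();
            } catch (Throwable failure) {
                // First failure: report which attempt it happened on.
                System.out.println("Failed on attempt " + n + ": " + failure);
                return n;
            }
        }
        return -1; // never failed within the budget
    }

    public static void main(String[] args) {
        // Demo with an intentionally flaky "scenario" that fails on the 3rd call.
        int[] calls = {0};
        int n = attemptsUntilFailure(() -> {
            if (++calls[0] == 3) throw new AssertionError("flaky");
        }, 100);
        System.out.println(n); // prints 3
    }
}
```

A cap on attempts (rather than a true infinite loop) keeps the run from hanging forever when the flakiness doesn't reproduce.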

How to troubleshoot intermittent JUnit test failures?

I am dealing with a case where my tests pass or fail based on the order of declaration. This, of course, points to improperly isolated tests. But I am stumped about how to go about finding the issue.
The thing is, my JUnit tests derive from a class that belongs to a testing framework built on JUnit and has a dependency injection container. The container gets reset for every test by the base class setup, so there are no lingering objects, at least in the container, since the container itself is new. So I am leaning toward the following scenario:
test1 indirectly causes some classA to be loaded, which sets classA.someStaticMember to the value xyz. The test object does not maintain any references to classA directly, but classA is still loaded by the VM with the value xyz when test1 ends.
test2 accesses classA and trips up on someStaticMember having the value xyz.
The problem is: a) I don't know if this is indeed the case - how do I go about finding that out? I cannot seem to find a reference to a static var in the code...
b) Is there a way to tell JUnit to dump all its loaded classes and load them afresh for every test method?
You can declare a method with @Before, like

@Before
public void init()
{
    // set up stuff
}

and JUnit will run it before each test. You can use that to set up a "fixture" (a known set of fresh objects, data, etc. that your tests will work with independently of each other).
There's also an @After, which you can use to do any cleanup required after each test. You don't normally need to do this, as Java will clean up any objects you used, but it can be useful for restoring outside objects (stuff you don't create and control) to a known state.
(Note, though: if you're relying on outside objects in order to do your tests, what you have isn't a unit test anymore. You can't really say whether a failure is due to your code or the outside object, and that's one of the purposes of unit tests.)
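The suspected static-member scenario is easy to reproduce in plain Java (no JUnit dependency; `ClassA`, `someStaticMember`, and the value `xyz` are taken from the question's own hypothesis): whichever "test" runs first leaves a value behind in the static field, unless a @Before-style method resets it.

```java
// Plain-Java sketch of the static-state leak described in the question.
public class StaticLeak {
    static class ClassA {
        static String someStaticMember = null;   // survives across tests
    }

    static void init() {                         // what a @Before method should do
        ClassA.someStaticMember = null;          // restore the known fixture state
    }

    static void test1() { ClassA.someStaticMember = "xyz"; }

    static boolean test2SeesCleanState() { return ClassA.someStaticMember == null; }

    public static void main(String[] args) {
        test1();
        System.out.println(test2SeesCleanState()); // prints false: test1 leaked "xyz"
        init();                                    // resetting in @Before fixes it
        System.out.println(test2SeesCleanState()); // prints true
    }
}
```

This is also a way to confirm the diagnosis: if adding such a reset to @Before makes the order-dependence disappear, the static field was the culprit.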

Grails dynamic finder where field name contains reserved words

So I have a Grails domain class:

class Message
{
    Inbox inbox
    Boolean hasBeenLogicallyDeletedByRecipient
    ...

    static belongsTo = [
        inbox: Inbox,
        ...
    ]

    static constraints = {
        hasBeenLogicallyDeletedByRecipient(nullable: false)
        ...
    }
}
I would like to use a dynamic finder as follows:
def messages = Message.findAllByInboxAndHasBeenLogicallyDeletedByRecipient(
    inbox, false, [order: 'desc', sort: 'dateCreated'])
This works fine when running a unit test case in STS 2.6.0.M1 against Grails 1.2.1;
spinning up the web app, it fails because of the By in hasBeenLogicallyDeletedByRecipient (I'm guessing it has confused the dynamic finder parsing when building up the query).
I can use a criteria builder which works in the app:
def messages = Message.withCriteria {
    and {
        eq('inbox', inbox)
        eq('hasBeenLogicallyDeletedByRecipient', false)
    }
    order('dateCreated', 'desc')
}
But since withCriteria is not mocked, it doesn't immediately work in unit tests, so I could add the following to the unit test:
Message.metaClass.static.withCriteria = { Closure c ->
    ...
}
Is the criteria/unit test mocking the best/accepted approach? I don't feel completely comfortable with mocking this, as it sidesteps testing the criteria closure.
Ideally, I'd rather use the dynamic finder - is there a succinct way to make it work as is?
If there is no way around it, I suppose the field name could be changed (there is a reason why I don't want to do this, but this is irrelevant to the question)...
UPDATE:
Here's the stacktrace when I try to use findAllByInboxAndHasBeenLogicallyDeletedByRecipient() inside the app - notice how it appears to take the last By and treat everything between it and findAll as a property. I glanced at http://grails.org/OperatorNamesInDynamicMethods but it doesn't mention anything about By being verboten.
org.codehaus.groovy.grails.exceptions.InvalidPropertyException: No property found for name [byInboxAndHasBeenLogicallyDeleted] for class [class xxx.Message]
    at xxx.messages.yyyController$_closure3.doCall(xxx.messages.yyyController:53)
    at xxx.messages.yyyController$_closure3.doCall(xxx.messages.yyyController)
    at java.lang.Thread.run(Thread.java:662)
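The bogus property name in that stacktrace is consistent with a parser that splits the method name at the last "By". The sketch below is purely a hypothetical illustration (it is not the actual Grails finder parser): a naive greedy split reproduces exactly the property name from the error message.

```java
// Hypothetical illustration of a finder-name parser that greedily splits the
// method name at the LAST "By" - not the real Grails implementation.
public class FinderParser {
    static String propertyFromFinder(String method) {
        String rest = method.substring("findAll".length()); // "ByInboxAnd...ByRecipient"
        int lastBy = rest.lastIndexOf("By");                // greedy: the last "By" wins
        String prop = rest.substring(0, lastBy);            // "ByInboxAnd...Deleted"
        // Lowercase the first letter to form the derived property name.
        return Character.toLowerCase(prop.charAt(0)) + prop.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(propertyFromFinder(
            "findAllByInboxAndHasBeenLogicallyDeletedByRecipient"));
        // prints byInboxAndHasBeenLogicallyDeleted - the property in the error
    }
}
```

That the derived name matches `[byInboxAndHasBeenLogicallyDeleted]` character for character supports the guess that the embedded "By" is what confuses the dynamic finder, which is why renaming the field (or using a criteria query) sidesteps the problem.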
Testing the database querying is really an integration test, not a unit test. Is your test in /test/unit or /test/integration? I'd expect withCriteria to be fully functional in integration tests, but not in unit tests.
From the Grails docs (http://grails.org/doc/latest/), section 9.1:
Unit testing are tests at the "unit" level. In other words you are testing individual methods or blocks of code without considering for surrounding infrastructure. In Grails you need to be particularly aware of the difference between unit and integration tests because in unit tests Grails does not inject any of the dynamic methods present during integration tests and at runtime.
