iOS Testing: Is there a way to skip tests?

I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above was called
}

You can disable XCTests run by Xcode by right-clicking the test symbol in the editor gutter on the left.
You'll get a context menu where you can select the "Disable" option.
Right-clicking again will allow you to re-enable it. Also, as stated in sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing that change in your xcscheme. Tests are meant to fail, not to be silenced because they're inconvenient.

Another possible solution, which I found in an article: prefix your skipped tests with something like "skipped_".
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
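For instance, a minimal sketch (the class and test names are hypothetical): the XCTest runner only discovers instance methods whose names begin with "test", so renaming a method hides it without deleting any code:

import XCTest

final class FeatureXTests: XCTestCase {
    // was: func testFeatureX()
    func skipped_testFeatureX() {
        // No longer discovered by the runner: XCTest only runs
        // instance methods whose names start with "test".
        XCTAssertTrue(true)
    }
}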

Beginning with Xcode 11.4 you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read,
XCTest now supports dynamically skipping tests based on runtime
conditions, such as only executing some tests when running on certain
device types or when a remote server is accessible. When a test is
skipped, Xcode displays it differently in the Test Navigator and Test
Report, and highlights the line of code where the skip occurred along
with an optional user description. Information about skipped tests is
also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a
test method or setUp(). For example:
func test_canAuthenticate() throws {
    try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
    // Perform test…
}

The XCTSkipUnless(_:_:file:line:) API is similar to XCTSkipIf(_:_:file:line:) but skips if the provided expression is false instead of true, and the XCTSkip API can be used to skip unconditionally. (13696693)

I've found a way to do this by modifying my UI test's .xcscheme file: add a section called SkippedTests under TestableReference, then add individual Test tags whose Identifier attribute contains the name of your class and test method. Something like:

<SkippedTests>
    <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps

From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless():

try XCTSkipIf(skipCondition, "message")      // skips when the condition is true
try XCTSkipUnless(runCondition, "message")   // skips when the condition is false

https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
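A minimal sketch of XCTSkipUnless in practice (the device check is an assumption for illustration, not from the original answer):

func testDeviceOnlyBehavior() throws {
    // Hypothetical runtime check: the simulator sets SIMULATOR_MODEL_IDENTIFIER,
    // so its absence suggests a physical device.
    let isPhysicalDevice = ProcessInfo.processInfo.environment["SIMULATOR_MODEL_IDENTIFIER"] == nil
    try XCTSkipUnless(isPhysicalDevice, "This test only runs on a physical device.")
    // assertions here run only when the condition above is true
}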

This is what test schemes are meant for.
You can have different schemes targeting different testing situations or needs.
For example, you may want a scheme that runs all your tests (a full regression scheme), or one that selects a handful of them for a quick smoke test on your app when small changes are made.
This way, you can select a scheme according to how much testing you need to do.
Just go to
Product >> Scheme

It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about the appropriate feedback in the console though.
For instance, the following fragment makes the tests run only on an iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:

class XXXTests: XCTestCase {
    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        // The simulator exposes its model and runtime through environment variables.
        let environment = ProcessInfo.processInfo.environment
        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"],
              let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }
        // Bypass the test silently unless the simulator matches a supported configuration.
        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }
        super.invokeTest()
    }
}
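If you do want console feedback when a test is bypassed this way, one option (an assumption on my part, not from the original answer) is to print a message such as print("Skipping \(name): unsupported simulator configuration") before each early return.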

If you use Xcode 11 and test plans, you can tweak your configuration to skip or allow specific tests. An Xcode test plan is a JSON file after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file:

"testTargets" : [
  {
    "skippedTests" : [
      "SkippedFileTests", // skip the whole file
      "FileTests\/testSkipped()" // skip one test in a file
    ]
    ...

Conversely, you can skip all tests by default and enable only a few:

"testTargets" : [
  {
    "selectedTests" : [
      "AllowedFileTests", // enable the whole file
      "FileTests\/testAllowed()" // enable only one test in a file
    ]
    ...

I'm not sure if you can combine both configurations though. The "Automatically includes new tests" setting flips the logic between them.

Unfortunately, there is no built-in test case skipping. A test case either passes or fails.
That means you will have to add that functionality yourself: you can add a function to XCTestCase (e.g. XCTestCase.skip) via an extension that prints the information to the console. However, you will have to put a return after calling it to prevent the remaining asserts from running.
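A minimal sketch of that idea (the helper, the class, and the isFeatureEnabled flag are all hypothetical):

import XCTest

extension XCTestCase {
    // Hypothetical helper: it only logs the skip; it cannot stop the test by itself.
    func skip(_ message: String) {
        print("SKIPPED \(name): \(message)")
    }
}

final class FeatureTests: XCTestCase {
    // Hypothetical flag standing in for your real feature check.
    private let isFeatureEnabled = false

    func testSomething() {
        guard isFeatureEnabled else {
            skip("Feature is currently disabled.")
            return // required: bypasses the assertions below
        }
        XCTAssertTrue(true) // actual assertions would go here
    }
}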

While there are answers covering almost similar logic, if you don't want an extra file to manage conditions, you can mark your test function with throws and then use XCTSkip with a clear description explaining why the test is skipped. A clear message is important: it lets you read the reason in the Report Navigator and understand why the test was skipped without having to open the related XCTestCase.
Example:

func test_whenInitializedWithAllProperties_graphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictionary with heterogeneous items.")
}

Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
    guard #available(macOS <#VersionNumber#>, *) else {
        throw XCTSkip("Required API is not available for this test.")
    }
    // perform test using <#VersionNumber#> APIs...
}

There is no built-in test case skipping. You can use an if-else block (nested if needed) and run your assertions or print your desired output accordingly.
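A minimal sketch of that approach (the class and the isFeatureEnabled flag are hypothetical):

import XCTest

final class ConditionalTests: XCTestCase {
    // Hypothetical flag; substitute your real feature check.
    private let isFeatureEnabled = false

    func testSomething() {
        if isFeatureEnabled {
            XCTAssertTrue(true) // real assertions would go here
        } else {
            // Note: the test still reports as passed; it only logs that it was bypassed.
            print("testSomething skipped: feature is disabled")
        }
    }
}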

Related

Create post status notification method for all pipelines

Thanks in advance for the help.
For our project, which uses Jenkins, I'm working on a shared library to house all methods for our different pipelines (3 different ones).
Currently I'm facing an issue that I don't really know how to solve.
I configured the jenkins-mattermost-plugin for the project. Ideally, I'd like to create one method for the post section in Jenkins that takes currentBuild.currentResult as an argument and returns the corresponding condition block and its message.
Ex:

// if success:
success {
    mattermostSend(...success build notification)
}

// if failure:
failure {
    mattermostSend(...failure build notification)
}

etc.
The problem is not writing the method itself, but knowing when to call it in the pipeline.
I can't call it in a stage block, at least not without some block wrapper.
I hope what I'm trying to say makes sense.
You can call the function in the always post-condition:

post {
    always {
        mattermostSend("${currentBuild.currentResult}")
    }
}

Then, inside mattermostSend, create a condition that sends the required message depending on the build result.

I want Canopy web testing results to show in VS 2013 test explorer... and I'm SO CLOSE

I'm trying to figure out how to get the test results for Canopy to show in the VS test explorer. I can get my tests to show up and it will run them, but it always shows a pass. It seems like the Run() function is "eating" the results, so VS never sees a failure.
I'm sure it is a conflict between how Canopy nicely interprets the exceptions it gets into test results; normally you'd want Run() to succeed regardless of the outcome and report its results using its own reports.
Maybe I should be redirecting the output and interpreting it in the MS testing code?
So here is how I have it set up right now...
The Visual Studio test runner looks at this file for what it sees as tests; these call the canopy methods that do the real testing.
open canopy
open runner
open System
open Microsoft.VisualStudio.TestTools.UnitTesting

[<TestClass>]
type testrun() =

    [<ClassInitialize>]
    static member public setup(context : TestContext) =
        // Look in the output directory for the web drivers
        canopy.configuration.ieDir <- "."
        canopy.configuration.chromeDir <- "."
        // start an instance of the browser
        start ie
        ()

    [<TestMethod>]
    member x.LocationNoteTest() =
        let myTestModule = new myTestModule()
        myTestModule.all()
        run()

    [<ClassCleanup>]
    static member public cleanUpAfterTesting() =
        quit()
        ()
myTestModule looks like:

open canopy
open runner
open System

type myTestModule() =

    // some helper methods
    member x.basicCreate() =
        context "The meat of my tests"
        "Test1" &&& fun _ ->
            // some canopy test statements like...
            url "http://theURL.com/"
            "#title" == "The title of my page"
            // Does the text of the button match expectations?
            "#addLocation" == "LOCATION"
            // add a location note
            click ".btn-Location"

    member x.all() =
        x.basicCreate()
        // I could add additional tests here or I could decide to call them individually
I have it working now. I put the line below after the run() for each test:

Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())

So now my tests look something like:

[<TestMethod>]
member x.LocationNoteTest() =
    let locationTests = new LocationNote()
    // Add the test to the canopy suite.
    // Note, this just defines the tests to run; the canopy portion
    // of the tests does not actually execute until run is called.
    locationTests.all()
    // Tell canopy to run all the tests in the suites.
    run()
    Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())
Canopy and the UnitTesting infrastructure have some overlap in what they want to take care of. I want the UnitTesting infrastructure to be the thing reporting the summary and details of all tests, so I needed a way to "reset" the canopy portion so that I didn't have to track the last known state from canopy and then compare. For this to work your canopy suite can only have one test, but we can have as many as we want at the UnitTesting level. To adjust for that, we do the below in the [<TestInitialize>] method:

runner.suites <- [new suite()]
runner.failedCount <- 0
runner.passedCount <- 0
It might make sense to have something within canopy that could be called or configured when the user wants to use a different unit testing infrastructure around canopy.
Additionally, I wanted the output that includes the error information to appear as it normally does when a test fails, so I capture Console.Out in a StringBuilder and clear it in [<TestInitialize>]. I set it up by including the below in [<ClassInitialize>], where common.results is the StringBuilder I then use in the asserts:

System.Console.SetOut(new System.IO.StringWriter(common.results))
Create a mutable type to pass into the 'myTestModule.all' call which can be updated accordingly upon failure and asserted upon after 'run()' completes.

Grails spock database locking

I have a service method that locks a database row.
public String getNextPath() {
    PathSeed.withTransaction { txn ->
        def seed = PathSeed.lock(1)
        def seedValue = seed.seed
        seed.seed++
        seed.save()
    }
}
This is what my Spock test looks like:

void "getNextPath should return a String"() {
    when:
    def path = pathGeneratorService.getNextPath()

    then:
    path instanceof String
}
It's just a simple initial test. However, I get this error when I run it:
java.lang.UnsupportedOperationException: Datastore [org.grails.datastore.mapping.simple.SimpleMapSession] does not support locking.
at org.grails.datastore.mapping.core.AbstractSession.lock(AbstractSession.java:603)
at org.grails.datastore.gorm.GormStaticApi.lock_closure14(GormStaticApi.groovy:343)
at org.grails.datastore.mapping.core.DatastoreUtils.execute(DatastoreUtils.java:302)
at org.grails.datastore.gorm.AbstractDatastoreApi.execute(AbstractDatastoreApi.groovy:37)
at org.grails.datastore.gorm.GormStaticApi.lock(GormStaticApi.groovy:342)
at com.synacy.PathGeneratorService.getNextPath_closure1(PathGeneratorService.groovy:10)
at org.grails.datastore.gorm.GormStaticApi.withTransaction(GormStaticApi.groovy:712)
at com.synacy.PathGeneratorService$$EOapl2Cm.getNextPath(PathGeneratorService.groovy:9)
at com.synacy.PathGeneratorServiceSpec.getNextPath should return a String(PathGeneratorServiceSpec.groovy:17)
Does anyone have any idea what this is?
The simple GORM implementation used in unit tests does not support some features, such as locking. Moving your test to an integration test will use the full implementation of GORM instead of the simple one used by unit tests.
Typically, when you find yourself using anything more than the very basic features of GORM, you will need to use integration tests.
Updated 10/06/2014
In more recent versions of Grails and GORM there is now the HibernateTestMixin, which allows you to test/use such features in unit tests. Further information can be found in the documentation.
As a workaround, I was able to get it working by using Groovy metaprogramming. Applied to your example:

def setup() {
    // The current spec does not test the locking feature,
    // so for this test have the lock call delegate to the
    // get method instead.
    PathSeed.metaClass.static.lock = PathSeed.&get
}

Grails - How to make a piece of code run only during functional test

I have a piece of code in my application that should run only during functional testing. It should not be executed during unit testing, integration testing, or run-app.

if (Environment.current == Environment.TEST)

Is there anything similar to the above check that would check for functional testing?
We had a similar requirement in one of our projects.
The approach was to set a system property holding the current test phase. To do so, create a file scripts/Events.groovy with the following content:

eventTestPhaseStart = { args ->
    System.properties["grails.test.phase"] = args
}

Now you can perform logic depending on the content of this property:

if (System.properties["grails.test.phase"] == "functional") {
    // do something
}
This is very similar to this answer or this blog entry.
You can use this
GrailsUtil.environment == 'test'

Output a text file from Ranorex to include just a pass/fail result and a number

I am trying to get Ranorex to output a text file which will look like the following:
Pass
74
The pass/fail result will be obtained based on whether the test passed or failed. The number will be hardcoded, so all I need to do is store it in a variable and include it in the output.
I would have thought it would be simple, but I'm struggling to get any help from Ranorex. I thought I might be able to use the reporting function, change the output file type and alter the report structure, but that didn't work either.
Although I am used to Ranorex and writing my own user code, I am new to adapting it in this way.
All my user code is written in C#.
Can anyone offer any assistance?
Thanks!
Edit: I've now managed to get Ranorex to output a text file and I can put any text into it, including a string stored in a variable.
However, I'm struggling to store the pass/fail result of my test in a string that I can output.
I've discovered a way to do this, however it relies on the following:
The user code must be in a separate test
This separate test must exist in a sibling test case to the one your main test is in
Both this test case and the one containing your main test must be part of a parent test case
For example:

Parent TC
.....-AddUser TC
.........-MAIN TEST
.....-AddUser FailCheck
.........-USER CODE

You can then set your AddUser TC to 'Continue with sibling on fail'.
The user code is as follows:

public static void Output()
{
    string result = "";
    // The name of your Test Case
    ITestCase iCase = TestSuite.Current.GetTestCase("Add_User_Test");
    if (iCase.Status == Ranorex.Core.Reporting.ActivityStatus.Failed) {
        result = "Failed";
    }
    if (iCase.Status == Ranorex.Core.Reporting.ActivityStatus.Success) {
        result = "Passed";
    }
    int testrunID = 79;
    using (StreamWriter writer = new StreamWriter("testresult.txt"))
    {
        writer.WriteLine(testrunID);
        writer.WriteLine(result);
    }
}
This will take the testrunID (specific to each test case) and the result of the test and output them to a text file.
The idea is then to read the file in with a custom Java application I've developed and push the data into a test case management program such as QA Complete, which can mark tests as passed/failed automatically.
You can run the test suite directly using the TestSuiteRunner.Run() method. This will allow you to look at its return value directly and output success or failure based on it.
http://www.ranorex.com/Documentation/Ranorex/html/M_Ranorex_Core_Testing_TestSuiteRunner_Run.htm

if (TestSuiteRunner.Run(typeof({testSuiteclass}), {Command Line Arguments}) == 0)
{
    File.WriteAllText("testresult.txt", "success");
}
else
{
    File.WriteAllText("testresult.txt", "failure");
}
