Automatically Running a Test Case Many Times in Xcode - ios

In Xcode, is there a way for me to run a single test case n times automatically?
Reason for doing this is that some of my beta testers are encountering random crashes in my app. I see the crash logs in TestFlight, along with the stack trace, but I can't reproduce the crash.
The crash happens infrequently but when it does, it always happens when users are trying to create a DB record, which then gets uploaded to a server. The problem with the crash logs is that my code does not make an appearance in their stack traces (all UIKit & CoreFoundation stuff - and different each time).
My solution is to run the test for that part of the app hundreds of times, with an exception breakpoint set, to try to trigger the bug in my dev environment. But I don't know how to do this automatically.

It took 7 years, but as of Xcode 13, support for test repetition is now built-in.
From the Xcode 13 release notes:
Enable test repetition in your test plan, xcodebuild, or by running your test from the test diamond by Control-clicking and selecting Run Repeatedly to bring up the test repetition dialog.
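For example, from the command line (the scheme and destination below are placeholders for your own):
xcodebuild test -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 13' -test-iterations 100 -run-tests-until-failure
The -run-tests-until-failure flag stops on the first failing iteration, which is exactly what you want when chasing a rare crash.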

Basically, you need to override the invokeTest() method:
override func invokeTest() {
    for time in 0...15 {
        print("invoking test: iteration \(time)")
        super.invokeTest()
    }
}

In Xcode as such, no.
You can create an XCTestCase class that hooks into the test-running methods it inherits to return multiple runs, but that tends to be annoying and mostly undocumented.
It's probably easier to instead have a "meta-test" that calls out to your other test method repeatedly:
func testOnce() {}

func testManyTimes() {
    for _ in 0..<1000 { testOnce() }
}
You might need to call out to some per-test setup/teardown methods. You could perhaps work around that by instead making the loop body be something like:
let test = XCTestCase(selector: #selector(testOnce))
test.invokeTest()
This would lean on the XCTest machinery that your standard tests use, but it might gripe about not being wired into an XCTestCaseRun (or not).
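If the setup/teardown requirement does matter, a minimal sketch of the meta-test approach that calls setUp()/tearDown() by hand around each iteration might look like this (names and iteration count are illustrative):
import XCTest

class RepeatedTests: XCTestCase {
    // The scenario you want to stress; deliberately not prefixed with
    // "test" so XCTest doesn't also run it on its own.
    func runScenarioOnce() {
        // exercise the code path that crashes for your beta testers
    }

    func testManyTimes() {
        for _ in 0..<1000 {
            setUp()
            runScenarioOnce()
            tearDown()
        }
    }
}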

Related

XCTests canceling prematurely

I am running a few unit tests inside Xcode with XCTest. When I run them, each test should either pass or fail and show the little green check or red cross. The main issue is that my tests are canceling prematurely and are not being run properly.
Here's some more information:
When the tests are run, there is no console output whatsoever. I did have the expected console output before; now I don't.
When the tests are run, it does not require me to unlock my iPhone so that the app can run. Moreover, even if my iPhone is unlocked, it doesn't run the application, which it did before.
When running my tests, it finishes compiling and then very quickly says "Canceling..." in the information bar in Xcode (the top, centered bar that shows warnings and "Building..." when you build your project).
I am using the .xcworkspace file (because I am using Pods).
I have one build scheme which is the project.
I also had the green checks and red crosses not showing up after my tests ran, and sometimes my tests not showing up at all in the testing tab. I fixed those two issues by deleting the Podfile.lock file, the Pods directory, and the .xcworkspace file and then running pod install, but the other issues remained.
I tried running new tests on a different branch, same issues.
Before running into any of these issues, I was just writing tests and trying different methods of writing tests. I've reverted all those changes and I still get these errors.
I have also run into the same issues when running the tests on the simulator.
When I compile and run the app from Xcode, it runs fine.
Any advice on how to solve this would be great! Please ask for more details if needed.
Here's a video of me running my tests. I set up a new project and new unit test target to show what's going on. Note: There's an empty test in there and a test that should always pass:
As you can see, the tests are not running properly: the green checks and red crosses are not showing up anymore, the test that's supposed to always pass is failing, there is no console output, and the top information bar says "Canceling..." almost immediately. Any advice on why this canceling is happening would be great!
Here's the code (it's just the basic code after creating a new unit test target, except I added one assert that should always pass):
import XCTest
@testable import example

class exampleTests: XCTestCase {
    override func setUp() {
        // Put setup code here. This method is called before the invocation of each test method in the class.
    }

    override func tearDown() {
        // Put teardown code here. This method is called after the invocation of each test method in the class.
    }

    func testExample() {
        // This is an example of a functional test case.
        // Use XCTAssert and related functions to verify your tests produce the correct results.
        XCTAssertTrue(true)
    }

    func testPerformanceExample() {
        // This is an example of a performance test case.
        self.measure {
            // Put the code you want to measure the time of here.
        }
    }
}
Edit:
So, as per @CPR's suggestion, I tried it again on the simulator and the errors were basically all gone. I tried it again on the physical device, but the errors persisted. Note: I had tried it on the simulator before and got the same errors; I didn't change anything since then, so I'm unsure why it worked now. Here's the console output from running the tests on the sim (not sure if it will help, but I'll add it anyway):
Test Suite 'All tests' started at 2018-12-06 03:59:53.045
Test Suite 'exampleTests.xctest' started at 2018-12-06 03:59:53.046
Test Suite 'exampleTests' started at 2018-12-06 03:59:53.046
Test Case '-[exampleTests.exampleTests testExample]' started.
Test Case '-[exampleTests.exampleTests testExample]' passed (0.001 seconds).
Test Case '-[exampleTests.exampleTests testPerformanceExample]' started.
/Users/akashkundu/Documents/example/exampleTests/exampleTests.swift:30: Test Case '-[exampleTests.exampleTests testPerformanceExample]' measured [Time, seconds] average: 0.000, relative standard deviation: 115.251%, values: [0.000004, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
Test Case '-[exampleTests.exampleTests testPerformanceExample]' passed (0.348 seconds).
Test Suite 'exampleTests' passed at 2018-12-06 03:59:53.396.
Executed 2 tests, with 0 failures (0 unexpected) in 0.348 (0.350) seconds
Test Suite 'exampleTests.xctest' passed at 2018-12-06 03:59:53.396.
Executed 2 tests, with 0 failures (0 unexpected) in 0.348 (0.350) seconds
Test Suite 'All tests' passed at 2018-12-06 03:59:53.397.
Executed 2 tests, with 0 failures (0 unexpected) in 0.348 (0.351) seconds
Can anyone shed some light on why running the tests on the physical device causes these errors?
Edit:
Here's the error/log message from the "Messages" tab in Xcode, found by @CPR:
example.app encountered an error (Failed to establish communication with the test runner. (Underlying error: Unable to connect to test manager on 8b441d96d063b3b6abf55b06115441d160e85e67. (Underlying error: kAMDMuxConnectError: Could not connect to the device.)))
-----------------------------------------------------------------------
Solution:
Thanks to @CPR, the solution is to simply restart the device. Another lesson learned is to use the "Messages" tab to see logs and errors; it's the rightmost tab, next to the "Breakpoint" tab.
His full answer follows.
Problem resolved in comments. For future readers, here are some good things to check if you encounter similar issues:
Check Xcode messages tab to look for build errors.
Restart device/simulator as appropriate. This fixed the problem in this case.
Uninstall/reinstall app on device/simulator.
Are certs/provisioning profiles valid? Try generating new ones and re-running.
Does it work on the simulator? If yes, this is likely an issue with the device itself.
Are breakpoints hit when you run the code? If yes, then this is a test failure. If no, the problem is happening before the tests are even run (as was the case here).

XC UITesting flickering with finding UIElements

I have a section of code that runs if the user needs to re-auth after logging in. During UI tests, this popover is sometimes displayed, so I have a check for its existence:
if XCUIApplication().staticTexts["authLabel"].exists {
    completeAuthDialog()
}
When this runs locally, it's fine: the test completes and the framework finds the element no problem. But when the nightly job runs on CI, it fails the first time, yet once the same build is rebuilt, the test passes. (authLabel is the UILabel's accessibility identifier, by the way.) So I have been trying to figure out what is causing the flakiness.
Yesterday I spent time on the issue, and it seems that the framework just doesn't find the elements sometimes. I have used the Accessibility Inspector to ensure I am querying for the same thing it sees.
I even expanded that if check with 4 or 5 additional || clauses to check for any element inside the popover. The elements all have accessibility identifiers, and I have also used the record feature to ensure that it passes back the same element "names" I am using.
I am kind of stuck; I don't know what else to try or what could be causing this issue. The worst part is it ran fine for a couple of months, but it seems to fail every night now, and as I said, when the tests are run locally inside Xcode they pass fine. Could this be an issue with building from the command line?
Tests often execute more slowly on a different machine; this problem seems particularly prevalent with CI machines, as they tend to be under-powered.
If you just do a single check for an element existing, the test only has one point in time to get it right, and if the app was slow to present the element then the test will fail.
You can defend against having a flaky test by using a waiter to check a few times over a few seconds to ensure that you've given the app enough time to show the authentication dialog before continuing.
let authElement = XCUIApplication().staticTexts["authLabel"]
let existsPredicate = NSPredicate(format: "exists == true")
let expectation = XCTNSPredicateExpectation(predicate: existsPredicate, object: authElement)
let result = XCTWaiter().wait(for: [expectation], timeout: 5)
if result == .completed {
    completeAuthDialog()
}
You can adjust the timeout to suit your needs - a longer timeout will result in the test waiting a longer time to continue if the auth dialog doesn't appear, but will give the dialog more time to appear if the machine is slow. Try it out and see how flaky the tests are with different timeouts to optimise.
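Since Xcode 9 there is also XCUIElement.waitForExistence(timeout:), which wraps the same wait-and-poll idea into a single call:
let authElement = XCUIApplication().staticTexts["authLabel"]
if authElement.waitForExistence(timeout: 5) {
    completeAuthDialog()
}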

How to stop Xcode iOS unit tests if a fatalError is hit?

How can I stop Xcode iOS unit tests if a fatalError is hit?
That is, say I have 10 unit tests, but the code called by unit test number 5 has a coding problem (the coding issue in this case is in the test case and setup code) and throws a fatalError. In this case the unit testing stops there and does not continue to the other test cases in that test class.
(I'm not sure if this is the intended process for good unit testing or not?)
Try
override func setUp() {
    super.setUp()
    // Put setup code here. This method is called before the invocation of each test method in the class.
    continueAfterFailure = false
}
A related problem I had was stopping the unit test with a breakpoint when it fails, so that I can see the issue, without having to click or scroll through tons of test output.
Making a unit test failure breakpoint is simple, but wasn't easy to find.
In Xcode 8+ there is a new breakpoint that you can add, called a "Test Failure Breakpoint".
Click on Breakpoints (Left Panel ... Command + 7)
Click on + (Bottom left corner)
Click on "Test Failure Breakpoint"
References
Debugging Tests with Xcode
There is no easy fix for this. In the case of an uncaught (Objective-C) exception or a failed assert, the process that runs the unit tests receives either a Mach exception or a Unix signal.
Matt Gallagher has a nice solution for this, which he presents in this blog post.
It is good practice to write safe code everywhere, including your unit tests.
Regarding your problem, as Oleg Danu replied, you should set continueAfterFailure = false.
The next step is to add the following before the point where your test can crash:
var optionalVariable: Int?
XCTAssertNotNil(optionalVariable)
I recommend adding it in setUp().
This way your test will stop before crashing, and you don't need to set any breakpoints.
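On Xcode 11 and later, XCTUnwrap gives a tidier version of the same idea: it fails the test (rather than crashing) when a value is nil. A minimal sketch, where DBRecord, makeRecord() and upload(_:) are hypothetical stand-ins for your own code:
func testRecordUpload() throws {
    let record: DBRecord? = makeRecord()   // hypothetical helper that may return nil
    let unwrapped = try XCTUnwrap(record)  // fails the test here instead of crashing later
    upload(unwrapped)                      // hypothetical upload call
}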

Does print() / println() slow execution?

I have an app with a couple of thousand lines, and within that code there are a lot of println() calls. Does this slow the app down? It is obviously executed in the Simulator, but what happens when you archive, submit, and download the app from the App Store/TestFlight? Is this code still "active"? And what about code that is "commented out":
is it literally never read, or should I delete commented-out code when I submit to TestFlight/the App Store?
Yes, it does slow the code.
Both print and println decrease the performance of the application.
Println problem
println is not removed when Swift does code optimisation.
for i in 0...1_000 {
    println(i)
}
This code can't be optimised away; after compiling, the assembly still performs a loop of println calls that don't actually do anything valuable.
Analysing Assembly code
The problem is that the Swift compiler can't optimally optimise code containing print and println calls.
You can see this if you look at the generated assembly code.
You can see the assembly code with Hopper Disassembler, or by compiling Swift code to assembly using the swiftc compiler:
xcrun swiftc -emit-assembly myCode.swift
Swift code optimisation
Let's look at a few examples for better understanding.
The Swift compiler can eliminate a lot of unnecessary code, such as:
Empty function calls
Creating objects that are not used
Empty loops
Example:
class Object {
    let x: Int
    init(x: Int) { self.x = x }
    func nothing() {
    }
}

for i in 0...1_000 {
    let object = Object(x: i)
    object.nothing()
    object.nothing()
}
In this example the Swift compiler would do this optimisation:
1. Remove both nothing method calls.
After this the loop body would have only one statement:
for i in 0...1_000 {
    let object = Object(x: i)
}
2. Then it would remove the creation of the Object instance, because it's not actually used:
for i in 0...1_000 {
}
3. The final step would be removing the empty loop.
And we end up with no code to execute.
Solutions
Comment out print and println
This is definitely not the best solution.
//println("A")
Use DEBUG preprocessor statement
With this solution you can simply change the logic of your debug_println function:
debug_println("A")
func debug_println<T>(object: T) {
    #if DEBUG
    println(object)
    #endif
}
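(In Swift 2 and later, println was renamed print, but the same DEBUG-gating pattern still applies; debugLog here is an illustrative name:)
func debugLog<T>(_ object: T) {
    #if DEBUG
    print(object)  // compiled out of release builds, where DEBUG is not defined
    #endif
}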
Conclusion
Always remove print and println from release applications!
If you add print and println instructions, the Swift code can't be optimised in the most optimal way, and that can lead to big performance penalties.
Generally you should not leave any form of logging turned on in a production app. It will most likely not impact performance, but it is poor practice to leave it enabled and unneeded.
As for commented code, this is irrelevant as it will be ignored by the compiler and not be part of the final binary.
See this answer for a variety of solutions on how to disable println() in production code: Remove println() for release version iOS Swift.
As you do not want to have to comment out all your println() calls just for a release, it is much better to just disable them; otherwise you'll be wasting a lot of time.
println shouldn't have much of an impact at all, as the bulk of the operation has already been carried out before that point.
Commented-out code is sometimes useful. Although it can make your source difficult to read, it has absolutely no bearing on performance whatsoever, and I've never had anything declined for commented-out code; my stuff is full of it.

Discovering blocking erlang threads

I have a project which has lots of modules, each with different running threads. I wrote a little script which goes through each one and safely reloads the code (for hot swapping):
reload_all() ->
    ?MODULE:reload_all(?MODULE_LIST).

reload_all([]) -> ok;
reload_all([T|C]) ->
    io:fwrite("Purging ~w\n", [T]),
    try_purge(T),
    {module, T} = code:load_file(T),
    ?MODULE:reload_all(C).

try_purge(T) -> try_purge(T, 1).

try_purge(T, Wait) ->
    case code:soft_purge(T) of
        true -> ok;
        false ->
            io:fwrite("* Waiting ~w seconds for ~w module\n", [Wait, T]),
            timer:sleep(Wait * 1000),
            try_purge(T, Wait + 1)
    end.
It uses the soft_purge() function, which only purges the code if there are no threads running the "old" code that would be killed by a normal purge. It waits in increasing intervals and keeps trying. I've designed the project so that the wait should never be more than a minute in total, but realistically it should more or less always be instant.
The problem I'm running into is that sometimes a module has a bug causing it to block indefinitely for one reason or another, and my reload_all() script never completes. This is the desired behavior: it lets me know that something is wrong. The problem is that tracking down the bug involves lots and lots of testing and analyzing of the code, which sometimes doesn't even work, because the bug only shows up in the production environment and not in the testing one.
My question is: Is there a way to identify which threads are running the "old" code in a module, and see which function they are currently stuck in?
You can check whether the old or the new version of a module is in use with erlang:check_old_code/1 and erlang:check_process_code/2. See the Erlang manual.
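A rough sketch of how those two calls could be combined to list the processes still holding a module's old code, together with the function each one is currently executing (illustrative, using only standard BIFs):
%% List {Pid, CurrentFunction} for every process still running
%% old code of Module.
old_code_holders(Module) ->
    [{Pid, erlang:process_info(Pid, current_function)}
     || Pid <- erlang:processes(),
        erlang:check_process_code(Pid, Module)].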
