Xcode test navigator stuck on spinner using KIF tests (iOS)

I am writing integration tests for my iOS app using KIF with the latest Xcode 5. When I run a test, a suite of tests, or all of them, the tests pass with no errors according to the console log, but the test navigator either takes many minutes to show the green pass icon for simple tests like Login, or keeps the spinner running indefinitely. I frequently have to Force Quit Xcode in order to clear the test results. I see this both on the simulator and the device.
I have tried using [tester waitForTimeInterval:3.0]; at the end of each test to no avail.
I have not found any discussions or solutions in all my searches, so I'm hoping to get some answers on this one.
Thanks in advance.

Thanks to Scott Anderson of Walmart Labs for this tip.
The cause of the slow test resolution was NSLog(). I have my own macro version that enables logging when compiled for Debug, which is the configuration used for Test builds. I log the output of all my server calls, and that adds up, especially during the registration process. When I disabled the logging, my tests came back green right after finishing, with no more hanging spinner.
This is only my speculation, but it would explain the slowness: the test navigator appears to parse the console output for the XCTest results, and heavy logging slows that parsing to a crawl.
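For reference, such a Debug-only macro commonly looks something like this (a minimal sketch, not the asker's actual macro; DLog is an illustrative name):

// DLog compiles down to NSLog only when DEBUG is defined, as it is for
// Debug (and therefore Test) configurations; Release builds emit nothing.
#ifdef DEBUG
#define DLog(fmt, ...) NSLog((@"%s " fmt), __PRETTY_FUNCTION__, ##__VA_ARGS__)
#else
#define DLog(fmt, ...)
#endif

Keying the macro off a separate flag (say, a hypothetical ENABLE_TEST_LOGGING) rather than DEBUG alone would let you silence it for test runs while keeping it in ordinary Debug builds.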

Bypass or Mock Apple Permission Dialogs during Testing

I'm having trouble getting my test suite to pass in full when it runs on CircleCI. Everything passes locally, both when run as a suite and when run individually. I eventually tracked the problem down to the Apple notification/location permission dialogs. The tests still run, but any test that expects an alert to show fails because the Apple dialog is still on the screen. This happens essentially every time you run the full suite on a new device or after deleting the app.
My question: what is the best way to deal with these dialogs? I'm pretty sure it is impossible to write the UI tests around it, since UI tests are strict about ordering and about what is on screen. The tests are randomized, and the dialog only shows for the first test that triggers it; the rest won't need to worry about it.
My current thinking, though I'm unsure how to proceed:
Mock the request for location/push notifications so it never triggers the dialog. This has been difficult for me so far, as I have a class that does all the work for push and location. Do I mock against Apple's classes or my own? (See the sketch after this list.)
Write a separate target for tests that has only one test, which triggers the dialog and can close it. This would only be used when run on the CI server. It may not work, as the same simulator may not be used between test runs. I would think it would be, but there are no guarantees.
Add debug code to the live code to bypass some of these dialogs and permissions. I already have code that runs in DEBUG for notifications, since you never actually get a successful token on the simulator; I just stub a fake return in debug so I can continue testing. What's a little more? Honestly, I'm not considering this one if I can at all help it. There is already enough "testing" code in the live codebase that I'd like to avoid adding more if at all possible.
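To illustrate option 1: if the app checks CLLocationManager's class-level authorization status before prompting, one hedged approach is to stub that status with OCMock so the prompting path is never taken. This is only a sketch, with illustrative names; whether it actually suppresses the dialog depends on how your own class asks for permission.

#import <XCTest/XCTest.h>
#import <CoreLocation/CoreLocation.h>
#import <OCMock/OCMock.h>

@interface LocationPermissionTests : XCTestCase
@property (nonatomic, strong) id locationManagerClassMock;
@end

@implementation LocationPermissionTests

- (void)setUp
{
    [super setUp];
    // Stub the class method so the code under test believes permission
    // is already granted and never calls requestWhenInUseAuthorization.
    self.locationManagerClassMock = OCMClassMock([CLLocationManager class]);
    OCMStub(ClassMethod([self.locationManagerClassMock authorizationStatus]))
        .andReturn(kCLAuthorizationStatusAuthorizedWhenInUse);
}

- (void)tearDown
{
    [self.locationManagerClassMock stopMocking];
    [super tearDown];
}

@end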
Would love some feedback or examples on how to proceed if anyone has any ideas.
Setup Details:
ObjC
Latest version of Xcode, supports iOS 8 and up
Using KIF and OCMock for testing
We had a couple of ways to work around this:
In beforeEach/setUp, check for the existence of the alert box and then use KIF's API to acknowledge it (a sketch follows this list).
https://github.com/plu/JPSimulatorHacks just works!
Use an XCUITest to trigger the permission flows and turn all permissions on before moving to the target test bundle.
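A minimal sketch of the first workaround, using KIF's acknowledgeSystemAlert API (BaseUITestCase is an illustrative name; how acknowledgeSystemAlert behaves when no alert is present varies by KIF version, so you may need to guard it):

#import <KIF/KIF.h>

@interface BaseUITestCase : KIFTestCase
@end

@implementation BaseUITestCase

- (void)beforeEach
{
    [super beforeEach];
    // Acknowledge any pending system permission alert so it can't sit
    // on top of an alert that a later test expects to see.
    [tester acknowledgeSystemAlert];
}

@end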
This answer is woefully late to my own question, but we did manage to solve this by using wix/AppleSimulatorUtils. Locally this was easy to resolve by running the app and approving any pop-ups for Location or Notifications; they won't come back until you run on a new simulator. For command-line/CI runs, we would use the tool as part of the setup: pre-approve the permissions for the device and the dialogs won't show up during the test run. This tool has saved me countless hours and I recommend it wholeheartedly.

Tests on Bitrise (some consistently, some inconsistently) failing with "AX action" errors

I have a number of tests that I've written, in the XCUITest framework, for a pretty substantial app (the app's been around for a few years longer than I have at this company). All of the tests pass consistently on my laptop and on the laptops of the other engineers.
When running tests on Bitrise, the first UI test fails every time in the setup phase with the following message:
testFixtureAttachment, UI Testing Failure - Failed to perform AX action for monitoring the animations of (app), error: Error -25204 performing AXAction 2043
Other tests usually pass but sometimes fail with errors such as:
UI Testing Failure - Failed to perform AX action for monitoring the event loop of (app), error: Error -25204 performing AXAction 2042
UI Testing Failure - Failed to scroll to visible (by AX action) TextField 0x7fe800f9fa20: traits: ... error: Error -25204 performing AXAction 2003
How can I resolve this so that, at a minimum, I don't have my first test always failing on setup?
Per feedback from the XCUITest team, this is happening because the simulator is CPU-starved and AX Action is timing out when spinning up the app. I was able to reproduce this on my own machine (which did not normally exhibit the failure) by using Power Management to step down my CPU.
Increasing the CPU allocation in Bitrise is the obvious solution. However, there is another, bizarre solution!
Immediately before the setup function for the tests, I have the following line:
let app = XCUIApplication()
This allows my various tests to call app. instead of the longer, full syntax.
Removing this line turned out to prevent the error from occurring. I found the fix in this Apple Developer forum thread:
https://forums.developer.apple.com/thread/4472
So... it fixed it for me, and hopefully it fixes this for other people too.
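If you still want the short app shorthand after removing that line, one alternative in the spirit of that thread is to create the XCUIApplication inside setUp, so nothing touches the accessibility machinery before the test harness is ready. A sketch (the thread's snippet is Swift; this is the Objective-C equivalent, and MyUITests is an illustrative name):

#import <XCTest/XCTest.h>

@interface MyUITests : XCTestCase
@property (nonatomic, strong) XCUIApplication *app;
@end

@implementation MyUITests

- (void)setUp
{
    [super setUp];
    self.continueAfterFailure = NO;
    // Create the XCUIApplication here, once the harness is running,
    // rather than as an eagerly initialized stored property.
    self.app = [[XCUIApplication alloc] init];
    [self.app launch];
}

@end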
This is usually related to slower environments. Xcode can't really guarantee UI test execution on older and slower machines. This is true for virtualized environments (like the one Bitrise.io uses), as well as for older machines or machines with HDD storage instead of SSD.
There are workarounds which might or might not help, depending on your project. You can find a list of related issues & possible solutions at: https://bitrise-io.github.io/devcenter/ios/known-xcode-issues/.
From the link, the solutions which work in most of the cases:
Try another Xcode version.
Try another Simulator device (e.g. instead of running the test in "iPhone 6" try it with "iPhone 6s Plus")
Some users had success with splitting the tests into multiple Schemes, and running those separately, with separate Test steps.
A great article about splitting tests into multiple Schemes: http://artsy.github.io/blog/2016/04/06/Testing-Schemes
Others reported that adding a delay after app.launch() can leave enough time for Xcode / the iOS Simulator to initialize the Accessibility labels, so that UI tests can properly find the elements by those labels (a sketch follows this list).
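As a slightly more robust variant of that delay, you can wait for a known element to exist after launch instead of sleeping for a fixed interval. A minimal Objective-C sketch, where the "Login" button is an illustrative element from your own app:

// Launch, then block until a known element exists before the first
// interaction, giving the simulator time to publish accessibility data.
XCUIApplication *app = [[XCUIApplication alloc] init];
[app launch];

NSPredicate *exists = [NSPredicate predicateWithFormat:@"exists == YES"];
[self expectationForPredicate:exists
          evaluatedWithObject:app.buttons[@"Login"]
                      handler:nil];
[self waitForExpectationsWithTimeout:10.0 handler:nil];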
Related StackOverflow & other forum links:
iOS 9 UI Testing - Test Fails Because Target Control Isn't Available (yet)
https://forums.developer.apple.com/thread/31312
https://github.com/fastlane/fastlane/issues/3874

How well do the Xcode 7 UI tests integrate with Xcode Bots? Do they display the UI testing steps?

Has anyone integrated the new Xcode UI tests (XCUITest) with Xcode Bots yet? I'm specifically interested in how the test results are displayed. When the tests are run in Xcode itself, the Test Reports section lays out step by step what happened in each test case, complete with screenshots; this applies both to test cases that passed and to ones that failed. Do the Xcode Bots results do anything similar?
You won't see much on the web interface.
But you'll see similar reports in Xcode for each bot run: all the steps, plus screenshots where applicable.

How do I automatically create performance reports for an iOS app?

For some of my iOS application projects, I would like my CI server to be able to report the following properties:
startup time
frame rate
both as a graph over time, and with "low water marks" so the build fails if the measured results aren't within certain criteria. I've already found some of the things I need:
The CI server will be Jenkins.
I can use Transporter Chief to get the built app onto an iPad.
To measure the startup time I could find the duration between launching main() and leaving application:didFinishLaunchingWithOptions:.
To measure frame rate I can put a CADisplayLink into the app and sample its duration property (see the sketch after this list).
If those tests output JMeter XML, then Jenkins can display the output via the Performance plugin.
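To make those two measurement ideas concrete, here is a minimal Objective-C sketch of both; all names are illustrative. One refinement: CADisplayLink's duration reports the nominal refresh interval, so computing the actual frame interval from successive timestamps tends to be more informative:

#import <UIKit/UIKit.h>

// 1) Startup time: record a timestamp at the top of main() and log the
//    delta in application:didFinishLaunchingWithOptions:.
CFAbsoluteTime gMainStartTime;  // exposed via a shared header in practice

int main(int argc, char *argv[])
{
    gMainStartTime = CFAbsoluteTimeGetCurrent();
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil,
                                 NSStringFromClass([AppDelegate class]));
    }
}

// Then in application:didFinishLaunchingWithOptions::
//   NSLog(@"startup: %.3fs", CFAbsoluteTimeGetCurrent() - gMainStartTime);

// 2) Frame rate: a CADisplayLink-driven sampler.
@interface FrameRateSampler : NSObject
- (void)start;
- (void)stop;
- (double)currentFPS;
@end

@implementation FrameRateSampler {
    CADisplayLink *_link;
    CFTimeInterval _lastTimestamp;
    double _fps;
}

- (void)start
{
    _link = [CADisplayLink displayLinkWithTarget:self
                                        selector:@selector(tick:)];
    [_link addToRunLoop:[NSRunLoop mainRunLoop]
                forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link
{
    if (_lastTimestamp > 0) {
        // Actual interval between the last two frames.
        _fps = 1.0 / (link.timestamp - _lastTimestamp);
    }
    _lastTimestamp = link.timestamp;
}

- (double)currentFPS
{
    return _fps;
}

- (void)stop
{
    [_link invalidate];
    _link = nil;
}

@end

A CI job could then collect these numbers from the device log and convert them into the JMeter XML that the Performance plugin expects.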
What I haven't worked out is, how should I embed those tests into my app and launch it on the iPad? As described above I can deploy the app to the iPad, but then I don't know how I would launch it to gather the results of the tests. My unit tests are running on the simulator - I don't want to run the performance tests there obviously :-).
I imagine that there's a solution involving jailbreaking the iPad and controlling the app over SSH, I'd prefer not to go down that route if it's possible. If you have done that and can explain how it works, I'd still like to hear about it.
I'm also using fruitstrap to install apps on the device in CI. In terms of booting the app, I know of two ways:
Use fruitstrap with the debugger attached
I know teams that have done this to run KIF integration tests on devices in CI. I have played around with fruitstrap to get it booting apps on the device, but haven't myself taken the extra step of automating the whole thing.
shameless plug for my post on fruitstrap: http://www.stewgleadow.com/blog/2011/11/05/installing-ios-apps-on-the-device-from-the-command-line/
Use the instruments command line tool with UIAutomation
I know the instruments tool can boot apps on the device automatically in CI (I wish it also installed them, but we have fruitstrap for that until Apple fixes it). So you could write a really simple little UIAutomation test that gave your app enough time to do its performance analysis.
Jonathan Penn has a nice little demo project for UIAutomation, plus a build script, that could be integrated with an 'install' step using fruitstrap to try it out on the device.
In both cases, I use a little wrapper around libusb to give me the device IDs of attached devices, so the more devices I plug into my CI machine, the more devices it runs tests on: https://github.com/sgleadow/iphone_detect
Can you launch the app on the device using lldb?
If so, it may also be able to capture the log output.

What does "Timed out trying to acquire capabilities data" mean coming from an iOS app?

This is a pretty silly situation: I'm using GHUnit to test an app and I'm running those tests outside the simulator according to the instructions.
Everything was great for a long time, but now I'm rather frequently getting this mysterious log message in the console, coinciding with a pause of several seconds, in my test suite:
Timed out trying to acquire capabilities data.
This is a little disconcerting if only because it's only happening on one machine; everything is as smooth as can be everywhere else I run this test suite. I can totally believe that there's hardware missing or failing on this machine, but does anybody have any idea where to go next in debugging this? Google has never heard the phrase before.
We found this to be a problem when the unit tests are the first thing run after reboot.
While it would be nice to know the cause more fundamentally, we were able to fix it by having the simulator launch at login on our continuous integration machine.
