I'm having trouble getting my test suite to pass in full when it runs on CircleCI. Everything passes locally, both when run as a suite and when run individually. I eventually tracked the problem down to the Apple notification/location permission dialog. The tests still run, but any test that expects an alert to show fails because the Apple dialog is still on screen. This basically happens every time you run the full suite on a fresh simulator/device or after deleting the app.
My question: what is the best way to deal with these dialogs? I'm pretty sure it's impossible to write the UI tests around it, since UI tests are strict about ordering and about what is on screen. The tests are randomized, and the dialog only shows for the first test that triggers it; the rest won't need to worry about it.
My current thinking, though I'm unsure how to proceed:
Mock the request for location/push notifications so it never triggers the dialog. This has been difficult so far because I have one class that does all the work for push and location. Do I mock against Apple's classes or my own? (A rough sketch of this option appears after this list.)
Write a separate test target that has only one test, which triggers the dialog and closes it. This would be used only when running on the CI server. It may not work, since the same simulator may not be used between test runs. I would think it would be, but there are no guarantees.
Add debug code to the production code to bypass some of these dialogs and permissions. I already have DEBUG-only code for notifications, since you never actually get a valid token on the simulator; I just stub a fake return in debug builds so I can keep testing. What's a little more? Honestly, I'm not considering this one if I can at all help it. There is already enough "testing" code in the production codebase, and I'd like to avoid adding more if at all possible.
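For option 1, this is roughly what I picture doing with OCMock, stubbing Apple's CLLocationManager directly rather than my own wrapper; the test and class names here are just placeholders, so treat it as a sketch rather than something I've proven out:

    // Sketch only: stub Apple's CLLocationManager so the "already authorized"
    // path is taken and the system dialog never appears.
    #import <XCTest/XCTest.h>
    #import <OCMock/OCMock.h>
    #import <CoreLocation/CoreLocation.h>

    @interface LocationPermissionTests : XCTestCase
    @end

    @implementation LocationPermissionTests

    - (void)testLocationFeatureWithoutSystemDialog
    {
        // Pretend the user has already granted location access.
        id locationManagerMock = OCMClassMock([CLLocationManager class]);
        OCMStub(ClassMethod([locationManagerMock authorizationStatus]))
            .andReturn(kCLAuthorizationStatusAuthorizedWhenInUse);

        // ... exercise the feature under test here. A wrapper that checks
        // +[CLLocationManager authorizationStatus] before calling
        // -requestWhenInUseAuthorization should now skip the request entirely.

        [locationManagerMock stopMocking];
    }

    @end

Is stubbing the Apple class like this reasonable, or should I be adding a seam to my own permissions class and mocking that instead?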
Would love some feedback or examples on how to proceed if anyone has any ideas.
Setup Details:
ObjC
Latest version of Xcode, supports iOS 8 and up
Using KIF and OCMock for testing
We had a couple of ways to work around this:
In beforeEach/setUp, check for the existence of the system alert and then use KIF's API to acknowledge it (see the sketch after this list).
https://github.com/plu/JPSimulatorHacks just works!
Use an XCUITest to trigger the permission flows and turn all permissions on before moving on to the target test bundle.
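For the first approach, the check ends up looking roughly like this. It is a sketch only: the base-class name is made up, and you should check how your KIF version behaves when no system alert is present.

    #import <KIF/KIF.h>

    // Hypothetical shared base class for the KIF tests.
    @interface PermissionsAwareTestCase : KIFTestCase
    @end

    @implementation PermissionsAwareTestCase

    - (void)beforeEach
    {
        // Dismiss the iOS permission alert if one is on screen so a randomly
        // ordered test that expects an in-app alert doesn't trip over it.
        [tester acknowledgeSystemAlert];
    }

    @end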
This answer is woefully late to my own question, but we did manage to solve this by using wix/AppleSimulatorUtils. Locally this was easy to resolve by running the app and approving any pop-ups for location or notifications; they won't come back until you run on a new simulator. For command-line/CI runs, we use it as part of the setup: pre-approve the permissions for the device and the dialogs won't show up during the test run. This tool has saved me countless hours and I recommend it wholeheartedly.
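For reference, the kind of invocation we run during CI setup looks roughly like this; the bundle ID and simulator selection are placeholders, and the exact flags depend on your version, so check applesimutils --help:

    # Pre-approve permissions on the booted simulator before the tests start.
    applesimutils --booted \
                  --bundle com.example.MyApp \
                  --setPermissions "notifications=YES, location=always"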
I am writing integration tests for my iOS app using KIF with the latest Xcode 5. When I run a single test, a suite of tests, or all of them, the tests pass with no errors according to the console log, but the test navigator either takes many minutes to show the green pass icon, even for simple tests like Login, or keeps the spinner running indefinitely. I frequently have to Force Quit Xcode in order to clear the test results. I see this both on the simulator and on the device.
I have tried using [tester waitForTimeInterval:3.0]; at the end of each test to no avail.
I have not found any discussions or solutions in all my searches, so I'm hoping to get some answers on this one.
Thanks in advance.
Thanks to Scott Anderson of Walmart Labs for this tip.
The cause of the slow test resolution was NSLog(). I have my own macro version that only emits logs when compiled for Debug, which is the case for the Test builds. I log the output of all my server calls, which adds up, especially during the registration process. When I disabled that logging, my tests came back green right after finishing, with no more hanging spinner.
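For context, the macro is essentially the usual conditional-logging pattern; this is a sketch rather than my exact code, and DLog is a stand-in for whatever you call yours:

    // Shared header (or the .pch).
    #ifdef DEBUG
        #define DLog(fmt, ...) NSLog((@"%s " fmt), __PRETTY_FUNCTION__, ##__VA_ARGS__)
    #else
        #define DLog(fmt, ...) do {} while (0)
    #endif

In my case it was enough to stop logging the server responses while the tests run.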
My speculation is that the test navigator parses the console output for the XCTest results, and does so slowly; that would explain the delay.
I need to know whether I can do continuous integration with Xcode Server. For example: run the tests every night or whenever someone commits changes, and so on.
I am trying to decide on an iOS UI automation tool to integrate with my Xcode Server.
Thanks
There are a few problems here:
UIAutomation has no built-in support in Xcode Server. I've filed bugs, I've chased down people at WWDC, and the most I've ever gotten on this problem is basically a shrug. I'm not sure UIAutomation is a priority for Apple right now, so you're not going to get any official support.
As was noted, you might be able to use a trigger. The trigger won't be able to add anything to the Xcode Server report besides, possibly, the error logging; you're not going to get anything added to the nice report table.
Running on actual devices has traditionally been a problem (if you care about that). Loading the app has been a problem for us, but Xcode Server might be able to preload the app for you. In addition, it seems like this might have changed in the iOS 8 SDK.
There is just a lot of uncertainty around this sort of workflow. I'm hoping Apple eventually makes an announcement or adds a new tool, but the best answer I've gotten is if you want to go down this path, use UI Unit Tests. That's a shame because it requires knowledge of Obj-C or Swift, and means interacting with the app at an API level instead of an abstract level, but if you're looking for the direction Apple wants to see people go, that's it.
Edit 7/4/2015: As of WWDC 2015, there is a new UI Testing component as part of Xcode 7 that, in my experience, seems totally supported, and is promising Xcode Server support. I would very strongly recommend using that, and not using the Instruments UIAutomation tool.
With Xcode 6 right around the corner, Apple is adding some features to Xcode Server; specifically, it looks like "Triggers" will be helpful for running iOS UIAutomation tools. Since you can run UIAutomation scripts from the command line, it should be possible to use triggers to run your scripts post-build. This, alongside the logic for when a bot should run, will let you decide whether it should be nightly or on every commit.
https://developer.apple.com/library/prerelease/ios/documentation/DeveloperTools/Conceptual/WhatsNewXcode/Articles/xcode_6_0.html#//apple_ref/doc/uid/TP40014509-SW1
I wrote a framework around UIAutomation called Illuminator to handle tasks like nightly test runs, pull request tests, and other automated conveniences.
It provides a flexible and extensible command line that can execute any particular subset of tests that you'd like, and produces reports in JUnit format (used by Jenkins).
For some of my iOS application projects, I would like my CI server to be able to report the following properties:
startup time
frame rate
both as a graph over time, and with "low water marks" so the build fails if the measured results don't meet certain criteria. I've already found some of the things I need.
The CI server will be Jenkins.
I can use Transporter Chief to get the built app onto an iPad.
To measure the startup time I could find the duration between launching main() and leaving application:didFinishLaunchingWithOptions:.
To measure the frame rate I can put a CADisplayLink into the app and sample its duration property (a sketch of both measurements follows this list).
If those tests output JMeter XML, then Jenkins can display the output via the Performance plugin.
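To be concrete, this is roughly what I have in mind for the two in-app measurements; it's a sketch, and the global, the class name and the reporting step are placeholders for however the numbers eventually get off the device:

    // main.m -- capture the process start time as early as possible.
    #import <UIKit/UIKit.h>
    #import "AppDelegate.h"

    CFAbsoluteTime gAppStartTime;   // hypothetical global

    int main(int argc, char *argv[])
    {
        gAppStartTime = CFAbsoluteTimeGetCurrent();
        @autoreleasepool {
            return UIApplicationMain(argc, argv, nil,
                                     NSStringFromClass([AppDelegate class]));
        }
    }

    // In AppDelegate.m (the method lives in the existing @implementation):
    extern CFAbsoluteTime gAppStartTime;

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        // Startup time = duration from entering main() to leaving this method.
        NSLog(@"startup time: %.3f s", CFAbsoluteTimeGetCurrent() - gAppStartTime);
        return YES;
    }

    // FrameRateSampler.m -- hypothetical helper that keeps a CADisplayLink
    // ticking and logs an average frame rate once a second.
    #import <QuartzCore/QuartzCore.h>

    @interface FrameRateSampler : NSObject
    - (void)start;
    @end

    @implementation FrameRateSampler {
        CADisplayLink *_link;
        CFTimeInterval _windowStart;
        NSUInteger _frameCount;
    }

    - (void)start
    {
        _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link
    {
        if (_windowStart == 0) { _windowStart = link.timestamp; return; }
        _frameCount++;
        CFTimeInterval elapsed = link.timestamp - _windowStart;
        if (elapsed >= 1.0) {
            NSLog(@"frame rate: %.1f fps", (double)_frameCount / elapsed);
            _frameCount = 0;
            _windowStart = link.timestamp;
        }
    }

    @end

I'm deriving the rate from the link's timestamps rather than reading the duration property directly, but either should give the same kind of number.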
What I haven't worked out is how I should embed those tests into my app and launch it on the iPad. As described above I can deploy the app to the iPad, but I don't know how I would then launch it and gather the results of the tests. My unit tests run on the simulator - obviously I don't want to run the performance tests there :-).
I imagine there's a solution involving jailbreaking the iPad and controlling the app over SSH; I'd prefer not to go down that route if I can avoid it. If you have done that and can explain how it works, I'd still like to hear about it.
I'm also using fruitstrap to install apps on the device in CI. In terms of booting the app, I know of two ways:
Use fruitstrap with the debugger attached
I know teams that have done this to run KIF integration tests on devices in CI. I have played around with fruitstrap to get it booting apps on the device, but haven't myself taken the extra step of automating the whole thing.
Shameless plug for my post on fruitstrap: http://www.stewgleadow.com/blog/2011/11/05/installing-ios-apps-on-the-device-from-the-command-line/
Use the instruments command line tool with UIAutomation
I know the instruments tool can boot apps on the device automatically in CI (I wish it also installed them, but we have fruitstrap for that until Apple fixes it). So you could write a really simple little UIAutomation test that gave your app enough time to do its performance analysis.
Jonathan Penn has a nice little demo project for UIAutomation, with a build script that could be integrated with an 'install' step using fruitstrap to try it out on the device.
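The bare-bones instruments invocation looks something like this; the device ID, app name and file names are placeholders, and depending on your Xcode version you may need the full path to Automation.tracetemplate instead of the template name:

    # Run a UIAutomation script against the app via the Automation template.
    instruments -w <device UDID> \
                -t Automation \
                <app name or path to MyApp.app> \
                -e UIASCRIPT perf-test.js \
                -e UIARESULTSPATH results/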
In both cases, I use a little wrapper around libusb to give me the device IDs of attached devices, so the more devices I plug into my CI machine, the more devices it runs tests on: https://github.com/sgleadow/iphone_detect
Can you launch the app on the device using lldb?
If so, it may also be able to capture the log output.
I would like to perform the following iPad/iPhone testing scenario automatically:
Tap Edit box A
Type text "abcd"
Verify button B is highlighted
I understand UIAutomation 4.0 allows you to write a simple JavaScript file to perform the above steps. However, UIAutomation does not have the test infrastructure ready: for example, it lacks testing macros to show whether any tests failed, and it does not have a clear way to run setup and teardown for each test case.
That is why I am looking back at Xcode unit testing. Logic tests won't work for me. How about application tests?
Basically, I am looking for something that can do GUI testing and at the same time has test infrastructure. It would be even better if it could be integrated into a continuous build environment.
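To make the question concrete, here is roughly how I imagine the scenario looking as an application (in-app) test, since those run inside the live app and can reach the real views; the view controller and its properties are made-up names, and the assertions are the OCUnit macros:

    // Application-test sketch (runs inside the app).
    #import <SenTestingKit/SenTestingKit.h>
    #import <UIKit/UIKit.h>
    #import "EntryViewController.h"   // hypothetical screen with editBoxA/buttonB

    @interface EntryScreenTests : SenTestCase
    @end

    @implementation EntryScreenTests

    - (void)testTypingEnablesButtonB
    {
        // Application tests are injected into the running app, so the real
        // view hierarchy is reachable through the app delegate's window.
        UIWindow *window = [[[UIApplication sharedApplication] delegate] window];
        EntryViewController *vc = (EntryViewController *)window.rootViewController;

        // "Type" into edit box A by setting its text and firing the control
        // event the keyboard would have fired.
        vc.editBoxA.text = @"abcd";
        [vc.editBoxA sendActionsForControlEvents:UIControlEventEditingChanged];

        // Verify button B reacted (enabled/highlighted, however the UI defines it).
        STAssertTrue(vc.buttonB.enabled, @"button B should light up after typing");
    }

    @end

That only drives the target-action wiring rather than the real keyboard, though, which is part of why I'm asking whether application tests are enough here or whether I need a dedicated GUI tool.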
Check out Zucchini. It has just come out, and I saw a demo at a recent YOW! conference. It's basically a BDD testing framework that uses CoffeeScript for scripting and runs against an actual device. It's also fully runnable from CI servers, which makes it perfect for agile teams.
I haven't run it myself yet, but it seems to be exactly what I'm looking for - and no, I don't work for PlayUp :-)
This is a pretty silly situation: I'm using GHUnit to test an app and I'm running those tests outside the simulator according to the instructions.
Everything was great for a long time, but we're now getting into a situation where this mysterious log message shows up in the console rather frequently, coinciding with a pause of several seconds in my test suite:
Timed out trying to acquire capabilities data.
This is a little disconcerting if only because it's only happening on one machine; everything is as smooth as can be everywhere else I run this test suite. I can totally believe that there's hardware missing or failing on this machine, but does anybody have any idea where to go next in debugging this? Google has never heard the phrase before.
We found this to be a problem when the unit tests are the first thing run after a reboot.
While it would be nice to know the cause more fundamentally, we were able to fix it by launching the simulator at login on our continuous integration machine.