How to run Simperium unit tests on iOS

The Simperium Android GitHub page explains how to run the Android tests, but I can't find anything about how to run the iOS tests. I tried opening Simperium.xcodeproj, but Product -> Test is grayed out.
Eventually I'd like to write my own unit tests that use Simperium, but I thought I'd start by looking at how Simperium structures its tests.
Thanks.

The process you describe adds Simperium's Integration Tests target to your own app's scheme.
Normally, you would want to switch to the third-party library's scheme first and run the tests there. To do so, click the scheme picker (next to the Play/Stop buttons) and select 'Simperium'.
Make sure to select a simulator as well, since the tests are not supported on a real device.
Regarding the failures: the integration tests simulate real interaction with the backend and have several timeouts.
Could you be running them on a slow internet connection?
Thanks!

I figured out how to run the tests. In Xcode, I selected the Integration Tests scheme and edited it. I selected 'Test' on the left side, then clicked the small plus at the bottom of the main pane and added the 'Integration Tests' target. The list of tests to run appeared in the pane, and Product -> Test could then be used to run them.
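If you'd rather drive the same tests from the command line, something like the following should work as well (the simulator name here is an assumption; pick one that your Xcode offers):

    xcodebuild test -project Simperium.xcodeproj \
      -scheme Simperium \
      -destination 'platform=iOS Simulator,name=iPhone 5s'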
Unfortunately, 9 of the integration tests failed when I ran them.

Related

Jenkins Pipeline to test iOS app on multiple simulators and sdk versions

I've built a Jenkinsfile for a multibranch pipeline, as in this gist: https://gist.github.com/nysander/0911f439bca7e046c765c0dc79e35e9f
My problem is that I want to automate testing on multiple simulators and multiple iOS SDK versions, and to make this work I had to duplicate a lot of code in the attached Jenkinsfile.
Is there any way to do this in a loop, pulling the list of simulators/SDKs to test from some library, array, etc.?
Another issue is that the testing in the gist runs in sequence; when I made it parallel it broke with something like 'Xcode database locked'.
Also, every test now appears three times in the test results summary, and if one fails on a particular simulator/SDK I have no way to tell which SDK it failed on.
Any comments and help are appreciated, including whether this whole workflow is a bad idea from the start.
I used to develop unit tests on Jenkins, running on multiple simulators, and ended up with https://github.com/plu/pxctest, which let me run tests in parallel and save time. In your case that would be multiple simulators with different SDKs.
Regarding the summary, maybe you can export environment variables to tag each test run.
Hope it helps!
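To attack the duplication directly: in a scripted pipeline the stages can be generated from a plain Groovy list, so each simulator/SDK pair is declared once. A minimal sketch, assuming a scheme called MyApp and made-up device/OS pairs (replace both with yours); it stays sequential on purpose, given the locked-database failures you saw in parallel, and writes one result bundle per pair so a failure can be traced back to its simulator/SDK:

    // Jenkinsfile (scripted pipeline): one test stage per simulator/SDK pair.
    def combos = [
        [device: 'iPhone 8',  os: '12.4'],
        [device: 'iPhone 11', os: '13.5'],
        [device: 'iPad Air',  os: '13.5'],
    ]

    node {
        checkout scm
        for (combo in combos) {
            stage("Test ${combo.device} / iOS ${combo.os}") {
                // Sequential by design: parallel simulator runs on one agent
                // can collide on Xcode's build and CoreSimulator databases.
                sh """
                    xcodebuild test \
                      -workspace MyApp.xcworkspace \
                      -scheme MyApp \
                      -destination 'platform=iOS Simulator,name=${combo.device},OS=${combo.os}' \
                      -resultBundlePath 'results/${combo.device}-${combo.os}.xcresult'
                """
            }
        }
    }

The stage names double as labels in the build output, which also helps with the third problem of not knowing which simulator/SDK a failing test belongs to.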

How to leave simulator open after test run?

When running a UI test, how do I keep the simulator open so I can manually test additional steps?
After a UI test completes, the simulator shuts down. I'd like to use the UI test as automation to get the app to a certain point, and then do a few additional things the test doesn't cover.
I don't think there's an option for that. You should separate your automated and manual testing: automated tests ideally run on a CI server, and manual testing should be done independently of the UI tests.

Android Test Sharding with Spoon

I am using Spoon and Espresso to automate UI/functional instrumentation tests for our Android app.
I would like to know if there is a way to distribute instrumentation tests across multiple connected devices and/or emulators so that I can reduce the test execution time.
For example: I have 300 tests that take 15 minutes to run on one emulator. Is there a way to add more emulators (say 4), distribute 75 tests to each, and cut the execution time accordingly?
I'd appreciate your input on this.
What you are looking for is called auto-sharding. You have to call the spoon-runner with --shard and add the serials from all connected devices with -serial. You can find the serials with adb devices.
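For example (the jar, APK, and serial names below are placeholders; the exact flag spelling varies between Spoon versions, so check spoon-runner's --help):

    adb devices    # lists the serials of connected devices and emulators
    java -jar spoon-runner.jar \
      --apk app-debug.apk \
      --test-apk app-debug-androidTest.apk \
      --shard \
      -serial emulator-5554 \
      -serial emulator-5556 \
      -serial emulator-5558 \
      -serial emulator-5560

With four emulators attached and --shard enabled, the 300 tests are split across the serials, so each emulator runs roughly 75 of them.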
You can choose more than one device in the Choose Device dialog; hold Shift or Ctrl while clicking.
Another solution is to use Gradle. On the right side of Android Studio, open the Gradle pane, then verification, then connectedAndroidTest. That has the same effect as running from the console:
./gradlew connectedAndroidTest or gradlew.bat connectedAndroidTest on Windows
This runs all test cases on all available devices (physical and emulators). For choosing exactly which test classes run, you can write tasks in build.gradle.
Learning the basics of the Groovy programming language will make writing Gradle task scripts more effective. Here's an example of a task written in Groovy: Run gradle task X after "connectedAndroidTest" task is successful
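As an alternative to writing a custom task, recent versions of the Android Gradle plugin can restrict connectedAndroidTest to a single class through an instrumentation runner argument (the class name below is a placeholder):

    ./gradlew connectedAndroidTest \
      -Pandroid.testInstrumentationRunnerArguments.class=com.example.app.SmokeTest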
You may also want to learn about continuous integration and tools like Jenkins or Travis, which you can configure to run specific test cases on every commit.
As an example, take a look at this build log from my Android project: https://travis-ci.org/piotrek1543/WeatherforPoznan/builds/126944044
and here is the Travis configuration: https://github.com/piotrek1543/WeatherforPoznan/blob/master/.travis.yml
Have any more questions? Please feel free to ask.
Hope it helps!

Triggering breakpoints from an Xcode 5 CI bot

I have an Xcode 5 CI server running my XCTest unit tests.
My test cases rely on breakpoints to trigger specific actions. These actions are essential to the running of the tests.
Everything passes if I run the tests locally. The problem is that when a bot runs the tests on the server, the breakpoints seem to be ignored.
I tried a sample breakpoint with an alert sound, just for testing. I shared the breakpoint and committed it to the project's git repository. The bot correctly checks out the project with the breakpoint included (I verified this by examining the project in /Library/Server/Xcode/Data/BotRuns/Cache/...).
However, when the bot runs, the breakpoint is NOT triggered: I don't hear the sound, and execution does not pause.
This behaviour obviously makes sense in most cases, but in my specific case, is there any way to configure the bot so that breakpoints are not ignored?
Whether or not you can enable this, having your tests rely on something external to the system under test, like a breakpoint, in order to pass seems like a broken design to me.
Ideally your tests should be able to run on any machine, interactively or not. Since you can't guarantee that breakpoints have the 'Automatically continue after evaluating' flag set, they are clearly not suited to a non-interactive run.
Using breakpoints for testing also adds a dependency on Xcode itself for running the tests, since other build tools like xcodebuild and xctool might not even understand breakpoints in the project file.
I would refactor your tests to remove the dependency on breakpoints. If you need help with that, it sounds like a great Stack Overflow question ;)

iPhone app UI automation test design

Hi Automation/iOS Experts,
We recently launched a new iPhone app project and would like to automate some basic acceptance tests using Apple's UIAutomation Instruments. We now have a very basic framework for this task: it simply wraps the underlying JavaScript functions provided by Apple in Java (to provide some debugging abilities) and drives the tests with JUnit. The tests run in the iPhone Simulator.
So the stack is Instruments + Eclipse + Java + JUnit + iPhone Simulator.
While writing the tests, I found they are greatly affected by the app's 'state'. For example:
The app shows a 'Terms of use' page when first run, and never again until the iPhone Simulator is reset. After the user accepts the terms, she is taken to a 'Home' page, where she can enter some search criteria, hit 'Search', and be taken to a search results page. From there she can open a 'View detail' page.
TOU -> Home -> Search Result -> View Detail.
Please keep in mind this is only a very simplified version of my real app.
Now here is the question:
In order to test the view detail function automatically, should my test go through all the previous steps (assuming the app always starts afresh, without any saved state)?
Or should my tests assume some preconditions (for example, the 'View Detail' tests assume the app is already at 'Search Result')?
Please give your reasons. And sorry if my English is hard to understand; it's not my mother tongue.
Thanks!
Vince
"Pre-conditions" / "known baseline" / "known state" are golden for automation. Even more so for UI testing since they have more variations that can cause test failures for issues unrelated to what your test was doing.
So from an automation perspective - go directly to the 'View Detail' test. The majority of your automated test scripts will be in those types of functional areas. TOU, etc. are one-time per reset/install. So two options:
First run an automated script to clear the TOU and exit, followed by all other tests that deal with home page, searching, etc. Or...
Clear the TOU manually, followed by all other tests.
Bonus option: you could also test that the TOU doesn't appear more than once per reset, since it shouldn't. That could be first and second test you always run each time. Then run the remaining tests.
If your automated always rely on the TOU appearing, then after the first test, the others will fail since the TOU won't appear until the next reset/test cycle. You could put a 'handler' at the start of all your automated tests to conditionally manage the TOU page. In this situation, I would go with option #1 above.
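A minimal sketch of such a handler, written in Groovy/JUnit to match the Java-driven harness described in the question. UiDriver and all of its methods are hypothetical stand-ins for whatever your wrapper over UIAutomation actually exposes:

    import org.junit.Before
    import org.junit.Test

    class ViewDetailTest {
        // Hypothetical wrapper over Apple's UIAutomation JS API; substitute
        // the real class and method names from your own framework.
        def app = new UiDriver()

        @Before
        void dismissTermsOfUseIfPresent() {
            // Conditional handler: only touch the TOU page when it is actually
            // on screen, so the same setup works on a fresh simulator and on
            // every run after the terms have been accepted.
            if (app.exists('Accept')) {
                app.tapButton('Accept')
            }
        }

        @Test
        void viewDetailShowsTheSelectedResult() {
            app.typeInto('Search', 'pizza')
            app.tapButton('Search')
            app.tapRow(0)               // open the first search result
            assert app.exists('View Detail')
        }
    }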
