Hi Automation/iOS Experts,
We recently launched a new iPhone app project and would like to automate some basic acceptance tests using Apple's UIAutomation Instruments. We now have a very basic framework for this task. The framework simply wraps the underlying JS functions provided by Apple in Java (to provide some debugging abilities) and drives the tests with JUnit. The tests run in the iPhone Simulator.
So the setup is Instruments + Eclipse + Java + JUnit + iPhone Simulator.
In writing the tests, I found they are greatly affected by the app's "state". For example:
The app shows a "Terms of use" page on first run, but never again until the iPhone Simulator is reset. After the user accepts the "Terms of use", she is taken to a "Home" page, where she can enter some search criteria, hit "Search", and be taken to a search result page. From there she can go to a "View Detail" page.
TOU -> Home -> Search Result -> View Detail.
Please keep in mind this is only a very simplified version of my real app.
Now here is the question:
In order to automatically test the View Detail function, should my test go through all the previous steps (assuming the app always starts afresh without any saved state)?
Or should my tests assume some preconditions (e.g., the "View Detail" tests assume the app is already at "Search Result")?
Please give your reasons. And sorry if my English is hard to understand, as it's not my mother tongue.
Thanks!
Vince
"Pre-conditions" / "known baseline" / "known state" are golden for automation. Even more so for UI testing since they have more variations that can cause test failures for issues unrelated to what your test was doing.
So from an automation perspective, go directly to the 'View Detail' test. The majority of your automated test scripts will be in those types of functional areas. TOU, etc., appear once per reset/install. So, two options:
First run an automated script to clear the TOU and exit, then run all other tests that deal with the home page, searching, etc. Or...
Clear the TOU manually, then run all other tests.
Bonus option: you could also test that the TOU doesn't appear more than once per reset, since it shouldn't. Those could be the first and second tests you always run each time. Then run the remaining tests.
If your automated tests always rely on the TOU appearing, then after the first test the others will fail, since the TOU won't appear again until the next reset/test cycle. You could put a 'handler' at the start of all your automated tests to conditionally manage the TOU page. In this situation, I would go with option #1 above.
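The conditional handler idea can be sketched like this. The `AppDriver` class below is a made-up stand-in for whatever Java wrapper you have around Apple's UIAutomation JS API; only the handler pattern itself is the point.

```java
// Sketch of a conditional TOU handler. 'AppDriver' is hypothetical; it
// stands in for your Java wrapper around the UIAutomation JS functions.
import java.util.ArrayDeque;
import java.util.Deque;

class AppDriver {
    // Simulated screen stack: the TOU page appears only on the first run.
    private final Deque<String> screens = new ArrayDeque<>();

    AppDriver(boolean firstRun) {
        screens.push("Home");
        if (firstRun) screens.push("TOU");
    }

    boolean isShowing(String screen) { return screen.equals(screens.peek()); }

    void tap(String button) {
        if ("Accept".equals(button) && isShowing("TOU")) screens.pop();
    }

    String currentScreen() { return screens.peek(); }
}

public class TouHandlerSketch {
    // Run this at the start of every test: dismiss the TOU page if (and
    // only if) it is present, so tests behave the same on first and
    // subsequent runs.
    static void dismissTouIfPresent(AppDriver app) {
        if (app.isShowing("TOU")) {
            app.tap("Accept");
        }
    }

    public static void main(String[] args) {
        AppDriver firstRun = new AppDriver(true);
        dismissTouIfPresent(firstRun);
        System.out.println(firstRun.currentScreen()); // Home

        AppDriver laterRun = new AppDriver(false);
        dismissTouIfPresent(laterRun); // no-op: TOU is not shown
        System.out.println(laterRun.currentScreen()); // Home
    }
}
```

Either way the test ends up on the Home page, which is exactly the "known baseline" the answer recommends.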
Related
I am trying to use XCUI tests and XC tests together. I found this Twitter post saying it is possible. However, which section of the build settings do I put those new attributes in?
I ask because I tried the method and put those settings in the user-defined section of the project target, and it would not let me run my tests because those settings were defined.
UI tests operate like this:
The app is launched.
The tests run in a separate process, external to the app, telling the app what to do.
Unit tests operate like this:
The app is launched.
The test code is injected into the running app.
The tests are executed.
These are radically different. UI tests operate strictly from the outside. They have no access to the internals of the program. In the end, UI tests boil down to simulating user actions.
Unit tests, on the other hand, operate from the inside. They can reach anything that is non-private.
The only way for UI tests to perform something like a unit test would be to build the test functionality into the production code, accessible through gestures. There are better ways to unit test than that, namely using unit testing frameworks.
So… no. They shouldn't live together.
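To make the inside/outside distinction concrete (shown in JUnit-style Java to match the stack used elsewhere in this thread; the same point holds for XCTest on iOS), here is an illustrative, made-up piece of production logic and how each kind of test would reach it:

```java
// Minimal illustration of the inside/outside distinction. A unit test can
// call internal (non-private) code directly; a UI test could only reach
// this logic indirectly, by simulating taps and reading what's on screen.
public class UnitVsUiSketch {
    // Hypothetical internal production logic: build a search query string.
    static String buildQuery(String term, int page) {
        return "q=" + term.trim().toLowerCase() + "&page=" + page;
    }

    public static void main(String[] args) {
        // A unit test asserts on the return value directly:
        String query = buildQuery("  Hotels ", 2);
        System.out.println(query);
        // A UI test would instead type "Hotels" into a search field, tap
        // "Search", and verify the result screen -- never touching
        // buildQuery at all.
    }
}
```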
What do I actually need?
We are building an iPhone/iPad application, and the code is always checked in to the repository.
1) Whenever code is checked in to the repository, it has to undergo automated testing to confirm that the build does not fail and that the app works as per the functional test scripts.
2) If there is any build failure, a mail has to be triggered to the developers.
3) If the build succeeds and the automation scripts also pass, the next step is to deploy to the Apple App Store and submit for review; the necessary App Store information is available in configuration files.
Existing reference in stack overflow:
Continuous Integration for Xcode projects?
**Reference**: http://stackoverflow.com/questions/212999/continuous-integration-for-xcode-projects/17097018#17097018
Continuous integration for iphone xcode
**Reference**: http://stackoverflow.com/questions/1544119/continous-integration-for-iphone-xcode
Some other references were also checked, but they only gave me an idea of how to execute functional scripts on code check-in, which works like any CI tool such as Jenkins.
The references above also date from 2009–2013, so they are quite old.
What did I find when researching?
I came across Hudson on the Mac, which is a very old version with little support. I also found Xcode's OS X Server, an Apple product, but its reviews are not good and its implementation is not feasible for my requirement.
Please share an approach for achieving this. Also, is it possible to make the CI process a one-touch operation for iOS? I found something similar for Android, requiring only a few confirmations from the user.
At the very least, executing the tests and creating an .ipa file for iOS would be great.
When running a UI Test, how do I keep the simulator open so I can manually test additional steps?
After a UI Test completes, the simulator will shut down. I'd like to use the UI Test as an automation to reach a certain point in the app. Then I'll do a few additional things not covered by the test.
I don't think there's an option for that. You should separate your automated and manual testing. Automated testing ideally should be done on a CI; do your manual testing separately from the UI tests.
I am using Spoon and Espresso to automate UI/Functional instrumentation tests on our android app.
I would like to know if there is a way to distribute instrumentation tests across multiple connected devices and/or emulators so that I can reduce the test execution time.
Ex: I have, say, 300 tests that take 15 minutes to run on 1 emulator. Is there a way I can add more emulators (say 4), distribute 75 tests to each emulator, and reduce the test execution time?
Appreciate your inputs on this.
What you are looking for is called auto-sharding. You have to call the spoon-runner with --shard and add the serials of all connected devices with -serial. You can find the serials with adb devices.
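The distribution itself is simple to picture: the suite is split into one shard per device. Here is a small sketch of a round-robin split (the emulator serials are made up; in practice you'd read them from `adb devices`):

```java
// Sketch of how sharding splits a test suite across devices.
// Serial names below are invented for illustration.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardingSketch {
    static Map<String, List<String>> shard(List<String> tests, List<String> serials) {
        Map<String, List<String>> shards = new LinkedHashMap<>();
        for (String serial : serials) shards.put(serial, new ArrayList<>());
        for (int i = 0; i < tests.size(); i++) {
            // Round-robin assignment: test i goes to device i % deviceCount.
            shards.get(serials.get(i % serials.size())).add(tests.get(i));
        }
        return shards;
    }

    public static void main(String[] args) {
        List<String> tests = new ArrayList<>();
        for (int i = 1; i <= 300; i++) tests.add("test" + i);
        List<String> emulators = List.of("emulator-5554", "emulator-5556",
                                         "emulator-5558", "emulator-5560");
        Map<String, List<String>> shards = shard(tests, emulators);
        // 300 tests over 4 emulators -> 75 each, so a 15-minute
        // single-device run drops to roughly 15/4 minutes plus overhead.
        shards.forEach((serial, t) -> System.out.println(serial + ": " + t.size()));
    }
}
```

With 4 emulators each shard gets 75 of the 300 tests, which is exactly the split described in the question.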
You can choose more than one device in the Choose Device dialog: hold the Shift or Ctrl key while clicking.
Another solution is to use Gradle. On the right side of Android Studio choose Gradle, then verification, and finally connectedAndroidTest. It has the same effect as running in the console:
./gradlew connectedAndroidTest or gradlew.bat connectedAndroidTest
This runs all test cases on all available devices (physical and emulators). To choose exactly which test classes run, you should write tasks in build.gradle.
Learn the basics of the Groovy programming language to make writing Gradle task scripts more effective. Here's an example of a task written in Groovy: Run gradle task X after "connectedAndroidTest" task is successful
You may also want to learn about Continuous Integration and its tools, like Jenkins or Travis, which you can configure to run specific test cases on every commit.
As an example please take a look at this build log of my Android project: https://travis-ci.org/piotrek1543/WeatherforPoznan/builds/126944044
and here is a configuration of Travis: https://github.com/piotrek1543/WeatherforPoznan/blob/master/.travis.yml
Have any more questions? Please feel free to ask.
Hope it helps!
The Simperium Android GitHub tells how to run the Android tests, but I can't find how to run the iOS tests. I tried opening Simperium.xcodeproj, but Product->Test is grayed out.
Eventually I'd like to write my own unit tests that use Simperium, but I thought I'd start by looking into how Simperium structures their tests.
Thanks.
The process you describe adds Simperium's Integration Tests target to your own app's scheme.
Normally, you would want to switch to the third-party library's scheme first and run the tests right there. To do so, please click the Scheme picker (right by the Play / Stop buttons) and select 'Simperium'.
Make sure to select a simulator as well, since the tests are not supported on a real device.
Regarding the failures: the Integration Tests simulate real interaction with the backend and have several timeouts.
Could it be that you're running them on a slow internet connection?
Thanks!
I figured out how to run the tests. In Xcode, I selected the Integration Tests scheme and edited that scheme. I selected 'Test' on the left side, then clicked the little plus at the bottom of the main pane. I added the 'Integration Tests' target. The list of tests to run appeared in the pane, and Product->Test could then be used to run the tests.
Unfortunately, 9 of the integration tests failed when I ran them.