I have a set of automated test cases set up in Instruments using tuneup.js to test an app. I chose tuneup.js because it lets me separate my tests into individual test cases and run the whole set from one script. This works fine if all the tests pass; however, if one fails, all the remaining tests fail because the simulator is left in an unknown state (I have written my tests so they all start and end on the same login screen). Is there a way to reset the simulator, or restart the app, between test cases?
Try launching the tests from the command line. UI Automation executes only one test per run, and after that test completes (no matter whether it passed or failed) the application is killed by the system (UIAutomation). At least, that is how it works with real devices.
Your command-line launch script would work as follows:
1. Read a configuration file (any format, e.g. txt or xml) containing the paths to your tests. At this point you have an array of test paths and the total test count.
2. Using a simple 'for' loop (from 1 to 'testcount'), launch UIAutomation with the required parameters. One of those parameters is the path to the test script read from the configuration file.
You can also pass the path to the configuration file as a parameter to the launch script itself, which lets you run any test set simply by calling the launch script with the appropriate configuration file.
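For illustration, here is a minimal sketch of such a launch script, written here in Swift (any scripting language works just as well). The configuration file format (one test-script path per line), the device UDID, the trace template, the app path, and the results directory are all placeholders to replace with your own values, and the instruments flags shown (-w, -t, -e UIASCRIPT, -e UIARESULTSPATH) are the ones commonly used to drive UIAutomation from the command line; check them against your version of instruments.

#!/usr/bin/env swift
import Foundation

// Read the configuration file passed as the first argument (one test-script path per line).
let configPath = CommandLine.arguments.count > 1 ? CommandLine.arguments[1] : "tests.txt"
let config = (try? String(contentsOfFile: configPath, encoding: .utf8)) ?? ""
let testPaths = config.components(separatedBy: "\n").filter { !$0.isEmpty }
print("Found \(testPaths.count) test scripts")

// Launch UIAutomation once per test, so a failing test cannot poison the next run.
for (index, testPath) in testPaths.enumerated() {
    print("Running test \(index + 1) of \(testPaths.count): \(testPath)")
    let task = Process()
    task.launchPath = "/usr/bin/instruments"
    task.arguments = [
        "-w", "YOUR_DEVICE_UDID",                   // placeholder device
        "-t", "/path/to/Automation.tracetemplate",  // placeholder template
        "/path/to/YourApp.app",                     // placeholder app bundle
        "-e", "UIASCRIPT", testPath,
        "-e", "UIARESULTSPATH", "/path/to/results"  // placeholder results dir
    ]
    task.launch()
    task.waitUntilExit()
    print("instruments exited with status \(task.terminationStatus)")
}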
I wrote a script that will reset the contents & settings of all versions and devices for the iOS Simulator. It grabs the device names and version numbers from the menu, so it will include any new devices or iOS versions that Apple releases simulators for.
It's easy to run manually or use in a build-script. I would suggest adding it as a Pre-Action Run Script before the build.
https://github.com/michaelpatzer/ResetAllSimulators
Having failed tests leave your app in an unknown state is one of the main problems with using Apple's instruments tool as-is. We solved this in a framework called Illuminator (on GitHub, and inspired by tuneup.js) in two ways.
First, we wrote an automation bridge -- a channel for RPC with the app being tested, which allows us to reset our app before each test.
In cases where that's not sufficient, the Illuminator test runner has an option to re-run each failed test in its own pristine launch of the simulator (e.g. with --retest 1x,solo).
I have a set of XCUITest tests that exercise a bit of functionality in my app. Currently, I have the following function that pulls up a keyboard, types some text in, then taps "Search" (which is the equivalent of Enter in this context).
func clickSearchOnKeyboard(_ app: XCUIApplication) {
    // Wait for the Search key, type the query into the search field, then tap Search.
    XCTAssert(app.staticTexts["Search"].waitForExistence(timeout: 10))
    app.textFields["SearchItemView.SearchTextFieldID"].clearAndEnterText(testData.productData.valid.styleColor)
    app.buttons["Search"].tap()
}
However, after updating the simulators to iOS 13, this test will fail because now when the keyboard is first pulled up we get a "What's New" fly-out explaining the new swipe functionality.
I think I can just add an if clause to my test code to handle this the first time it pops up, but I'm wondering if anyone has found a way to disable these sorts of things for simulator testing:
Something in the init() method that would disable "What's New"-type pop-ups?
Some clever function that could always intercept this event and click "Continue"?
EXTRA BONUS POINTS: These automated tests run as part of an automated pipeline. As part of that, the assumption is that they run against a completely fresh simulator set (so we can't reuse existing simulators). Specifically, we blow away the simulators (using Erase All Content and Settings) prior to every run. So whatever the solution is, it needs to be completely portable and require zero manual intervention.
Something else?
I think just testing for the dialog and dismissing it if it appears should be the best approach.
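For example, here is a minimal sketch of that check; the "Continue" button label and the two-second timeout are assumptions, so verify them against the actual onboarding sheet:

import XCTest

// Call this right after focusing the text field (which brings up the keyboard),
// before typing: if the swipe-typing "What's New" sheet is showing, dismiss it.
func dismissKeyboardOnboardingIfNeeded(_ app: XCUIApplication) {
    let continueButton = app.buttons["Continue"]
    if continueButton.waitForExistence(timeout: 2) {
        continueButton.tap()
    }
}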
You could also trigger and dismiss the popup with a dedicated test run from Apple's command-line tools, after the erase & reset operation completes, as part of your initial tooling setup.
e.g.
xcrun simctl erase <device-udid>
xcodebuild test-without-building -scheme <your-scheme> -destination 'id=<device-udid>' -only-testing:<UITestBundle>/DismissKeyboardTour
xcodebuild test-without-building -scheme <your-scheme> -destination 'id=<device-udid>' -only-testing:<UITestBundle>/mytestsuite
I would think you would only need to do this once per test suite run (i.e. once before running all the tests).
You can run tests on clones of an already configured simulator (use xcrun simctl clone ...)
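As a rough sketch of that idea, assuming a template simulator you configured once by hand is named "Configured Template" (the names and the choice of shelling out from Swift are only for illustration; the same two simctl calls can sit directly in a shell script):

import Foundation

// Run an xcrun subcommand and wait for it to finish.
func xcrun(_ arguments: [String]) {
    let task = Process()
    task.launchPath = "/usr/bin/xcrun"
    task.arguments = arguments
    task.launch()
    task.waitUntilExit()
}

// Clone the pre-configured simulator (keyboard onboarding already dismissed)
// and boot the fresh clone for this test run.
xcrun(["simctl", "clone", "Configured Template", "CI Clone"])
xcrun(["simctl", "boot", "CI Clone"])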
If you want to create simulators from scratch, then initialize a git repository in the simulator's folder, configure the simulator the way you like (skip the keyboard onboarding, in your case), and use git status to see which files changed, so you know what needs to be modified in order to configure your simulators from scripts.
In detail:
Create a new simulator
Create a new git repository in its folder
Configure the simulator the way you like
Watch the resulting changes with git
Based on those changes, add steps to your CI/CD runner script (prior to running the tests)
I have an iOS app in Xcode 8.2. It has a test target / scheme, for which “Gather coverage data” is checked in the scheme’s Test / Info settings. Coverage data is not gathered. I see how many times a line was iterated over in the gutter as usual, but the Report navigator’s test runs don’t indicate any coverage at all.
I’m wondering if this is because I’ve set the tests to run hostless, i.e. without needing to actually fire up my app – they’re pure logic tests.
Is this possible?
Yes, a hostless XCTest target should gather code coverage data.
'iOS Unit Testing' (aka XCTest) bundles, which test a dynamic framework or something else that doesn't require the application environment in order to run, should happily collect code coverage data and display it in Xcode, even if the Host Application is set to None. This works either when running Xcode > Product > Test on the scheme for the framework under test or on the scheme for the unit tests themselves (if the test bundle is listed in the Test pane of the scheme editor).
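For reference, a hostless logic test is nothing special; a sketch like the following, in a test bundle whose Host Application is set to None, still produces coverage data for whatever code it exercises:

import XCTest

// A pure logic test: no host app is launched, yet coverage is still recorded
// for the (framework) code the test exercises.
final class LogicOnlyTests: XCTestCase {
    func testUppercasing() {
        XCTAssertEqual("coverage".uppercased(), "COVERAGE")
    }
}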
Your problem must be somewhere else, sorry. It's hard for me to guess what it is; I suggest you try making a fresh project and see whether you can reproduce the problem.
So, I have an already existing project on my hands and I'm trying to create some UI tests using this new fancy UI Testing Bundle provided by Apple. The problem is that the test target doesn't have access to any external frameworks (and I need to do some setup with one of them). Adding the framework in Build Phases and copying the framework search paths from the main target doesn't do anything.
After a day of browsing I found only one thing that makes things somewhat different. By setting Bundle Loader and Test Host to $(BUILT_PRODUCTS_DIR)/App.app/App, I still couldn't import external frameworks into test.m, but I could import classes that do so themselves. And it would all be fine and dandy if it didn't break some stuff: with Bundle Loader and Test Host set, my UI test is unable to execute the launch method:
[[[XCUIApplication alloc] init] launch];
It crashes with the error: Assertion Failure: UI Testing Failure - App state is still not terminated.
In the end I could remove the launch method from setUp and trigger every single test manually, so the application restarts every time before executing, but this solution seems wrong (especially for bigger projects). Does anyone know the proper way to handle this problem?
What I've done for this is add an environment variable to XCUIApplication to indicate that UI tests are being run. I then have a preprocessor check for DEBUG in a main part of the application, and inside it check whether the test environment variable has been set; if it has, do the necessary setup for the UI tests.
Essentially, this allows you to configure your app however you need for the UI tests to run. It also means the preprocessor check strips that setup code out of the release build entirely.
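A rough sketch of that setup, where the "UI_TESTING" key and the function names are arbitrary choices. In the UI test target:

import XCTest

// Mark the launch so the app knows it is running under UI tests.
func launchAppForUITesting() -> XCUIApplication {
    let app = XCUIApplication()
    app.launchEnvironment["UI_TESTING"] = "1"
    app.launch()
    return app
}

And in the app target, compiled only into debug builds:

import Foundation

#if DEBUG
// Returns true when the app was launched by the UI tests.
func isRunningUITests() -> Bool {
    return ProcessInfo.processInfo.environment["UI_TESTING"] == "1"
}
#endif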
I am using UIAutomation to test an app, and I would like to find out my code coverage. But since JavaScript has no preprocessor, gcov and the like are not an option. Has anyone come up with a solution for this?
For Xcode version 4.5 and newer:
Set the “Generate Test Coverage Files” build setting to Yes.
Set the “Instrument Program Flow” build setting to Yes.
This will generate code coverage files every time you run your application in the simulator and exit the application. A detailed explanation of these two steps can be found at the beginning of http://qualitycoding.org/xcode-code-coverage/.
For unit tests, code coverage files are generated every time you hit the test button and the tests complete. For UIAutomation it is a little trickier: you have to ensure the application exits at the conclusion of your tests. The easiest way I found to do this is to turn off multitasking: add UIApplicationExitsOnSuspend to your MyAppName-Info.plist file and set it to YES. Run your UI Automation test, and at the end of it you can exit the app either by manually pressing the home button in the simulator or by using the UIATarget.localTarget().deactivateAppForDuration() method.
Note: with UIApplicationExitsOnSuspend enabled, any UI Automation tests that rely on the deactivateAppForDuration() method will terminate as soon as that call runs, because the app exits instead of suspending.
Code coverage is only used for unit testing; there is no code coverage for UIAutomation because there is no way to tell how many elements on screen have been "touched" by UIAutomation.
I have enabled test coverage with no problem using the Generate Test Coverage Files and Instrument Program Flow build settings, together with the fopen$UNIX2003 and fwrite$UNIX2003 hack.
But the problem is that when you use Xcode to run tests, it ends up launching the simulator, which launches your app. When that happens, the test coverage output is not truly accurate, because code that runs during app launch gets counted as covered even though no test touched it.
Is there a better way to see what code was actually touched by a test?
EDIT:
So, not really an answer to my question, but something of a "solution" that at least gives better coverage numbers can be found here: Run logic tests in Xcode 4 without launching the simulator